<span id="mailbox-conversation"><div># Studying verification of fingerprints</div>
<div><br></div>
<div>## Note</div>
<div><br></div>
<div>My goal is to prepare a brief summary of the fingerprint usability study suitable for presenting to behavioral economists / cognitive scientists for review. <span style="-webkit-tap-highlight-color: transparent;">I'd very much appreciate any comments, suggestions, or corrections</span>
</div>
<div><br></div>
<div>(I used to design and review similar experiments -- howbeit in rather different subject areas -- in experimental philosophy/cogsci and behavioral economics in my law school days; my intention is to flog this study around some former colleagues, and see if anyone has time to review or comment.)</div>

## Summary

The overall goal: determine whether fingerprint format affects how reliably users compare fingerprints.
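
For concreteness, here is a rough sketch of what 'format' could mean: the same underlying fingerprint (a key hash) rendered as grouped hex, decimal chunks, or words from a fixed list. The grouping sizes and the tiny word list are placeholders I made up for illustration, not the formats the study would necessarily test.

```python
# Illustrative only: render one underlying fingerprint (a hash) in a few
# candidate formats. Grouping sizes and the toy word list are placeholders.
import hashlib

WORDS = ["apple", "banana", "cactus", "dolphin", "ember", "falcon",
         "glacier", "harbor"]  # toy 8-word list; a real scheme needs far more words

def fingerprint_bytes(public_key: bytes) -> bytes:
    return hashlib.sha256(public_key).digest()

def as_grouped_hex(fp: bytes, group: int = 4) -> str:
    h = fp.hex().upper()
    return " ".join(h[i:i + group] for i in range(0, len(h), group))

def as_decimal_chunks(fp: bytes, chunk_bytes: int = 2) -> str:
    chunks = [int.from_bytes(fp[i:i + chunk_bytes], "big")
              for i in range(0, len(fp), chunk_bytes)]
    return " ".join(f"{c:05d}" for c in chunks)

def as_words(fp: bytes) -> str:
    # one word per leading byte with this toy list; only shows the shape of the format
    return " ".join(WORDS[b % len(WORDS)] for b in fp[:8])

fp = fingerprint_bytes(b"example public key")
print(as_grouped_hex(fp))
print(as_decimal_chunks(fp))
print(as_words(fp))
```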

It's obvious that there are a lot of interesting studies that could be carried out in this area. To try to summarize some of the prior discussion (and perhaps add some thoughts of my own), here are the (indirect) factors we'd expect to influence performance:

## Experiment background factors and metrics

### Factors controllable by experiment design:

*Factor A.* 'Type' of memory:

1. Short term
2. Medium/long-term single shot
3. Medium/long-term with rehearsals

*Factor B.* Incentive to reject fakes:

1. None
2. Desire to "do well" or please the experimenter
3. Game-like incentive (e.g., Mechanical Turk performance-based compensation)
4. 'Real-world' privacy-preservation-like incentive (e.g., belief that the security of answers to personally sensitive questions rests on correct performance)

*Factor C.* Psychological incentive to accept fakes:

1. None
2. Game-like (e.g., performance compensation + a directive to answer as quickly as possible)
3. Realistic pressure (e.g., pressure to please the experimenter)

*Factor D.* Expected baseline error rate. (Approximately a continuous variable on a repeated task? On a single-shot task, likely highly correlated with the other experimental parameters.)

### Factors that are measurable, but hard to select for:

*Factor E.* Subject type:

1. Pure novice subjects (e.g., an Internet user who doesn't know what a fingerprint is, doesn't understand the cost of generating collisions, and has never attempted this task)
2. Educated novice subjects
3. Experienced subjects
4. Educated and experienced subjects

*Factor F.* Learning style:

(Needs research; likely needs to be measured, with results normalized to population prevalences. Note that I believe there is substantial evidence that a one-size-fits-all fingerprint verification format will be inferior to allowing users to choose a preferred fingerprint format. Here, it might be interesting to do an experiment with feedback: have a subject choose a fingerprint format to verify, provide feedback on accuracy, then allow choosing another format, etc.; a rough sketch of that loop follows.)
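
A minimal sketch of that feedback loop, under stand-in assumptions: `run_trial()` below just simulates a correct/incorrect comparison, the format names are placeholders, and the switch-on-error rule is the simplest thing I could write down, not a proposal for the actual protocol.

```python
# Sketch of the choose-format / feedback loop described above. Everything here
# is a stand-in: run_trial() simulates a comparison outcome, and subjects are
# modeled as sometimes switching formats after an error.
import random

FORMATS = ["hex", "decimal", "words"]  # placeholder format names

def run_trial(fmt: str) -> bool:
    """Stand-in for one real comparison trial; True means the subject was correct."""
    return random.random() < 0.9  # placeholder accuracy

def feedback_session(rounds: int = 10) -> dict:
    stats = {fmt: {"trials": 0, "correct": 0} for fmt in FORMATS}
    chosen = random.choice(FORMATS)  # subject's initial format choice
    for _ in range(rounds):
        ok = run_trial(chosen)
        stats[chosen]["trials"] += 1
        stats[chosen]["correct"] += int(ok)
        # Feedback step: report running accuracy, then let the subject keep or
        # switch formats (simulated here by a coin flip after an error).
        if not ok and random.random() < 0.5:
            chosen = random.choice(FORMATS)
    return stats

print(feedback_session())
```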

*Factor G.* General memory capacity. For short-term, multi-shot tests this is easy to control for by, e.g., digit-span tests administered to (a portion of) the experimental population. (This is important to measure because the Mechanical Turk subjects taking the study may not be representative of users.)
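
For reference, a bare-bones forward digit-span measure of the kind mentioned, as a console-only sketch; the sequence lengths and the two-failures stopping rule are just common defaults, not a claim about what this study should administer.

```python
# Bare-bones forward digit-span test: show a digit sequence briefly, ask for
# recall, and lengthen the sequence until the subject fails twice at a length.
import random
import time

def digit_span(max_len: int = 10, show_seconds: float = 2.0) -> int:
    best = 0
    for length in range(3, max_len + 1):
        failures = 0
        while failures < 2:
            seq = "".join(random.choice("0123456789") for _ in range(length))
            print(f"Memorize: {seq}")
            time.sleep(show_seconds)
            print("\n" * 40)                      # crude way to push the sequence off-screen
            answer = input("Type the digits back: ").strip()
            if answer == seq:
                best = length
                break                             # success: advance to the next length
            failures += 1
        else:
            return best                           # two failures at this length: stop
    return best

if __name__ == "__main__":
    print("Forward digit span:", digit_span())
```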

## The proposed experiment

As I understand it, the consensus is that an experiment likely to have discriminatory power among fingerprint types is infeasible to conduct in a realistic setting (i.e., the 'head fake' type scenarios). I'd tend to agree.

So the proposed experiment is, approximately, A1/B3/C1: short-term memory, a game-like incentive to reject fakes, and no particular incentive to accept them.

For that experiment, I'd note that the actual probability of a fake fingerprint (and perhaps the 'goodness' of the fake) has to vary so as to allow extrapolation to the zero-cheater case. (Though I'd expect that very few participants will cheat unless the compensation scheme is extremely imbalanced.)
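
One way the extrapolation might look, with entirely made-up numbers: run the task at several fake-fingerprint probabilities, fit the observed false-accept rate against the fake probability, and read the intercept as an estimate of behavior as the fake rate goes to zero. The linear model and the data below are placeholders, not a commitment to that analysis.

```python
# Hypothetical illustration of extrapolating toward the zero-fake case: fit
# observed false-accept rates against the fake probability used in each
# condition and inspect the intercept. Data and model choice are made up.
import numpy as np

fake_prob = np.array([0.05, 0.10, 0.20, 0.40])      # per-condition fake probabilities
false_accept = np.array([0.02, 0.03, 0.05, 0.09])   # hypothetical observed rates

slope, intercept = np.polyfit(fake_prob, false_accept, deg=1)
print(f"estimated false-accept rate as fake probability -> 0: {intercept:.3f}")
```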

## The gold-standard experiment

(The above is obviously a useful preliminary towards a realistic experiment; the following is my idea of what a 'gold-standard' experiment on this would look like.)

A large trial among users of messaging software that requires fingerprint verification, in which errors are introduced (with some small probability) into the fingerprints shown to users.
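
Mechanically, the error introduction might look like the following sketch, assuming hex-string fingerprints for simplicity: flip one character at a random position and remember that position, so that the reprompt mentioned below can highlight exactly what was changed. The single-character flip is only for illustration.

```python
# Sketch: introduce a single-character error into a displayed hex fingerprint
# and record its position so a later reprompt can highlight what was changed.
# Purely illustrative; real fingerprint encodings/formats may differ.
import random

HEX_DIGITS = "0123456789abcdef"

def introduce_error(fingerprint_hex: str) -> tuple[str, int]:
    pos = random.randrange(len(fingerprint_hex))
    original = fingerprint_hex[pos]
    replacement = random.choice([c for c in HEX_DIGITS if c != original])
    corrupted = fingerprint_hex[:pos] + replacement + fingerprint_hex[pos + 1:]
    return corrupted, pos

fp = "3a91c07de4b2f6a8c1d05e9bb7423f10"
shown, pos = introduce_error(fp)
print(shown)
print(" " * pos + "^  position to highlight if the user accepts the fake")
```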

If this is set up so that (1) users give some form of consent to the experiment and (2) the experiment never causes a user to falsely accept a forgery (i.e., if a fake fingerprint is accepted, the user is reprompted suitably*), are there any ethical objections?

- David

*(This would probably require highlighting the position of the introduced error.)

PS. And apologies for the post about ring signatures last night; as Trevor was kind enough to point out to me, the curves list is a much more appropriate place for discussion of that.

--
Sent using alpine: an Alternatively Licensed Program for Internet News and Email