<div dir="ltr">This is a great breakdown, thank you David! A few comments:<div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jun 16, 2014 at 9:59 AM, David Leon Gil <span dir="ltr"><<a href="mailto:coruus@gmail.com" target="_blank">coruus@gmail.com</a>></span> wrote:<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div><span>
<div>## Summary</div>
<div><br></div>
<div>The overall goal: Determine whether fingerprint format affects the reliability of user comparison of fingerprints.</div></span></div></blockquote><div><br></div><div>I think our real-world goal is "helps users ensure they're communicating with the intended party." We can specify that we're only looking at solutions here that involve fingerprint comparison, but it's worth keeping the real-world goal in mind.</div>
<div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><span>
<div>*Factor B.* Incentive to reject fakes:</div>
<div><br></div>
<div>1. None</div>
<div>2. Desire to "do well" or please experimenter</div>
<div>3. Game-like incentive (e.g., Mechanical Turk performance-based compensation)</div>
<div>4. 'Real-world' privacy-preservation-like incentive (e.g., belief that security of answers to personally sensitive questions rests on correct performance)</div></span></div></blockquote><div><br></div><div>Not sure if #3 encapsulates this, but you can use monetary incentives in an experiment.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span>
<div>### Factors that are measurable, but hard to select for</div>
<div><br></div>
<div>*Factor E.* Subject type:</div>
<div><br></div>
<div>1. Pure novice subjects (e.g., an Internet user who doesn't know what a fingerprint is, doesn't understand the cost of generating collisions, and has never attempted this task)</div>
<div>2. Educated novice subjects</div>
<div>3. Experienced subjects</div>
<div>4. Educated and experienced subjects</div></span></blockquote><div><br></div><div>This can be approximated by giving users more instructions in one treatment than another, so perhaps this belongs above. </div><div><br>
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span><div></div>
<div>*Factor F.* Learning style:</div>
<div><br></div>
<div>(Needs research; likely needs to be measured and results normalized to population prevalences. Note that I believe that there is substantial evidence that a one-size-fits-all fingerprint verification format will be inferior to allowing users to choose a preferred fingerprint format. Here, it might be interesting to do an experiment with feedback; e.g., have a subject choose a fingerprint format to verify, provide feedback on accuracy, then allow choosing another format, etc.)</div>
</span></blockquote><div><br></div><div>I would discourage any attempt to vary this factor. For the foreseeable future, a one-size-fits-all format is necessary.</div><div> </div><div>Missing factor H? The similarity of the fake fingerprints to the expected genuine ones.</div>
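<div>To make a factor like H concrete, here is a minimal Python sketch (the helper name and hex-encoded fingerprints are my assumptions, not anything from the thread) of one way an experimenter could control how similar a fake fingerprint is to the genuine one, by fixing how many leading characters agree:</div>

```python
import hashlib
import secrets

def fake_fingerprint(genuine: str, matching_prefix: int) -> str:
    """Hypothetical helper: build a fake hex fingerprint that agrees with
    the genuine one in its first `matching_prefix` characters and is
    random thereafter. Larger `matching_prefix` models a better fake
    (i.e., more attacker effort spent on a partial collision)."""
    tail_len = len(genuine) - matching_prefix
    # token_hex(n) yields 2n hex chars; trim to the exact tail length.
    tail = secrets.token_hex((tail_len + 1) // 2)[:tail_len]
    return genuine[:matching_prefix] + tail

# Example: a fake that matches the genuine fingerprint in its first 8 chars.
genuine = hashlib.sha256(b"example key").hexdigest()
fake = fake_fingerprint(genuine, matching_prefix=8)
```

<div>Varying `matching_prefix` across treatments would let the similarity of fakes be a controlled factor rather than an accident of stimulus generation.</div>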
<div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><span><div></div>
<div>## The proposed experiment</div>
<div><br></div>
<div>As I understand it, the consensus is that an experiment that is likely to have discriminatory power among fingerprint types is infeasible to conduct in a realistic setting. (I.e., the 'head fake' type scenarios.) I'd tend to agree.</div>
<div><br></div>
<div>So, the proposed experiment is, approximately: A1/B3/C1.</div>
<div><br></div>
<div>For that experiment, I'd note that the actual probability of a fake fingerprint (and perhaps the 'goodness' of the fake) has to vary so as to allow extrapolation to the zero-cheater case. (Though I'd expect that very few participants will cheat unless the compensation scheme is extremely imbalanced.)</div>
</span></div></blockquote><div><br></div><div>Why would participants cheat? I think all errors should be introduced by the experimenters rather than having subjects insert their own errors.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div><span>
<div>## The gold-standard experiment</div>
<div><br></div>
<div>A large trial among users of messaging software that requires fingerprint verification, in which errors are introduced (with some small probability) in fingerprints.</div></span></div></blockquote><div><br></div><div>
Personally I think this is the way to go.</div></div><br></div></div>
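<div>A minimal sketch of the small-probability error injection such a trial would need (the function name is hypothetical, and hex-encoded fingerprints are assumed; this is one possible design, not a description of any existing software):</div>

```python
import random

def maybe_corrupt(fingerprint: str, p: float, rng: random.Random) -> tuple[str, bool]:
    """With probability p, flip one hex digit of the fingerprint shown to
    the user; otherwise show it unchanged. Returns (shown, was_corrupted)
    so the trial can score comparisons against ground truth."""
    if rng.random() >= p:
        return fingerprint, False
    i = rng.randrange(len(fingerprint))
    # Replace position i with a different hex digit, guaranteeing a mismatch.
    alternatives = [c for c in "0123456789abcdef" if c != fingerprint[i]]
    return fingerprint[:i] + rng.choice(alternatives) + fingerprint[i + 1:], True

# Example: a 5% injection rate over one session.
rng = random.Random()
shown, was_corrupted = maybe_corrupt("deadbeef" * 8, 0.05, rng)
```

<div>Keeping p small preserves the realistic base rate of mismatches while still yielding enough error trials, in aggregate, to compare fingerprint formats.</div>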