On Tuesday, June 17, 2014, Daniel Kahn Gillmor <<a href="mailto:dkg@fifthhorseman.net" target="_blank">dkg@fifthhorseman.net</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 06/16/2014 09:59 AM, David Leon Gil wrote:<br>
> *Factor C.* Psychological incentive to accept fakes:<br><br>
In the real world, the incentive to accept fakes is slightly different<br>
than either of the above. In nearly all scenarios [0] where a<br>
fingerprint is presented and needs to be confirmed or denied, it is *an<br>
obstacle in the way of doing what you were trying to do*.</blockquote><div><br></div>Agreed. There is, however, a fairly rich literature showing that, under many circumstances, experimental subjects want to please whoever is conducting the experiment. I think that you can therefore come close to real-world conditions:<div>
<br></div><div>E.g., a (non-MT) experiment might run like this: the subject is told that the study is about the user interface for playing some game* via ChatSecure, and that they will play 10 games. As part of the instructions, they are told to verify the fingerprint for each game; if the fingerprint doesn't verify, they won't be able to play. Run a few sessions presenting valid fingerprints; then present an invalid fingerprint (before, e.g., session 5).</div>
<div><br></div>The idea is to make verifying the fingerprint purely instrumental, as it is in real life, and to make an invalid fingerprint an obstacle.<div><div><br></div><div>(Some of the SSL warning studies have played with this -- but the designs get complicated quickly. <a href="http://scholar.google.com/scholar?oe=UTF-8&hl=en&client=safari&um=1&ie=UTF-8&lr=&cites=17250262855322484964">http://scholar.google.com/scholar?oe=UTF-8&hl=en&client=safari&um=1&ie=UTF-8&lr=&cites=17250262855322484964</a>)</div>
<div><br></div><div>*The activity needs to be something modestly engaging.<br><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
That is, if you say "this doesn't match", then you don't get to talk to<br>
the other person, or you don't get to visit the web site, or you don't<br>
get to log into the server.<br>
<br>
I'm not sure how you'd model this incentive properly in an experiment.</blockquote><div><br></div>But you'd agree, surely, that the in-place test is fine? <br><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
--dkg<br>
<br>
[0] OTR is just about the only exception to this obstacle situation, and<br>
in practice, many users of OTR simply skip the fingerprint comparison or<br>
SMP confirmation step entirely (which i think might even be strictly<br>
worse than accepting an unverified fingerprint once and getting<br>
TOFU-like alerts upon peer key change).<br>
</blockquote><div><br></div></div></div>