<div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Mar 26, 2014 at 5:24 PM, Moxie Marlinspike <span dir="ltr"><<a href="mailto:moxie@thoughtcrime.org" target="_blank">moxie@thoughtcrime.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><br>
> On 03/25/2014 08:24 PM, Joseph Bonneau wrote:
>> * The service runs a Certificate Transparency-style log for every
>> certificate it issues and a similar transparency log for revocation
>> (Revocation Transparency or Mark Ryan's Enhanced Certificate
>> Transparency). Users query these structures to get proof that the certs
>> they are using are genuine and not revoked.
>> * Outside auditors scan the log for correctness and provide a web
>> interface to check which certs were issued for your username and when.
>
> It seems like you're trading a user's ability to deal with a
> key-conflict in-band for a user's ability to audit a key-conflict at
> some periodic interval.

I hope not, because I fully expect most users will do neither. My goal was to provide *some* protection for users who don't do anything, thanks to the few users who actually do audit the log against their own history. Security comes from the fact that the authorities might be able to MITM some users who aren't checking, but eventually they'll slip up and attack a user who does check and can then prove it. Hopefully this is enough to make the attacker wary. This has been referred to as the "malicious but cautious" adversary model, and I like Tom Ritter's explanation that 98% of users trust and 2% of users verify.
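
For concreteness, the audit step I'm imagining is tiny. A minimal sketch in Python, assuming a hypothetical fetch_signed_entries() API that returns the log's signed entries for a username; none of these names come from an actual implementation:

    def audit_own_history(username, local_history, fetch_signed_entries):
        """Compare the log's view of username's certs to the user's records.

        fetch_signed_entries(username) is assumed (hypothetically) to return
        a list of (cert_fingerprint, log_signature) pairs, each signed by
        the log.
        """
        evidence = []
        for fingerprint, signature in fetch_signed_entries(username):
            if fingerprint not in local_history:
                # A cert the user never requested: the signed entry itself
                # is transferable proof that something went wrong.
                evidence.append((fingerprint, signature))
        return evidence

The point is that the returned evidence carries the log's own signature, so the 2% who run this check can convince everyone else, not just themselves.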

That's the goal here. With individual fingerprint-checking, the verifying users are really only protecting themselves, since proving that a bad fingerprint is an actual attack is hard.
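
And for completeness, the per-user inclusion check in the quoted design is just a Merkle audit path walk. A rough sketch using RFC 6962-style hashing; the (sibling, is_left) path encoding is a simplification, since real CT derives the direction at each level from the leaf index and tree size:

    import hashlib

    def hash_leaf(data):
        # RFC 6962 leaf hash: SHA-256(0x00 || leaf)
        return hashlib.sha256(b"\x00" + data).digest()

    def hash_children(left, right):
        # RFC 6962 interior hash: SHA-256(0x01 || left || right)
        return hashlib.sha256(b"\x01" + left + right).digest()

    def verify_inclusion(leaf, path, root):
        # Walk the audit path from the leaf up to the claimed root hash.
        node = hash_leaf(leaf)
        for sibling, is_left in path:
            node = hash_children(sibling, node) if is_left \
                else hash_children(node, sibling)
        return node == root

If that check passes against a signed tree head, the user knows their cert is in the same log that everyone else is auditing.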