<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">> Hopefully this is<br>
> enough to make the attacker wary. This has been referred to as the<br>
> "malicious but cautious" adversary model, and I like Tom Ritter's<br>
> explanation that 98% of users trust and 2% of users verify.<br>
<br>
</div>If we're going for 2% verify, does this accomplish more than making the<br>
visibility of key conflicts an option that defaults to off, but which 2%<br>
of users will flip on?</blockquote><div><br></div><div>An important point I've forgotten to stress is that there's no reason building this audit infrastructure prevents normal key fingerprint verification. The centralized service can show key fingerprints and let users manually mark them as out-of-band verified, so the 2%-style users can check and get similar security to what they'd get in a decentralized system.</div>
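
As a rough sketch of what that could look like client-side (the fingerprint format, contact store, and function names below are my own invention for illustration, not a proposal for how the actual service would do it), in Python:

    import hashlib

    def fingerprint(public_key: bytes) -> str:
        """Human-comparable fingerprint: SHA-256 of the key bytes, as grouped hex."""
        digest = hashlib.sha256(public_key).hexdigest()
        return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

    # Local contact store; a real client would persist this.
    contacts = {}

    def mark_verified(name: str, key_from_service: bytes, fp_read_aloud: str) -> bool:
        """Mark a contact verified only if the fingerprint obtained out of band
        matches the key the centralized service is currently serving."""
        matches = fingerprint(key_from_service) == fp_read_aloud.strip().lower()
        contacts[name] = {"key": key_from_service, "verified": matches}
        return matches

The 2% then compare the displayed fingerprint over a phone call or in person, exactly as they would in a decentralized system.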

I'm certainly not arguing against building those mechanisms in, just that we can aim for a weaker property for everybody who will ignore them anyway: trusting that the centralized service has not issued fraudulent certificates, because misissuance would be visible in the audit logs. This is not as strong as we'd like, because even if paranoid users find a fraudulent certificate issued in their name, it's difficult for them to prove that to anybody else (fixing this would be nice). But hopefully it's a safety valve... if enough reputable users claim fraudulent certs have been issued, this might create a problem for the centralized service.
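
To make "visible in the audit logs" slightly more concrete, here's a toy self-audit loop (the record format and field names are invented for illustration; a real design would also need proofs that the log is append-only and that everyone sees the same log):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class IssuanceRecord:
        identity: str      # e.g. the address the service certifies keys for
        public_key: bytes  # key the service bound to that identity

    def audit_own_entries(log, my_identity, my_keys):
        """Return every log entry that certifies a key for my_identity
        which I never registered -- candidate evidence of misissuance."""
        return [rec for rec in log
                if rec.identity == my_identity and rec.public_key not in my_keys]

    # A user (or their client) reruns this over each newly published batch.
    log = [IssuanceRecord("bob@example.com", b"bob-key-1"),
           IssuanceRecord("bob@example.com", b"not-bobs-key")]
    print(audit_own_entries(log, "bob@example.com", {b"bob-key-1"}))
    # -> [IssuanceRecord(identity='bob@example.com', public_key=b'not-bobs-key')]

The catch mentioned above remains: what this returns is evidence to me, but proving to a third party that the service actually published that record is a separate and harder problem.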

I agree the security isn't what we'd like for users who aren't confirming their contacts' keys themselves. My goal in putting this up for discussion was to ask whether there's any tangible benefit. I think there is: widespread certificate misissuance is not going to go undetected, and an attacker who narrows its targeting to avoid detection likely has the capability to use TAO against those individuals anyway and won't bother attacking this system. (John-Mark mentioned TAO as a reason this system can be beaten, but I think basically any crypto will fall in that scenario, which suggests we should think of crypto more as a defense against highly scalable attacks.)

If the community doesn't think such an auditing system buys much, then perhaps it's a waste of engineering effort better spent elsewhere, since the cost of running it would not be zero.