> > There is maybe some sense that the log provides "proof" that the people
> > verifying their own communication can use to publish their findings of a
> > MITM, but since the log itself is controllable by those parties (they
> > are capable of changing their own keys to whatever they would like in
> > the log), everyone still just has to take their word for it.
>
> For this reason, while I agree there's some "deterrence value" in
> threatening to expose the service if it launches a MITM, I think this
> value is limited (the service has a good chance of trying out a MITM
> and getting away with it, while the user either ignores the notice or
> freaks out to a world that doesn't believe them).
>
> I also think services would be reluctant to advertise this as a
> feature ("If you ever get a key-change notification you didn't expect,
> just freak out and tell everyone we're discredited!"), and might be
> reluctant to adopt this due to the reputation risk.
>
> So how to explain and market this to services, as well as their users,
> seems like an open question.

Worth stepping back and restating the original design motivation. This is not intended to be a better solution for users capable of maintaining a secure client, installing a good crypto app properly, and securely verifying fingerprints with their contacts (the "everybody downloads and uses PGP" world). The goal is to provide *something* for users of a centralized service that offers end-to-end encryption, in such a way that some public confidence can be built up in the service provider, and the provider has a technical reason to refuse surveillance requests.

This thread has raised a number of doubts about how feasible this is, but the main question here should really be: would the centralized service have a significantly enhanced motivation and ability to fight off surveillance requests? I think there would be some increase, but the significance is debatable, and I'm somewhat persuaded by Trevor's "inaccurate loaded gun" thinking that companies wouldn't want to take this risk.

Perhaps there's a deeper impossibility result here: this system would be great if only there were a way to build it such that disputes over certificate issuance were easy for the public to adjudicate. But that would necessarily imply that you had a way to build a publicly agreed-upon map of people to certs, which is the original problem...
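
For concreteness, here is a rough sketch of the kind of client-side check being discussed above (in Python, with entirely hypothetical names like fetch_log_entry and notify_user; nothing here is a real API): the client pins the key it last saw, looks up what the provider's public log currently says, and raises the dreaded key-change notification on a mismatch.

from dataclasses import dataclass

@dataclass
class LogEntry:
    user_id: str
    public_key: bytes  # key the provider's log currently publishes for this user

def fetch_log_entry(log_url: str, user_id: str) -> LogEntry:
    """Hypothetical placeholder for querying the provider's public key log."""
    raise NotImplementedError

def notify_user(message: str) -> None:
    """Hypothetical UI hook; what the user should actually *do* next is the hard part."""
    print(message)

def check_key(log_url: str, user_id: str, pinned_key: bytes) -> bool:
    """Return True if the log still lists the key this client has pinned.

    A mismatch is the "key-change notification you didn't expect": the user
    can publish the discrepancy, but since they control their own log entry,
    outsiders can't tell whether the provider misbehaved or the user simply
    rotated their own key.
    """
    entry = fetch_log_entry(log_url, user_id)
    if entry.public_key != pinned_key:
        notify_user("Unexpected key change for %s in %s" % (user_id, log_url))
        return False
    return True

In this sketch, something like check_key() would be run periodically by the user's own client against their own log entry; everything interesting in this thread happens (or fails to happen) after notify_user() fires.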