[messaging] Thoughts on keyservers

Ximin Luo infinity0 at pwned.gg
Tue Aug 19 05:45:47 PDT 2014

On 19/08/14 00:02, elijah wrote:
> On 08/18/2014 12:09 AM, Trevor Perrin wrote:
>> Keyservers could also support "transparency"
>> similar to CT... 
> On 08/18/2014 02:35 PM, Tony Arcieri wrote:
>> I think whatever the solution is, it needs a CT-like scheme to help
>> detect spoofed IDs.
> I would like to lay out an argument against using CT for user keys
> (borrowing from arguments made by Bruce).
> First, let me say that I am convinced that the general Ben Laurie approach
> to this problem space is the correct (least-bad) approach: create a
> finite number of autonomous powers and then audit their work. In other
> words, a balance of powers. Or, as the Gipper would say, "trust but
> verify". Such an approach, for example, would be much better than the
> bitcoin block chain. But I think CT is probably not right for user keys
> (at least, not for now).

I think people keep making the same mistake of treating CT as "providing key validity". It does *not* provide key validity or binding; it provides transparency *to enable systems that do provide the former*. In other words, it is via the auditing and monitoring components that you gain "some confidence probability" that a key is valid.
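To make the distinction concrete: the cryptographic guarantee a log itself gives is an *inclusion proof*, i.e. "this entry is in the tree with this root", and nothing more. Here is a minimal sketch of RFC 6962-style audit-path verification (helper names are my own, not any particular library's API):

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # leaves are prefixed with 0x00 to domain-separate them from nodes
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # interior nodes are prefixed with 0x01
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(entry: bytes, index: int, tree_size: int,
                     path: list, root: bytes) -> bool:
    """Recompute the tree root from a Merkle audit path (RFC 6962/9162)."""
    if index >= tree_size:
        return False
    fn, sn = index, tree_size - 1
    r = leaf_hash(entry)
    for p in path:
        if sn == 0:
            return False
        if fn & 1 or fn == sn:
            r = node_hash(p, r)
            if not fn & 1:
                # skip over levels where our node is the rightmost one
                while fn and not fn & 1:
                    fn >>= 1
                    sn >>= 1
        else:
            r = node_hash(r, p)
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root
```

Note that this check passes for *any* entry the log has accepted, including a fake cert: inclusion says nothing about whether the key in the entry is the right one.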

These components are not specified in great detail, partly because there are so many ways of doing it, but they are an important part of the system. Often we don't even have an agreed precise *definition* of what it means for a key to be valid. We only have rough definitions, and we try to design auditing and monitoring around them; this is brittle, so we should decide on more precise definitions. So earlier, Nadim was right to point out that "auditing" is vague, but to be fair to Nym, I don't see anyone doing a precise job of specifying *how auditing should be done* or of defining *key validity* properly.

For example, for web servers, "key validity" means roughly that the owner of the key is the same entity that controls the DNS name on the cert. For PGP email UIDs, it means the entity can send to/from the email address. For PGP name UIDs, it's a rabbithole. Actually the first two are also rabbitholes, but I'll keep it short here. Overall, however, this is not a cryptographic problem but a logistical one. (This is why I chose the term "key validity": to distinguish it from cryptographic "authenticity", which applies once you have actually validated the key.)

Another thing is failure modes. What happens if auditing/monitoring finds that a CA is acting maliciously? Will they say "sorry, it won't happen again", and will we forgive them and continue using them because they are "too big to fail"? The bad certs are still in the log, so we need a revocation system, and trusting *that* is a whole new problem. Under Nym, multiple authorities have to certify a key, so it is easier to drop a misbehaving CA; the failure mode is better.
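The difference in failure mode can be made concrete with a threshold-certification sketch (this illustrates the general idea, not Nym's actual protocol; all names here are mine):

```python
def key_accepted(certifiers: set, trusted: set, threshold: int) -> bool:
    # A key counts as certified only if at least `threshold` distinct
    # currently-trusted authorities have signed it. Dropping one bad
    # authority from `trusted` does not invalidate keys that still
    # meet the threshold via their other certifications.
    return len(certifiers & trusted) >= threshold
```

With a single CA, dropping the CA invalidates every key it signed; here, only keys whose surviving certifications fall below the threshold need re-certifying.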

> It is very confusing to conflate the CA problem with the user key
> problem. They are certainly both binding problems, but sufficiently
> distinct.
> (1) The web server problem: a present server needs to prove itself to a
> present visitor. Addressed by CT, etc.
> (2) The user authentication problem: a present website visitor needs to
> prove itself to a present website. Addressed by browserid, webid.
> (3) The user-key problem: a present sender needs to discover and
> validate the key of an absent recipient.
> In the case of the web server problem, when the server presents its
> certificate to be verified by the client against the log, the server
> knows it is sending the correct certificate. Once the client audits the
> cert, both sides are agreed.
> In the case of user keys, the sender can authenticate the recipient's
> public key against the audit log, but ultimately only the recipient
> knows for sure which public keys are correct for the recipient.
> Unfortunately, the recipient is not around to ask.

CT does not address (1) any more than it addresses (2) or (3). It is the auditing and monitoring surrounding CT that provide (1), and even there we have known gaps; it is only half-implemented (there is no gossip protocol). As you say, "ultimately only the recipient knows for sure which public keys are correct for the recipient". This is true *for webservers as well*.

If I am in a country that gives me a shadow-mirror internet, with fake DNS, fake HTTPS, and a fake CT gossip protocol, a CT log (even if I somehow validate it correctly) won't help me. The fake certs will be added to the log (because that's what the log does: it collects everything) and I will successfully validate the website. You can actually see this from the "key validity" definition for webservers above: it depends on DNS, which is an *insecure system*. Sure, this might be "good enough" for most use cases, but it is important to be precise *about what security property you actually achieve*.
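The split-view attack above is exactly what gossip is supposed to catch: if two clients compare signed tree heads (STHs) and find different roots for the same tree size, they hold non-repudiable proof that the log equivocated. A minimal sketch (hypothetical types; a real gossip protocol must also verify the log's signature on each STH, and use consistency proofs when the sizes differ):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedTreeHead:
    tree_size: int
    root_hash: bytes  # assume the log's signature on this STH was checked

def proves_split_view(a: SignedTreeHead, b: SignedTreeHead) -> bool:
    # Two correctly signed STHs for the same size but different roots mean
    # the log showed different trees to different clients. STHs of
    # different sizes need a Merkle consistency proof instead, omitted here.
    return a.tree_size == b.tree_size and a.root_hash != b.root_hash
```

This only works if my gossip partners are outside the shadow mirror; an adversary who controls all my peers can feed me consistent fakes.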

The only way I can be *absolutely sure* is to meet the owner physically and verify the key that way. (And of course by "absolutely" I mean "not absolutely, but assuming I trust my own eyes and that the enemy doesn't have technology to make holograms".)



