[messaging] Thoughts on keyservers

elijah elijah at riseup.net
Mon Aug 18 16:02:36 PDT 2014


On 08/18/2014 12:09 AM, Trevor Perrin wrote:

> A sharper criticism is that changing providers and usernames is
> costly, so Bob's flexibility is limited.  Moreover, Bob's provider is
> on the communication path, thus is one of the most important threats
> for end-to-end crypto to protect against.

I think it is important to be clear what parts are key discovery and
what parts are key validation. Just because Bob's provider is in the
communication path does not necessarily mean that validation rests on
trusting the provider.

On 08/18/2014 12:09 AM, Trevor Perrin wrote:

> Keyservers could also support "transparency"
> similar to CT... 

On 08/18/2014 02:35 PM, Tony Arcieri wrote:

> I think whatever the solution is, it needs a CT-like scheme to help
> detect spoofed IDs.

I would like to lay out an argument against using CT for user keys
(borrowing from arguments made by Bruce).

First, let me say that I am convinced that the general Ben Laurie approach
to this problem space is the correct (least-bad) approach: create a
finite number of autonomous powers and then audit their work. In other
words, a balance of powers. Or, as the Gipper would say, "trust but
verify". Such an approach, for example, would be much better than the
bitcoin block chain. But I think CT is probably not right for user keys
(at least, not for now).

It is very confusing to conflate the CA problem with the user-key
problem. They are both binding problems, but they are sufficiently
distinct:

(1) The web server problem: a present server needs to prove itself to a
present visitor. Addressed by CT, etc.

(2) The user authentication problem: a present website visitor needs to
prove itself to a present website. Addressed by browserid, webid.

(3) The user-key problem: a present sender needs to discover and
validate the key of an absent recipient.

In the case of the web server problem, when the server presents its
certificate to be verified by the client against the log, the server
knows it is sending the correct certificate. Once the client audits the
cert, both sides are agreed.

In the case of user keys, the sender can authenticate the recipient's
public key against the audit log, but ultimately only the recipient
knows for sure which public keys are correct for the recipient.
Unfortunately, the recipient is not around to ask.

The recipient does not need to be around if we make a rule that the
global audit logs (however partitioned) can contain only one initial
public key entry for a given user address, and that any subsequent
entries must be signed by the private key corresponding to that first
entry. The recipient can audit the log for their public key once, and be
assured that no key will ever show up in the log for their address
without their first key's endorsement.

This may be a perfectly reasonable constraint. If you lose your key, you
lose your account.
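The one-key-per-address rule can be sketched in a few lines. This is a
toy model, not real cryptography: the HMAC-based "signature" (in which
the public key doubles as the secret, so the example runs with only the
standard library) and all names are stand-ins for a real signature
scheme.

```python
import hashlib
import hmac

# Toy stand-in for a real signature scheme (NOT cryptography): here the
# "public key" doubles as the HMAC secret purely for illustration.
def toy_sign(priv, msg):
    return hmac.new(priv, msg, hashlib.sha256).hexdigest()

def toy_verify(pub, msg, sig):
    return hmac.compare_digest(toy_sign(pub, msg), sig)

class KeyLog:
    """Append-only log enforcing one root key per address."""
    def __init__(self):
        self.entries = {}  # address -> list of (pubkey, sig)

    def append(self, address, pubkey, sig=None):
        chain = self.entries.setdefault(address, [])
        if not chain:
            chain.append((pubkey, None))  # first entry: accepted as-is
            return True
        root = chain[0][0]  # later entries must be signed by the root key
        if sig and toy_verify(root, pubkey, sig):
            chain.append((pubkey, sig))
            return True
        return False  # reject unsigned or unauthorized updates

log = KeyLog()
log.append(b"bob@example.org", b"key1")
assert log.append(b"bob@example.org", b"key2", toy_sign(b"key1", b"key2"))
assert not log.append(b"bob@example.org", b"evilkey")  # rejected
```

With this rule, a bogus entry for an existing address simply cannot be
appended, so a single audit by the recipient suffices.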

Suppose we don't want this constraint (for usability reasons, or because
partitioning logs is not so easy). The various logs might then contain
any number of entries for the recipient, and the recipient would need to
audit the logs periodically to make sure that no bogus public keys for
their address appear in them.
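A recipient's periodic self-audit amounts to scanning every log for
entries that claim their address but carry a key they do not control. A
minimal sketch (the record layout and names are illustrative
assumptions):

```python
def audit(logs, my_address, my_keys):
    """Return log entries for my_address whose keys I do not control.

    `logs` is a list of iterables of (address, pubkey) records, as might
    be fetched from several log servers.
    """
    bogus = []
    for log in logs:
        for address, pubkey in log:
            if address == my_address and pubkey not in my_keys:
                bogus.append(pubkey)
    return bogus

log_a = [("bob@example.org", "K1"), ("alice@example.org", "KA")]
log_b = [("bob@example.org", "K-evil")]
assert audit([log_a, log_b], "bob@example.org", {"K1"}) == ["K-evil"]
```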

This form of auditing of your own keys is precisely what is required in
a system like nyms or nicknym or DANE. With a CT-like log, the auditing
is better, because you can audit the full history. Without the log, you
can only audit a snapshot in time.

Now we get to my claim: For user keys, CT does not add enough benefit to
justify the complexity. I say this because I think we really really want
to be able to query keys semi-anonymously so that no key directory knows
who is communicating with whom.

If a user key discovery and validation system already enforces
semi-anonymous discovery, then it becomes very hard for a key endorser
to publish bogus keys unless it knows that the sender will attempt to
discover keys within a specific time frame and that the recipient will
not be attempting to audit within that same time frame.
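The window argument can be made concrete with a toy timeline model: an
endorser serving a bogus key only during some window is caught whenever
one of the recipient's audits falls inside that window. All times and
names here are illustrative assumptions, not part of any real protocol:

```python
def attack_detected(window_start, window_end, audit_times):
    """A bogus key visible during [window_start, window_end) is caught
    if any of the recipient's audits lands inside that window."""
    return any(window_start <= t < window_end for t in audit_times)

# An endorser serving a bogus key between t=3 and t=7 is caught if the
# recipient audits at t=5 ...
assert attack_detected(3, 7, audit_times=[0, 5, 10])
# ... but slips through if audits happen only at t=0 and t=10.
assert not attack_detected(3, 7, audit_times=[0, 10])
```

Semi-anonymous discovery strengthens this: the endorser cannot predict
when the sender will query, so it cannot reliably keep the bogus key
outside every audit window.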

One might assert that semi-anonymous discovery is harder to achieve than
CT. That might be the case, although key requests can be latency
tolerant (e.g. nyms). Building proxy support into the protocol, as with
nicknym and nyms, or routing requests via Tor, is probably good enough
for now.

In summary: a global append-only log is beneficial if (a) you enforce
entry uniqueness or (b) you want slightly improved auditing of your own
keys. A bitcoin-like block chain provides (a), the entry-uniqueness
constraint, while the CT-for-user-keys proposals provide (b), marginally
better auditing.

Ultimately, no matter what the system is, only the user's agent knows
what their real public keys are. Any system of non-centralized automatic
key validation for user keys will need fairly heavy key management logic
on some device that the client puts trust in.

The usual caveats apply: like most people, I pretend to know the broad
outlines of how CT logs and the Bitcoin block chain work, but I am
certainly wrong on the details. Many people on this list intimately know
those details, so apologies in advance if some of my assumptions are off.

-elijah

p.s.

key directory: server that clients visit to discover keys, akin to
existing openpgp keyservers.

key endorser: organization responsible for endorsing public keys
(binding key to address). in nicknym and DANE, this is the email
provider. for nyms, it could also be an independent third party. in CT,
it is a CA.
