[messaging] Another Take At Public Key Distribution
Trevor Perrin
trevp at trevp.net
Fri Jul 24 00:07:24 PDT 2015
On Wed, Jul 22, 2015 at 8:31 PM, Andres Erbsen <andreser at mit.edu> wrote:
>
> For about four years I've been experimenting with different ways of
> making rigorous end-to-end security and public key cryptography
> manageable for casual users (in particular, users who do not know and
> should not be forced to learn the meanings of the words in the last
> sentence). This post describes the overall design and security
> properties of a system whose initial version I drafted in September
> 2013, and which I am now quite satisfied with.
Thanks Andres,
Nice writeup! Let me summarize to see if I understand:
I think you're starting with a transparency log in between CT and CONIKS:
* CT publishes all certificates in the clear
* CONIKS doesn't publish the name->pubkey map for privacy reasons,
but publishes a hash-tree root each epoch (e.g. each day). Clients
can query the keyserver for inclusion proofs for a particular
name->pubkey entry in a particular epoch (roughly the check sketched
below).
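
To make sure I follow, this is roughly the client-side check on such
an inclusion proof (a Python sketch with hypothetical names, not the
actual CONIKS encoding):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify_inclusion(leaf: bytes, path, root: bytes) -> bool:
        # path is a list of (sibling_hash, sibling_is_left) pairs from
        # the leaf up to the root; the prefixes domain-separate leaf
        # hashes from interior-node hashes.
        node = h(b"\x00" + leaf)
        for sibling, sibling_is_left in path:
            if sibling_is_left:
                node = h(b"\x01" + sibling + node)
            else:
                node = h(b"\x01" + node + sibling)
        return node == root
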
You'd publish the VRF(name)->pubkey map, using a CONIKS-like VRF
(verifiable random function) so the published map doesn't reveal
usernames directly.
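
(For anyone following along: a VRF is a keyed function whose outputs
the server can prove correct without revealing the key. As a
placeholder only, you can think of the index derivation as something
like the HMAC below - except that HMAC outputs aren't publicly
verifiable, which is exactly what the VRF adds:)

    import hashlib
    import hmac

    def map_index(server_vrf_key: bytes, name: str) -> bytes:
        # Stand-in for VRF(name): deterministic, and unpredictable
        # without the server's key, so the published map doesn't leak
        # usernames directly. A real VRF additionally yields a proof
        # that this output is correct for the given name.
        return hmac.new(server_vrf_key, name.encode(),
                        hashlib.sha256).digest()
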
I think one of your goals is to efficiently handle the case where a
client audits their own name->pubkey entry at time T0, then goes
offline until T10. With a CONIKS-like system the client would have to
audit each past epoch by checking inclusion proofs for T1, T2, ...,
T10.
With your system you assume 3rd-party verifiers check that each entry
only changes in allowed ways. Each entry could contain a hash of all
previous changes to that username, and the verifiers would check that
these hashes are correct. Then at time T10, the client only has to
check an inclusion proof for T10 and that the hash value for her entry
at T10 is the same as it was at T0.
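
Concretely, I'm imagining something like this (my sketch, not
necessarily your construction):

    import hashlib

    def extend_chain(prev_chain_hash: bytes, update: bytes) -> bytes:
        # Fold each change to an entry into a running hash. Verifiers
        # recompute this every epoch, so equal chain values at two
        # epochs imply no intervening changes.
        return hashlib.sha256(prev_chain_hash + update).digest()

    # Client returning at T10, having recorded chain_at_T0 earlier:
    #     assert chain_in_T10_entry == chain_at_T0
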
I think this makes feasible your next goal: reducing the epoch length
to the point that registering or changing a public key is reflected in
the next root hash in "real time", i.e. within a few seconds.
You're also arguing that verifiers could operate in real time, which
seems plausible: if a large keyserver has 1 million key changes per
day, each verifier only has to handle ~10 updates/sec to the Merkle
tree, and send a signature back to the keyserver every few seconds.
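
Back-of-envelope:

    updates_per_day = 1_000_000
    print(updates_per_day / 86_400)   # ~11.6 updates/sec
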
So every key lookup from a client would receive an inclusion proof and
verifier signatures (the client might also want a consistency proof
between lookups, i.e. proof that different root hashes are chained
together, but that's not hard).
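
The chaining check can be as simple as CONIKS-style signed tree roots
where each epoch's head commits to the previous one, e.g. (sketch):

    import hashlib

    def root_commitment(prev_commitment: bytes, tree_root: bytes,
                        epoch: int) -> bytes:
        # Each published head commits to the previous head, so a
        # client holding heads from two lookups can walk the chain
        # between them and detect a fork.
        return hashlib.sha256(prev_commitment + tree_root +
                              epoch.to_bytes(8, "big")).digest()
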
Having epochs every few seconds seems attractive because you can allow
real-time key changes without needing what CT calls "SCTs" (Signed
Certificate Timestamps) or what CONIKS calls "temporary bindings" -
signed promises to include an entry in
the next Merkle tree. Relying on (often unreliable) client clocks to
schedule a later security check has significant false-positive /
false-negative risk.
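
(For contrast, a temporary binding is roughly the structure below -
the fields are my paraphrase of the essentials - and the client has to
remember to re-check inclusion once epoch_due passes, which is where
unreliable clocks bite:)

    from dataclasses import dataclass

    @dataclass
    class TemporaryBinding:
        name: str
        pubkey: bytes
        epoch_due: int     # epoch by which inclusion is promised
        server_sig: bytes  # keyserver's signature over the fields above
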
Of course, this comes with costs:
* You're publishing more of the user database than CONIKS does, so
perhaps more information about the keyserver's userbase is revealed.
* You're assuming verifiers are connected to the keyserver and can
respond quickly and with high availability. There are big questions
about who would run them, and whether m-of-n schemes could provide
distributed trust and sufficient reliability (see the sketch below).
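
The m-of-n acceptance rule on the client is simple enough; assuming
Ed25519 verifier keys just for concreteness:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PublicKey,
    )

    def quorum_ok(root: bytes, sigs, trusted_keys, m: int) -> bool:
        # sigs[i] is verifier i's signature over root, or None if that
        # verifier didn't respond; accept if at least m check out.
        valid = 0
        for key, sig in zip(trusted_keys, sigs):
            if sig is None:
                continue
            try:
                key.verify(sig, root)
                valid += 1
            except InvalidSignature:
                continue
        return valid >= m

The hard part is the operational one (who runs the n verifiers and how
many stay reachable), not the client-side logic.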
Anyways, this is definitely an interesting part of the "transparency
log" design space - let me know what I've misconstrued or missed!
Trevor