[messaging] Identity keys and fingerprints

Trevor Perrin trevp at trevp.net
Tue Jan 6 04:18:59 PST 2015

On Mon, Jan 5, 2015 at 10:03 PM, David Leon Gil <coruus at gmail.com> wrote:
> On Mon, Jan 5, 2015 at 4:18 PM, Trevor Perrin <trevp at trevp.net> wrote:
>> The most practical approaches are probably either synchronizing the
>> identity key between devices, or using it to sign device keys.
> There is no need for an "identity key" to sign anything except an
> initial device key. Just chase cross-signatures back to a
> distinguished (by some flag) identity key that is stored offline, and
> use a hash of that as the fingerprint:

Sure.  This is the "every user is a CA for their own devices" approach.

PGP's subkeys are a simple form of this.  Phil Hallam-Baker has
advocated this [1].  TACK is similar but for websites [2].  I
advocated something similar in my youth [3].

Let's view this as: A user has a tree of signed keys rooted in an
offline signing key, and some keys are labelled as "devices".

To send a message to Bob, Alice would fetch all of Bob's device keys,
and also fetch and verify the "certificates" (signatures and pubkeys)
linking the device keys to the root.  Then Alice would send her
message to all the device keys.
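As a minimal sketch of what Alice's check would involve (all names, the cert layout, and the signature stand-in below are my assumptions for illustration, not a format from this thread):

```python
import hashlib
from typing import Callable, Dict, Tuple

# A "certificate" here is just (signer_pubkey, subject_pubkey, signature).
Cert = Tuple[bytes, bytes, bytes]

def fingerprint(root_pub: bytes) -> str:
    """Bob's fingerprint is a hash of his offline root identity key alone."""
    return hashlib.sha256(root_pub).hexdigest()

def chain_to_root(device_pub: bytes,
                  certs_by_subject: Dict[bytes, Cert],
                  verify_sig: Callable[[bytes, bytes, bytes], bool],
                  root_pub: bytes,
                  max_depth: int = 8) -> bool:
    """Chase signatures from a device key back to the distinguished root.

    verify_sig stands in for a real signature scheme (e.g. Ed25519);
    it is injected so this sketch stays self-contained.
    """
    key = device_pub
    for _ in range(max_depth):
        if key == root_pub:
            return True                      # reached the trusted root
        cert = certs_by_subject.get(key)
        if cert is None:
            return False                     # no cert links this key upward
        signer, subject, sig = cert
        if subject != key or not verify_sig(signer, subject, sig):
            return False                     # broken or forged link
        key = signer                         # walk one level up the tree
    return False                             # chain too long / cyclic
```

Alice would run `chain_to_root` once per device key before encrypting to it, which is exactly the extra fetch-and-verify cost discussed below.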

I've been advocating the simpler alternative of sharing one private
key between devices.  To compare:

Storing extra signatures and device keys for each user, and having
Alice fetch and verify them, adds communication and computation.  So
it needs to add benefits to outweigh these costs.

By default, the device keys have the same privileges as a single
identity key:  Every signed device key can read all your messages and
send messages as you.  Potential benefits would occur if:
 (a) You can constrain the device keys somehow with certificates
(expiration time?  limited privileges?)
 (b) You can revoke the device keys.
 (c) Using signatures and device keys is more compatible with secure
HW than exporting/importing a private key.
 (d) Attributing compromises to a specific device key is useful.

Regarding (a), I don't think frequent issuance of short-lived device
certs is practical, and time sync between users isn't reliable.  I'm
not sure what other privilege or policy constraints would be both
useful and manageable by users.

I'm also skeptical about revocation (b).  The way people imagine this
is that on suspecting a device compromise, a user would pull their
offline key out of storage and execute a revocation GUI to publish
some signed revocation statement.
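Such a statement might look like the following sketch; the field names and JSON encoding are my assumptions, and `sign` is a hash-based stand-in for a real signature with the offline key (e.g. Ed25519):

```python
import hashlib
import json

def sign(root_priv: bytes, payload: bytes) -> bytes:
    # Stand-in for a real signature scheme; illustration only.
    return hashlib.sha256(root_priv + payload).digest()

def make_revocation(root_priv: bytes, device_pub_hash: str, ts: int) -> dict:
    """Build a signed statement revoking one device key (hypothetical format)."""
    payload = json.dumps(
        {"type": "revoke-device", "device": device_pub_hash, "time": ts},
        sort_keys=True,
    ).encode()
    return {"payload": payload.decode(), "sig": sign(root_priv, payload).hex()}
```

The hard part isn't producing this blob; it's the offline-key handling around it, and getting the statement in front of correspondents, as discussed below.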

That's a complex task for a sysadmin, much less a regular user.  I
suspect few users would be capable of handling this.

If you didn't have revocation, you'd handle the suspected compromise
by simply generating a new identity key, re-syncing it across devices,
and telling people about it.  This re-uses the existing mechanisms for
adding devices, so seems much simpler in terms of design,
implementation, and usability.

In addition, revocation can actually worsen security:
 * Revocation provides a misleading sense of security *unless* you can
guarantee the revocation statement is seen by your correspondents.  So
revocation seems dangerous unless it can be integrated with some sort
of transparency log.  That's a whole other design question, though
perhaps a solvable one.
 * If an attacker compromises the key(s) that can issue revocation
statements, he can revoke *your* devices.  This becomes particularly
significant if you combine revocation signatures with some sort of
rollover capability, meaning the attacker could use a compromised key
to revoke all your keys and take over your identity.

(c) may have a small benefit with some hardware.  But I would assume
most newer hardware would support code execution or wrapped key import
/ export, and I'm generally interested in using new curves like 25519
that won't work with old HW anyway.

(d) has a small benefit. It's nice that if messages are forged as you,
you could forensically trace them to a particular compromised device.

Adding this up:  For an email / text-messaging case, I'm doubtful that
(a) or (b) is worth its complexity.

I'm also not convinced that (c) or (d) by themselves justify the
extra computation and communication added by device keys and signatures.
But I admit that's a less clear-cut tradeoff analysis, and this is a
worthwhile and complex topic.


[1] http://prismproof.org/how.html ("Personal Privacy Environment")
[2] http://tack.io/
[3] http://trevp.net/cryptoID/
