[messaging] Transparency for E2E encrypted messaging at a centralized service

Trevor Perrin trevp at trevp.net
Fri Mar 28 21:15:41 PDT 2014

On Fri, Mar 28, 2014 at 2:34 AM, Ben Laurie <ben at links.org> wrote:
> On 28 March 2014 05:42, Trevor Perrin <trevp at trevp.net> wrote:
>> How it might work
>> ------------------
>> If I understand Ben [1,2], the service could maintain a "Sparse Merkle
>> Tree" which implements an "authenticated dictionary" with these
>> properties:
>>  - Given the root hash of the tree, membership of a particular
>> (username, public key) or (username, <none>) pair can be proven by a
>> fairly small "merkle path" of hash values showing the connection
>> between a leaf element and the root.
>>  - The tree can be efficiently updated by publishing deltas [2].
>> So the service would publish the Sparse Merkle Tree, using deltas so
>> changes to the tree could be efficiently processed by 3rd-party
>> "monitors" (imagine the EFF).
>> For Alice to verify Bob's public key...
>>  - Alice would retrieve the tree's root hash from a trusted 3rd-party
>> monitor.  Alice would query the service for Bob's (username, public
>> key) and a merkle path showing membership in the tree.  By checking
>> membership, Alice confirms that the public key has been published.
>>  - Bob would register with the 3rd-party monitor to receive
>> notifications when his public key changes.
>> With this, the service can't change Bob's public key without Bob
>> hearing about it!
> Nice, but here's one more piece - you need to avoid the log cheating
> by, say, reinstating Bob's old, compromised key temporarily, which
> would be invisible to anyone except the victim if you use the sparse
> tree alone.
> So, you also need to keep some kind of change timeline - a traditional
> Merkle Tree of the heads of the sparse tree would do just fine for
> this. Alice would then need to also see a proof that the current hash
> was consistent with her previous view of the timeline tree.
> Then flip-flops as described become visible to Bob, too.

I was assuming Alice would retrieve the latest root hash from a
trusted 3rd-party monitor, and a merkle path from the service.
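To make the checking step concrete, here's a minimal sketch of
verifying a membership ("merkle path") proof against a
monitor-supplied root hash.  The leaf encoding and sibling ordering
here are placeholders, not any service's actual format:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_membership(root: bytes, leaf: bytes, path) -> bool:
    """path is a list of (sibling_hash, sibling_is_left) pairs,
    ordered from the leaf up to the root."""
    node = h(leaf)
    for sibling, sibling_is_left in path:
        # Recompute the parent from this node and its sibling.
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

Alice would run this with the root from the monitor and the leaf plus
path from the service; if it passes, the (username, public key) pair
she got is the one published under that root.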

If Alice wants to lessen her trust in the monitor, she could retrieve
the root hash from several monitors and check that it's the same.
What's the value in having Alice check "consistency proofs" showing
how a root hash is related to the previous one?
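(To be concrete about what such a check would establish: in the naive
version, Alice keeps every timeline head she has seen and checks that
the new sequence extends her old one as a prefix - a Merkle
consistency proof, as in CT, shows the same thing with O(log n)
hashes instead of the whole list.  Toy sketch of the naive version:)

```python
def is_extension(old_heads: list, new_heads: list) -> bool:
    """True iff new_heads extends old_heads without rewriting
    history, i.e. old_heads is a prefix of new_heads."""
    return (len(new_heads) >= len(old_heads)
            and new_heads[:len(old_heads)] == old_heads)
```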

One piece I missed was the timing issue - Bob's new key won't get
published immediately, so either:
 - Bob waits X minutes for his new key to become published before
using it, or...
 - Bob uses it right away, but if he doesn't present a merkle path to
Alice, then Alice waits X minutes and checks the monitor for one

>> Obstacles
>> ----------
>>  - This only has value if Alice and Bob are in the tiny fraction of
>> users who will run 3rd-party clients and set up notifications with a
>> monitoring service.
> So build that into the mail clients.

My assumption was that most people use the app supplied by the
social-media service itself.  However, in the case of a phone app,
the service-provided code can be audited and doesn't get updated
that frequently, so I was probably wrong to say this only has value
with 3rd-party clients.

>>  - Even if Bob observes the service being malicious, he has no way to
>> prove this - it will just be his word against the service.  So the
>> "herd immunity" value of exposing the service's perfidy seems low.
>> (In contrast to Certificate Transparency for HTTPS, which is likely to
>> expose bad/hacked CAs who obviously shouldn't be issuing the revealed
>> certificates).
> Hmm. Presumably Bob would be able to show a new key, signed by many of
> his correspondents, that did not correspond to the published key. That
> seems stronger than just Bob's word.

So Bob and his friends call the NY Times and explain that a published
key for Bob yesterday wasn't Bob's real key, and they've signed Bob's
real key to prove it.

They're sure this is a "MITM" and not just a glitch in the 3rd-party
app Bob's running, malware/hackers targeting Bob, or Bob forgetting
about the other app on his tablet.  They promise they're telling the
truth and not just trying to get attention.

Maybe it's a MITM, maybe not.  How would the NYT know?

Or rather - will SocialNetworkCo want to deploy a system that (A)
advertises they could MITM their users, and (B) gives their most
paranoid users the ammo to claim they've done so, without proof one
way or another?

> Ideally you'd also want signed revocations of keys, so non-revocation
> could be demonstrated.
>>  - You're asking the service to publish large numbers of usernames,
>> which has privacy / business implications.
> I guess it is in the nature of public monitoring that you have to
> publish what you're monitoring. Perhaps the price you pay for not
> having to trust any third parties?

Well, it's a price the service pays so its users get that benefit.

>>  - You're asking the service to set up a system designed to detect its
>> "cheating".  Maybe services want/need to cheat occasionally, to handle
>> govt requests?
> The way the law works in most countries means that the govt would
> first have to make it illegal to use the log. If that's legal, then
> their requests have to comply with the laws of physics.

Sure, if the service chooses to deploy it.

>>  Or maybe they don't want the "shitstorm of false
>> accusations" this could trigger from crazy users, as Moxie puts it.
> Have you seen even one accusation against the PGP keyservers?

I've heard of several people who've had false keys published.

> BTW, I've been on the receiving end of the "shitstorm of false
> accusations" that go with supplying very widely used s/w (i.e. Apache
> httpd and OpenSSL) for many years. It's more a slow and somewhat
> amusing trickle. :-)

It's different here - a lot of the value being argued for is that this
is a loaded gun pointed at the service that will deter them from
cheating.  But it's not a very accurate gun, and this may deter them
from deploying it.

> e.g. http://www.links.org/?p=14.
>>  - Social-network messaging services currently do antispam and malware
>> scanning to keep users safe, so may not want e2e encryption at all.
> How about we stop using systems that are so stupidly easy to pop (yes,
> I'm looking at you, Windows)?

Right now I think a lot of the people running messaging services feel
like antispam / antivirus / malicious link filtering is an important
part of keeping their users safe.

>> It might be nice if the infrastructure and tools for this existed, so
>> it could be offered to services in a "ready-to-go" form, perhaps some
>> might be tempted to try...
> The open source CT code
> (https://code.google.com/p/certificate-transparency/) is already
> partially "pluggable", and we'll be making it more so over time.

Are there sparse Merkle trees anywhere?
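(For reference, the trick that makes a tree over a 2^256 keyspace
tractable is that every empty subtree at a given height hashes to the
same value, which can be precomputed once.  A toy illustration of
computing the root that way - not any particular implementation:)

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

DEPTH = 256

# empty[k] = hash of an entirely empty subtree of height k.
empty = [h(b"")]
for _ in range(DEPTH):
    empty.append(h(empty[-1] + empty[-1]))

def root_of(entries: dict) -> bytes:
    """entries maps distinct 256-bit integer keys to leaf values
    (bytes).  Only populated paths are ever hashed; empty subtrees
    cost a single table lookup."""
    def go(keys, height):
        if not keys:
            return empty[height]
        if height == 0:
            (k,) = keys
            return h(entries[k])
        bit = 1 << (height - 1)
        left = [k for k in keys if not k & bit]
        right = [k for k in keys if k & bit]
        return h(go(left, height - 1) + go(right, height - 1))
    return go(sorted(entries), DEPTH)
```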

