[messaging] Transparency for E2E encrypted messaging at a centralized service
ben at links.org
Fri Mar 28 07:10:27 PDT 2014
On 28 March 2014 12:09, Thomas Ristenpart <rist at cs.wisc.edu> wrote:
> I haven't been following through all the details of this exchange, so
> sorry if this was brought up already, but you might want to check out
> this paper:
Hmm, hadn't seen that before, but the claim "Roughly speaking, the
size of proofs grows from 1KB or 2KB to tens or hundreds of GB" made
about my own revocation transparency proposal is entirely false.
> I think some of the goals are quite aligned with what's under discussion
> (which is quite interesting). Cheers,
> On 3/28/14, 5:34 AM, Ben Laurie wrote:
>> On 28 March 2014 05:42, Trevor Perrin <trevp at trevp.net> wrote:
>>> On Tue, Mar 25, 2014 at 8:24 PM, Joseph Bonneau <jbonneau at gmail.com> wrote:
>>>> Here's an idea that arose between Daniel Kahn Gillmor and me (mistakes all my
>>>> own). Imagine you're a centralized web service and you want to offer E2E
>>>> encrypted messaging between your users.
>>>> *The service runs a Certificate Transparency-style log for every certificate
>>>> it issues
>>> Interesting, let me try to refine the proposal (based on Ben's comments),
>>> and list some obstacles:
>>> How it might work
>>> If I understand Ben [1,2], the service could maintain a "Sparse Merkle
>>> Tree" which implements an "authenticated dictionary" with these properties:
>>> - Given the root hash of the tree, membership of a particular
>>> (username, public key) or (username, <none>) pair can be proven by a
>>> fairly small "merkle path" of hash values showing the connection
>>> between a leaf element and the root.
>>> - The tree can be efficiently updated by publishing deltas.
>>> So the service would publish the Sparse Merkle Tree, using deltas so
>>> changes to the tree could be efficiently processed by 3rd-party
>>> "monitors" (imagine the EFF).
>>> For Alice to verify Bob's public key...
>>> - Alice would retrieve the tree's root hash from a trusted 3rd-party
>>> monitor. Alice would query the service for Bob's (username, public
>>> key) and a merkle path showing membership in the tree. By checking
>>> membership, Alice confirms that the public key has been published.
>>> - Bob would register with the 3rd-party monitor to receive
>>> notifications when his public key changes.
>>> With this, the service can't change Bob's public key without Bob
>>> hearing about it!
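The sparse-tree scheme Trevor describes can be made concrete. Below is a minimal Python sketch under assumed encodings (SHA-256 throughout, leaf position = H(username), leaf value = H(username:pubkey)); real designs differ in detail, but it shows the mechanism, and why a membership proof is a few KB of hashes rather than the "tens or hundreds of GB" claimed in that paper:

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

DEPTH = 256  # one tree level per bit of H(username)

# Default hashes for empty subtrees: EMPTY[i] is the root of an empty
# subtree of height i. This is the "sparse" trick: nearly every subtree
# is empty, so nearly every sibling on a proof path is one of these 256
# precomputed values.
EMPTY = [h(b"")]
for _ in range(DEPTH):
    EMPTY.append(h(EMPTY[-1] + EMPTY[-1]))

def index(username):
    """Leaf position: the 256-bit hash of the username."""
    return int.from_bytes(h(username.encode()), "big")

def leaf_hash(username, pubkey):
    """A (username, <none>) pair hashes to the default empty leaf."""
    return EMPTY[0] if pubkey is None else h((username + ":" + pubkey).encode())

def fold(idx, leaf, path):
    """Recompute the root from a leaf and its 256 sibling hashes, using
    the bits of idx to decide left/right at each level."""
    cur = leaf
    for level, sibling in enumerate(path):
        if (idx >> level) & 1:
            cur = h(sibling + cur)   # our node is the right child
        else:
            cur = h(cur + sibling)   # our node is the left child
    return cur

def single_entry_tree(username, pubkey):
    """Root and membership path for a tree holding exactly one binding;
    with one occupied leaf, every sibling is an empty default subtree."""
    path = EMPTY[:DEPTH]
    return fold(index(username), leaf_hash(username, pubkey), path), path

def verify(root, username, pubkey, path):
    """Alice's check: does the claimed (username, pubkey) fold to root?"""
    return fold(index(username), leaf_hash(username, pubkey), path) == root
```

A proof here is 256 hashes (8 KB at most), and in practice runs of default siblings compress further.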
>> Nice, but here's one more piece - you need to avoid the log cheating
>> by, say, reinstating Bob's old, compromised key temporarily, which
>> would be invisible to anyone except the victim if you use the sparse
>> tree alone.
>> So, you also need to keep some kind of change timeline - a traditional
>> Merkle Tree of the heads of the sparse tree would do just fine for
>> this. Alice would then need to also see a proof that the current hash
>> was consistent with her previous view of the timeline tree.
>> Then flip-flops as described become visible to Bob, too.
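The timeline tree can be sketched as an RFC 6962-style Merkle tree over successive sparse-tree heads. This is a deliberately naive sketch in which the verifier keeps the full head list, so "consistency" is a prefix check plus root recomputation; CT proper ships compact O(log n) consistency proofs instead. Names are illustrative:

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def mth(leaves):
    """RFC 6962-style Merkle tree head over a list of leaf hashes."""
    if not leaves:
        return h(b"")
    if len(leaves) == 1:
        return h(b"\x00" + leaves[0])
    k = 1                       # largest power of two < len(leaves)
    while k * 2 < len(leaves):
        k *= 2
    return h(b"\x01" + mth(leaves[:k]) + mth(leaves[k:]))

class TimelineTree:
    """Append-only log of sparse-tree heads."""
    def __init__(self):
        self.heads = []

    def append(self, sparse_root):
        """Record a new sparse-tree head; return the new timeline root."""
        self.heads.append(h(sparse_root))
        return mth(self.heads)

def consistent(old_heads, new_heads, old_root, new_root):
    """Alice's check: the new timeline must extend (never rewrite) the
    old one, so temporarily reinstating an old key head is visible."""
    return (new_heads[:len(old_heads)] == old_heads
            and mth(old_heads) == old_root
            and mth(new_heads) == new_root)
```

A flip-flop (rewriting an already-published head) fails the prefix check, so it can no longer be shown only to the victim.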
>>> - This only has value if Alice and Bob are in the tiny fraction of
>>> users who will run 3rd-party clients and set up notifications with a
>>> monitoring service.
>> So build that into the mail clients.
>>> - Even if Bob observes the service being malicious, he has no way to
>>> prove this - it will just be his word against the service. So the
>>> "herd immunity" value of exposing the service's perfidy seems low.
>>> (In contrast to Certificate Transparency for HTTPS, which is likely to
>>> expose bad/hacked CAs who obviously shouldn't be issuing the revealed
>>> certificates.)
>> Hmm. Presumably Bob would be able to show a new key, signed by many of
>> his correspondents, that did not correspond to the published key. That
>> seems stronger than just Bob's word.
>> Ideally you'd also want signed revocations of keys, so non-revocation
>> could be demonstrated.
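Ben suggests signed revocations. As a dependency-free illustration only (the Python stdlib has no signature scheme), here is a hash-commitment variant of the same idea: at registration Bob also logs a commitment h(r) to a secret r; publishing r later is a revocation nobody else could have produced, and the absence of any logged preimage demonstrates non-revocation. The scheme and all names here are illustrative substitutes for real signatures, not part of the proposal above:

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def register_key(revocation_secret):
    """Bob keeps the secret; the log stores only its hash alongside his
    public key at registration time."""
    return h(revocation_secret)

def is_valid_revocation(logged_commitment, claimed_secret):
    """Anyone can check a published revocation against the logged
    commitment, so a revocation cannot be forged without the secret."""
    return h(claimed_secret) == logged_commitment
```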
>>> - You're asking the service to publish large numbers of usernames,
>>> which has privacy / business implications.
>> I guess it is in the nature of public monitoring that you have to
>> publish what you're monitoring. Perhaps the price you pay for not
>> having to trust any third parties?
>>> - You're asking the service to set up a system designed to detect its
>>> "cheating". Maybe services want/need to cheat occasionally, to handle
>>> govt requests?
>> The way the law works in most countries means that the govt would
>> first have to make it illegal to use the log. If that's legal, then
>> their requests have to comply with the laws of physics.
>>> Or maybe they don't want the "shitstorm of false
>>> accusations" this could trigger from crazy users, as Moxie puts it.
>> Have you seen even one accusation against the PGP keyservers?
>> BTW, I've been on the receiving end of the "shitstorm of false
>> accusations" that go with supplying very widely used s/w (i.e. Apache
>> httpd and OpenSSL) for many years. It's more of a slow and somewhat
>> amusing trickle. :-)
>> e.g. http://www.links.org/?p=14.
>>> - Social-network messaging services currently do antispam and malware
>>> scanning to keep users safe, so may not want e2e encryption at all.
>> How about we stop using systems that are so stupidly easy to pop (yes,
>> I'm looking at you, Windows)?
>>> - A sophisticated attacker might go after the monitoring service, and
>>> figure out how to suppress / block notifications.
>> Better have lots of those, then :-)
>>> Anyways, I need to think on this more. There could be real value, but
>>> the costs and obstacles seem pretty high.
>>> It might be nice if the infrastructure and tools for this existed, so
>>> it could be offered to services in a "ready-to-go" form; perhaps some
>>> might be tempted to try...
>> The open source CT code
>> (https://code.google.com/p/certificate-transparency/) is already
>> partially "pluggable", and we'll be making it more so over time.
>> However ... patches welcome!
>> Messaging mailing list
>> Messaging at moderncrypto.org