[messaging] Multiple devices and key synchronization: some thoughts
carlo von lynX
lynX at i.know.you.are.psyced.org
Mon Dec 29 13:18:30 PST 2014
On Sun, Dec 28, 2014 at 09:35:30PM -0500, David Leon Gil wrote:
> On Sun, Dec 28, 2014 at 6:48 AM, carlo von lynX
> <lynX at i.know.you.are.psyced.org> wrote:
> > . . . It's better to plan for the entire architecture in
> > a single go and have an organic platform.
>
> In general, I agree. I would not like, however, to give up on the
> idea that people's emails or instant messages can be private
> because the problem is difficult.
Actually, it is the attempt to remain compatible with the existing
infrastructure, which by design exposes social metadata, that
makes the problem difficult. Starting from distributed tech is
likely easier, if you manage to wrap your head around it.
Just take Pond and RetroShare as examples. Imperfect as they may
be, they both handle identity management more safely and more
naturally than PGP. How? First of all by not separating
cryptography from address routing. Next, since they have no
obligation to be backward compatible with decades of technology
that doesn't do the job, they can come up with creative
alternatives to risky discovery methods like keyservers, which
expose the social graph to everyone. RetroShare can adopt friends
of friends. Pond has an anonymous cryptographic shared-secret
bootstrap method.
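Pond's actual bootstrap (PANDA) runs a password-authenticated key
exchange; as a rough illustration of the shared-secret idea only,
both parties can derive the same rendezvous point and channel key
from the secret alone. PBKDF2 is a simplified stand-in for the
real protocol here, and all names are hypothetical:

```python
import hashlib

def rendezvous(shared_secret: str, salt: bytes = b"bootstrap-v1"):
    # Both friends run this with the secret they agreed on offline.
    material = hashlib.pbkdf2_hmac("sha256", shared_secret.encode(),
                                   salt, 100_000, dklen=64)
    meeting_id = material[:32]   # where to meet, e.g. a DHT key
    channel_key = material[32:]  # key protecting the first exchange
    return meeting_id.hex(), channel_key

alice = rendezvous("correct horse battery staple")
bob = rendezvous("correct horse battery staple")
assert alice == bob  # same secret, same meeting point and key
```

No keyserver is consulted and no third party learns that the two
parties intend to talk; only someone who knows the secret can
compute the meeting point.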
More tools of this kind are being made, because the problem is
less difficult than trying to fix e-mail.
> Identity == permanent key-pair is crypto-nerd imagination
> at its worst. You may think that you are your keypair. I can
> assure you that you aren't: if someone hacks into your
> computer and gets your private keys, you are still the same
> person.
Waaait a second. Just because I simplify something so it doesn't
fill three more pages of mail doesn't mean that's the end of the
story. There's a description of how identity recovery should work
for us at http://secushare.org/threats - and if somebody has
truly trashed her identity for good, then she will have to start
again from zero. Of course she is still a person.
> > It should rather be able to use
> > a diverse network of relays in an unpredictable way for
> > observers such that no single "account authority" can
> > monitor or censor communications, or become a target
> > for hostile activity.
>
> Agreed.
>
> > There are solutions for this, so I
> > don't understand why you are still planning for an
> > outdated surveillance-friendly architecture.
>
> I'm not aware of any proposals thus far that scale in the range of
> 2^27 to 2^33 users...
Scalability is my background, and the most important lesson I can
share on the topic is that scalability can be planned for; there
is a logic and a science behind it. Just because so many apps
started out with something that worked, and later either did or
did not solve the scalability challenge, doesn't mean you only
achieve scalability once you can measure it.
http://secushare.org/pubsub tells a story about scalability, and
the way the architecture is laid out there is no reason to expect
it to be unable to handle any number of users, provided the
network as a whole grows appropriately.
By the way, Tribler already exists and operates by very similar
principles.
> > That sounds like not so good ideas to me. The proper way to
> > do IMHO is to have a most secure device generate a master
> > identity and several subordinate identities so each device is
> > given one of those subordinate identities.
>
> Yes, if we knew in advance which device would not be
> compromised. But we don't; device compromise is
> probabilistic.
So let the user decide about that? Why are individual devices
even the focus? We are here to address mass surveillance! Let's
not spend time chasing 100% safety as long as we can impede mass
surveillance.
> > There is no need for external authorities at all. The only
> > trust you need to delegate is to the cryptography.
>
> Indeed, this is part of the "key transparency" idea that the
> folks at Google have partially sketched out here:
>
> https://github.com/google/end-to-end/wiki/Key-Log-Server
Let me cite a few lines from the Threat Model:
    User must authenticate with the Identity Provider.
Why on Earth would I still want something antiquated, complicated
and privacy-reducing as an "Identity Provider" and even need to
register and authenticate with it? If you start with a clean slate
app there is NO NEED for something like this!
    First, the User needs to ensure it sent the authentication
    credentials to the right Identity Provider. Malware, weak
    authentication credentials, phishing, weak transport-layer
    security, etc. are known to be problems in this situations.

    Second, the Identity Provider must authenticate the user.
    Most security vulnerabilities like XSS, CSRF, Auth bypass
    and so on, could potentially compromise this step of the
    flow.
And their own Threat Model description right there gives even
more reasons not to attempt that approach. I don't understand why
you cite it with an "indeed" when it looks like the perfect
opposite of what I was suggesting.
> > Each person should have just one master key that can
> > ultimately prove the identity to other people.
>
> Again, this is not realistic: What's the last device you owned
> that there were not zero-days for?
That doesn't matter if your master key is on a piece of paper in
your safe. You need a safe computer only those few times that you
generate subordinate keys for your devices; you can trash your
Microsoft Windows as often as you like in the moments between.
Again, no ultimate 100% perfection is possible. Printing out the
master key sounds like the most reasonable compromise.
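To make the paper-in-the-safe idea concrete, here is a hedged
sketch of the hierarchy. A real design would use asymmetric
signatures (e.g. Ed25519), so that anyone holding the master
*public* key could verify a device certificate; HMAC stands in
here only so the sketch runs on the Python standard library, and
all names are illustrative:

```python
import hashlib
import hmac
import os

def generate_master() -> bytes:
    # Print this as hex and put the paper in the safe.
    return os.urandom(32)

def certify_device(master: bytes, device_key: bytes) -> bytes:
    # Done on the trusted, offline computer each time a new
    # device is added.
    return hmac.new(master, device_key, hashlib.sha256).digest()

def verify_device(master: bytes, device_key: bytes,
                  cert: bytes) -> bool:
    expected = hmac.new(master, device_key, hashlib.sha256).digest()
    return hmac.compare_digest(expected, cert)

master = generate_master()
laptop_key = os.urandom(32)        # subordinate key for one device
cert = certify_device(master, laptop_key)
assert verify_device(master, laptop_key, cert)
# A compromised device key is simply revoked and replaced,
# without ever exposing the master:
assert not verify_device(master, os.urandom(32), cert)
```

The point of the construction is that the master secret touches a
computer only during certification; day-to-day messaging uses the
subordinate keys exclusively.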
> > Why a keyserver? People should be in communication with
> > each other. If all the people have subscribed "friendship"
> > any key renovations will be pushed to them.
>
> This is fine w.r.t. friends. (In fact, I think that nearest-neighbor
> gossip is an essential part of any key distribution scheme.)
>
> But making people's social networks public is not really privacy-
> preserving.
They are not public. You only get access to as much of the
graph as your friends let you see.
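As a toy illustration of that consent-based visibility, with
illustrative names rather than RetroShare's actual data model:
each user exposes their friend list only to their own friends, so
a viewer sees at most one hop beyond themselves.

```python
# Who is friends with whom; each user controls only their own row.
friends = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice"},
    "dave":  {"bob"},
}

def visible_graph(viewer: str) -> dict:
    # A viewer sees their own friends, plus the friend lists
    # those friends choose to share with them (one hop, by consent).
    view = {viewer: friends[viewer]}
    for f in friends[viewer]:
        view[f] = friends[f]
    return view

alice_view = visible_graph("alice")
assert "dave" not in alice_view       # dave's own list stays hidden
assert "dave" in alice_view["bob"]    # but bob may name him as a friend
```

Nobody, keyserver or otherwise, ever holds the whole graph.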
> W.r.t. keyservers, perhaps the description in my post was
> misleading; I do not see that anything like a traditional
> keyserver will work. (That is for a future mail.)
Ok.
> > I'll add another variant strategy to the ones you listed:
> >
> > ### Ratchet-encrypted pubsub channels for everything
>
> I think that what you describe w.r.t. ratcheting is not incompatible
> with anything I described.
It is a further evolution of the last scenario you presented,
obsoleting some aspects of it, I believe.
> > So when a sender wants to send a message to a recipient,
> > it already has a communication channel set-up to that person
> > and doesn't need to know which or how many devices are currently
> > receiving on that channel.
>
> I am not sure what this means, exactly.
The pubsub is already there, and you can write to it so the
message gets to your friend. You don't need to know which of your
friend's devices may be tuned in to listen, or may only pick the
message up a week later.
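A minimal sketch of that property, with illustrative names rather
than secushare's actual API: the sender appends to the channel,
and each device keeps its own cursor into the retained history.

```python
class Channel:
    """A pubsub channel whose relays retain message history."""

    def __init__(self):
        self.history = []

    def publish(self, msg: str) -> None:
        # The sender never enumerates recipient devices.
        self.history.append(msg)

    def fetch(self, cursor: int):
        # Return everything a device missed, plus its new cursor.
        return self.history[cursor:], len(self.history)

chan = Channel()
chan.publish("hello")
chan.publish("are you there?")

# The friend's phone was listening; the tablet joins a week later
# and catches up from the same history.
phone_msgs, phone_cur = chan.fetch(0)
tablet_msgs, _ = chan.fetch(0)
chan.publish("ping")
new_msgs, phone_cur = chan.fetch(phone_cur)
assert new_msgs == ["ping"]
```

Delivery becomes a property of the channel, not of any device
roster the sender has to track.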
> > Each channel has its own ratchet
> > so it doesn't need to use specific public keys.
>
> This is essentially a system with a shared per-pair encryption key, no?
No, it is per-pubsub. That makes it immensely more flexible: it
can be used to implement a distributed social network with
Facebook-, Snapchat-, WhatsApp-, LinkedIn- or Twitter-like usage
styles.
> I think it's generally better to -- if the underlying messaging protocol
> uses ratcheting -- maintain separate per-device ratchets.
The pubsubs maintain history, so if a device has missed some
messages it can catch up from where it left off and repair its
copy of the ratchet as it moves forward.
> (This is mainly because of the practical problem of synchronizing
> a ratchet state while dropping missed message keys as soon as
> a realtime bound is met.)
Synchronizing state is one of the core features of the pubsub
architecture. It is fundamental for doing distributed social
networking, but it happens to also be useful for messaging...
as you just elaborated.
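Assuming, purely for illustration, a simple symmetric hash
ratchet where each message advances the key as k_{n+1} = H(k_n)
(the real scheme would be richer), a device that missed messages
can replay the retained history and fast-forward its copy of the
ratchet to the sender's current state:

```python
import hashlib

def advance(key: bytes) -> bytes:
    # One ratchet step per message: old keys become unrecoverable.
    return hashlib.sha256(key).digest()

root = b"\x00" * 32
sender_key = root
for _ in range(10):        # ten messages published on the channel
    sender_key = advance(sender_key)

lagging_key = root
for _ in range(4):         # a device saw only the first four
    lagging_key = advance(lagging_key)

# Catch-up: one ratchet step per missed message in the history.
for _ in range(6):
    lagging_key = advance(lagging_key)
assert lagging_key == sender_key
```

Because the channel retains history, no per-device ratchet copies
need to be kept in lockstep; a device synchronizes itself.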
> >> *Targeted malicious messages.* The flip side of all devices
> >> being able to decrypt all messages, as above.
> >
> > That I consider beyond scope of our work.
>
> Yes, it probably is something outside of my threat model as well.
> But it is a possible threat-vector; and I would like some opinions
> as to whether it is a serious one. (From people more on the attack
> side of things.)
To me, Snowden has made it clear which threat model we have
to take care of - the one that puts democracy at risk: bulk
surveillance. I refuse to be paranoid about targeted operations
as long as they don't scale to millions of targets.
--
http://youbroketheinternet.org
ircs://psyced.org/youbroketheinternet