[messaging] Key rotation

carlo von lynX lynX at i.know.you.are.psyced.org
Fri Jan 9 09:26:03 PST 2015


On Fri, Jan 09, 2015 at 03:15:37PM +0000, Michael Rogers wrote:
> On 06/01/15 17:44, carlo von lynX wrote:
> > Alright, let's say when the two devices see each other
> > successfully they emanate a gentle notification indicating they are
> > syncing up and available for further exchanges like playing a game
> > together.
> > 
> > When that does NOT happen, Alice and Bob may not notice the first
> > or second time but at some point they may wonder why there was no
> > signal, no exchange or simply why they can't do stuff together as
> > they usually do with others.
> 
> I'm not a usability expert, but I believe there's a design principle
> that says people don't notice the absence of a signal. Hence we have
> warning lights that appear when something's wrong, rather than
> "everything's cool" lights that disappear when something's wrong.

Yes, but we're not necessarily in a hurry. Sure, it would be cool to
alert users ASAP, but if we can only do it as they try to use their
phones, that's still useful and a threat to a WITM, who might get
caught after all. So in the scenario I described I am not counting
on them noticing the *absence* of a feel-good blip.

> > Not being aware of the potential semantics of this failure, they 
> > would probably just restart a key handshake as when they add a new 
> > person. This probably involves the QR codes you mentioned:
> 
> Maybe - we'd need to test this. They might just consider it a weird
> bug and ignore it. Bluetooth is flaky, after all...

C'mon, *if* they want to do something together.. like Alice telling
Bob "hey, have you seen my cat pictures?" and he goes.. "Er no.. didn't
get them. Why didn't I.. I should have.." the next human thing to do
is to fumble around with the devices trying to get them to do it.
An advanced WITM would of course have relayed the cat pictures,
unless they were marked as "bluetooth/wifi only." Or let's think in
terms of playing a social game together... it's up to us to create
some extra incentive for people to ensure their mobiles are meeting.

> >> * Alice or Bob may have selected the wrong contact from their
> >>   contact list to validate
> >> * The third party who introduced them may have carried out a
> >>   MITM attack
> >> * Alice may be lying to make Bob think that the third party
> >>   carried out a MITM attack, or vice versa
> > 
> > At this point Bob is prompted if he just met a new Alice, or if the
> > old Alice is indeed to be replaced.
> 
> The adversary can make it tricky for the app to detect that Bob
> already has a contact with the same nickname - for example, the
> identity supplied by the adversary during the MITM may have the name
> "Alice ", "A1ice", "Al\u00A0ice", etc.

That only happens if you let the source of the public key information
also suggest a nickname. Do we have to do that? In the scenario in my
head it is a legacy phone number getting looked up, and that can be
written in a canonical way. The other situations are less likely to
offer the possibility of falsification - printed QR codes, adoption
from the social graph. If you *really* trust just one person to give
you the identity of another person, then you are socially trusting
that specific person. That's not something we need to defend against
at all costs, since the culprit would be obvious. It's not an
anonymous WITM getting away with it, it's a real person who gets
kicked out of friendships for faking people. Did I miss any scenario?
My main use case for this is automatic upgrade from a legacy insecure
telephone network.
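The canonical form doesn't have to be fancy. Here's a minimal sketch
in Python (stdlib only; the function names are just illustrative, not
from any existing codebase) of how a client might normalize nicknames
and legacy phone numbers before comparing contacts:

    import unicodedata

    def canonical_nickname(name):
        # NFKC folds things like the no-break space in "Al\u00A0ice"
        # into plain characters; then lowercase and drop whitespace so
        # "Alice " and "alice" compare equal.
        normalized = unicodedata.normalize("NFKC", name)
        return "".join(ch.lower() for ch in normalized if not ch.isspace())

    def canonical_phone_number(number):
        # Reduce a legacy phone number to its digits with a leading "+"
        # so the same number always looks up the same identity.
        digits = "".join(ch for ch in number if ch.isdigit())
        return "+" + digits

Note this alone doesn't catch homoglyphs like "A1ice" (digit one); for
those the app would additionally need some kind of confusables table.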

> And of course, Bob may really know two people called Alice. When he
> adds the second one, we don't want to force him to choose between
> replacing the first one and getting a scary security warning.

I'll skip this one since you noticed yourself that it isn't a problem.  :)

> > If Bob confirms that Alice is to be replaced, then your application
> > can inform Bob that the communications that went to Alice
> > beforehand may have been tapped.
> 
> Sounds good - I like the idea of just explaining what may have
> happened without trying to attribute blame.
> 
> > If Bob chooses to create a new Alice, then the two may coexist for
> > as long as either Bob or the application do not understand he's got
> > the same person twice. Should that awareness come about later, the
> > Alice that was confirmed by physical vicinity is preferred over the
> > Alice that got acquired by unsafe methods.
> 
> We'll have to give some more thought to how Bob might act when he
> realises his two Alices are the same person - perhaps he'll look for a
> "merge contacts" feature, which could then behave as you described?

Yes, though the software may also figure it out computationally, for
example if all the profile content is cloned. An advanced faker would
then introduce minor changes invisible to the eye, as you suggested,
so we might be engaging in an arms race over this detail. So yes,
probably Bob ultimately has to figure out that he's bumped into a
fake. The device should in that case be able to name the origin of
the (fake) data - a socially adopted key, a look-up in a public
hashtable, a QR scan...
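As a rough illustration of the "figure it out computationally" part -
purely a sketch, assuming contact profiles are dicts of text fields
and reusing the canonical_nickname() helper from above - the app
could flag likely clones and, when merging, keep the key whose origin
was physically verified while remembering where the suspect copy came
from:

    def profiles_look_cloned(profile_a, profile_b, threshold=0.9):
        # Compare two contact profiles field by field after
        # canonicalization; a high share of identical fields suggests
        # one profile was copied from the other.
        fields = set(profile_a) | set(profile_b)
        if not fields:
            return False
        matches = sum(1 for f in fields
                      if canonical_nickname(str(profile_a.get(f, ""))) ==
                         canonical_nickname(str(profile_b.get(f, ""))))
        return matches / len(fields) >= threshold

    # Hypothetical ranking of key origins, safest first, so a "merge
    # contacts" action keeps the key confirmed in physical vicinity.
    ORIGIN_RANK = {"qr_scan": 0, "social_graph": 1, "dht_lookup": 2}

    def merge_contacts(a, b):
        keep, drop = sorted((a, b), key=lambda c: ORIGIN_RANK[c["origin"]])
        # Remember where the discarded (possibly fake) key came from,
        # so the device can name the origin of the bad data.
        keep.setdefault("suspect_origins", []).append(drop["origin"])
        return keep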

> > If Alice has been lying to Bob while standing next to him, then she
> > doesn't really want to be his friend. That isn't nice, but it only
> > results in Alice appearing unsafe to talk to from Bob's phone,
> > which probably corresponds to reality.
> 
> No, the problem is that if we use language that suggests that Carol
> (the introducer) has been trying to intercept Alice and Bob's
> conversations, then Alice can present a false identity when Bob
> verifies her in order to cast suspicion on Carol, while Alice looks
> trustworthy.

Oh! Okay, on the level of interpersonal betrayal it could be either
Carol or Alice acting nasty. In a properly running distributed social
graph this is less of a problem, since you probably have more people
who can give you Alice's public key - but if you're indeed stuck with
a single introducer, then this can happen.

Still, the course of action that makes sense to me here is to adopt
Alice's key as she is sitting next to Bob, have her confirm by voice
that Carol is a liar, then have a date with Deborah and Emily the
next day to find out who has been telling the truth... if Alice is
telling the truth, Deborah and Emily have the same key. If Alice was
lying, Carol's version of the key is the right one. But then they
might as well all remove it. In any case these are situations that
can be retraced, so it doesn't pay for anyone to commit betrayal at
this level. Btw, "voice confirmation" is optional - I just liked it
for the theatricality.
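The Deborah-and-Emily step can be phrased as a simple consistency
check across introducers. A hedged sketch (the names and data shapes
are made up for illustration): collect the key each introducer claims
belongs to Alice, take the majority, and flag whoever disagrees:

    from collections import Counter

    def cross_check_key(reported_keys):
        # reported_keys: {introducer_name: public_key_they_claim}.
        # The majority key wins; anyone reporting something else is
        # someone Bob should go talk to in person.
        counts = Counter(reported_keys.values())
        majority_key, _ = counts.most_common(1)[0]
        dissenters = [who for who, key in reported_keys.items()
                      if key != majority_key]
        return majority_key, dissenters

    # If Alice (in person), Deborah and Emily all report K1 and Carol
    # reports K2, the result is ("K1", ["carol"]) - Carol was lying.
    # If Carol, Deborah and Emily report K2 while Alice shows K1 in
    # person, Alice is the one casting false suspicion.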

> > Ultimately, if both Alice and Bob want Jake's advice whether their 
> > communications had been tapped, Jake can look at the detailed event
> >  logs of both devices and do some forensics on those to try and 
> > figure out what really happened.
> 
> Even if we suppose that Alice and Bob know a suitable Jake, there's no
> way to tell by examining the logs whether it's Carol or Alice who's
> untrustworthy. (Alice can forge her own logs.)

Jake would suggest meeting up with Debbie and Emmie.	   ;)


-- 
	    http://youbroketheinternet.org
 ircs://psyced.org/youbroketheinternet
