[messaging] Exposing MITM attacks socially engineered through group chat introductions
carlo von lynX
lynX at i.know.you.are.psyced.org
Sun Jan 25 03:15:16 PST 2015
Heyo Jeff, keep up the good work on Pond!
On Sun, Jan 25, 2015 at 09:03:18AM +0100, Jeff Burdges wrote:
> Any such introduction facility admits a kind of socially engineered MITM attack : Eve initiates a (group) conversation between Alice and Bob with the express intention of introducing Alice to Bob. However, Eve actually introduces Alice to fakeBob and Bob to fakeAlice, which maintain an MITM attack on Alice and Bob’s future communications.
That's why it's more solid if you get a glimpse of the
social graph, so that you can see that several of your
friends agree on Bob having that public key... the software
can then even do Facebook-like recommendations: "7 of your
friends know Bob. Do you know Bob?" ... which may not be a
typical Pond usage pattern, but may be popular elsewhere.
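A hypothetical sketch of that "7 of your friends know Bob"
check; the tuple record format here is an assumption for
illustration, not Pond's actual data model:

```python
def corroboration_count(introductions, handle, key):
    """Count distinct friends who introduced `handle` with `key`."""
    return len({who for (who, h, k) in introductions
                if h == handle and k == key})

# Each entry: (introducer, contact handle, claimed public key).
intros = [
    ("carol", "bob", "KEY1"),
    ("dave",  "bob", "KEY1"),
    ("eve",   "bob", "KEY2"),  # a conflicting key is suspicious
]
print(corroboration_count(intros, "bob", "KEY1"))  # prints 2
```

If the count for one key is high and a conflicting key only
ever comes from one party, that's exactly the signal the
recommendation UI could surface.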
> At the same time, I suspect the facility for introductions actually makes this style of socially engineered MITM attack less dangerous overall because :
> (a) it means introductions will frequently occur between honest parties,
Yes, the attack is on an interpersonal level, which is
unlike bulk surveillance. I think our focus should be on
making bulk surveillance hard. We should not panic at the
idea that individual hand-crafted treason may occur.
> (b) it gives the software a place to explain the risks, and
> (c) such attacks can become visible through having multiple contacts that should represent the same person.
We had a thread just recently discussing the usability of
this. Its subject is "Key rotation".
> At present the patch records three types of information :
> - who a contact was originally introduced by
> - who else has verified/corroborated that contact by also sending an introduction
Great, you start creating a distributed social graph.
> - who you introduced a contact to
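A minimal sketch of what such a local record could look like
(field names are my assumptions, not the patch's actual ones):

```python
from dataclasses import dataclass, field

@dataclass
class ContactRecord:
    introduced_by: str                                  # original introducer
    corroborated_by: set = field(default_factory=set)   # later vouchers
    introduced_to: set = field(default_factory=set)     # who we forwarded to

bob = ContactRecord(introduced_by="eve")
bob.corroborated_by.add("carol")   # carol also sent an introduction
bob.introduced_to.add("alice")     # we passed bob on to alice
print(bob)
```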
> I’d appreciate any comments on the attack vector, or that patch in particular, especially if additional information should be retained in those local social graph records. In particular, I’d love to address scenarios like this one :
I like the idea of maintaining person profiles in a
distributed social graph (as in secushare.org), so I
can look up legacy interfaces like a phone number
without having to go on the Internet.
> Eve does introduction MITM attacks on Alice and Bob, and Bob and Carol. Bob messages Alice and Carol, introducing them. Alice and Carol are now MITMed by Eve too but see their introduction as coming from Bob, who they trust.
> How to address this?
> The software could note that Eve introduced Bob when it notes that Bob introduced someone. This is a no-brainer. We could however imagine longer chains of introductions where the suspicious party Eve is not visible to the newly introduced parties.
> Bob could attach a flag or counter to his introduction message, indicating that Alice and Carol were introduced to him. It’s not clear to me that this is particularly useful though. And it reveals a small amount of information about Bob’s contact list, not much, but something.
Either disallow trading introductions further, requiring
people to do a fingerprint check or shared secret exchange
first, but that would incentivize people to cheat on this,
or make it a requirement that whoever starts an introduction
will be passed on in further introductions to third parties.
In the latter case, creating good visualization of the social
graph makes it more likely to detect if the graph is bogus.
Someone who has 4 people vouching for him is cool. If instead
all identities in the graph have only been authenticated by
Eve, or by multiple identities going by the handle of "Eve",
Eve may be trying to trick you. The software may in that case
prominently advertise the functionality for making a *strong*
authentication rather than a social one.
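One way to sketch that heuristic as a toy check, assuming the
local per-contact records described above are available (it
can't, of course, distinguish several identities all going by
the handle "Eve"):

```python
def vouchers(records):
    """Collect everyone who introduced or corroborated any contact."""
    out = set()
    for rec in records:
        out.add(rec["introduced_by"])
        out |= set(rec["corroborated_by"])
    return out

def single_party_vouches(records):
    """True if one party is behind every introduction we hold."""
    return len(vouchers(records)) == 1

records = [
    {"introduced_by": "eve", "corroborated_by": []},
    {"introduced_by": "eve", "corroborated_by": ["eve"]},
]
print(single_party_vouches(records))  # prints True
```

When this fires, the UI could push the user toward a
fingerprint check or shared-secret exchange instead of
trusting the social introductions.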
> Alternatively, Bob could attach some form of token that’d help someone who also knows Eve identify that Eve was the original source of the introduction. I suspect any such token could be defeated by Eve through using multiple accounts or similar and this reveals too much information about Bob’s social circle.
I think there is a question of threat model here. Tools like
Pond or secushare should protect groups of people from bulk
surveillance, but that doesn't mean the tools need to be
100% safe from individual targeted social treason.