[messaging] Logging Bad Situations (was Informing the user they have the wrong key)

Tom Ritter tom at ritter.vg
Sat Sep 27 18:44:24 PDT 2014


Tony, sorry I'm forking this thread without any replies.

On 26 September 2014 21:05, Tony Arcieri <bascule at gmail.com> wrote:
> If we build fancy systems to detect things like misadvertised keys or MitM
> attacks, how can we reasonably inform an end user what is amiss in an
> actionable way that won't confuse them with too many false positives to
> avoid taking action when something bad actually happens?

I have no idea.

(Rearranging)
> And then what? How can we help someone who is a victim of an attack like
> this actually compile all of the necessary information for someone to figure
> out what actually happened? How can encryption tools compile incident
> reports that experts can scrutinize to determine what happened?

This, I think, should be the first step.  This, I think, is 'The Simple
Thing' as applied to attack response: gather info, send it to someone
qualified to investigate.


Let's take this in the context of a 'suspicious'* certificate when
visiting, e.g., facebook.com.  Here is some of the relevant
information that would be useful to have if I were tasked with
determining whether this was malicious, a misconfiguration, or
something else.  (I'd also like to note that the Tor Project has to
do this occasionally.)  A rough sketch of collecting these fields
follows the footnote below.

 - IP you're entering the Internet from (WAN IP)
 - IP you're hitting
 - Certificate Chain you saw
 - tracert to IP you're hitting
 - Software running on your system
 - Certificate presented when you visit a randomly-chosen website I
know a little about

*'Suspicious' is broadly defined: a self-signed cert, a change in a
pin under a TOFU model, a 'weird' occurrence like a change in issuing
CA, whatever.
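
To make that concrete, here is a rough Python sketch of collecting
those fields into a report.  The JSON shape, the ipify.org lookup for
the WAN IP, and example.org as the 'randomly-chosen website' are my
illustrative choices, not a fixed format; the cert helper only grabs
the leaf, not the full chain.

import json
import platform
import socket
import ssl
import subprocess
import urllib.request

def get_cert_pem(host, port=443):
    # Grab the leaf certificate the server presents.  (Pulling the full
    # chain needs something like pyOpenSSL; the leaf is enough for a sketch.)
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we *want* to record suspicious certs
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return ssl.DER_cert_to_PEM_cert(der)

def build_report(suspicious_host, control_host="example.org"):
    report = {
        "wan_ip": urllib.request.urlopen("https://api.ipify.org").read().decode(),
        "target_ip": socket.gethostbyname(suspicious_host),
        "cert_chain": get_cert_pem(suspicious_host),
        "traceroute": subprocess.run(
            ["traceroute", suspicious_host],   # "tracert" on Windows
            capture_output=True, text=True, timeout=120).stdout,
        "software": platform.platform(),
        "control_cert": get_cert_pem(control_host),
    }
    return json.dumps(report, indent=2)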

From that I can narrow the situation down to one of the following (a
rough triage sketch follows this list):
 - False Positive - a seemingly normal connection that just set off a
trigger because of infrastructure change
 - AntiVirus/Malware that's MITMing all connections
 - Infrastructure in the path that's MITMing all connections (e.g.
corporate or country-wide)
 - Unknown / Potentially a small-scale MITM
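
A hedged sketch of that triage, assuming the 'cryptography' package
for parsing.  The issuer-string heuristics and the ordering are my
guesses at how an investigator might think, not a real procedure.

from cryptography import x509

KNOWN_INTERCEPTING_PRODUCTS = ("Avast", "Kaspersky", "Bitdefender", "Fortinet")

def issuer_of(pem):
    return x509.load_pem_x509_certificate(pem.encode()).issuer.rfc4514_string()

def triage(report, expected_issuer):
    # 'report' is the parsed dict from the collection sketch above.
    target_issuer = issuer_of(report["cert_chain"])
    control_issuer = issuer_of(report["control_cert"])
    if any(p in target_issuer for p in KNOWN_INTERCEPTING_PRODUCTS):
        return "AntiVirus/Malware MITMing all connections"
    if target_issuer == control_issuer and target_issuer != expected_issuer:
        return "Infrastructure in the path MITMing all connections"
    if target_issuer == expected_issuer:
        return "False Positive - probably an infrastructure change"
    return "Unknown / Potentially a small-scale MITM"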

But even if I cannot narrow it down, I can investigate it well enough to:
 - Present proof of a misissued certificate (or misbehaving/hacked CA)
 - Gather data that, when combined with more reports, presents a view
of a wide-scale attack



When you consider that people using TextSecure, Pond, PGP, RedPhone,
etc. usually have a community of people around them they trust, some
of whom are technical, these types of incident reports could be sent
to those people securely.

As an example, I gave a training and part of it was looking at Last
Logins in Google.  Someone saw a login from Texas.  We did some
sleuthing, and eventually determined that a FedEx store must be
sending its internet traffic out of an IP geolocated in Texas,
despite being in NYC.  Great example of a False Positive.

The way I envision it, based on my small experience, is:
a) I would introduce them to a tool.
b) I would go into a menu and choose 'Add Trusted Contact'.
c) I would scan my QR code with their device.  This adds my name,
email, and PGP key.  (No revocation or rolling - I'm expected to be
able to manage this key well myself.)  A rough sketch of what that QR
payload might carry follows.
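
A minimal sketch of step (c), assuming the QR code just carries a JSON
blob of name, email, and armored PGP key; the exact payload format is
an assumption for illustration.

import json

def make_contact_qr_payload(name, email, pgp_public_key_armored):
    # The trainer encodes this into a QR code; the trainee's device scans
    # it and stores the contact.  No revocation or rolling is handled here.
    return json.dumps({
        "name": name,
        "email": email,
        "pgp_key": pgp_public_key_armored,
    })

def add_trusted_contact(contact_store, qr_payload):
    contact = json.loads(qr_payload)
    contact_store[contact["email"]] = contact
    return contact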

When they're using their device and something 'weird' happens, it
should record all the above data.  'Weird' would be a slider.  The
girl I work with who organizes hundreds of people: her slider would
be pretty high up.  The guy who mostly just shows up: his would be
pretty far down.

If something happens that would cause a 'block' - a self-signed cert,
for example - then after clicking through, the tool will display a
menu prompting them to send a 'Suspicious Event Report'.  They click
the trusted contacts they want to send it to, and away the log goes.
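
A hedged sketch of that flow: the weirdness scores, the slider
semantics, and the python-gnupg usage are all my assumptions, and the
delivery step is a stand-in.

import os
import gnupg  # python-gnupg

# Illustrative scores only; a real tool would tune these.
WEIRDNESS = {
    "issuing_ca_changed": 3,
    "tofu_pin_changed": 6,
    "self_signed_cert": 9,   # 'block'-level: always prompt to report
}

def should_prompt(event, slider):
    # 'slider' is the user's 'weird' sensitivity, 0-10: the organizer's
    # sits high (catch almost everything), the casual user's sits low.
    return WEIRDNESS.get(event, 0) >= 10 - slider

def deliver(email_address, ciphertext):
    # Transport is out of scope; stand-in that just shows what would go out.
    print("would send %d bytes of ciphertext to %s" % (len(ciphertext), email_address))

def send_report(report_json, chosen_contacts, gnupg_home="~/.gnupg"):
    gpg = gnupg.GPG(gnupghome=os.path.expanduser(gnupg_home))
    for contact in chosen_contacts:
        encrypted = gpg.encrypt(report_json, contact["pgp_fingerprint"])
        deliver(contact["email"], str(encrypted))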


I'm wondering if this system would create a sense of guilt though.
Would someone not send reports because they're embarrassed at how many
warnings they ignored when they felt they shouldn't have?  More
likely, we would meet up in person, and I would manually send their
reports to myself, and we would go through them together.  Either way
works though.


> If an end-to-end encrypted messaging system which relies on a
> centrally-managed key directory (e.g. iMessage) were to, by coercion or
> compromise, publish a poison key to their directory to facilitate a MitM
> attack, but the system creators wanted to make such action obvious to their
> users,

Were I designing such a system, I would make my system sign
everything.  Just like in Certificate Transparency (CT), you can
_prove_ misbehavior by presenting a fraudulent signature.  (Assuming
you can detect it.)
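
A sketch of what 'sign everything' could mean for such a directory,
using Ed25519 from the 'cryptography' package.  The statement format
is my own invention, but the property is the CT-style one: any signed
binding that contradicts the key the user actually holds is
transferable proof of misbehavior.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

directory_key = Ed25519PrivateKey.generate()

def publish_binding(user_id, user_public_key_bytes):
    # The directory signs every (user, key) binding it hands out.
    statement = b"keydir-v1|" + user_id.encode() + b"|" + user_public_key_bytes
    return statement, directory_key.sign(statement)

def proves_misbehavior(statement, signature, directory_public_key, real_key_bytes):
    # Confirm the directory really signed this binding (raises if forged),
    # then check whether the bound key differs from the key the user controls.
    directory_public_key.verify(signature, statement)
    _prefix, _user_id, bound_key = statement.split(b"|", 2)
    return bound_key != real_key_bytes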

This works well with our legal system.  There are lots of things the
government can do to prevent me from speaking.  There is very little
they can do to compel speech.  Warrant canaries are an unproven item -
compelling the same statement to appear in a report it has always
been in is a small compulsion.  Compelling someone to _lie_ - a
complicated lie talking about a bug or something - would be going
even further.


-tom

