[messaging] Useability of public-key fingerprints

Ximin Luo infinity0 at pwned.gg
Thu Jan 30 01:45:15 PST 2014


On 30/01/14 06:27, Daniel Kahn Gillmor wrote:
> On 01/30/2014 12:21 AM, Moxie Marlinspike wrote:
>> My intuition is that we just shouldn't be showing the user a fingerprint
>> at all if even remotely possible (TOFU).  If it's necessary to display a
>> real fingerprint at some point, the user isn't going to have any idea
>> what's going on, so it probably doesn't matter whether it's a set of
>> gibberish words, a hex string, or b32 character string.
> 
> While i'm not sure TOFU is the only answer, i think i agree with Moxie
> that asking users to cope with a long string of incomprehensible,
> high-entropy gibberish is in general a bad idea.  I lean toward the idea
> of mechanized fingerprint transmission via physical channels that humans
> can intuitively inspect.
> 

"Confidence in key validity" is the security property that we want here, and there are different methods that achieve different levels of confidence. Also, there are different requirements. With IM/TOFU, one might argue it's not so essential that your first conversation is MITMd as long as you can detect subsequent attempts, but with e.g. Bitcoin addresses, you probably don't want to take this risk.

Quantifying this confidence level would be an interesting exercise. For example, there is the argument that (in TOFU) you gain more confidence if you communicate over many different routes, but is this really true if the attacker sits one hop from your target, e.g. on their router?
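
To make that concrete, here's a toy model in Python, assuming (generously) that the attacker controls each route independently with probability p:

    # Toy model: seeing the same key over k independently-compromised routes
    # gives confidence 1 - p**k. An attacker sitting on the target's own
    # router has p = 1 for every route, so extra routes add nothing.
    def confidence(p: float, k: int) -> float:
        return 1 - p**k

    print(confidence(0.1, 3))  # 0.999 when routes really are independent
    print(confidence(1.0, 3))  # 0.0 when the attacker straddles all routes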

I can agree with the usability issues surrounding full verification, but it is the only method that gives you close to 99.xx% confidence. During cryptoparties I make an analogy to phone numbers and verifying their validity. People have no trouble grasping the concept, and I don't see much confusion here.

(I've never been asked about key vs fp, but I would fudge it with something like "the fp is the short non-usable version of the key that you can communicate".)
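
A sketch of that idea in Python; hashing the raw key bytes is a stand-in here, since real OpenPGP fingerprints hash a specific packet framing:

    import hashlib

    def fingerprint(pubkey_bytes: bytes) -> str:
        # the "short non-usable version": a fixed-size digest of the key,
        # grouped into 4-char blocks for readability, as GnuPG does
        digest = hashlib.sha1(pubkey_bytes).hexdigest().upper()
        return " ".join(digest[i:i+4] for i in range(0, len(digest), 4))

    print(fingerprint(b"example public key material"))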

> For humans without visual impairment and with modern computing
> machinery, i really like QR codes for this use.  It's easy to tell
> whether there is an MITM or not during QR code scanning, they're easy
> enough to generate, cheap to print, simple to display on most computer
> displays, and recoverable with all but the lowest-quality webcams.
> 
> Of course, you can't transmit a QR code over the phone or on a bar
> napkin, or commit it to memory.  But i'd argue that most people can't
> reliably do any of those things for cryptographically-strong
> fingerprints in the first place anyway, regardless of encoding.
> 
> For humans with visual impairment and modern computing machinery, Brian
> Warner mentioned the idea of acoustic coupling of two devices -- one of
> them would hum or beep or squawk (all in the range of normal human
> hearing) to transmit a fingerprint, and another machine could listen and
> decode the highest-strength signal.
> 

For desktops, one nice UI enhancement would be to detect whether the current selection looks like a fp and the clipboard also contains one, and if so add a context-menu item "compare fingerprints". I'm still super-annoyed that pidgin-otr for some reason doesn't let you select the fingerprints to ctrl-c them around.
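
A rough sketch of what that context item might do; the "looks like a fp" heuristic and the names here are purely illustrative:

    import re

    def normalise(s: str) -> str:
        # strip spaces/colons and unify case before comparing
        return re.sub(r"[\s:]", "", s).upper()

    def looks_like_fp(s: str) -> bool:
        # heuristic: 32+ hex chars once normalised (OTR fps are 40)
        return re.fullmatch(r"[0-9A-F]{32,}", normalise(s)) is not None

    def compare_fps(selection: str, clipboard: str) -> bool:
        return (looks_like_fp(selection) and looks_like_fp(clipboard)
                and normalise(selection) == normalise(clipboard))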

> 
> But if we set aside mechanical transmission mechanisms like QR codes or
> acoustic coupling, I think the questions for any such scheme are:
> 
>  0) how many high-entropy bits of information can the scheme encode?
> 
>  1) how complicated is it for humans to compare two of these
> representations and determine whether they are exactly identical? (or,
> conversely, how easy is it to craft a value that is sufficiently close
> to appear as a "collision" to a significant fraction of users)
> 
>  2) how difficult is it for humans to transcribe precisely into their
> communications equipment when the representation is in front of them?
> 
>  3) how well does it work in other human-to-human transmission vectors
> (e.g. over the phone, etc).
> 

Nice, we should collect these into a document. Some things you can optimise for:

1. given a fp size, a (low) error rate, and a medium (verbal, visual, etc.), minimise:
  - the transmission time for the sender, and
  - the comparison time for the receiver

2. given a fp size, constant tx/cmp time, and a medium, minimise:
  - the error rate from start of transmission to end of verification

(1) is useful for full verification and (2) would be useful for casual partial verification (like what SSH randomart tries to do). These are distinct concerns but not necessarily exclusive.
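
Back-of-envelope for (1): the number of symbols the sender must transmit for a 160-bit fp under a few common encodings. The 8192-entry word list size is an assumption:

    import math

    FP_BITS = 160
    encodings = {     # bits conveyed per symbol
        "hex":    4,
        "base32": 5,
        "words": 13,  # assuming an 8192-entry word list
    }
    for name, bits in encodings.items():
        print(name, math.ceil(FP_BITS / bits), "symbols")
    # hex: 40, base32: 32, words: 13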

160 bits ought to be enough, but I hear bad things about SHA-1. Is there a better alternative of equal length?
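
One obvious candidate, sketched below: keep the 160-bit length but take it from truncated SHA-256, which avoids SHA-1's structural weaknesses while keeping the length:

    import hashlib

    def fp160(pubkey_bytes: bytes) -> bytes:
        # 160-bit fingerprint without relying on SHA-1
        return hashlib.sha256(pubkey_bytes).digest()[:20]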

Perhaps you could let the user tell you what medium they want to use, and generate a scheme optimised for that medium? (Or is that "too complex"..)

X

-- 
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
