[messaging] Useability of public-key fingerprints

Robert Ransom rransom.8774 at gmail.com
Wed Jan 29 21:12:15 PST 2014


On 1/29/14, Trevor Perrin <trevp at trevp.net> wrote:

> I'm a little surprised I can't find more useability research here, except
> for:
[omitted]
>  - https://moderncrypto.org/mail-archive/curves/2014/000011.html

I wouldn't call that “research” exactly.  I applied one of the
usability lessons that Ross Anderson drew from the South African
electrification project to the context of larger single-line strings,
as straightforwardly as possible.  There's still plenty of room for a
usability study to find optimal chunk-length patterns (though the ones
I chose are already clearly better than the usual practice of using
equal-length chunks or omitting all punctuation).
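The grouping idea can be sketched as a small function that splits a string into chunks whose lengths cycle through a pattern; the pattern used below is purely illustrative, not the specific chunk-length pattern proposed in the linked post:

```python
def chunk(s, pattern):
    """Split s into groups whose lengths cycle through `pattern`,
    joining the groups with spaces for easier reading aloud."""
    out, i, p = [], 0, 0
    while i < len(s):
        n = pattern[p % len(pattern)]
        out.append(s[i:i + n])  # a short final group is fine
        i += n
        p += 1
    return ' '.join(out)

print(chunk("abcdefghijklmnop", (5, 4, 3)))  # -> "abcde fghi jkl mnop"
```

A usability study would then be a matter of comparing error rates across candidate patterns, equal-length grouping, and no grouping at all.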


The reason that I started experimenting to find those patterns is that
I wanted to sample a password for symmetric encryption from a 160-bit
distribution, such that it could easily be transmitted by voice over a
telephone.
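The post doesn't spell out the sampling procedure, but a minimal sketch of the idea (sample 160 uniform bits, then encode in a single-case base32 alphabet) might look like this, using Python's standard secrets and base64 modules:

```python
import base64
import secrets

def voice_password(bits=160):
    """Sample `bits` of entropy and encode it in lowercase base32,
    which can be read aloud one unambiguous character at a time."""
    raw = secrets.token_bytes(bits // 8)
    return base64.b32encode(raw).decode('ascii').lower()
```

Because 20 bytes is a multiple of 5, the 160-bit case yields exactly 32 base32 characters with no '=' padding.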

I considered using ‘Koremutake’ (an invertible mapping of 7-bit digits
to variable-length syllables) and/or an S/Key-like word-based
encoding, but rejected both of them:

* Koremutake coding cannot be implemented in data-independent time.
  (I didn't bother to do a constant-time implementation of base32 for
  my immediate use, but as long as I'm putting design effort into
  something, I want to make sure it can be implemented correctly.)

* Both Koremutake and word-based encodings make the encoding alphabet
  less obvious to human readers.  If they are read aloud as
  pronounceable words rather than spelled out character by character,
  I would expect this ambiguity to increase the likelihood of
  transmission errors.

  (For Koremutake, I would expect the wide variation in how vowels are
  pronounced in English to be the main problem.  For the S/Key word
  set, the presence of uncommon words (e.g. “dun”, “col”) and even
  non-words (e.g. “etc”, “bah”) looks problematic.)

* Both approaches increase the length in characters of the encoded
  string, and thus the cost of typing and writing the string.  Since
  pronounceability does not seem to lead to a more efficient
  voice-transmission encoding than reading each character separately,
  there is no benefit to offset that significant cost.
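For contrast with the Koremutake point above: base32's fixed-width code groups do admit a data-independent encoding.  The arithmetic-shift trick below sketches the structure of such a mapping (Python itself makes no timing guarantees, so this only illustrates that no branch or table lookup need depend on the secret value):

```python
def b32_char(v):
    """Map a 5-bit value to the RFC 4648 base32 alphabet (A-Z, 2-7)
    without a secret-dependent branch or table lookup."""
    assert 0 <= v < 32
    mask = (v - 26) >> 8  # -1 (all ones) if v < 26, else 0
    return chr(v + (ord('A') & mask) + ((ord('2') - 26) & ~mask))

print(''.join(b32_char(v) for v in range(32)))
# -> ABCDEFGHIJKLMNOPQRSTUVWXYZ234567
```

A variable-length syllable encoding has no analogous fixed-width structure: the amount of output consumed per input group depends on the input itself.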

I briefly considered a mixed-case encoding such as base64, but for my
immediate use case, that would have more than doubled the voice
transmission cost of each letter (e.g. “lowercase f, zero, uppercase
o” instead of “f0O”) without reducing the number of characters enough
to be worthwhile.  (Relying on mixed case would also prevent the reuse
of these patterns in contexts such as hostnames.)
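The length trade-off is easy to check with quick arithmetic: relative to base32, base64 shaves only a handful of characters off a 160-bit string, while (per the rough estimate above) mixed case more than doubles the spoken cost of many of them:

```python
import math

BITS = 160
b32_chars = math.ceil(BITS / 5)  # 5 bits per base32 character
b64_chars = math.ceil(BITS / 6)  # 6 bits per base64 character
print(b32_chars, b64_chars)  # 32 vs 27: base64 saves only 5 characters
```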


Robert Ransom

