[messaging] Test Data for the Usability Study

Trevor Perrin trevp at trevp.net
Wed May 28 12:47:09 PDT 2014


On Sun, May 25, 2014 at 5:15 PM, Tom Ritter <tom at ritter.vg> wrote:
> Hey all!
>
> Christine and I have opened 6 issues at
> https://github.com/tomrittervg/crypto-usability-study/issues to
> produce test data for the usability study.  We have until JULY 15TH to
> produce this data.

Looking at this, the number of choices we have to make is scary.  For
each of the 5 fingerprint representations there are parameters (a rough
rendering sketch follows the list):

(1) base16 chars
 - uppercase or lowercase?
 - grouping? (16x2? 8x4? 4x8? irregular?)

(2) base32 pseudowords [1]
 - alphabet? (RFC 4648? z-base-32? other?)
 - grouping? (5x5? 6-4-5-4-6?)
 - scoring? (vowel-consonant alternation?)
 - search time (how many seconds to generate a fingerprint?)

(3) English words
 - uppercase or lowercase?
 - wordlist (diceware? basic english? mnemonicode [2,3]?)

(4) English sentences
 - uppercase or lowercase?
 - Michael Rogers' poems [4]; anything else?
 - padding sentences when we run out of bits [5]?

(5) visual
 - OpenSSH Random Art?  Hash Visualization?  Vash? [6,7,8]?
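
To make the parameter space concrete, here's a rough Python sketch
rendering one 128-bit fingerprint a few of these ways.  The grouping
sizes, base32 handling, and dummy wordlist are placeholders I picked
for illustration, not proposals:

    import base64
    import hashlib

    digest = hashlib.sha256(b"example public key").digest()[:16]   # 128 bits

    # (1) base16, uppercase, grouped 8x4
    h = digest.hex().upper()
    print(" ".join(h[i:i+4] for i in range(0, len(h), 4)))

    # (2) base32, RFC 4648 alphabet, grouped in 5s (note the irregular
    # tail; a real pseudoword scheme would also score candidates for
    # pronounceability [1])
    b32 = base64.b32encode(digest).decode().rstrip("=").lower()
    print("-".join(b32[i:i+5] for i in range(0, len(b32), 5)))

    # (3) English words: successive 11-bit chunks indexed into a
    # 2048-entry wordlist (dummy list here; a real one would be
    # Diceware, Basic English, mnemonicode, etc.)
    WORDLIST = ["apple", "baker", "cargo", "delta"] * 512
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(128)
    print(" ".join(WORDLIST[int(bits[i:i+11], 2)]
                   for i in range(0, 121, 11)))   # 11 words cover 121 bits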

That's a lot of variables.  If we just choose them arbitrarily, I worry
that testing won't tell us much about the general approaches, only
about how good our particular choices were.

Ideally there would be initial testing to identify good parameters for
each method.  Since these tests would be a lot simpler (a single
variable each, like upper- vs lowercase, or the size of char groups),
maybe they'd be easier to design and run on M-Turk?
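
Generating stimuli for a single-variable pretest is pretty trivial,
e.g. rendering the same fingerprints under each candidate hex grouping
(group sizes and file names below are made up):

    import secrets

    def hex_grouped(fp, group):
        h = fp.hex().upper()
        return " ".join(h[i:i+group] for i in range(0, len(h), group))

    # the same 50 random fingerprints rendered under each condition
    fingerprints = [secrets.token_bytes(16) for _ in range(50)]
    for name, group in {"16x2": 2, "8x4": 4, "4x8": 8}.items():
        with open("stimuli_%s.txt" % name, "w") as f:
            for fp in fingerprints:
                f.write(hex_grouped(fp, group) + "\n")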

Simulating 2^80 work-factor "fuzzy match" attacks is also going to
involve a bunch of decisions.
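
For instance, one cheap way to model such an attacker, instead of
actually doing 2^80 hashes, is to assume 2^80 tries buys an exact match
on roughly 80 bits (20 hex chars) of the target, with the rest coming
out random.  Whether that prefix-matching strategy is even the right
attacker model is one of the decisions I mean:

    import hashlib
    import secrets

    def simulated_attack_fingerprint(target_hex, matched_bits=80):
        # assume ~2^80 work pins the first matched_bits bits of the
        # fingerprint; the remainder is random
        matched_chars = matched_bits // 4
        tail_len = len(target_hex) - matched_chars
        tail = secrets.token_bytes((tail_len + 1) // 2).hex()[:tail_len]
        return target_hex[:matched_chars] + tail

    target = hashlib.sha256(b"victim key").hexdigest()[:32]   # 128-bit fp
    print(target)
    print(simulated_attack_fingerprint(target))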

I think that for the text methods we can maybe come up with visual /
phonetic similarity metrics that are reasonably comparable.  But I
dunno about visual fingerprints; that seems like a research project in
itself.  Unless someone has a lot of time to work on it, the visual
methods may be too much to tackle.
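
E.g. a text-fingerprint similarity metric could be as simple as
per-character scoring with partial credit for lookalike characters.
The confusability table here is a made-up example, not based on any
perception data; something analogous at the phoneme level could cover
the "phonetic" side, but calibrating either against actual human
judgments is part of the work:

    # pairs of characters treated as visually confusable (illustrative guess)
    CONFUSABLE = {("8", "B"), ("B", "8"), ("0", "D"), ("D", "0"),
                  ("1", "7"), ("7", "1"), ("5", "S"), ("S", "5")}

    def visual_similarity(a, b):
        # fraction of positions that match, counting near-lookalikes as half
        assert len(a) == len(b)
        score = 0.0
        for x, y in zip(a, b):
            if x == y:
                score += 1.0
            elif (x, y) in CONFUSABLE:
                score += 0.5
        return score / len(a)

    print(visual_similarity("8F3A07C1", "BF3AD7C1"))   # 0.875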


Trevor

