[messaging] Pour one out for "voice authentication"
Alfonso De Gregorio
adg at secyoure.com
Sun Jan 4 12:18:42 PST 2015
On Jan 1, 2015 8:34 PM, "Joseph Bonneau" <jbonneau at gmail.com> wrote:
> Shirvanian/Saxena's experiments found that untrained human subjects had
> no more capacity to distinguish between a "true" voice and a morphed voice
> than between a "true" voice and a "true" voice with different background
> noise. They also point out some other attack avenues: if you get enough of
> Alice's audio (for example, if you're reading a hex fingerprint) you don't
> even have to synthesize, just re-order samples you already have. And often
> in one direction you only need to synthesize "yes" or "looks good" if both
> parties don't read the fingerprint.
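The re-ordering attack quoted above can be sketched in a few lines. This is a purely illustrative toy (the per-digit "recordings" and the helper name are made up): if the attacker holds one recorded clip of Alice speaking each hex digit, an arbitrary fingerprint can be "spoken" by splicing clips together, with no synthesis at all.

```python
# Toy sketch of the sample-reordering attack: concatenate recorded
# per-digit clips to voice an arbitrary target fingerprint.
# The clips dict and fingerprint are illustrative placeholders,
# not real audio.

def splice_fingerprint(clips, fingerprint):
    """Concatenate recorded per-digit clips to 'speak' a fingerprint."""
    return b"".join(clips[ch] for ch in fingerprint.lower())

# stand-in "recordings": one fake audio chunk per hex digit
clips = {d: f"<{d}>".encode() for d in "0123456789abcdef"}

forged = splice_fingerprint(clips, "DEADBEEF")
print(forged)  # b'<d><e><a><d><b><e><e><f>'
```

The point of the sketch is that the attacker's cost is a lookup and a concatenation per digit, which is why reading a long hex string aloud hands over exactly the material needed to defeat the readout.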
I wonder: what is the false positive rate of a text-independent automatic
speaker identification system? Can it perform any better than human
subjects in the same operating setting? If so, can the user run such a
system inline and enlist its help to detect a MitM attack?
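A minimal sketch of what such an inline check might look like, under heavy assumptions: some front-end (not shown) turns incoming audio into a fixed-size speaker embedding, and the check simply compares it against an enrolled embedding by cosine similarity. The vectors, threshold, and function names here are all hypothetical; the false-positive rate of a real system would depend entirely on the embedding model and how the threshold is calibrated.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def same_speaker(enrolled, incoming, threshold=0.8):
    """Flag a possible MitM when the incoming voice drifts from the enrolled one."""
    return cosine(enrolled, incoming) >= threshold

# illustrative 3-d embeddings; real systems use hundreds of dimensions
alice_enrolled = [0.9, 0.1, 0.3]
session_voice  = [0.88, 0.12, 0.31]   # close to Alice's enrolled voice
morphed_voice  = [0.1, 0.9, 0.2]      # far from Alice's enrolled voice

print(same_speaker(alice_enrolled, session_voice))   # True
print(same_speaker(alice_enrolled, morphed_voice))   # False
```

Of course, the interesting question in the thread is exactly whether such a verifier's decision boundary holds up against a morphing attacker, i.e. whether it does better than the human subjects did.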
More fundamentally, if the offense converts a crypto problem into an AI
problem, should we turn to AI to defend?