[messaging] Pour one out for "voice authentication"
mike at shiftleft.org
Sun Jan 4 19:27:12 PST 2015
But an automated speaker identification system might be looking at the same features that the voice-morphing software changes. In that case it would offer little advantage over a human listener, beyond perhaps having been trained on a larger corpus.
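A toy sketch of that point (NumPy only, synthetic tones, not a real speaker-ID system; all names and parameters here are illustrative assumptions): a verifier that matches averaged spectral features accepts the genuine speaker under fresh background noise, but any morph that shifts those same spectral peaks moves the verifier's features right along with it.

```python
import numpy as np

def spectral_features(signal, frame=512):
    # Naive text-independent "voiceprint": the average log power
    # spectrum over Hann-windowed frames (a crude stand-in for MFCCs).
    n = len(signal) // frame * frame
    frames = signal[:n].reshape(-1, frame) * np.hanning(frame)
    spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log1p(spectra).mean(axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)
# Hypothetical "Alice": a harmonic stack standing in for her voice.
alice  = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)
alice2 = alice + 0.02 * rng.standard_normal(t.size)  # same voice, new noise
# A "morph" that shifts the spectral peaks: the features move with it.
morph  = np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 400 * t)

same = cosine(spectral_features(alice), spectral_features(alice2))
diff = cosine(spectral_features(alice), spectral_features(morph))
print(same, diff)
```

The caveat cuts both ways: if the morphing software is tuned to reproduce these very features, `diff` climbs toward `same`, and the automated verifier fails for the same reason the human listeners in Shirvanian/Saxena's experiments did.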
> On Jan 4, 2015, at 12:18 PM, Alfonso De Gregorio <adg at secyoure.com> wrote:
> On Jan 1, 2015 8:34 PM, "Joseph Bonneau" <jbonneau at gmail.com <mailto:jbonneau at gmail.com>> wrote:
> > Shirvanian/Saxena's experiments found that untrained human subjects had no more capacity to distinguish between a "true" voice and a morphed voice than between a "true" voice and a "true" voice with different background noise. They also point out some other attack avenues: if you get enough of Alice's audio (for example, if she is reading a hex fingerprint), you don't even have to synthesize, just re-order samples you already have. And often in one direction you only need to synthesize "yes" or "looks good", if both parties don't read the fingerprint.
> I wonder: what is the false positive rate of a text-independent automatic speaker identification system? Can it perform any better than human subjects in the same operational setting? If yes, can the user run such a system inline and enlist its help to detect a MitM attack?
> More fundamentally, if the offense converts a crypto problem into an AI problem should we turn to AI to defend?