[curves] XEdDSA specification

Brian Smith brian at briansmith.org
Sat Oct 29 23:39:58 PDT 2016


On Fri, Oct 28, 2016 at 6:47 AM, Trevor Perrin <trevp at trevp.net> wrote:

> On Fri, Oct 28, 2016 at 2:40 AM, Brian Smith <brian at briansmith.org> wrote:
> > On Thu, Oct 27, 2016 at 7:44 AM, Trevor Perrin <trevp at trevp.net> wrote:
> >>
> >> Sure, what do you think needs to be clarified, exactly?  The math
> >> seems clear, I'm guessing you think "glitch" attacks need to be
> >> defined?
> >
> >
> > It wasn't clear to me that "glitch" referred to glitch/fault attacks.
>
> I'll clarify that, you're right that "fault" is the more generic term.
>

Simply put, I was unsure whether "glitch" was being used colloquially to
indicate some unspecified problem, or whether it referred to glitch attacks
in general, a specific type of glitch attack, fault attacks in general, or
faults in general (accidental or induced).


> >> > Why is Z fixed at 64 bytes? That is, why is its length not a function
> >> > of the size of the key and/or the digest function output length
> [...]
> > OK. I think one might argue that it is overkill to use a Z larger than
> > |p|, and typically Z-like values are the same size as |p|, so it would be
> > more consistent with existing practice and more intuitive to use a
> > |p|-sized Z.
>
> For discrete-log signatures you need to make sure the nonce is
> unbiased, so one technique is to reduce a value much larger than the
> subgroup order by the subgroup order (e.g. FIPS-186 recommends
> choosing a value that's 64 bits larger).
>

The thing that is reduced mod q is the SHA-512 digest, not Z, right? I
don't see how the FIPS-186-suggested technique applies.
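
For concreteness, here is a rough Python sketch of the nonce derivation as I
read the draft; the hash_1 prefix encoding and all names here are my
assumptions, not quoted from the spec:

    import hashlib
    import secrets

    # Prime order q of the base-point subgroup for curve25519.
    q = 2**252 + 27742317777372353535851937790883648493

    def derive_nonce(a_bytes: bytes, message: bytes, Z: bytes) -> int:
        # hash_1(X) = SHA-512((2^256 - 2) || X) is my reading of the draft's
        # hash_i construction; the little-endian prefix encoding is an
        # assumption on my part.
        prefix = (2**256 - 2).to_bytes(32, "little")
        digest = hashlib.sha512(prefix + a_bytes + message + Z).digest()
        # The 512-bit digest, not Z itself, is what gets reduced mod q; since
        # 512 bits exceeds the ~253-bit q by far more than the 64-bit margin
        # FIPS-186 asks for, the bias of the result is negligible.
        return int.from_bytes(digest, "little") % q

    # A fixed 64-byte Z, as the draft specifies.
    Z = secrets.token_bytes(64)
    r = derive_nonce(secrets.token_bytes(32), b"message", Z)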

In your justification for why it is fixed at 64 bytes, you imply that it is
kinda-sorta a function of |q|. Notice that 64 bytes = 512 bits = 448 + 64.
Using the above reasoning, would a 448-bit curve be the maximum supported
by this scheme? Or, for a 512-bit curve, would we require a Z that is at
least 512 + 64 = 576 bits = 72 bytes?

To a large extent, it probably doesn't matter. But if you have an
implementation of X25519/Ed25519 for a very constrained device, you might
have a function for gathering exactly 32 bytes of random data to use as a
key, and it seems like it would be nice to be able to reuse that exact same
function for generating Z, if a Z of size |q| is sufficient for the
security goals.
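
Something like this sketch is what I have in mind (random_32_bytes is a
hypothetical stand-in for whatever routine the device already exposes):

    import secrets

    def random_32_bytes() -> bytes:
        # Hypothetical constrained-device primitive: exactly 32 bytes of
        # fresh randomness, already used for X25519/Ed25519 key generation.
        return secrets.token_bytes(32)

    private_key = random_32_bytes()   # existing use
    # If a |q|-sized Z were sufficient, signing could reuse the same routine
    # instead of needing a separate 64-byte gathering path:
    Z = random_32_bytes()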


> > [I n]otice in the email I'm replying to now, you seem to be
> > intending that the scheme should be usable even with a compromised RNG to
> > some extent, which makes me wonder whether a good RNG is a requirement or
> > just nice-to-have.
>
> It's intended to be secure even with a weak RNG, but with more
> assumptions about the hash, and private scalar, and exotic attacks
> (per above) that randomization helps with.  So I'd rather not give
> people the idea the RNG is optional, or could be skimped on.
>

Understood. But, the questions that entered my mind when reading the
document were more along these lines:

1. Is using XEdDSA with a deterministic nonce equally, less, or more safe
than using EdDSA (with a deterministic nonce)?

2. Ditto, for VXEdDSA.

3. Is using XEdDSA with an actively malicious (attacker-controlled) RNG
equally, less, or more safe than using EdDSA (with a deterministic nonce)?

4. Ditto for VXEdDSA.

Cheers,
Brian
-- 
https://briansmith.org/