[curves] XEdDSA specification
brian at briansmith.org
Thu Oct 27 23:40:33 PDT 2016
On Thu, Oct 27, 2016 at 7:44 AM, Trevor Perrin <trevp at trevp.net> wrote:
> On Thu, Oct 27, 2016 at 2:08 AM, Brian Smith <brian at briansmith.org> wrote:
> > In the motivation for the randomized scheme, the document says "However,
> > if the same message is signed repeatedly, a glitch that affects the
> > calculation of h could cause this to happen (an observation due to Benedikt
> > ...)". Could you provide a reference to a paper/message that explains what is
> > referred to here, and/or add a description of the issue to the paper?
> Sure, what do you think needs to be clarified, exactly? The math
> seems clear, I'm guessing you think "glitch" attacks need to be
> explained?
It wasn't clear to me that "glitch" referred to glitch/fault attacks.
BTW, more generally it isn't clear in the document what types of attacks
are in the threat model. A lot is said about constant-time execution, but it
seems like the scope might be bigger. Or perhaps protection from fault
attacks isn't a primary consideration for adding randomness, but
rather a nice side benefit?
> > In xeddsa_sign, why set:
> > r = hash1(a || M || Z)
> > instead of one of:
> > r = hash1(Z || a || M)
> > r = hash1(a || Z || M)?
> > It seems like it would be better to incorporate the randomness into the
> > state of the hash function as early as possible to reduce risk of some
> > (non-timing) differential side-channel attacks on the hashing step.
> I don't think it matters much.
> But if an attacker could choose M to cause a collision between M1 and
> M2 or biased output, even with unknown/randomized prefix, then hashing
> Z at the end might be better.
That's an unlikely scenario.
> But hashing Z at the end also makes the
> design look like EdDSA a little more, as the first two inputs are
> secret key, then message, same as EdDSA.
Consider this "best of both worlds" scheme:
r = hash1(a || [first half of Z] || M || [second half of Z])
I believe this has the benefit of being like EdDSA but also protects
against two different kinds of attacks.
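For concreteness, here is a minimal Python sketch of the orderings under discussion. `hash1` is stood in by plain SHA-512 (an assumption for illustration; the spec's hash1 is domain-separated), so this shows only the input orderings, not the real XEdDSA hash:

```python
import hashlib

def hash1(data: bytes) -> bytes:
    # Stand-in only: the spec's hash1 is a domain-separated hash,
    # approximated here by plain SHA-512 for illustration.
    return hashlib.sha512(data).digest()

def r_spec(a: bytes, M: bytes, Z: bytes) -> bytes:
    # Ordering used by the draft's xeddsa_sign: r = hash1(a || M || Z)
    return hash1(a + M + Z)

def r_interleaved(a: bytes, M: bytes, Z: bytes) -> bytes:
    # The "best of both worlds" ordering proposed above:
    # r = hash1(a || Z[:32] || M || Z[32:])
    assert len(Z) == 64
    return hash1(a + Z[:32] + M + Z[32:])
```

Either way r is a 64-byte digest; the only difference is where the randomness enters the hash state.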
> You mention "differential" side-channel attacks against the secret key
> "a", but adding randomness before the secret key would make those
> attacks easier if the randomness leaks (e.g. a predictable RNG), so
> that's not obviously an improvement with respect to side-channels.
If the RNG isn't predictable (i.e. if the randomness doesn't leak) then
isn't it better against those attacks, though? The document says that
unpredictable RNG output is a prerequisite, so shouldn't we assume it is?
Regardless of that, I don't see how it would be worse than the current
scheme, but I'm not an expert in this particular area. Either way, I think
some rationale for this aspect of the design would be helpful to include in
the document.
> > Why is Z fixed at 64 bytes? That is, why is its length not a function of
> > the size of the key and/or the digest function output length?
> Ed25519 was specified to reduce a 512-bit hash output to produce its
> nonce, so it seemed most consistent to add 512 bits of entropy. Since
> these values/sizes are also suitable for 448, we don't bother changing them.
> 512 bits of hash output / entropy is overkill in all these cases, but
> it's simple to specify and easy to implement. Taking 64 bytes from
> the RNG might add some defense in depth if the RNG is slightly weak.
OK. I think one might argue that it is overkill to use a Z larger than |p|,
and typically Z-like values are the same size as |p|, so it would be more
consistent with existing practice and more intuitive to use a |p|-sized Z.
Or, somebody who has read papers like |A| and the papers it cites might
argue that it might be useful to consider the block length of the hash
function when choosing the size of Z.
BTW, I'm not making an argument here. I just think it is strange to have
this constant presented without any justification in the paper.
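To make the overkill point concrete: assuming the 64-byte Z feeds a nonce that, like Ed25519's, is eventually reduced mod the group order q (that reduction step is my assumption, mirroring Ed25519's nonce derivation), only about |q| of the 512 bits survive:

```python
import secrets

# Order of the Ed25519 base point ("q" in the XEdDSA draft, "L" in RFC 8032).
q = 2**252 + 27742317777372353535851937790883648493

Z = secrets.token_bytes(64)      # the draft's fixed 64-byte Z: 512 random bits
z = int.from_bytes(Z, "little")
r = z % q                        # after reduction, at most q.bit_length() bits remain
assert 0 <= r < q and q.bit_length() == 253
```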
> > The security considerations say that the caller "must pass in a new
> > and random 64 byte value each time the signing function is called." I
> > suggest s/64 byte value/**Z**/. Also, it would be nice to clarify the
> > extent of the badness if a Z value were to be reused.
> OK, will note to clarify.
That would be great. Notice that in the email I'm replying to now, you seem
to intend that the scheme should be usable to some extent even with a
compromised RNG, which makes me wonder whether a good RNG is a requirement
or just a nice-to-have.
> > Would you be open to using a different name than `q` for the order of the
> > base point? `q` is commonly used in code and specifications to refer to what
> > this paper calls `p`. Another name would be less confusing. The EdDSA RFC
> > uses `L`, IIRC.
<snip reasonable explanation>
> So I think "p" and "q" are good choices, considering?
It is not a big deal, but IMO the more consistency between this and the
EdDSA specification, the better. I don't know why the EdDSA specification
uses "L", but I don't see any harm in using "L" here, and I do see a benefit
in making it easier to understand this specification relative to the EdDSA
one.
> > Section 2.5 says "since the first |p| bits encode an integer greater than
> > p." I think it would be useful to add a clarification such as "(This is why
> > it is required that 2**|p| - 1 - i > p.)" or otherwise clarify the motivation
> > for that precondition.
> OK, I was hoping that was clear, but will try to clarify.
I think it actually was pretty clear.
Since only small values of |i| are actually used in the spec, the main
takeaway the reader should make is that this scheme can't work for any p
which is a Mersenne prime, AFAICT. That doesn't seem significant given the
curves that people are presently using, but it might be useful to make it
crystal clear for anybody trying to use the scheme with different curves in
the future.
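The Mersenne-prime point can be checked directly. `encodable` below is a hypothetical helper, not from the spec; it just evaluates the section 2.5 precondition 2**|p| - 1 - i > p:

```python
def encodable(p: int, i: int) -> bool:
    # Section 2.5 precondition: 2**|p| - 1 - i must exceed p,
    # where |p| is the bit length of p.
    return 2**p.bit_length() - 1 - i > p

p25519 = 2**255 - 19    # field prime of curve25519: plenty of headroom
mersenne = 2**127 - 1   # a Mersenne prime: 2**|p| - 1 == p, so no headroom

assert encodable(p25519, 1)
assert not encodable(mersenne, 0)
```

For curve25519's p the condition holds for i up to 17, while for a Mersenne prime it already fails at i = 0.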
This link isn't working for me.