[curves] Costs of cofactor > 1
rransom.8774 at gmail.com
Fri Dec 26 04:18:31 PST 2014
On 12/26/14, Trevor Perrin <trevp at trevp.net> wrote:
> On Wed, Dec 24, 2014 at 10:55 AM, Michael Hamburg <mike at shiftleft.org>
>> There’s also the ed vs twisted-ed performance issue in 3-mod-4 fields. My
>> isogeny strategy mostly handles this, and libraries implementing only
>> ECDH-like protocols don’t even need to implement the untwisted Edwards
>> curve, which makes them simpler. But sometimes you may not want to
>> multiply by 2 or 4 in a particular place, and then you will need to
>> implement both twisted and untwisted curve arithmetic.
> OK. That seems like a significant advantage to your proposal, though
> I'm not sure what costs are being added elsewhere.
The added cost is that his easy-to-use point encoding formula involves
a square root, as the Goldilocks/Montgomery Station encoding and
Elligator do, rather than merely an inversion (which can be shared
across an arbitrarily large batch of points) as with the more standard
Edwards encodings.
(He has also pointed out that since this point format is
isogeny-based, one might be able to compute an encoding of P from P/2
without the square root; that should require one inversion per batch
of points and one Legendre symbol per point, and may be faster.)
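The "inversion shared over a batch of points" above is Montgomery's
batch-inversion trick: n field inversions cost one inversion plus
3(n-1) multiplications. A minimal sketch (function name illustrative):

```python
def batch_invert(xs, p):
    # Montgomery's trick: compute prefix products, invert the full
    # product once, then peel off individual inverses by multiplying
    # back through in reverse order.
    n = len(xs)
    prefix = [1] * (n + 1)
    for i, x in enumerate(xs):
        prefix[i + 1] = prefix[i] * x % p
    inv = pow(prefix[n], -1, p)  # the single modular inversion
    out = [0] * n
    for i in range(n - 1, -1, -1):
        out[i] = prefix[i] * inv % p  # inverse of xs[i]
        inv = inv * xs[i] % p         # drop xs[i] from the running inverse
    return out
```

This is why inversion-based encodings amortize well over many points,
while a square root must be computed per point.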
Decoding to Edwards form (with the formulas he posted originally) is
simpler than with an Edwards-based or Montgomery-based point format
(it requires a square root rather than a square-root-of-fraction
operation), and faster over a Golden trinomial or Mersenne or special
Montgomery field (as pointed out in eprint 2012/309, a square root is
more efficient in those fields than inversion by exponentiation or
square-root-of-fraction because (q+1)/4 is divisible by, or is, a
large power of 2).
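To illustrate the eprint 2012/309 observation: for a Mersenne prime
q = 2^k - 1 we have q ≡ 3 (mod 4) and (q+1)/4 = 2^(k-2), so the usual
square-root exponentiation x^((q+1)/4) collapses into k-2 plain
squarings. A sketch (function name illustrative), using k = 127 since
2^127 - 1 is prime:

```python
def sqrt_mersenne(a, k):
    # Square root mod the Mersenne prime q = 2^k - 1.  Since
    # (q+1)/4 = 2^(k-2), a^((q+1)/4) is just k-2 repeated squarings;
    # no general-purpose exponentiation ladder is needed.
    # (Returns a valid root only if a is a quadratic residue.)
    q = (1 << k) - 1
    r = a % q
    for _ in range(k - 2):
        r = r * r % q
    return r
```

The same shortcut applies whenever (q+1)/4 is, or is divisible by, a
large power of 2, which is what makes square roots cheap in these
fields relative to inversion by exponentiation.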
> This seems separate from the cofactor > 1 question, though. Maybe
> your bigger selling point is getting twisted Edwards without the
> 4-isogeny, rather than cofactor removal?
The reason that this new encoding makes unpacking to the a=-1 twist
without doubling the input point safe is that it cannot unpack to any
point of even order.
> Having looked at Ed25519 batch verification now, I like it even less:
> * requires a secure RNG
As the Ed25519 paper points out for signature nonces and Dr.
Bernstein's ‘badbatch’ paper points out for batch-verification
randomizers, one can hash the inputs with a long-term secret key to
generate the randomizers without an RNG.
But the ‘Fiat-Shamir heuristic’ should justify using a
*non-randomized* hash of the input batch of signatures and public keys
to generate the sequence of batch-verification randomizers.
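A sketch of that derandomization, with a keyed hash over the whole
batch seeding the per-signature randomizers (function name, hash
choice, and randomizer width are illustrative, not from either paper):

```python
import hashlib

def derive_randomizers(batch, key=b"", bits=128):
    # Derive one randomizer per (pubkey, message, signature) triple by
    # hashing the entire batch, optionally keyed with a long-term
    # secret.  With key=b"" this is the non-randomized, Fiat-Shamir-style
    # variant; with a secret key it matches the keyed-hash approach.
    h = hashlib.sha512(key)
    for pubkey, msg, sig in batch:
        h.update(pubkey)
        h.update(msg)
        h.update(sig)
    seed = h.digest()
    zs = []
    for i in range(len(batch)):
        zi = hashlib.sha512(seed + i.to_bytes(8, "little")).digest()
        zs.append(int.from_bytes(zi[: bits // 8], "little"))
    return zs
```

Because every randomizer depends on every signature in the batch, an
attacker cannot choose a forged signature after learning its
randomizer, which is the property the RNG was providing.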
> * doesn't tell you which signature(s) failed
There are applications where one does not care which of two or more
signatures received during a transaction is invalid, only whether all
of the signatures are valid. One example is TLS certificate-chain
verification, where (if certificates used a batch-verifiable signature
scheme) all of the signatures over each group used could be verified
together. Another example is uploading a server descriptor signed
using a short-term signing key, whose signature-verification key and
validity period are in turn signed offline using a long-term signing
key -- if either signature is invalid, the descriptor should be
rejected.
The ‘badbatch’ paper specifies an algorithm which can identify which
signatures in a batch are invalid, but I think that even without that
algorithm, batch verification is useful enough to justify supporting
it.
> I still think the mainstream could ignore batch verification, and an
> unusual protocol that wants it could mandate cofactor-clearing
> signature verification.
Either behaviour (cofactor clearing or exact verification) can be
checked using test vectors, but the only behaviour that I know how to
enforce in the field is cofactor clearing. When it is possible to
break interoperability with incorrect implementations, that's better
than hoping that all implementations will be tested properly.
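The difference between the two behaviours can be shown in a toy model:
the additive group Z_{8q} standing in for a curve with cofactor 8,
where B generates the prime-order subgroup and t is an order-8 torsion
element. All numbers and names here are illustrative, not a real
signature scheme:

```python
q = 101          # toy prime subgroup order
N = 8 * q        # toy "curve" group Z_N, cofactor 8
B = 8            # generator of the order-q subgroup
t = q            # torsion element of order 8

def verify(A, R, s, h, cofactored):
    # Schnorr-style check: s*B - R - h*A should vanish.  Exact
    # verification requires it to be zero; cofactor-clearing
    # verification multiplies by 8 first, killing any torsion part.
    v = (s * B - R - h * A) % N
    if cofactored:
        v = 8 * v % N
    return v == 0

a = 42                 # secret key
A = a * B % N          # public key
r = 7                  # nonce
R = r * B % N          # commitment
h = 13                 # stand-in for the hash value
s = (r + h * a) % q    # response

assert verify(A, R, s, h, cofactored=False)   # honest sig: both accept
assert verify(A, R, s, h, cofactored=True)

R_bad = (R + t) % N    # add a small-order component to R
assert not verify(A, R_bad, s, h, cofactored=False)  # exact rejects
assert verify(A, R_bad, s, h, cofactored=True)       # cofactored accepts
```

Test vectors built from values like R_bad distinguish the two
behaviours: a cofactor-clearing verifier accepts them and an exact
verifier does not, which is why only the former can be pinned down
across deployed implementations.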