[curves] Costs of cofactor > 1

Trevor Perrin trevp at trevp.net
Fri Dec 26 02:57:37 PST 2014


On Wed, Dec 24, 2014 at 10:55 AM, Michael Hamburg <mike at shiftleft.org> wrote:
> Hi Trevor,
>
> I don’t think that cofactor > 1 is a huge danger, and I know that to some degree my encoding is a solution in search of a problem.  That’s why I want to write a paper about it, get some feedback, and only then consider putting it into the Goldilocks software (at least for purposes beyond testing/benchmarking).  But there is a disadvantage to the cofactor: it adds complexity to the specification, implementation and validation of the algorithms, and sometimes that complexity is outright ignored.

Agreed there are disadvantages.  I'm just trying to get a better sense of them.


> For example, consider https://tools.ietf.org/html/draft-ladd-spake2-00: "Let G be a group in which the Diffie-Hellman problem is hard of prime order p, written additively.”  This comes from the original Abdalla-Pointcheval paper:  "In the following, we assume a finite cyclic group G of prime order p generated by an element g.”  How do we need to modify this protocol if G doesn’t have prime order?  Is it safe without clearing the cofactor?

Even without the cofactor, translating SPAKE2 to the real world is a
lot of work:
 - What validation is needed for public values?  Can you skip
validation by using Montgomery-x and relying on twist security?  What
about the identity?
 - How are M and N chosen?
 - How to add features like simultaneous/symmetric message exchange
and augmentation, and prove them secure
 - Integration into (TLS, SSH, etc): message ordering, ciphersuite
negotiation, key derivation, encoding public values and user identity,
exchanging salt, password hashing
 - Misc security: sidechannels, determinism, DoS, nothing-up-sleeves, etc.
 - IPR research
 - Comparing against alternatives (Elligator, SPEKE, etc)

Compared to all that, thinking through the cofactor doesn't seem like
a big deal.  And I think this will be true in other complex protocols
(like MQV).  Anyways, protocols get designed once and implemented many
times, so I'd put more weight on ease-of-implementation than
ease-of-design.
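
For concreteness, here's a toy sketch of the SPAKE2 flow and the one
spot where a cofactor decision shows up.  The "group" below is just the
integers mod h*q under addition -- insecure, but it has the same shape
as the real thing (a prime-order subgroup of size q, cofactor h) -- and
every name and parameter in it is made up purely for illustration:

    # Toy SPAKE2 sketch.  Group = integers mod n = h*q under addition
    # (insecure; used only so the algebra is runnable and visible).
    import hashlib, secrets

    q = 101        # prime subgroup order (toy value)
    h = 4          # cofactor
    n = h * q      # full group order
    G = 4          # generator of the order-q subgroup
    M, N = 8, 12   # fixed blinding elements, also in the order-q subgroup

    def add(P, Q): return (P + Q) % n
    def mul(k, P): return (k * P) % n

    def spake2_msg(scalar, pw, blind):
        # X = scalar*G + pw*M  (the other side uses N instead of M)
        return add(mul(scalar, G), mul(pw, blind))

    def spake2_key(scalar, pw, peer_msg, peer_blind):
        # K = scalar * (peer_msg - pw*peer_blind)
        K = mul(scalar, (peer_msg - mul(pw, peer_blind)) % n)
        # If the group has cofactor > 1, this is where you must decide
        # whether to clear it (K = mul(h, K)) so low-order components
        # can't influence the derived key.  With a prime-order group the
        # question never comes up.
        K = mul(h, K)
        # (Real SPAKE2 also hashes the transcript and identities into
        # the key; omitted here.)
        return hashlib.sha256(str(K).encode()).digest()

    pw = 7
    a, b = secrets.randbelow(q), secrets.randbelow(q)
    X, Y = spake2_msg(a, pw, M), spake2_msg(b, pw, N)
    assert spake2_key(a, pw, Y, N) == spake2_key(b, pw, X, M)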


> There is also the issue of encoding completeness.  It’s not usually necessary to serialize the identity — after all, it should come up with negligible probability unless you’re under attack — but papers and proofs often assume that the identity can be serialized and that the point encoding is 1:1, and these often aren’t both true.  (Either the identity can’t be serialized, or it is serialized the same as the 2-torsion point.)  What’s the impact of this?  Probably nothing, but I don’t know for sure.

Sure, but the identity is a special case regardless of cofactor, e.g.

"With a prime-order short Weierstrass curve, (x,y) points transmitted
on the wire (compressed or otherwise) cannot be the identity, nor can
they have small order."
https://www.ietf.org/mail-archive/web/cfrg/current/msg05571.html

Goldilocks' "encoding completeness" might be better, or it might be
worse, because now the implementor and protocol designer need to
consider receiving the identity.

Cofactor > 1 might be even worse (there are a few low-order points
instead of just the identity), but since the protocol designer has to
think through the identity anyways, it's probably not that different.
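
As a sketch of the kind of check this forces the protocol designer to
spell out -- the point operations here are placeholders, not any
particular library's API:

    def reject_low_order(P, cofactor, point_mul, is_identity):
        # P has order dividing the cofactor exactly when cofactor*P is
        # the identity; such points (including the identity itself)
        # contribute nothing unpredictable to a DH output.
        if is_identity(P):
            raise ValueError("identity point received")
        if is_identity(point_mul(cofactor, P)):
            raise ValueError("small-order point received")
        return P

    # Toy usage over integers mod 404 (cofactor 4, subgroup order 101):
    mul = lambda k, P: (k * P) % 404
    ident = lambda P: P == 0
    reject_low_order(5, 4, mul, ident)      # accepted: full-order element
    # reject_low_order(101, 4, mul, ident)  # raises: order-4 element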


> There’s also the ed vs twisted-ed performance issue in 3-mod-4 fields.  My isogeny strategy mostly handles this, and libraries implementing only ECDH-like protocols don’t even need to implement the untwisted Edwards curve, which makes them simpler.  But sometimes you may not want to multiply by 2 or 4 in a particular place, and then you will need to implement both twisted and untwisted curve arithmetic.

OK.  That seems like a significant advantage to your proposal, though
I'm not sure what costs are being added elsewhere.

This seems separate from the cofactor > 1 question, though.  Maybe
your bigger selling point is getting twisted Edwards without the
4-isogeny, rather than cofactor removal?


> Perhaps the biggest advantage of cofactor removal is to factor EC libraries.  The current Goldilocks code is riddled with isogeny- and cofactor-related functions like “untwist_and_double_and_serialize” and “deserialize_and_twist_approx”.  I also wanted to make a generic scalarmul function, but then there is a question of, do you clear the cofactor?  Don’t clear it?  Remove the cofactor component without multiplying by it?  Have a flag to the function which says what to do?  What about for multiple scalar multiplication (linear combo)?  Should the precomputed combs expose this sort of option too?  If there’s no cofactor, you don’t need any of this.

Again, isn't this due to your "isogeny strategy" more than the cofactor?

Curve25519 ECDH specifies private keys as a multiple of the cofactor.
That implies scalarmults don't do anything special with the cofactor.
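
That key formatting is the usual Curve25519 "clamping": the low three
bits of the scalar are cleared, so every private key is a multiple of
the cofactor 8 and any low-order component in the peer's public key is
wiped out by the scalarmult itself.  Roughly:

    def clamp_curve25519(k: bytes) -> bytes:
        # Standard Curve25519 private-key clamping (little-endian bytes).
        assert len(k) == 32
        k = bytearray(k)
        k[0] &= 248     # clear the low 3 bits -> scalar is 0 mod 8
        k[31] &= 127    # clear bit 255
        k[31] |= 64     # set bit 254
        return bytes(k)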

Your situation is different - because of the 3 mod 4 prime, you're
using an untwisted Edwards curve with a 4-isogeny to convert to faster
twisted Edwards math, so you're overlapping the 4-isogeny with
clearing the cofactor.  That makes sense, but it's more complicated
and makes your scalarmults more "cofactor-aware".

Anyways, if you can eliminate that complexity while retaining the
twisted Edwards benefits, that sounds great, but it seems different
from a "cofactor sucks" argument.


> The protocols in the Goldilocks test code have the same sorts of issues: it’s not always clear when they’re doubling for safety, and when they’re doubling due to the isogeny strategy or due to what I had bothered to implement.  I admit that my own sloppiness is a bad argument, but it does reflect my experience that the cofactor can be annoying, and it grounds my motivation to develop a workaround.  You also struggled with cofactor issues in the TripleDH design.

That's a good case study:  TextSecure key agreement is based on a
"TripleDH", and identities are public-key fingerprints.  We avoided
hashing identities into the session key due to IPR concerns regarding
KEA+.

To prevent Curve25519 public keys from having a few equivalent
encodings (due to the cofactor) we considered setting private keys to
1 mod cofactor instead of 0 mod cofactor (based on discussion with
Mike and others).  But we decided the simpler fix was just to hash the
identities into message MACs (not the session key).
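
For comparison, a hypothetical sketch of the alternative that was
considered (and not shipped): force the scalar to 1 mod 8 instead of
0 mod 8, so a low-order component added to a public key survives the
scalarmult and "equivalent" encodings stop producing identical shared
secrets:

    def clamp_1_mod_8(k: bytes) -> bytes:
        # Hypothetical "1 mod cofactor" clamping, shown only to contrast
        # with standard 0-mod-8 clamping; not a deployed scheme.
        assert len(k) == 32
        k = bytearray(k)
        k[0] = (k[0] & 248) | 1   # scalar is 1 mod 8
        k[31] &= 127
        k[31] |= 64
        return bytes(k)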

This was certainly an annoyance.  But ultimately, the cofactor had
extremely minor security implications in this instance and was easily
dealt with.

A much bigger problem is the lack of 25519 support in the platforms
and crypto libraries we want to integrate with.  So I care much more
about having a couple of high-quality, simple curves that we can
drive to ubiquity.


> You asked specifically about batch verification.  I’m not aware of deployed systems that need or use batch verification.  The EVITA vehicle-to-vehicle specs do require an extremely high rate of verification, but since they use ECDSA, no batching is possible.  It might or might not be desirable in that setting, because there are constraints on latency as well as throughput, but having an option would be nice.

In what I could find on vehicle-to-vehicle, latency seemed more important.

Bitcoin is a possible case, but the numbers cited below don't seem big
enough to need it:

https://en.bitcoin.it/wiki/Scalability

Having looked at Ed25519 batch verification now, I like it even less:
 * requires a secure RNG
 * doesn't tell you which signature(s) failed (see the sketch below)
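
A sketch of the batch equation from the Ed25519 paper makes both
drawbacks visible; the point operations and the parsed (R, s, A, h)
signature fields below are placeholders, not a real implementation:

    import secrets

    L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order

    def batch_verify(B, sigs, point_mul, point_add, is_identity):
        # sigs: list of (R, s, A, h) with h = H(R || A || M) mod L
        z = [secrets.randbits(128) for _ in sigs]  # random weights: needs a secure RNG
        lhs = point_mul((-sum(zi * s for zi, (R, s, A, h) in zip(z, sigs))) % L, B)
        for zi, (R, s, A, h) in zip(z, sigs):
            lhs = point_add(lhs, point_mul(zi, R))
            lhs = point_add(lhs, point_mul((zi * h) % L, A))
        # Multiply by the cofactor 8 so low-order components can't cause
        # spurious failures.  If this single check fails, all you learn
        # is that some signature in the batch is bad -- not which one.
        return is_identity(point_mul(8, lhs))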

I still think the mainstream could ignore batch verification, and an
unusual protocol that wants it could mandate cofactor-clearing
signature verification.

Trevor

