[curves] Costs of cofactor > 1
Michael Hamburg
mike at shiftleft.org
Wed Dec 24 10:55:08 PST 2014
Hi Trevor,
I don’t think that cofactor > 1 is a huge danger, and I know that to some degree my encoding is a solution in search of a problem. That’s why I want to write a paper about it, get some feedback, and only then consider putting it into the Goldilocks software (at least for purposes beyond testing/benchmarking). But there is a disadvantage to the cofactor: it adds complexity to the specification, implementation, and validation of the algorithms, and sometimes that complexity is outright ignored.
For example, consider https://tools.ietf.org/html/draft-ladd-spake2-00: "Let G be a group in which the Diffie-Hellman problem is hard of prime order p, written additively.” This comes from the original Abdalla-Pointcheval paper: "In the following, we assume a finite cyclic group G of prime order p generated by an element g.” How do we need to modify this protocol if G doesn’t have prime order? Is it safe without clearing the cofactor? (Possibly, if M and N have prime order, but I haven’t proved this; the Elligator enhancement definitely needs cofactor clearing.) Is clearing the cofactor enough? (Yes, but you need to prove it.)
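To make the question concrete, here is a toy sketch in Python of where the cofactor clearing would go in a SPAKE2-style exchange. It uses the multiplicative group mod 23 (cofactor 2) as a stand-in for a curve group, and g, M, N, w and the secrets are arbitrary placeholders rather than anything from the draft:

# Toy SPAKE2-style flow with an explicit cofactor. The multiplicative group
# mod 23 (order 22 = 2 * 11) stands in for a curve group: cofactor h = 2,
# prime-order subgroup of size q = 11; g, M, N, w, x, y are all placeholders.
p, q, h = 23, 11, 2
g = 4                        # generator of the order-11 subgroup
M, N = 5, 7                  # public blinding elements (arbitrary here)
w = 3                        # stand-in for the hashed password

x, y = 6, 9                                   # ephemeral secrets
X = pow(g, x, p) * pow(M, w, p) % p           # A sends X = g^x * M^w
Y = pow(g, y, p) * pow(N, w, p) % p           # B sends Y = g^y * N^w

# Each side unblinds the peer's element and raises to h * (own secret), so
# any low-order component is cleared before the key is derived.
K_a = pow(Y * pow(pow(N, w, p), p - 2, p) % p, h * x, p)
K_b = pow(X * pow(pow(M, w, p), p - 2, p) % p, h * y, p)
assert K_a == K_b == pow(g, h * x * y, p)
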
What about the competition, the ever-questionable Dragonfly PAKE? https://tools.ietf.org/html/draft-irtf-cfrg-dragonfly-02 says: "Elliptic curve groups used with dragonfly authentication MUST have a co-factor of one (1).” How do we modify this protocol for curves with a cofactor? Better not to use it at all, but at least there are no currently known flaws when the cofactor is 1.
What about MQV? The base MQV is specified to work with a cofactor, but some variants are not. The FHMQV paper says: "The following notations are used in this paper: {\cal G} is a multiplicatively written cyclic group of prime order q generated by G…” It can surely be modified to work with a cofactor, but you’d need to check the proof of security.
There is also the issue of encoding completeness. It’s not usually necessary to serialize the identity — after all, it should come up with negligible probability unless you’re under attack — but papers and proofs often assume that the identity can be serialized and that the point encoding is 1:1, and these often aren’t both true. (Either the identity can’t be serialized, or it is serialized the same as the 2-torsion point.) What’s the impact of this? Probably nothing, but I don’t know for sure.
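As a concrete instance of the non-1:1 case: with an x-only Montgomery encoding, the identity has no affine coordinate at all and conventionally comes out as zero, which is also the encoding of the order-2 point (0, 0). A rough Python transcription of the RFC 7748 ladder (not constant-time, purely illustrative) shows this:

P25519 = 2**255 - 19
A24 = 121665

def x25519(k, u):
    # Straightforward transcription of the RFC 7748 ladder.
    x1, x2, z2, x3, z3, swap = u, 1, 0, u, 1, 0
    for t in reversed(range(255)):
        kt = (k >> t) & 1
        swap ^= kt
        if swap:
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = kt
        a = (x2 + z2) % P25519; aa = a * a % P25519
        b = (x2 - z2) % P25519; bb = b * b % P25519
        e = (aa - bb) % P25519
        c = (x3 + z3) % P25519; d = (x3 - z3) % P25519
        da = d * a % P25519; cb = c * b % P25519
        x3 = (da + cb) ** 2 % P25519
        z3 = x1 * (da - cb) ** 2 % P25519
        x2 = aa * bb % P25519
        z2 = e * (aa + A24 * e) % P25519
    if swap:
        x2, x3, z2, z3 = x3, x2, z3, z2
    return x2 * pow(z2, P25519 - 2, P25519) % P25519

k = (1 << 254) + 8 * 12345    # an arbitrary clamped-style scalar
assert x25519(k, 9) != 0      # the base point u = 9 gives a nonzero result
assert x25519(k, 0) == 0      # any multiple of the order-2 point (0, 0)
                              # serializes as 0 -- the same 0 that encodes
                              # (0, 0) itself, so the map isn't 1:1
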
There’s also the ed vs twisted-ed performance issue in 3-mod-4 fields. My isogeny strategy mostly handles this, and libraries implementing only ECDH-like protocols don’t even need to implement the untwisted Edwards curve, which makes them simpler. But sometimes you may not want to multiply by 2 or 4 in a particular place, and then you will need to implement both twisted and untwisted curve arithmetic. I recall this being an issue for some PAKE protocol I was considering, but I don’t remember which one. The version of cofactor-removal that I outlined earlier would allow all arithmetic to take place on the untwisted or twisted curve, your pick. Other variants, like “y such that x and y are both even”, would not make the twisted curve safe, but they may have other advantages (simpler, and the encoding can be batched), though there is no efficient Montgomery ladder.
Perhaps the biggest advantage of cofactor removal is in how it lets you factor EC libraries more cleanly. The current Goldilocks code is riddled with isogeny- and cofactor-related functions like “untwist_and_double_and_serialize” and “deserialize_and_twist_approx”. I also wanted to make a generic scalarmul function, but then there is the question: do you clear the cofactor? Not clear it? Remove the cofactor component without multiplying by it? Have a flag on the function that says what to do? What about multiple scalar multiplication (linear combo)? Should the precomputed combs expose this sort of option too? If there’s no cofactor, you don’t need any of this.
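Just to illustrate that option surface, here is a sketch over the toy additive group Z_(8q) instead of a real curve; the enum and the function names are hypothetical, not the Goldilocks API:

from enum import Enum

# Toy group: integers mod n = 8 * q under addition, cofactor h = 8.
q, h = 101, 8
n = h * q

class CofactorPolicy(Enum):
    KEEP = 0      # plain [k]P; any low-order component of P survives
    CLEAR = 1     # [h*k]P; low-order part killed, prime-order part scaled by h
    REMOVE = 2    # kill the low-order part without the extra factor of h

def scalarmul(k, P, policy):
    if policy is CofactorPolicy.KEEP:
        return k * P % n
    if policy is CofactorPolicy.CLEAR:
        return h * k * P % n
    # REMOVE: multiply by c with c = 0 mod h and c = 1 mod q, so the
    # low-order component vanishes but the prime-order part is untouched.
    c = h * pow(h, q - 2, q)    # q is prime, so this is h * (h^-1 mod q)
    return c * k * P % n

# A point with a prime-order part (8) plus a low-order part (q):
P = (8 + q) % n
assert scalarmul(3, P, CofactorPolicy.CLEAR) == (8 * 3 * 8) % n
assert scalarmul(3, P, CofactorPolicy.REMOVE) == (3 * 8) % n
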
The protocols in the Goldilocks test code have the same sorts of issues: it’s not always clear when they’re doubling for safety, and when they’re doubling due to the isogeny strategy or due to what I had bothered to implement. I admit that my own sloppiness is a bad argument, but it does reflect my experience that the cofactor can be annoying, and it grounds my motivation to develop a workaround. You also struggled with cofactor issues in the TripleDH design, even though they aren’t a problem now (I think? I was going to check the specs but it looks like the repo is offline…).
You asked specifically about batch verification. I’m not aware of deployed systems that need or use batch verification. The EVITA vehicle-to-vehicle specs do require an extremely high rate of verification, but since they use ECDSA, no batching is possible. Batching might or might not be desirable in that setting, because there are constraints on latency as well as throughput, but having the option would be nice.
Cheers,
— Mike
> On Dec 24, 2014, at 3:20 AM, Trevor Perrin <trevp at trevp.net> wrote:
>
> Mike's considering an encoding that allows dealing with cofactor > 1
> curves as if the cofactor = 1.
>
> Below I try to list the costs of cofactor > 1 for common algorithms.
> The goal is to see how annoying the cofactor actually is, so we can
> weigh whether proposals like Mike's are worthwhile.
>
> Public-key validation
> ---
> There's no need for public-key validation with twist-secure curves and
> single-coordinate or compressed encoding. So there's nothing for the
> cofactor to affect.
>
> DH
> ---
> For DH it seems typical and easy to clear the cofactor (as done in
> Curve25519 by setting the private key to 0 mod cofactor). This means
> a public key could be tampered into a few different-but-equivalent
> values by adding points of low order.
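A minimal sketch of that clamping step (the constants are the standard X25519 ones):

import os

def clamp(sk: bytes) -> int:
    # X25519-style clamping: clearing the low three bits makes the scalar
    # 0 mod 8 (the cofactor), so any low-order component of the peer's point
    # is sent to the identity; the top two bits are fixed for the ladder.
    k = bytearray(sk)
    k[0] &= 248
    k[31] &= 127
    k[31] |= 64
    return int.from_bytes(k, "little")

assert clamp(os.urandom(32)) % 8 == 0
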
>
> But a similar DH tampering is possible regardless of cofactor
> (multiply both public keys by the same scalar). This sort of
> tampering doesn't accomplish much - at worst, you could imagine Alice
> depositing something with Bob over a secure channel that she plans to
> retrieve later based on her long-term DH public key. Alice's naive
> assumption that Bob must be seeing the "correct" encoding for her
> public key could be violated by tampering.
>
> All this is easily prevented by the common practice of hashing encoded
> DH public keys into the session key, or binding them with signatures
> or MACs.
>
> Signatures
> ---
> Ed25519 verification can clear the cofactor (fast batch verification)
> or not (fast single-signature verification).
>
> In an anonymity situation an attacker might be able to use a signature
> that passes the first implementation but fails the second to learn
> which choice was taken.
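A toy illustration of that gap, with the additive group Z_(8q) standing in for the curve and every number below a placeholder (the real checks are [S]B = R + [H(R,A,M)]A, with or without a factor of 8 applied to both sides):

# Order-q "points" are multiples of 8; T is a point of order 8.
q = 101
n = 8 * q
G = 8                    # generator of the prime-order subgroup
T = q                    # a low-order (order-8) point

a = 17;  A = a * G % n   # secret key, public key
r = 29;  R = r * G % n   # commitment
h = 57                   # stand-in for the hash H(R, A, M) mod q
s = (r + h * a) % q      # response

# An honest signature passes both the strict and the cofactor-cleared check.
assert s * G % n == (R + h * A) % n
assert 8 * s * G % n == 8 * (R + h * A) % n

# Adding a low-order point to R breaks the strict check but not the
# cofactor-cleared one -- this is the gap a distinguisher could exploit.
R_bad = (R + T) % n
assert s * G % n != (R_bad + h * A) % n
assert 8 * s * G % n == 8 * (R_bad + h * A) % n
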
>
> I suggested standardizing the stricter check (don't clear cofactor) since:
> * it's the common implementation
> * batch verification seems rarely (if ever?) used
> * most verifiers won't worry about leaking this tiny bit of information
>
> Robert suggested standardizing the other way, since clearing the
> cofactor has a very small impact on single-signature performance, but
> allows ~2x optimization for batch verification.
>
> I'm interested in other opinions. Also, are there real uses for batch
> verification?
>
> "Contributory" key agreement
> ---
> The term "contributory" pops up in discussions of 25519. The idea
> seems to be that some protocols are insecure if someone can force
> different DH shared secrets to the same value, so with 25519 you'd
> need to either check for low-order inputs, or check for the identity
> on output.
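The output-side version of that check is just a comparison against the all-zero encoding; a minimal sketch:

def reject_low_order(shared: bytes) -> bytes:
    # A low-order peer public key makes the (cofactor-cleared) shared secret
    # the identity, which X25519-style functions output as all zeros.
    if shared == bytes(len(shared)):
        raise ValueError("peer public key was low-order")
    return shared
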
>
> These checks are easy, but easier with cofactor=1 because there are no
> low-order inputs, so there's nothing to check.
>
> However: do protocols that require this property actually exist?
>
> AFAIK the term "contributory" comes from Group Key Agreements to
> describe calculating a shared key based on DH public keys from several
> parties. But this doesn't seem to be either a security property or a
> requirement on the DH - a malicious insider to the GKA doesn't need to
> force the session key, she just executes the GKA and learns the
> session key. George Danezis has a good analysis [1].
>
> One could argue that cofactor=1 is more robust because forcing the
> same key in different sessions might assist attacks if the protocol
> has other mistakes. That actually happened with TLS triple handshake,
> so can't be dismissed, but it wouldn't be relevant to a well-designed
> protocol.
>
>
> Summary
> ---
> I'm not seeing cofactor>1 as a big deal, or worth much effort to change.
>
> It seems like cofactor=1 might be a tiny bit more robust if your
> protocol has other flaws, and a smidgen more anonymous if you want to
> batch-verify signatures.
>
> Is that it? What am I missing?
>
> Trevor
>
>
> [1] https://conspicuouschatter.wordpress.com/2014/06/28/should-group-key-agreement-be-symmetric-and-contributory/
> _______________________________________________
> Curves mailing list
> Curves at moderncrypto.org
> https://moderncrypto.org/mailman/listinfo/curves