[curves] Second day NIST workshop notes

D. J. Bernstein djb at cr.yp.to
Sun Jun 14 13:59:31 PDT 2015

Frankly, I thought the hardware-side-channel people were making fools of
themselves. Specifically, they were talking about a bunch of decade-old
attacks as if those attacks were exciting and new and as if it weren't
already well known how to defend against those attacks.

Example: A standard way to hide bits of n in computing nP is to instead
compute (n+rl)P, where l is the order of P and r is chosen randomly. How
long does r have to be for rl to actually randomize all the bits of n?
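The countermeasure itself is a one-liner; here is a minimal Python sketch (the function name and parameters are illustrative, not taken from any library):

```python
# A minimal sketch of scalar blinding (illustrative names; not tied to any
# particular library or curve).
import secrets

def blinded_scalar(n, l, r_bits):
    """Return n + r*l for a fresh random r of r_bits bits.

    l*P is the identity, so (n + r*l)*P equals n*P, but the bit pattern
    fed to the scalar-multiplication loop changes on every call.
    """
    r = secrets.randbits(r_bits)
    return n + r * l
```

The question above is exactly about how large r_bits must be for this to randomize every bit of the scalar.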

If the prime is very close to 2^k then l will also be within roughly
2^(k/2) of 2^k by Hasse's theorem (for cofactor 1; similar statements
apply to other standard cofactors), so if r is below k/2 bits then the
top bits of n won't be randomized. This is obviously unacceptable---
ample reason to recommend r above k/2 bits, whereas for "random" primes
people typically recommend 32 bits or 64 bits.
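The effect is easy to see numerically. The following toy model (a made-up odd order within 2^(k/2) of 2^k, not real curve arithmetic) shows that with a 64-bit r, a 64-bit window of n survives unchanged in every blinded scalar:

```python
# Toy model: l = 2^k - d with d < 2^(k/2), so n + r*l = r*2^k + (n - r*d),
# and r*d < 2^(r_bits + k/2). The bits of the sum in positions
# [r_bits + k/2, k) are therefore those of n, up to a single borrow.
import secrets

k, r_bits = 256, 64
d = secrets.randbits(k // 2) | 1        # odd d, so l is odd, within 2^(k/2) of 2^k
l = 2**k - d
n = secrets.randbelow(l)                # secret scalar

lo = r_bits + k // 2                    # bottom of the unrandomized window
mask = (1 << (k - lo)) - 1

def window(s):
    return (s >> lo) & mask             # bits 192..255 of the blinded scalar

vals = {window(n + secrets.randbits(r_bits) * l) for _ in range(20)}
# vals has at most two elements (the borrow): 64 bits of n leak every run
```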

People listening to Lochter's talk, or reading the slides, would have
acquired the impression that this was a new recommendation based on a
new attack:

   Efficient Side-Channel Attacks on Scalar Blinding on Elliptic Curves
   with Special Structure ... Our contribution: Special prime fields
   need much larger blinding factors! ... Our attacks ... O(2^29) to
   O(2^34) operations ... Countermeasure: The length of the blinding
   factors R must at least exceed the gap g ≈ k / 2.

Scary! Fast moving! Who knows what will happen next?

Back in the real world, the same issue is actually stated quite clearly
in Ciet's thesis from 2003 (thanks to Tanja Lange for showing this to
me), in the 2007 Oswald--Page--Smart "Randomised representations" paper,
etc. The statement labeled "Our contribution" is perfectly clear from
the literature, as is the "Countermeasure".

What's particularly strange is that Lochter _knew_ about these papers. I
think that Lochter, as speaker, shares responsibility with the authors
for issuing a retraction of the claims of novelty. I'm not sure Lochter
is on this mailing list so I'm cc'ing him.

I don't mean to say that the security picture is completely clear; I'm
not sure that people have really thought through what's leaked by n+rl
in general. In particular, I'm skeptical of the claim that 64 bits in r
(e.g., a 320-bit scalar for Brainpool-256) is safe for "random" primes.
I recommend 256 bits for Curve25519: i.e., a 512-bit scalar, far beyond
the 384-bit cutoff. Properly optimized hardware for Curve25519 with a
randomized 512-bit scalar should still be smaller and faster than
Brainpool with a completely _unprotected_ 256-bit scalar.
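To see what the longer r buys in the same toy model as above (illustrative parameters, not real curve arithmetic): with a full-length 256-bit r, even the high window that a 64-bit r leaves fixed for a near-2^k order varies freely across runs.

```python
# Toy model: same near-2^k order as before, but r is now as long as the
# order itself. Since l is odd, r*l mod 2^k is uniform for uniform r, so
# every bit position of the blinded scalar below 2k varies run to run.
import secrets

k = 256
l = 2**k - (secrets.randbits(k // 2) | 1)     # odd order within 2^(k/2) of 2^k
n = secrets.randbelow(l)                      # secret scalar

def window(s):
    return (s >> 192) & ((1 << 64) - 1)       # the bits a 64-bit r cannot stir

vals = {window(n + secrets.randbits(256) * l) for _ in range(32)}
# vals contains (essentially) 32 distinct values: nothing is left fixed
```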

Michael Hamburg writes:
> DJB claims in a question that sufficient blinding is the number of
> consecutive 0’s or 1’s in r.  (Or presumably some minimum value for
> entropy reasons.)

Huh? I wasn't making any "claims" different from what the talk said. I
was quoting the 2007 paper, which made the same security recommendations
as the talk, using very much the same words that Lochter was using,
because of the same attack. I asked what the news was supposed to be.

Lochter's answer, basically, was that the attack hadn't been worked out
in detail before (as if the attack details were unobvious!) and that you
can't convince implementors to do the right thing without showing them a
detailed attack. If some hardware implementors are in fact ignoring the
recommendation to use more than k/2 bits then I'd agree that showing them
a detailed attack is helpful, but this doesn't justify Lochter's complete
failure to credit the long history of the recommendation.

> The attack is amplified if the implementation doesn't perform twist
> checks and will give the point output for a twisted input.  Running
> the algorithm several times on the same twisted point gives (x+rq)P
> for several r, but P doesn’t have order q.  This allows the difference
> of the r’s to be recovered which amplifies the attack, allowing error
> rates of 40-something%.

I found this part of the talk quite strange.

It's of course tempting to use this "attack" as yet another argument for
my recommended 256-bit length of r, which obviously stops the discrete
logarithm from being computed. But I'd feel silly using this argument,
since I don't see how the "attack" could get started in any real system.
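To be clear about the mechanism Hamburg is quoting: blinding is done mod l, but a twisted input point has some other order q, so the effective scalar (n+rl) mod q moves with r instead of collapsing to a constant. A toy illustration with small made-up primes (not a real curve):

```python
# Toy illustration: the difference of two effective scalars reveals the
# difference of the corresponding blinding values r, times l, mod q.
l = 1009                  # order assumed by the blinding code (toy prime)
q = 751                   # actual order of the twisted point (toy prime != l)
n = 123                   # secret scalar
rs = [5, 9, 14]           # blinding values across three runs
eff = [(n + r * l) % q for r in rs]
# eff[j] - eff[i] == (rs[j] - rs[i]) * l  (mod q), so learning effective
# scalars across runs leaks the differences of the r's.
```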

How would the attacker ever see (n+rl)Q where Q is a point on the twist?
Implementations don't reveal their shared secrets nQ. The processing of
nQ (normally just a hash) is far smaller than the scalarmult, so trying
to extract nQ as a first step towards figuring out n doesn't make sense.
Implementations do of course reveal public keys nP, signatures, etc.,
but then P is a standard point on the curve, not an attacker-controlled
point on the twist.
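For reference, the "twist check" mentioned in Hamburg's message is just a curve-membership test. A sketch for Curve25519's actual parameters (the function name and structure are mine): an x-coordinate belongs to the curve only if x^3 + A*x^2 + x is a square mod p; otherwise the point lies on the twist.

```python
# Curve-membership test for Curve25519, y^2 = x^3 + 486662*x^2 + x over
# GF(2^255 - 19); x-coordinates failing it correspond to twist points.
p = 2**255 - 19
A = 486662

def on_curve(x):
    """True iff x is the x-coordinate of some point on Curve25519."""
    v = (x * x % p * x + A * x * x + x) % p
    # Euler's criterion: v is a square mod p iff v^((p-1)/2) is 0 or 1.
    return pow(v, (p - 1) // 2, p) in (0, 1)
```

An x-only ladder that skips this test simply computes on the twist instead, which is the precondition for the quoted scenario.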

One can of course invent protocols that reveal nP for a long-term n and
an attacker-controlled P, but these protocols are so rarely used (even
in software, never mind hardware) that I can't find any ECC standards
considering them. The big issue here isn't side-channel attacks, but
rather "static DH attacks", and maximizing security against those
attacks means imposing different constraints on curve choice.

What's particularly puzzling about Lochter mentioning this scenario is
that, in his Brainpool standard, he explicitly _refused_ to impose those
constraints on curve choice: he wrote that static-DH attacks were on
"some elliptic curve based cryptographic mechanisms that do not appear
to be practical". If he's now suddenly considering this scenario, is he
going to issue an alert to users regarding security problems with the
smaller Brainpool curves?

