Thanks Trevor!

> On Jan 19, 2015, at 1:47 PM, Trevor Perrin <trevp@trevp.net> wrote:
>
> At Real World Crypto, Patrick Longa discussed the question of choosing
> efficient field primes for extra-strength curves. He suggested such
> primes should be chosen to strike a balance between efficiency in
> full-radix and reduced-radix implementations (i.e. "saturated" and
> "unsaturated" arithmetic):
>
> http://www.realworldcrypto.com/rwc2015/program-2/RWC-2015-Longa.pdf?attredirects=0
>
> 2^379 - 19 was given as an example.
>
> Most of the discussion of extra-strength primes I've seen assumes
> reduced-radix implementations, so this is an interesting twist. His
> argument for considering full-radix was:
>
>  1) Full-radix is more efficient on several platforms ("AMD, Intel
> Atom, Intel Quark, ARM w/o Neon, microcontrollers"), whereas
> reduced-radix is most advantageous on more advanced processors ("Intel
> desktop/server", "ARM with NEON").

This makes sense. Some points, with a rough code sketch of the two limb layouts after the list:

* Intel's desktop/server parts have a really fast MUL and a slowish ADC compared to everything else, so they favor reduced-radix computations, which have more MULs and fewer ADCs.

* Anything running on a vector unit (NEON) won't have access to ADC at all, so it will suck on full-radix.

* Medium-to-high-end ARM parts (M4 or higher) have an instruction, UMAAL, that accelerates full-radix arithmetic, though it still isn't as fast as NEON. They also have an instruction, UMLAL, which accelerates reduced-radix arithmetic.

* Low-end ARMs have UMLAL but not UMAAL, and some might not even have that. Parts with UMLAL but not UMAAL will favor reduced-radix.

* Other platforms may be in between if they don't have acceleration for either. A possible exception is RISC architectures with no carry flag and no magic accum register (eg RISC-V), which tend to perform terribly on full-radix code.

* Superscalar cores can multi-issue the ADDs in a reduced-radix add/sub routine, but they can't multi-issue ADC.

* The advantage of full-radix decreases as the number of limbs increases, because the multiplication cost is usually n^2 vs (n+1)^2 word products. That's why Curve25519, for example, favors full-radix.
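To make the ADD-vs-ADC contrast concrete, here is a rough C sketch of addition in the two layouts for 2^255 - 19. It is only an illustration under my own naming (the fe51/fe64 types and the *_add functions are made up, not from Patrick's slides or from any particular library), but it shows why the unsaturated form parallelizes while the saturated form turns into a carry chain:

    #include <stdint.h>

    typedef struct { uint64_t v[5]; } fe51;  /* reduced radix: 5 x 51-bit limbs */
    typedef struct { uint64_t v[4]; } fe64;  /* full radix:    4 x 64-bit limbs */

    /* Reduced radix: each 51-bit limb has 13 bits of headroom, so addition is
       five independent 64-bit ADDs with no carries to propagate; a superscalar
       core can issue them in parallel.  Limbs may exceed 51 bits afterwards,
       and a later carry/renormalization pass (not shown) brings them back. */
    static void fe51_add(fe51 *r, const fe51 *a, const fe51 *b)
    {
        for (int i = 0; i < 5; i++)
            r->v[i] = a->v[i] + b->v[i];
    }

    /* Full radix: every word can overflow, so addition is a serial ADD/ADC
       chain, and the carry out of the top word is folded back in using
       2^256 = 38 (mod 2^255 - 19).  Uses the GCC/Clang __int128 extension. */
    static void fe64_add(fe64 *r, const fe64 *a, const fe64 *b)
    {
        unsigned __int128 acc = 0;
        for (int i = 0; i < 4; i++) {
            acc += (unsigned __int128)a->v[i] + b->v[i];
            r->v[i] = (uint64_t)acc;
            acc >>= 64;                       /* carry into the next word */
        }
        uint64_t carry = (uint64_t)acc * 38;  /* fold 2^256 * carry back in */
        for (int i = 0; i < 4 && carry; i++) {
            acc = (unsigned __int128)r->v[i] + carry;
            r->v[i] = (uint64_t)acc;
            carry = (uint64_t)(acc >> 64);
        }
        r->v[0] += carry * 38;  /* if it wrapped again, v[0] < 38, so no overflow */
        /* Result fits in 256 bits but is not necessarily fully reduced below p. */
    }

The same contrast carries over to multiplication: the 5-limb form needs 25 word products where the 4-limb form needs 16 (the n^2 vs (n+1)^2 point above), but the 4-limb form pays for those 16 with a long dependent chain of ADCs.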
So the message that different processors favor different organizations is correct, and it sort of correlates with CPU size, but not perfectly.

>  2) Full-radix may be safer and easier to implement, since
> reduced-radix requires "Bound analysis" to prevent inadvertent word
> spilling, thus is "error prone, errors are more difficult to catch".

Maybe. With full-radix it can be annoying to get the carries right, though it's easier if the size is not exactly a multiple of 64 (eg 2^379 instead of 2^384) because then there's headroom.

> I'm not qualified to assess (1). But if it's true that full-radix
> implementations are faster on more primitive platforms that seems
> significant, since optimizing for these is probably more important
> than achieving maximum speed on advanced processors.

I agree.

> Anyways, I'm curious what other people think of this approach, and of
> 2^379 - 19 in particular.

Seems fine to me. The prime 2^398 - 21 did bug me a little bit for this reason.

> The slides give performance numbers for a "Ted37919" curve based on
> this prime on Sandy Bridge, which I added to the spreadsheet here:
>
> https://docs.google.com/a/trevp.net/spreadsheet/ccc?key=0Aiexaz_YjIpddFJuWlNZaDBvVTRFSjVYZDdjakxoRkE&usp=sharing#gid=0

NB those numbers do not include point decompression.

Cheers,
— Mike