<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Tue, Aug 26, 2014 at 9:43 PM, Jonathan Moore <span dir="ltr"><<a href="mailto:moore@eds.org" target="_blank">moore@eds.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div><div class="gmail_extra">I can imagine a few, but in practice our downfall is often due to the ones we don't imagine. After this paper:<br></div></div><div class="gmail_extra">
<br></div><div class="gmail_extra"> <a href="https://factorable.net/weakkeys12.extended.pdf" target="_blank">https://factorable.net/weakkeys12.extended.pdf</a></div><div class="gmail_extra"><br></div><div class="gmail_extra">
and this paper:</div>
<div class="gmail_extra"><br></div><div class="gmail_extra"> <a href="http://eprint.iacr.org/2013/734" target="_blank">http://eprint.iacr.org/2013/734</a></div></div></blockquote><div><br></div><div>These papers are both about bad random numbers being used for key generation. There's little to be done if you have a bad entropy source for generating keys.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra">Why not protect against these possible flaws? And even more so why not at least discuss mitigation possibilities?</div>
</div></blockquote><div><br></div><div>Combining a timestamp with some random data, or a counter with some random data, should prevent nonce reuse, at least within the granularity of your counting scheme, even if the data coming out of the RNG repeats.</div>
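<div><br></div><div>As a rough sketch of that idea (the 12-byte nonce size and the 4-byte-counter / 8-byte-random split are illustrative assumptions, not anything mandated by a particular cipher):</div>

```python
import os
import struct

def make_nonce(counter: int) -> bytes:
    """Build a 12-byte nonce from a 4-byte big-endian counter plus
    8 random bytes. Even if os.urandom() were to repeat its output,
    two nonces can only collide if the counter value also repeats."""
    # Illustrative split: 4 bytes of counter, 8 bytes of randomness.
    if not 0 <= counter < 2**32:
        raise ValueError("counter out of range for 4 bytes")
    return struct.pack(">I", counter) + os.urandom(8)

# Distinct counter values guarantee distinct nonces, regardless of
# what the random portion contains.
n1 = make_nonce(1)
n2 = make_nonce(2)
assert n1[:4] != n2[:4]
```

<div>The guarantee here only holds up to the counter's wrap-around point, which is the "granularity of your counting scheme" caveat above.</div>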
</div><div><br></div>--<br>Tony Arcieri<br>
</div></div>