<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sun, Feb 5, 2017 at 5:57 AM, Trevor Perrin <span dir="ltr"><<a href="mailto:trevp@trevp.net" target="_blank">trevp@trevp.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> * Is fragmenting ciphertexts actually sufficient for small-memory<br>
devices? Isn't it still a problem if the device is receiving and<br>
buffering an excessive *volume* of ciphertext, regardless of how it's<br>
fragmented? I.e., if there needs to be message size/volume limits in<br>
general, wouldn't that solve this problem more thoroughly than<br>
fragmentation?<br></blockquote><div><br></div><div>I see that as a network-layer flow control problem, which TCP already has mechanisms for. I've never been a fan of the window flow control mechanisms in TLS and SSH - they usurp functions already provided by TCP (to be fair, the multiple channels in SSH do require per-channel flow control to avoid starvation).<br><br></div><div>In Arduino it is common for the TCP/IP layer to be provided by a separate co-processor (Wiznet W5100 and W5500 chipsets are common). Network buffering and flow control are done in this separate co-processor and don't eat into the memory of the main CPU where Noise would be running. The main CPU executes recv()/send() calls which thunk over to the co-processor to service one packet at a time.<br><br></div><div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
* In typical IoT cases, can't both sides just be configured to send<br>
small messages, or fragment at some hard-coded limit? Does this need<br>
to be *negotiated*?<br></blockquote><div><br></div><div>If it is just IoT devices "phoning home", the application can arrange limits ahead of time. But there will be other devices in the network; e.g. an administrator's laptop connecting to the IoT server to perform maintenance. And over time, IoT devices become more capable - if the application hard-codes the lowest common denominator, then throughput is forever capped at that of years-old equipment.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
* If negotiation is needed, can't this be left to the application?<br>
I.e., if we allow applications to choose the handshake message<br>
payloads to send certificates etc, they can add their own negotiation.<br>
Then we don't have to burden all NoiseSocket implementations with an<br>
infrequently-needed feature.<br></blockquote><div><br></div><div>My preference is that NoiseSocket be a simple TCP wrapper with little burden placed on the application to deal with flow control and fragment sizes. Think of how TCP handles the path MTU - it is largely invisible to the application that messages may be chopped into smaller portions.<br></div><div><br>That is, the NoiseSocket layer is allowed to chop requests into
smaller portions. The Noise message boundaries do not imply anything
about application-level message boundaries.<br><br>If we do leave negotiation up to the application, then I would like to see a standardized SetMaxFragmentSize() function in the NoiseSocket API that the application can call once it has negotiated the size by whatever means (hard-coded limit, negotiation during the handshake, negotiation after the handshake, etc.). After that, the API chops messages up itself.<br><br></div><div>Cheers,<br><br></div><div>Rhys.<br><br></div></div></div></div>