(not an S/MIME thread, I promise)

tl;dr summary: threshold signed software updates are useful, and you should consider desktop Java for your next crypto project (yes, really)


-----

For people who live in Switzerland: this evening I will be giving a talk at Rackspace Zürich about the development of Lighthouse, a Bitcoin crowdfunding wallet app:

http://www.meetup.com/Bitcoin-Meetup-Switzerland/events/219667761/

If you're local, why not come along? I'll upload the slides tomorrow.

Bitcoin wallets and e2e messaging apps have some things in common: they both involve key management and end-to-end use of crypto technologies. Decentralisation is an attractive property in both areas. So I think it's worth sharing experiences, as there is a lot of overlap.

Online updates in crypto apps

Modern users expect silent and continuous improvement in their software. They don't want to manually approve updates, and many won't do so even when prompted. One reason people prefer web apps is that updates are invisible. It's no surprise that Google chose the web model for its own client apps: Chrome technically has versions, but they are never shown to the user and the product evolves silently.

When you combine a threat model that includes provider coercion with the desire for modern UX, it's misleading to claim that client-side key management is better than hosted webmail, because in both cases the provider can obtain your data: it only has to push an online update that steals the unencrypted data, and the entire scheme is undone. HushMail is an example of this principle in practice. They advertised encrypted email, but when they were served with a court order as part of a drugs investigation they were forced to backdoor their own software - and it worked:

http://www.wired.com/2007/11/encrypted-e-mai/

Being open source doesn't automatically fix this. Whilst a tiny minority of enthusiastic and technical users might compare the source code to the downloads once, they are unlikely to do so on a rolling basis, year after year, all for free. And they probably won't audit it. And there is no requirement that every user gets the same software as everyone else: a targeted backdoor could be pushed only to the targeted users.
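(To make that concrete: the check itself is trivial to script - rebuild the app from the source you reviewed and compare hashes with the published download. A minimal sketch, with hypothetical file names, looks like this. The hard part isn't writing it, it's running it for every release, forever.)

    import java.nio.file.*;
    import java.security.MessageDigest;

    // Minimal sketch: compare a vendor download against a locally reproduced build.
    // The file names are hypothetical; the point is that the check is easy to
    // automate, but almost nobody keeps doing it release after release.
    public class CompareBuilds {
        static String sha256(Path p) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(p));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            String official = sha256(Paths.get("lighthouse-official.jar"));
            String rebuilt  = sha256(Paths.get("lighthouse-rebuilt-from-source.jar"));
            System.out.println(official.equals(rebuilt)
                    ? "Download matches the source you inspected."
                    : "MISMATCH: the binary is not what the source says it is.");
        }
    }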
When it comes to information, dodgy governments might be the primary threat, but for Bitcoin we also have to worry about plain old hackers and thieves. So far there has only been one case where money was taken from a wallet through a bogus software update. The maker of StrongCoin discovered that hacked/stolen coins were being controlled in his wallet. It was a “web wallet” in which the keys were managed client side, but the software was of course updated every time the browser refreshed the page. He pushed an update that automatically grabbed the stolen money so he could return it to the rightful owner. A modern-day Robin Hood. It won't surprise you to hear that this was controversial.

https://bitcoinmagazine.com/4273/ozcoin-hacked-stolen-funds-seized-and-returned-by-strongcoin/


Threshold signed updates in Lighthouse

When writing my own wallet I thought about this issue a lot. I don't want to find I got hacked and someone pushed a steal-all-the-coins software update. So I wrote a framework that supports:

- Reproducible builds
- Threshold multi-signature signing of delta updates
- The ability for users to upgrade/downgrade at will, from within the app UI
- An update UI that can be as silent or as noisy as the app developer wishes

... and the key thing is, doing this is cheap and takes little effort. The framework has already been adopted by a few other projects.
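The core check behind the threshold signing is easy to state: the app ships with a set of pinned signer keys and refuses to apply an update unless at least a threshold of them have signed the update's hash. Here's a rough sketch of that rule in plain JCA code - illustrative only, not the framework's actual API, and the 2-of-3 threshold is just an example:

    import java.security.PublicKey;
    import java.security.Signature;
    import java.util.List;
    import java.util.Map;

    // Hedged sketch of the rule behind threshold-signed updates: the client ships
    // with n pinned signer keys and refuses to apply an update unless at least
    // THRESHOLD of them have signed the update's hash. Illustrative only.
    public class UpdateVerifier {
        static final int THRESHOLD = 2;          // e.g. 2-of-3 signers must approve
        final List<PublicKey> pinnedSigners;     // baked into the app at build time

        UpdateVerifier(List<PublicKey> pinnedSigners) {
            this.pinnedSigners = pinnedSigners;
        }

        /** updateHash = hash of the (delta) update file; sigs maps signer -> signature bytes. */
        boolean isAcceptable(byte[] updateHash, Map<PublicKey, byte[]> sigs) throws Exception {
            int valid = 0;
            for (PublicKey signer : pinnedSigners) {
                byte[] sig = sigs.get(signer);
                if (sig == null) continue;
                Signature verifier = Signature.getInstance("SHA256withECDSA");
                verifier.initVerify(signer);
                verifier.update(updateHash);
                if (verifier.verify(sig)) valid++;
            }
            return valid >= THRESHOLD;   // apply the update only past the threshold
        }
    }

Because the signer keys are pinned in the binary at build time, stealing any single signing key is not enough to push a malicious update.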
I think the model whereby users place faith in a handful of trusted auditors is going to work a lot better than the "open source and pray" approach that's standard today. Saying "the source is open, so we are trustworthy" is meaningless to non-programmers. Saying "every version you run is audited by respected companies spread across America, Europe and Russia" allows people to use their human knowledge of reputation and politics to evaluate security claims, and to have ongoing confidence in them. This model can also be compatible with proprietary software.

Unfortunately I haven't found anyone willing to do audit work for Lighthouse yet. I hope that if the app gets popular enough I'll find someone. As it involves real ongoing work, though, it might require incentivisation.

Implementing crypto apps using the JVM

Lighthouse, like a few other modern Bitcoin apps, is internally a desktop Java app. This is not as crazy as it sounds. Java has changed a lot in recent years and I think it's now a highly suitable platform for developing end-to-end crypto apps ... possibly more so than HTML5/Chrome.

A few points in its favour:

1. The latest versions can produce self-contained packages for each platform (EXE/MSI on Windows, DMG on MacOS, DEB/RPM or tarball on Linux). The user does not need to have the JRE installed and won't even know Java is involved. This turns the JVM into a big runtime library and removes most of the deployment headaches.
2. There is a totally new, written-from-scratch GUI toolkit that is somewhat similar to Cocoa. The UI is a GPU-accelerated scene graph using OpenGL or Direct3D as appropriate. It's easy to do fancy animations, fades, blurs, etc., and it has a full data binding framework built in.
3. The new UI toolkit is inspired by web technologies. You style the UI with a dialect of CSS and lay it out with a vaguely HTML-ish XML dialect, though there is a graphical designer tool as well. You can use web fonts like FontAwesome and easily imitate the look of frameworks like Bootstrap (and I do). You can implement your UI logic in JavaScript if you like, and you can embed videos or a full WebKit if you need real HTML5.
4. You are not restricted to using Java, which by now is an old and mediocre language. You can use almost any modern language that isn't C++ or Go: Python, Ruby, JavaScript (with performance close to V8), mixed OOP/functional languages like Scala or Kotlin ... there are even dialects of Haskell, Lisp and Erlang available!
5. Backend code is reusable immediately on Android, and via a translation layer on iOS such as RoboVM or J2ObjC. The Google Inbox product shares >50% of its code across all platforms this way.

... and of course, threshold signed software updates.

One advantage of doing things this way is that you don't spend much time fighting the framework, because it was designed for apps from the ground up. You also get access to mature crypto libraries like Bouncy Castle.

There's a video of the app on the website here:

https://www.vinumeris.com/lighthouse

Sandboxing for minimizing audit overhead

One issue with accountable crypto is keeping the auditing cost manageable. One-off audits are incredibly expensive, and rolling audits of every change going into a codebase, to catch developer backdoors, are simply unaffordable.

For apps built on the JVM there is an interesting possibility. We can use the platform's sandboxing features to isolate code modules from each other, meaning the auditor can focus their effort on reviewing changes to the core security code (and the sandbox itself, of course, but that should rarely change).

For example, in an email app the compose UI and encryption module could be sandboxed from things like the address book code, app preferences, the code that speaks IMAP, etc. If you know that malicious code in the IMAP parser can't access the user's private keys, you can refocus your audit elsewhere.
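As a rough sketch of what that isolation can look like with the stock JVM machinery (the SecurityManager / AccessController APIs as they exist today): run the lower-trust module inside an AccessControlContext that grants it no permissions at all. The ImapParser interface below is made up for illustration; a real design would also load such modules through a separate class loader and hand them only narrow interfaces.

    import java.security.AccessControlContext;
    import java.security.AccessController;
    import java.security.Permissions;
    import java.security.PrivilegedAction;
    import java.security.ProtectionDomain;

    // Minimal sketch: run a lower-trust module (e.g. an IMAP parser) inside an
    // AccessControlContext whose only protection domain has an empty permission set.
    public class SandboxExample {
        private static final AccessControlContext NO_PERMISSIONS =
                new AccessControlContext(new ProtectionDomain[] {
                        new ProtectionDomain(null, new Permissions())
                });

        // Hypothetical module interface, for illustration only.
        interface ImapParser { String parse(byte[] wireBytes); }

        static String parseUntrusted(ImapParser parser, byte[] wireBytes) {
            // Inside this call the parser cannot read files, open sockets, etc.:
            // any such attempt fails with a SecurityException.
            return AccessController.doPrivileged(
                    (PrivilegedAction<String>) () -> parser.parse(wireBytes),
                    NO_PERMISSIONS);
        }

        public static void main(String[] args) {
            System.setSecurityManager(new SecurityManager()); // enforcement must be on
            ImapParser benign = bytes -> "parsed " + bytes.length + " bytes";
            System.out.println(parseUntrusted(benign, new byte[] {1, 2, 3}));
        }
    }

Anything the parser tries beyond pure computation - file access, sockets, reflection into other modules - then fails with a SecurityException, which is what lets the auditor concentrate on the small amount of code running outside the restricted context.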
But wait! Isn't the JVM sandbox a famously useless piece of Swiss cheese?!?

Yes, but also no. It certainly *was* riddled with exploits and zero days back in 2012/2013. But Oracle has spent enormous sums of money on auditing the JVM in recent years. In 2014 there were no zero days at all. There *were* sandbox escapes, and I expect them to continue surfacing, but they were all found by whitehat auditing efforts. What's more, many of those exploits went through modules like movie playback or audio handling - things that a typical crypto sandbox would simply lock off entirely.

So it's starting to look like, *in practice*, as long as the VM itself is kept up to date and the sandboxed code isn't given access to the full range of APIs, the sandbox would be strong enough that a typical software company wouldn't be able to break out of it even under duress. The cost of finding a working exploit would be too high.

Type safety and crypto code

The trend towards JavaScript crypto worries me. We've seen it cause big problems in the Bitcoin world, with *several* private key compromises directly caused by JavaScript's weak type safety. I wrote an article on this problem here:

https://medium.com/@octskyward/type-safety-and-rngs-40e3ec71ab3a

Using stricter, more type-safe languages can help avoid real security exploits, and (imo) the benefits are so strong that it's worth avoiding web-based app stacks for this reason alone.
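To illustrate with a small made-up example (not code from any real wallet): in a statically typed language the key generation path can only ever be handed exactly what it expects, so the "silently got the wrong kind of randomness" class of bug is rejected at compile time rather than discovered after coins go missing.

    import java.security.SecureRandom;

    // Small illustration of the point above: the seed-generation path is forced
    // by the type system to receive exactly what it expects. There is no way to
    // accidentally hand it a string, an undefined value, or a weaker RNG object
    // and have it silently carry on.
    public class KeySeed {
        public static byte[] newSeed(SecureRandom rng, int lengthBytes) {
            byte[] seed = new byte[lengthBytes];
            rng.nextBytes(seed);   // only accepts byte[]; anything else is a compile error
            return seed;
        }

        public static void main(String[] args) {
            byte[] seed = newSeed(new SecureRandom(), 32);
            System.out.println("Generated a " + seed.length + "-byte seed");
            // newSeed(new java.util.Random(), 32);  // would not compile: Random is not a SecureRandom
        }
    }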