[messaging] Online update and dev platforms for crypto apps
Tom Ritter
tom at ritter.vg
Wed Feb 11 09:26:46 PST 2015
This is a pretty awesome email with a lot of thought put into it, so I
feel bad nitpicking, but...
On 10 February 2015 at 10:04, Mike Hearn <mike at plan99.net> wrote:
> For apps built on the JVM there is an interesting possibility. We can use
> the platform's sandboxing features to isolate code modules from each other,
> meaning that the auditor can focus their effort on reviewing changes to the
> core security code (and the sandbox itself of course, but that should rarely
> change).
>
> For example in an email app, the compose UI and encryption module could be
> sandboxed from things like the address book code, app preferences, code that
> speaks IMAP etc. If you know that malicious code in the IMAP parser can't
> access the user's private keys, you can refocus your audit elsewhere.
>
> But wait! Isn't the JVM sandbox a famously useless piece of Swiss cheese?!?
>
> Yes, but also no. It certainly was riddled with exploits and zero days, back
> in 2012/2013. But Oracle has spent enormous sums of money on auditing the
> JVM in recent years. In 2014 there were no zero days at all. There were
> sandbox escapes, and I expect them to continue surfacing, but they were all
> found by whitehat auditing efforts. What's more, many of those exploits were
> via modules like movie playback or audio handling - things that a typical
> crypto sandbox would just lock off access to entirely.
>
> So it's starting to look like in practice, as long as the VM itself is kept
> up to date and the sandboxed code isn't given access to the full range of
> APIs, the sandbox would be strong enough that a typical software company
> wouldn't be able to break out of it even under duress. The cost of finding a
> working exploit would be too high.
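(For anyone following along: the JVM mechanism being described is the
SecurityManager / AccessController machinery -- give each module its own,
mostly empty permission set and grant individual permissions back as
needed. A minimal sketch; the module, host name and the single granted
permission are invented purely for illustration:)

    import java.security.*;

    public class SandboxSketch {
        public static void main(String[] args) {
            // Nothing is enforced unless a SecurityManager is installed.
            System.setSecurityManager(new SecurityManager());

            // The IMAP module gets exactly one permission: talking to its
            // server. File access, reflection, exec, etc. all stay denied.
            Permissions imapPerms = new Permissions();
            imapPerms.add(new java.net.SocketPermission(
                    "imap.example.com:993", "connect,resolve"));

            ProtectionDomain imapDomain = new ProtectionDomain(
                    new CodeSource(null, (java.security.cert.Certificate[]) null),
                    imapPerms);
            AccessControlContext imapCtx =
                    new AccessControlContext(new ProtectionDomain[] { imapDomain });

            // Code run through this context is limited to the intersection of
            // its own permissions and imapPerms, so e.g. trying to read
            // ~/.gnupg throws an AccessControlException.
            AccessController.doPrivileged(
                    (PrivilegedAction<Void>) () -> {
                        /* call into the IMAP module here */
                        return null;
                    },
                    imapCtx);
        }
    }

(A real deployment would also load each module in its own ClassLoader and
attach a policy per code source, but the shape is the same.)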
It doesn't really matter where the sandbox escape is: if I can trigger
it, I leave the sandbox, and then I can go back into some other
sandbox and root around for the keys. So your protections come in two forms:
1) Locking all sandboxes down to the minimal set of APIs that code in
that sandbox needs to use, so as to minimize vectors for a sandbox
escape.
2) Sandboxing components as narrowly as possible so that code
execution inside any individual sandbox only gives the attacker a
cross-sandbox/IPC-like API to interact with data (such as
getMessageById(), decrypt(), sign()) and not a full database of data;
see the sketch below.
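To make (2) concrete: the untrusted side only ever holds a reference to
something like the interface below, never the key store or the raw mail
store. (The interface name and method signatures are just made up for
illustration.)

    // Hypothetical boundary between the untrusted sandboxes (UI, IMAP
    // parser, ...) and the crypto sandbox. Code execution on the untrusted
    // side lets the attacker call these methods, but not enumerate the mail
    // store or read key material out of the other side.
    public interface CryptoGateway {
        byte[] getMessageById(long id);     // one message at a time, not the whole mailbox
        byte[] decrypt(byte[] ciphertext);  // plaintext crosses the boundary, the key never does
        byte[] sign(byte[] payload);
    }

An attacker who pops the IMAP parser can still ask that interface to
decrypt() or sign() data of their choosing, which is bad, but it's a much
smaller blast radius than walking off with the private key or the full
database.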
Aside from that, what's the use case for allowing people to
downgrade? I guess it's to promise people: "Upgrade the app, and if
you don't like where I moved your cheese[0], you can go back"? How
does that work with important security updates? How does it work when
you need to deprecate APIs the app talks to, or message formats it
uses?
-tom
[0] http://www.hanselman.com/blog/Windows8ProductivityWhoMovedMyCheeseOhThereItIs.aspx