[messaging] Whiteout Secure PGP Key Sync
tankred at whiteout.io
Thu Jul 17 23:32:45 PDT 2014
Thanks much for the link to the source! (But the license is too scary for
me to actually read it; it appears to attempt to grant a license to people
'auditing' the code, but no-one else.)
Yes. We're still figuring out our business model, which is why we haven't
finalized our licensing terms. Our IMAP/SMTP code is already MIT, though:
There is no easy answer here. I myself am a big fan of open source, but
most FLOSS tools like GPG cannot provide what many non-technical
users need, like professional support, hosting and other services. We're
explicitly building a commercial product that people will want to pay for,
since we're (hopefully) providing value to our users, e.g. in the form of
easy-to-use key management and multi-device PGP. That said, we're open to
discussions about putting the client in open source if it makes sense. But
I guess a crypto mailing list might not be the correct format to discuss
this.
A pseudocode spec would really help: I would commend to you
Trevor's protocol specs as a model of clarity.
Thanks I'll take a look.
First, a security model question:
Suppose the following:
(1) I have access to the server's permanently stored material, but not the
various ephemeral keys.
(2) I take a picture of a user while they're using the QR code option to
transfer their master secret.
What can I do next?
Not sure what attack you're suggesting. Can you provide a more elaborate
description?
> . . . . Since the user has uploaded and
> authenticated a public PGP key to the whiteout key server before
> private PGP key sync, there is already a trust relationship between
> the PGP keypair and the server. This is used as an additional factor
> for authentication.
What is the threat this is supposed to guard against?
Existing PGP key servers don't authenticate keys. The problems are listed
When a whiteout user uploads a public key, he gets a verification email to
prove ownership of the email account to the server. This knowledge can then
be used by the server for authentication later using the user's PGP key
instead of e.g. a password.
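The verification flow described above can be sketched as follows. This is a minimal toy model, not Whiteout's actual server code; the `KeyServer` class and its methods are hypothetical names, and the server stores only a hash of the one-time token so a database leak doesn't expose valid tokens:

```python
import hashlib
import secrets

class KeyServer:
    """Toy model of email-based key verification (hypothetical API)."""

    def __init__(self):
        self.pending = {}   # email -> (sha256(token), public key)
        self.verified = {}  # email -> public key

    def upload(self, email: str, pubkey: str) -> str:
        # Generate a one-time token; only its hash is kept server-side.
        token = secrets.token_urlsafe(16)
        self.pending[email] = (hashlib.sha256(token.encode()).digest(), pubkey)
        return token  # in reality this is sent to the email address, not returned

    def verify(self, email: str, token: str) -> bool:
        # A failed attempt consumes the pending entry (toy simplification).
        entry = self.pending.pop(email, None)
        if entry is None:
            return False
        token_hash, pubkey = entry
        if hashlib.sha256(token.encode()).digest() != token_hash:
            return False
        self.verified[email] = pubkey
        return True
```

Once the key is in `verified`, the server can later authenticate the user by their ability to decrypt or sign with that key rather than with a password.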
> The device secret is known to the server and stored as a hash pretty
> much like a password. This is why it should not be trusted to protect
> the user from a threat model point of view.
Since keysync is infrequent(?), why not just give the server a public key
and sign a challenge?
This sounds interesting. Can you elaborate?
Or perhaps better: sign the challenge and a commitment to a next device
secret. Then device compromise and secret use always leads to an error.
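A minimal sketch of that proposal, with HMAC standing in for a real public-key signature (assumed simplification) and a SHA-256 hash as the commitment:

```python
import hashlib
import hmac

def commit(secret: bytes) -> bytes:
    # Commitment to the next device secret is just its SHA-256 hash.
    return hashlib.sha256(secret).digest()

def client_respond(device_key: bytes, challenge: bytes, next_secret: bytes):
    """Answer the server's challenge and commit to the next device secret.

    HMAC stands in for a public-key signature in this sketch.
    """
    commitment = commit(next_secret)
    sig = hmac.new(device_key, challenge + commitment, hashlib.sha256).digest()
    return sig, commitment

def server_verify(device_key: bytes, challenge: bytes, sig: bytes,
                  commitment: bytes, state: dict) -> bool:
    expected = hmac.new(device_key, challenge + commitment,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return False
    # Roll forward: the next sync must reveal a secret matching this
    # commitment, so a stolen, already-used secret produces a detectable
    # mismatch (the error mentioned above).
    state["commitment"] = commitment
    return True
```

The point of chaining commitments is that any use of a compromised secret desynchronizes the chain, turning silent compromise into a visible authentication error.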
Yes. The server is a standard REST service secured via TLS.
Is this explicitly pinned? (Scanning through the directories, only saw
it on Appspot...)
Currently only certs used for our IMAP/SMTP stack are pinned, which is why
you see a Google CA for Gmail. Since Chrome supports SSL pinning for the
HTTPS stack, we might add pinning for requests to our *.whiteout.io servers
in a later version.
The code and certs are deployed/installed via a packaged app (not
webserver). More on this here:
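For reference, certificate pinning on top of normal CA validation can be as simple as comparing the server certificate's SHA-256 fingerprint against a baked-in allow list. A rough sketch (the function names and the idea of pinning the leaf certificate's fingerprint, rather than the SPKI, are assumptions of this illustration):

```python
import hashlib
import socket
import ssl

def check_pin(der_cert: bytes, pinned_fingerprints: set) -> bool:
    """Return True if the certificate's SHA-256 fingerprint is pinned."""
    return hashlib.sha256(der_cert).hexdigest() in pinned_fingerprints

def connect_pinned(host: str, pinned_fingerprints: set,
                   port: int = 443) -> ssl.SSLSocket:
    ctx = ssl.create_default_context()  # normal CA validation still applies
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if not check_pin(der, pinned_fingerprints):
        sock.close()
        raise ssl.SSLCertVerificationError("server certificate is not pinned")
    return sock
```

Pinning the leaf fingerprint is the strictest option; pinning the SPKI or an intermediate CA trades some strictness for easier certificate rotation.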
> . . . . Given that all Whiteout Endpoints
> use TLS 1.2 with forward secrecy, could you elaborate on how adding a
> DH-style key exchange for the session keys would add security?
Does the client enforce this (can it)? (I wonder if it would be feasible to
add a field to require TLS 1.2 [AEAD]-ECDHE for all connections to a
domain on Chrome's pin list...)
I think dkg's point is that you are relying on the transport layer for
application security; it's just harder to control exactly what happens.
The transport layer offers some security. The application layer adds
mechanisms on top (like wrapping the PGP key). Can you name an example
where the compromise of the transport layer could compromise the user's PGP
key?
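The "wrapping" mentioned above means the private key blob is encrypted and authenticated client-side before it is ever handed to TLS, so a transport-layer break yields only ciphertext. The sketch below is a deliberately toy encrypt-then-MAC construction using HMAC in counter mode as a stream cipher (a real client would use a standard AEAD such as AES-GCM; all function names here are illustrative):

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-SHA256 in counter mode as a toy stream cipher (illustration only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap_key(wrapping_key: bytes, private_key_blob: bytes) -> bytes:
    """Encrypt-then-MAC the PGP key blob before it reaches the wire."""
    # Derive separate encryption and MAC subkeys from the wrapping key.
    enc_key = hashlib.sha256(b"enc" + wrapping_key).digest()
    mac_key = hashlib.sha256(b"mac" + wrapping_key).digest()
    nonce = os.urandom(16)
    stream = _keystream(enc_key, nonce, len(private_key_blob))
    ct = bytes(a ^ b for a, b in zip(private_key_blob, stream))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unwrap_key(wrapping_key: bytes, blob: bytes) -> bytes:
    enc_key = hashlib.sha256(b"enc" + wrapping_key).digest()
    mac_key = hashlib.sha256(b"mac" + wrapping_key).digest()
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct,
                                             hashlib.sha256).digest()):
        raise ValueError("wrapped key failed authentication")
    stream = _keystream(enc_key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, stream))
```

With this layering, a TLS compromise exposes only the wrapped blob; the attacker still needs the wrapping key, which never leaves the client.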
Whiteout Networks GmbH c/o Werk1
Grafinger Str. 6
Geschäftsführer: Oliver Gajek
RG München HRB 204479