[messaging] Are we pursuing real solutions for security?
Peter Gutmann
pgut001 at cs.auckland.ac.nz
Tue Mar 11 12:48:58 PDT 2014
Daniel Kahn Gillmor <dkg at fifthhorseman.net> writes:
>The dialog box image you linked to (http://i.imgur.com/2bEWKNS.png) is a joke
>about Internet Explorer, which is a classic example of human-machine
>interaction (the user of the web browser is trying to authenticate a remote
>machine, which is the web server), not human-human interaction.
I didn't realise bits of my book had gone viral :-). It's actually an
illustration of how people interpret warning dialogs in general.
>This use case is still a real security issue, and I haven't heard a plausible
>answer yet about how SAS can be used to verify a web server's key without
>introducing a number of troubling vulnerabilities.
Elsewhere in the book I discuss several alternative options that have been
tried in attempts to remedy this. The discussion is rather long, but here it
is for people who really want to plough through it:
When you've decided on your safe default settings, one simple way to test
your application is to run it and click OK (or whatever the default action
is) on every single security-related dialog that pops up (usability testing
has shown that there are actually users who'll behave in exactly this
manner). Is the result still secure?
Now run the same exercise again, but this time consider that each dialog
that's thrown up has been triggered by a hostile attack rather than just a
dry test-run. In other words the "Are you sure you want to open this
document (default 'Yes')" question is sitting in front of an Internet worm
and not a Word document of last week's sales figures. Now, is your
application still secure? A great many applications will fail even this
simple security usability test.
One way of avoiding the "Click OK to make this message go away" problem is
to change the question from a basic yes/no one to a multi-choice one, which
makes user satisficing much more difficult. In one real-world test about a
third of users fell prey to attacks when the system used a simple yes/no
check for a security property such as a verification code or key
fingerprint, but this dropped to zero when users were asked to choose the
correct verification code from a selection of five (one of which was "None
of the above") [ ]. The reason for this was that users either didn't think
about the yes/no question at all or applied judgemental heuristics to
rationalise any irregularities away as being transient errors, while the
need to choose the correct value from a selection of several actually forced
them to think about the problem.
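
As a rough sketch of the idea (hypothetical code, not taken from any
particular product), a multi-choice check in Python could shuffle the
genuine fingerprint in among decoys, with "None of the above" as a final
option, so the user has to actually compare values rather than just
clicking "Yes":

import random
import secrets

def decoy_fingerprint():
    # Decoys in the same format as the real fingerprint.
    return ':'.join(secrets.token_hex(1) for _ in range(8))

def confirm_fingerprint(expected):
    choices = [expected] + [decoy_fingerprint() for _ in range(3)]
    random.shuffle(choices)
    choices.append('None of the above')
    for i, choice in enumerate(choices, 1):
        print(f'  {i}. {choice}')
    picked = int(input('Which value does the other device show? '))
    return choices[picked - 1] == expected

if __name__ == '__main__':
    fp = '3a:9f:1c:77:0b:e2:5d:48'   # the value the user must verify
    print('Verified' if confirm_fingerprint(fp) else 'Verification failed')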
A particularly notorious instance of user satisficing occurred with the
Therac-25 medical electron accelerator, whose control software was modified
to allow operators to click their way through the configuration process (or
at least hit Enter repeatedly, since the interface was a VT-100 text
terminal) after they had complained that the original design, which required
them to manually re-enter the values to confirm them, took too long [ ].
This (and a host of other design problems) led to situations where patients
could be given huge radiation overdoses, resulting in severe injuries and
even deaths (the Therac-25 case has gone down in control-system failure
history, and is covered in more detail in "Other Threat Analysis Techniques"
on page 242). Even in less serious cases, radiation therapists using the
device reported dose-rate malfunctions caused by this interface that ran as
high as forty occurrences a day.
The developers of an SMS-based out-of-band web authentication mechanism used
the multi-choice question approach when they found that users were simply
rationalising away any discrepancies between the information displayed on
the untrusted web browser and the cell-phone authentication channel. As a
result the developers changed the interface so that instead of asking the
user whether one matched the other, they had to explicitly select the match,
dropping the error rate for the process from 30% to 0% [ ].
Other studies have confirmed the significant drop in error rates when using
this approach, but found as an unfortunate side-effect that the
authentication option that did this had dropped from most-preferred to
least-preferred in user evaluations performed in one study [ ] and to
somewhat less-preferred in another [ ], presumably because it forced users
to stop and think rather than simply clicking "OK". If you're really
worried about potential security failures of this kind (see the discussion
of SSH fuzzy fingerprints in "Certificates and Conditioned Users" on page
26) then you can take this a step further and, if the target device has a
means of accepting user input, get users to copy the authentication data
from the source to the target device, which guarantees an exact match at the
risk of annoying your users even more than simply forcing them to select a
match will.
(Incidentally, there are all manner of schemes that have been proposed to
replace the usual "compare two hex strings" means of comparing two binary
data values, including representing the values as English words, random
graphical art, flags, Asian characters, melodies, barcodes, and various
other ways of encoding binary values in a form that's meaningful to humans.
Most of the alternatives don't work very well, and even the best-performing
of them only function at about the same level as using hex strings (or at
least base32-encoded strings, with base32 being single-case base64), so
there's little point in trying to get too fancy here, particularly since the
other forms all require graphical displays, and often colour graphical
displays, while base32 gets by with a text-only display [ ]).
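
If you do want to see the difference, rendering the same fingerprint both
ways takes only a few lines of Python (standard library only; the key
material and truncation length below are placeholders):

import base64
import hashlib

pubkey = b'...DER-encoded public key goes here...'    # placeholder key data
fingerprint = hashlib.sha256(pubkey).digest()[:10]    # truncated for display

hex_form = fingerprint.hex()
base32_form = base64.b32encode(fingerprint).decode().rstrip('=')

# Group the hex into blocks of four to make visual comparison easier.
print('hex:   ', ' '.join(hex_form[i:i+4] for i in range(0, len(hex_form), 4)))
print('base32:', base32_form)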
The shareware WinZip program uses a similar technique to force users to stop
and think about the message that it displays when an unregistered copy is
run, swapping the buttons around so that users are actually forced to stop
and read the text and think about what they're doing rather than
automatically clicking "Cancel" without thinking about it (this technique
has been labelled "polymorphic dialogs" by security researchers evaluating
its effectiveness [ ]). Similarly, the immigration form used by New Zealand
Customs and Immigration swaps some of the yes/no questions so that it's not
possible to simply check every box in the same column without reading the
questions (this is a particularly evil thing to do to a bunch of half-asleep
people who have just come off the 12-hour flight that it takes to get
there).
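
A throwaway sketch of a polymorphic dialog, using nothing more than Python's
built-in tkinter (the wording and button labels are invented), would be:

import random
import tkinter as tk

def show_dialog(message):
    result = {'ok': False}
    root = tk.Tk()
    root.title('Security warning')
    tk.Label(root, text=message, padx=20, pady=10).pack()
    frame = tk.Frame(root)
    frame.pack(pady=10)

    def done(ok):
        result['ok'] = ok
        root.destroy()

    # The polymorphic part: the button order changes from run to run, so
    # muscle memory alone can't dismiss the dialog unread.
    buttons = [('OK', True), ('Cancel', False)]
    random.shuffle(buttons)
    for label, value in buttons:
        tk.Button(frame, text=label, width=10,
                  command=lambda v=value: done(v)).pack(side=tk.LEFT, padx=12)
    root.mainloop()
    return result['ok']

print(show_dialog('Run this unsigned program?'))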
Another technique that you might consider using is to disable (grey out) the
button that invokes the dangerous action for a set amount of time to force
users to take notice of the dialog. If you do this, make the greyed-out
button display a countdown timer to let users know that they can eventually
continue with the action, but have to pause for a short time first
(hopefully they'll read and think about the dialog while they're waiting).
The Firefox browser uses this trick when browser plugins are installed,
although in the case of Firefox it was actually added for an entirely
different reason which was obscure enough that it was only revealed when a
Firefox developer posted an analysis of the design rationale behind it [ ].
Although this is borrowing an annoying technique from nagware and can lead
to problems if it's not implemented appropriately, as covered in "Automation
vs. Explicitness" on page 430, it may be the only way that you can get users
to consider the consequences of their actions rather than just ploughing
blindly ahead. Obviously you should restrict the use of this technique to
exceptional error conditions rather than something that the user encounters
every time that they want to use your application.
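
In case it's not obvious how little code this takes, here's a minimal sketch
in the same vein as the one above (again plain tkinter, with an arbitrary
five-second delay and made-up dialog text):

import tkinter as tk

DELAY = 5   # seconds before the dangerous action can be confirmed

root = tk.Tk()
root.title('Security warning')
tk.Label(root, text='This plugin is unsigned.  Install anyway?',
         padx=20, pady=10).pack()
button = tk.Button(root, text=f'Install ({DELAY})', state=tk.DISABLED,
                   command=root.destroy)
button.pack(pady=10)

def tick(remaining):
    if remaining > 0:
        # Show the countdown on the greyed-out button so the user knows
        # they'll eventually be allowed to continue.
        button.config(text=f'Install ({remaining})')
        root.after(1000, tick, remaining - 1)
    else:
        button.config(text='Install', state=tk.NORMAL)

tick(DELAY)
root.mainloop()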
Techniques such as this, which present a roadblock to muscle memory, help
ensure that users pay proper attention when they're making security-relevant
decisions. Another muscle memory roadblock, already mentioned earlier, is
removing the window-close control on dialog boxes, but see also the note
about the problems with doing this in a wizard rather than just a generic
dialog box, covered in "Security and Conditioned Users" on page 144. There
also exist various other safety measures that you can adopt for actions that
have potentially dangerous consequences. For example Apple's user interface
guidelines recommend spacing buttons for dangerous actions at least 24
pixels away from other buttons, twice the normal distance of 12 pixels [ ].
Another way of enforcing the use of safe defaults is to require extra effort
from the user to do things the unsafe way and to make it extremely obvious
that this is a bad way to do things. In other words failure should take
real effort. The technical term for this type of mechanism, which prevents
(or at least makes unlikely) some type of mistake, is a forcing function [
]. Forcing functions are used in a wide variety of applications to dissuade
users from taking unwise steps. For example the programming language Oberon
requires that users who want to perform potentially dangerous type casts
import a pseudo-module called SYSTEM that provides the required casting
functions. The presence of this import in the header of any module that
uses it is meant to indicate, like the fleur-de-lis brand on a criminal,
that unsavoury things are taking place here and that this is something that
you may want to avoid contact with.
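
The same idea translates directly into API design; a made-up Python example
(the function and parameter names are mine, not from any real library):

def delete_records(table, where=None, i_really_want_to_delete_everything=False):
    # Forcing function: the destructive path exists, but the caller has to
    # name it explicitly, and the parameter name itself acts as a warning.
    if where is None and not i_really_want_to_delete_everything:
        raise ValueError(f'refusing to delete every row in {table!r}; supply '
                         'a where clause, or set '
                         'i_really_want_to_delete_everything=True')
    # ... perform the deletion ...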
Another example of a security-related forcing function occurs in the MySQL
database replication system, which has a master server controlling several
networked slave machines. The replication system user starts the slave with
start slave, which automatically uses SSL to protect all communications with
the master. To run without this protection the user has to explicitly say
start slave without security, which both requires more effort to do and is
something that will give most users an uneasy feeling. Similarly, the
Limewire file-sharing client requires that users explicitly "Configure
unsafe file sharing settings" rather than just "Configure file sharing", and
then warns users that "Enabling this setting makes you more prone to
accidentally sharing your personal information".
Exactly the opposite approach is taken by Python when loading markup files
like yaml (Yet Another Markup Language), XML, JSON (JavaScript Object
Notation), and others. Python (and Ruby, and any number of other scripting
languages, only the syntax changes) will happily load and execute markup
languages with embedded commands that do anything from printing "Hello
world" through to reformatting your computer's hard drive. To load a yaml
file (and potentially reformat your hard drive), you use yaml.load. To load
it safely, you need to use yaml.safe_load, which comes after load in the
auto-complete tab order so that every developer who hasn't read the relevant
security advisories will use the unsafe version by default [ ].
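
For anyone who wants to see this in action, here's a small demonstration
using PyYAML (note that PyYAML releases after this was written have changed
yaml.load to warn about, or require, an explicit Loader; the old behaviour
is what yaml.load(doc, Loader=yaml.UnsafeLoader) gives you today):

import yaml

# A "data file" that is really a command: it asks the loader to call
# os.system() during deserialisation.
malicious = """
!!python/object/apply:os.system
- echo pwned
"""

# safe_load only constructs plain data types and rejects the python/ tags.
try:
    yaml.safe_load(malicious)
except yaml.YAMLError as err:
    print('safe_load refused it:', err)

# The unsafe loader would run the embedded command -- don't do this with
# input you don't trust:
# yaml.load(malicious, Loader=yaml.UnsafeLoader)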
Peter.