What’s wrong with in-browser cryptography?

Above image taken from Douglas Crockford’s Principles of Security talk

If you’re reading this, then I hope that at some point someone or some web site has told you that doing cryptography in a web browser is a bad idea. You may have read “JavaScript Cryptography Considered Harmful”. You may have found it a bit dated and dismissed it.

You may have read about WebCrypto and what it hopes to bring to the browser ecosystem. This particular development may make you feel that it’s okay to start moving various forms of cryptography into the browser.

Why not put cryptography in the browser? Isn’t it inevitable? This is a perpetual refrain from various encryption products that target the browser (names and addresses intentionally omitted). While the smarter ones try to mitigate certain classes of attacks by shipping as browser extensions rather than as a web site a user types into their address bar, there is definitely a push toward a model where you get the latest, greatest crypto code by typing a friendly address into your URL bar.

What’s wrong with this? And will WebCrypto fix it? I don’t think so. Let’s look at the good, the bad, and the ugly of in-browser cryptography and the WebCrypto API.

The Good: The Normative Parts

Like many W3C standards, the normative parts of the specification are agnostic to specific algorithms. This is described in section 24.1 of the WebCrypto specification:

This section is non-normative

As the API is meant to be extensible in order to keep up with future developments within cryptography and to provide flexibility, there are no strictly required algorithms. Thus users of this API should check to see what algorithms are currently recommended and supported by implementations.

So, in fact, the W3C is not telling us what algorithms to use at all. Instead, the normative parts of the specification cover abstract APIs for things like generating secure random numbers, managing keys, encrypting/decrypting, backgrounding computation inside workers, and abstract types that can be used with a variety of algorithms.

In that regard, the normative parts of the specification are totally fine. While the spec doesn’t cover it, the APIs seem sufficiently abstract to allow them to easily map onto future encryption algorithms and trusted platform modules (TPMs) which could provide secure storage for encryption keys.

The Bad: Failure to Provide Normative Advice on Algorithms

The W3C has elected to make advice on algorithms a non-normative part of the specification. This leaves browser vendors without any specific standards upon which we can build an interoperable cryptographic ecosystem for the web. Instead, the section on algorithms lists a bunch of examples of common algorithms and how they can be mapped onto WebCrypto’s APIs.

Browsers already ship portable implementations of a large number of cryptographic algorithms as part of their TLS stacks. Without normative guidance from the WebCrypto specification itself, what is likely to happen is that browsers will simply expose the algorithms in their TLS stacks directly to web content.

Some of them are fairly good (e.g. AES-GCM), but many are dangerous if used improperly. Pretty much every other symmetric cipher in the list is not an authenticated encryption mode, and in the hands of amateurs such ciphers are akin to handling plutonium.

Without someone providing normative advice that all browser vendors can adhere to, my worry is that the WebCrypto ecosystem will fragment and fail to agree on particular standards. My advice to the W3C is to listen to cryptography expert Matt Green’s advice and provide a normative list of authenticated encryption algorithms (and only authenticated encryption algorithms) that all browsers should support. AES-GCM would be a good start.

The Ugly: We’re Still In a Browser

“The browser knows that the program does not represent the user” - Douglas Crockford

There is no beating around the bush: the browser is a sandbox that attempts to let you dynamically download and run potentially malicious code from a server on-the-fly. Web browsers are a deliberately designed engine for remote code execution, a term which strikes fear into the hearts of information security professionals worldwide.

If ample precautions are taken (which includes a large laundry list of things like TLS, CSP, CORS, proper HTTP headers, JS strict mode, and more), this can allow for the successful development of cryptographic applications that attempt to enforce the interests of the web application creator. But what about the user?

Do programs in the browser represent the interests of the user? According to Commander Douglas Crockford (image at the top of this post) the answer is a resounding NO. This is not the traditional threat model of the browser.

Where installation of native code is increasingly constrained through cryptographic signatures and software update systems that check multiple digital signatures to prevent compromise (not to mention the browser extension ecosystems, which provide similar features), the web itself just grabs and implicitly trusts whatever files it happens to find on a given server at a given time.

The threat model of native code is now well-understood and increasingly addressed through more sophisticated software installation and update systems. Native code releases are artifacts at a point in time, don’t change dynamically, and can therefore be audited and given approval by experts (who ideally have access to the source code and can match the official binaries). This is not the case for the web platform.

The convenience of the web stems from the fact that it’s a frictionless application delivery platform. Unfortunately, it does not rely on a comprehensive, cryptographically secure signature system to determine that content is authentic, but instead just trusts whatever is sitting on the server at the time you access it. This is worsened by the fact that web browsers give remote servers access to wide-ranging local capabilities exposed via HTML and JavaScript. The result is an environment that is not particularly safe or stable for creating, storing, or sharing encryption keys or encrypted messages.

Before I keep talking about where in-browser cryptography is inappropriate, let me talk about where I think it might work: it has great potential for encrypting messages sent between a user and the web site they are accessing. For example, my former employer LivingSocial used in-browser crypto to encrypt credit card numbers with their payment processor’s public key before sending them over the wire (via an HTTPS connection which effectively double-encrypted them). This provided end-to-end encryption between a user’s browser and LivingSocial’s upstream payment gateway, even after HTTPS had been terminated by LivingSocial (i.e. all cardholder data seen by LivingSocial was encrypted).

In this approach, there’s an implicit trust relationship between the user and the site they’re accessing. What we see happening here is cryptography being used to protect the web site’s interests, not the user’s. For this purpose, in-browser crypto is great!

Where the web encryption model fails is when we want to provide a “Trust No One” service which protects the user’s interests, for example the MEGA storage service which uses in-browser crypto. In this sort of scenario, we have MEGA wanting to act as a sort of dumb store for encrypted data, and have them never see plaintexts or encryption keys. Such a service would, ideally, pass what cryptography expert Matt Green calls the “mud puddle test”, where a person who has a particularly bad run-in with a mud puddle and loses their personal copies of encryption keys can’t ask the service to give them back, since the service itself doesn’t hold onto them.

However, this approach just doesn’t work in a browser, as illustrated by the MEGApwn utility for obtaining your MEGA keys. This utility illustrates an important problem with building “Trust No One” services in the browser: anyone who can get JavaScript to run on the same origin as the alleged “Trust No One” service can get access to your encryption keys. WebCrypto’s mechanisms for secure key storage can mitigate this partially, but an attacker can still utilize your keys remotely. Furthermore, MEGA was designed as a file sharing service, and for that to work it needs direct access to encryption keys so you can share them with other people.

MEGA has gone to great lengths to try to mitigate traditional XSS-style threats (making several mistakes along the way and earning Kim Dotcom the title of Security Charlatan of the Year at the 2013 DEFCON Recognize Awards), but no matter how hard they try this won’t change the fact that the security of the entire system is predicated on the security of MEGA’s JavaScript files at the time you happen to load their site (specifically the “SecureBoot.js” file in the case of MEGA).

The potential attacks are numerous: hackers (or governments) could compromise MEGA’s servers and change the file. A MEGA insider could place a malicious payload inside this file. Governments could coerce MEGA into placing a malicious payload inside the file. Or MEGA could just decide they want to grab everyone’s keys. If any of these things were to happen, the security of the entire system has been lost.

The web’s dynamic nature precludes our only defense against these sorts of attacks: audits by security experts. Even if crypto experts were to audit MEGA’s SecureBoot.js and give it a clean bill of health, there’s nothing to stop anyone who has sufficient access from injecting a malicious payload into it at any point in time. They could even selectively target users, so the rest of the world would still think it’s fine, but a particular victim would receive the malicious payload.

One way to mitigate this is to use browser extensions, which provide cryptographically signed software updates in a way more akin to traditional native code applications, helping mitigate the “just grab the latest code off the server any time I access the site” problem. Browser extensions have problems of their own, but they do move the security bar forward over a traditional web page.

HTTPS Doesn’t Solve This Problem

Some of you might be thinking “if I use HTTPS, isn’t the content signed by the server?” It’s true that, after years of resolving mistakes and design flaws, and when the certificate you’re trusting hasn’t been compromised, HTTPS will ensure the integrity of the content between the remote web server and your web browser. Modern browsers support AES-GCM, which is particularly good at this.

However, HTTPS was designed to protect what’s known as “data-in-motion”. To do that, HTTPS servers must keep their keys online: the keys live on Internet-connected machines so they can serve live connections to users, and that makes them far easier to compromise than keys kept offline.

This means compromising a JavaScript file can involve little more than obtaining write access to a site’s static files (potentially through security vulnerabilities in a buggy web application) or obtaining CDN credentials (through similar channels). No key compromise is necessary to perform an attack, but since the keys are online the risk of key compromise is higher.

Better software update systems are specifically designed to protect “data-at-rest”, which is how build artifacts of native applications (or even a web site’s static assets) should be thought of. The advantage of data-at-rest is that it can be signed by offline keys (or a combination of offline and online keys), which are much more difficult to compromise.

For more information on the problems of using HTTPS alone in the hopes of building a secure software delivery system, see section 4.1 “PKI Vulnerabilities” in the Survivable Key Compromise In Software Update Systems paper.


Cryptography is a systems problem, and the web is not a secure platform for application delivery. The web is a way to easily run untrusted code fetched from remote servers on-the-fly. Building security software inside of web browsers only makes the problem harder.

In-browser crypto is best utilized to help web sites protect their own interests. Sites attempting to build “Trust No One” cryptosystems inside of browsers (especially when not using browser extensions) have vast attack surface and are fundamentally attempting to use the browser for something it wasn’t designed for: creating software that respects the user’s interests, not the web site provider’s.

Instead, prefer either browser extensions or open source native tools. Look in particular for tools that have been audited by security professionals, and in the case of native code apps, look for tools with binaries that can be reproduced from the original source code. Scrutiny by experts is paramount in making sure software is secure, and the web, as it exists today, makes this sort of scrutiny impossible.

For additional examples of the challenges of building a secure client-side JavaScript crypto application, check out Krzysztof Kotowicz’s “Keys to a Kingdom” challenge. It’s a great illustration of the sorts of problems that can arise when building web-based encryption applications.

