Would Rust have prevented Heartbleed? Another look

In case you haven’t heard, another serious OpenSSL vulnerability will be announced this Thursday. The news reminded me of roughly a year ago, when Heartbleed was announced:

[Image: “RUST HULK SAYS YES” meme]

In December 2014 I gave a talk at Mozilla about cryptography in Rust (slides here). I have been meaning to write a followup blog post about my talk, the reactions I received to it, and my subsequent thoughts…

And then this blog post happened. I have been reading Ted Unangst’s blog for quite a while, mostly with great respect. He blogs on a wide range of topics, but security is a complicated field, and this particular post was, unfortunately, not up to his usual standards: in my opinion, it is highly misleading. Ted claims he implemented “Heartbleed” in Rust. Is that actually the case?

In my talk at Mozilla, I covered several of the SSL/TLS bugs seen in 2014, spending a lot of time on “goto fail” (SecureTransport) and “goto cleanup” (GNUTLS). I spent some 15 minutes discussing those vulnerabilities (and how Rust could’ve helped), but probably only about 15 seconds talking about Heartbleed, because I thought the severity of Heartbleed was obvious enough that the case for Rust’s memory safety would be equally obvious. Apparently I assumed too much. So let’s dig into Heartbleed and Ted’s alleged version of it in Rust and see what’s really going on.

So: what is “Tedbleed”? Is it as bad as Heartbleed, and are Rust’s memory safety features being oversold by uninformed zealots? Let’s take a look!

Tedbleed #

Here is Ted’s source code (note that it targets a pre-1.0 Rust nightly; the std::old_io module it uses has since been removed):

use std::old_io::File;

fn pingback(path : Path, outpath : Path, buffer : &mut[u8]) {
        let mut fd = File::open(&path);
        match fd.read(buffer) {
                Err(what) => panic!("say {}", what),
                Ok(x) => if x < 1 { return; }
        }
        let len = buffer[0] as usize;
        let mut outfd = File::create(&outpath);
        match outfd.write_all(&buffer[0 .. len]) {
                Err(what) => panic!("say {}", what),
                Ok(_) => ()
        }
}

fn main() {
        let buffer = &mut[0u8; 256];
        pingback(Path::new("yourping"), Path::new("yourecho"), buffer);
        pingback(Path::new("myping"), Path::new("myecho"), buffer);
}

Let’s locate the problematic part of this code:

 let buffer = &mut[0u8; 256];

Uh oh: this code reuses a single mutable buffer across both calls to pingback, mixing up data between them. So what exactly is the severity?

Ouch! This is a bad bug that the Rust compiler failed to prevent. First we might ask if this is the sort of code that a Rust programmer would actually write in practice. My answer?

Absolutely.

To Ted’s credit, this isn’t entirely strawman code. While this specific rendition might be contrived, Rust programmers really do reuse mutable buffers to avoid allocations, particularly in these sorts of I/O buffering scenarios, so bugs of this shape can happen in practice.
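To make the failure mode concrete: the second call to pingback may read fewer bytes than the first did, yet buffer[0] can still claim a length that reaches into stale bytes left over from the previous read. Here is a minimal sketch of a fixed version (ported to modern std::fs/std::io since std::old_io no longer exists; the error handling is my own). The key change is clamping the claimed length to the number of bytes actually read in this call:

use std::fs::File;
use std::io::{Read, Write};
use std::path::Path;

fn pingback(path: &Path, outpath: &Path, buffer: &mut [u8]) {
        let mut fd = File::open(path).expect("open failed");
        // Remember how many bytes *this* read actually produced
        let n = fd.read(buffer).expect("read failed");
        if n < 1 { return; }
        // Never trust the claimed length: clamp it to the bytes read in
        // this call, so stale data from a previous use of the shared
        // buffer can never be echoed back
        let len = (buffer[0] as usize).min(n);
        let mut outfd = File::create(outpath).expect("create failed");
        outfd.write_all(&buffer[0 .. len]).expect("write failed");
}

An even more robust design would avoid sharing the buffer at all, or zero it between calls; the clamp above merely guarantees that each echo is bounded by that call’s own input.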

And it’s not just Rust. A very similar vulnerability, dubbed JetLeak, recently turned up in Java’s Jetty web server: it carried a similar threat of recovering other connections’ plaintext because of improper handling of a mutable buffer.

But is it Heartbleed?

Heartbleed #

What was Heartbleed?

This is a lot worse than “Tedbleed”. An analogy might be the telephone network when the phreaks first started exploiting it. The “Cap'n Crunch” whistle worked by exploiting something known as in-band signaling: the phone network provides a communication medium, but it also needs control signals, and it carried both over the same channel, so anyone who could whistle the right tones could command the network itself. Where “Tedbleed” might let us snoop on someone’s phone calls, Heartbleed lets us take over the phone network and impersonate the phone company, because we have access to more than just the signal: we have the keys to the kingdom.

Heartbleed is a vulnerability rooted in the fact that C is not a memory safe language.

Rust is. Unless you venture into the (explicitly demarcated) unsafe portion of Rust, you will not see memory exposure vulnerabilities like Heartbleed, which are due to improper bounds checking. You will likewise not see the much more severe “Winshock”-style remote code execution vulnerabilities.
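To see why, consider a minimal sketch of the Heartbleed pattern (hypothetical code, not OpenSSL’s actual heartbeat implementation): the peer claims a payload length and asks for that many bytes to be echoed back. In C, an oversized claim reads past the buffer into neighboring heap memory. In safe Rust, the same slice is bounds-checked at runtime, so the worst an attacker’s lie can do is crash with a panic:

fn heartbeat_echo(payload: &[u8], claimed_len: usize) -> Vec<u8> {
        // This slice is bounds-checked: if claimed_len exceeds
        // payload.len(), safe Rust panics instead of reading
        // out-of-bounds memory
        payload[0 .. claimed_len].to_vec()
}

fn main() {
        let payload = b"ping";
        // An honest request works fine
        assert_eq!(heartbeat_echo(payload, 4), b"ping".to_vec());
        // A Heartbleed-style lie about the length panics; it cannot
        // disclose the memory adjacent to the payload
        heartbeat_echo(payload, 64 * 1024);
}

A denial of service via panic is still a bug, but it is a far cry from silently shipping 64KB of process memory to the attacker on every request.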

Memory safety is paramount to writing secure programs.

We can’t get the keys with Tedbleed.

We can with Heartbleed.

Tedbleed is an entirely different class of vulnerability from Heartbleed. Where Tedbleed exposes the contents of a particular, bounded buffer to an attacker, Heartbleed exposed the memory of an entire process. Whatever happened to be in that memory, including SSL/TLS private keys, Heartbleed could be used to write it onto the wire.

Conclusion #

Rust is a memory safe language.

C is not a memory safe language.

Writing programs in Rust prevents a wide range of attacks that result from commonplace errors made in C programs, by novices and experts alike. When you read security announcements, these sorts of errors are often described as being corrected with “improved bounds checking”, a.k.a. fixing arithmetic. Unfortunately, this class of error is exceedingly common, and it often results in remote code execution vulnerabilities.

Ted is wrong: Rust would’ve prevented Heartbleed. Ted went out of his way to construct a strawman version of Heartbleed, and ended up with a vulnerability that does not allow out-of-bounds memory reads at all, but instead looks a lot more like JetLeak.

I hope it’s clear to anyone who actually cares about security that a memory exposure and key disclosure vulnerability is more severe than a plaintext recovery vulnerability, and that memory safety confers a wide range of security benefits on programs.

Rust would’ve prevented Heartbleed, but Heartbleed is actually kind of boring compared to remote code execution vulnerabilities like Winshock or openssl-too-open. Remote code execution vulnerabilities are far scarier, and largely preventable in Rust due to its memory safety.

I’m also quite curious if the new OpenSSL vulnerability will involve memory corruption…

 