Rust in 2019: Security, Maturity, Stability


I’ve been a day-to-day Rust user for nearly 5 years now. In that time I’ve perpetually felt like Rust is “the language that will be awesome tomorrow”. Rust has always felt like it had so much promise and potential, but if you actually sat down and tried to use it, the result was akin to trying to assemble a complex mechanism while always missing a few pieces.

I managed to suffer through it in the long long ago of 2014, and each year it’s gotten better, until in 2018 I could confidently say “Now’s a great time to start learning Rust”. But how about actually writing a real-world Rust application and deploying it to production?

2018 marked my second year as a full-time Rust developer, and I’m happy to say it’s the first year I shipped production Rust applications (not just one but three!), and added the startup I cofounded to the Friends of Rust (nee Production Users) page. After years of using Rust as a hobbyist, and aspiring to build “a real app” with it someday, I finally shipped.

Thus far these applications have been unproblematic, quietly and efficiently doing their job, and just as boring and predictable as I’d hoped they’d be.

I would like to hope my experience is repeatable, and that 2019 is the year that Rust can transcend the notion of being a bleeding-edge science experiment and instead settle down into being a “boring in a good way” practical language with a growing community developing and deploying production-quality software.

Some of the biggest names in tech are now using Rust for some very interesting projects, including:

All of these are very positive signs for Rust’s overall development, but is it fair to say that Rust is “mature” yet?

Almost.

2019 is the year I hope that line will finally be crossed. Things have certainly matured considerably since the early days of “nightly broke your code and all your dependencies”, but even after the release of Rust 1.0, it still felt “bleeding edge”. Year-over-year things have gotten better, and while there are still a number of high priority foundational problems with the language,
I wouldn’t consider any of them showstoppers, and many seem like low-hanging fruit that I hope will get fixed this year!

Looking ahead to 2019, I’d like to begin by asking: who could benefit the most from Rust who isn’t already using it? Why aren’t they using it? Is Rust missing something they need?

Who needs Rust the most? #

rustc has your back

Image originally from Leftover Salad

Rust places a high bar on what code it considers correct and in doing so asks a lot from a programmer when authoring software. Where many other languages would just let many (potentially severe / security-critical) errors slide, Rust’s philosophy leans towards eliminating runtime errors through a much more meticulous set of rules.

The Rust compiler’s relentless nitpicking is great if you are writing any software which demands correctness or applications which are nontrivial and which you intend to maintain for some time, but contrary to the impressions given by certain Rust community members who shall remain nameless, there are an awful lot of useful programs which don’t need to be written in Rust, and while Rust is potentially useful for practically anything, Rust is overkill for many projects (or possibly even most projects).

If you just need something quickly and are fine with a garbage collector, there are any number of languages which might better fit your needs depending on the circumstances. They might be slower, saddled with a resource-hungry garbage-collected runtime unsuitable for certain classes of embedded devices, and riddled with thread safety, type safety, integer overflow, and null pointer errors, but the reality is that in a lot of cases we can tolerate bloated, buggy programs, because they may be temporary, written for a one-off purpose, or simply aren’t that important. Hardware is cheap when your program is trivial and seldom utilized.

Back to the question: who isn’t presently using Rust, but would greatly benefit from it? How could Rust help them, and why aren’t they using it?

In broad strokes I am personally thinking about authors of software that meets the following three criteria:

  1. Widely deployed, foundational, mission-critical software
  2. Presently authored in C, C++, or Ada
  3. Critical memory safety bugs still an ongoing issue in practice

Generally you might describe the above as “high-assurance software”. Here are some specific examples:

Software like this is ubiquitous in our daily lives, and often must run on low-powered embedded hardware for which, until recently, there weren’t practical alternatives, Rust included. Memory safety is of utmost importance: any such vulnerability is treated as high severity, mitigated, catalogued, and the fix broadcast to the relevant system operators for immediate deployment.

It’s high-stakes software, but also software developed under constraints where there often aren’t practical alternatives to languages like C, C++, or Ada, owing to demands like the need to run on cheap, low-resource devices, or the need for precise low-level control of memory not possible in other languages, including non-nightly versions of Rust!

Should developers working on this kind of software consider Rust? Absolutely. Will large numbers of them do that in 2019? I’m not sure yet.
Having worked directly and indirectly with various embedded developers, I’ve generally gotten the feeling that C is so thoroughly entrenched into every toolchain that suggesting a new language is practically a non-starter.

I’ve personally managed to squeeze Rust into a few “unusual” embedded platforms, shoving it through decades-old toolchains, and everything worked great once the initial legwork was done. But that’s very much off the “happy path” for embedded development, and C is the devil they know.

One key area where I think Rust needs to improve is getting embedded developers to make the initial leap to actually prototyping real-world code. I think Rust’s benefits are sufficient to keep them using it if we can just get them to start!

From a technical perspective, Rust is finally, as of the 2018 edition, mostly in great shape for embedded development. However, using embedded Rust in any kind of production capacity is still highly unusual. Until recently embedded Rust development required a nightly version of the compiler.

The Rust Embedded Working Group has been doing some amazing work improving the ergonomics of embedded Rust development creating foundational libraries for commonly used platforms. Embedded development is now possible with stable Rust!
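
To give a sense of what that looks like in practice, here’s a minimal sketch of a bare-metal firmware entry point on stable Rust, assuming the Embedded WG’s cortex-m-rt and panic-halt crates purely for illustration:

#![no_std]
#![no_main]

use cortex_m_rt::entry;
use panic_halt as _; // panic handler: halt the core on panic

#[entry]
fn main() -> ! {
    // board/peripheral initialization and the main firmware loop go here
    loop {}
}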

Again, we’re almost there. Many of the technical blockers are now gone. However, I think that while there is a modicum of interest in the embedded space, most developers who are curious about Rust are in a “wait and see” mode. The Rust community needs to demonstrate that others can come on in; the water is fine!

While the notorious “Rewrite it in Rust” mantra generally suggests throwing away an existing project and starting over from scratch, I think Rust has huge potential as a supplemental language: existing C/C++ codebases can be largely left alone, with Rust integrated into the build workflow and FFI wrappers created (potentially autogenerated with tools like bindgen and cbindgen).

If projects can tolerate the work required to get Rust integrated into their build workflow, development environments, and the legwork of setting up the existing shims/bindings between Rust and the language their existing codebase is already written in, the result is hopefully a low-impact change to existing C/C++ codebases which enables an entirely new set of developers (potentially ones who have never done systems/embedded programming before) to work on new feature development in Rust.
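
As a rough sketch of what such a shim might look like (the function names here are hypothetical), a new piece of parsing logic written in Rust can be exposed to an existing C codebase like this, with cbindgen generating the matching header:

// Hypothetical FFI wrapper: the unsafe surface stays thin, and the
// actual parsing logic lives in safe Rust.
#[no_mangle]
pub extern "C" fn parse_record(buf: *const u8, len: usize) -> i32 {
    if buf.is_null() {
        return -1;
    }
    // Safety: the C caller promises `buf` points to `len` valid bytes
    let bytes = unsafe { core::slice::from_raw_parts(buf, len) };
    match parse_record_inner(bytes) {
        Ok(()) => 0,
        Err(()) => -1,
    }
}

fn parse_record_inner(bytes: &[u8]) -> Result<(), ()> {
    // safe Rust parsing logic goes here
    if bytes.is_empty() {
        Err(())
    } else {
        Ok(())
    }
}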

Instead of “rewrite it in Rust”, how about “write the new parts in Rust”, or perhaps “only rewrite the most complicated, scary, security critical bits in Rust and leave the rest as it is”? The old parts stay stable and well-understood by their original authors, and teams can attract new members not by telling them to rewrite a bunch of crufty old legacy C/C++ code, but by letting them write brand new features in an interesting new language.

I gave a talk at a 2014 Bay Area Rust meetup talking about a specific case of this kind of software and how Rust might help: Transport Layer Encryption stacks. The core of the talk, however, focuses on the difficulty of developing high-assurance applications in languages which make it difficult to express correct software not just in terms of memory safety, but also other areas like numeric overflows, or even just boolean logic:

Bay Area Rust December 2014 Meetup

As I put it in my talk, in addition to critical memory safety-related vulnerabilities in C programs, we also see many critical vulnerabilities where we’re “losing on the easy stuff”. These are problems that could’ve been easily caught by a better type system, first-class mechanisms for managing integer overflow behavior, or even just universal agreement on what true and false should be.

It’s been 20 years since C99, which added stdbool.h, and not only is it still not universally embraced, it is also implicated in security bugs in practice. C seems to be stuck at a local maximum where it’s just not going to get any better, and its many deficiencies result in us seeing the same types of security critical bugs over and over.

The talk, while a bit dated in terms of Rust specifics, is just as topically relevant as ever: it surveyed TLS vulnerabilities in 2014 and how Rust could’ve helped. Looking back at 2018, microarchitectural vulnerabilities like Meltdown and Spectre were the security highlight of the year, however we also saw a lot of the same types of vulnerabilities I had covered in my talk. Despite not specifically occurring inside of a TLS stack, they share certificate verification in common with TLS, and these sorts of extremely high severity certificate verification failures are, dare I say, a recurring pattern.

Next I’ll cover a couple of the notable security vulnerabilities of 2018. This section of the post is full of a bunch of 20/20 hindsight and preachiness, so if you’re just here for the roadmap you might want to skip ahead.

2018 Case Study: Bootloaders and Management Controllers #

We started off 2018 with a remote code execution vulnerability in the AMD Platform Security Processor (PSP), an ARM TrustZone-based security coprocessor for AMD CPUs which provides management and security functionality for the host CPU and is also the lynchpin of its Secure Boot functionality.

In 2017, there was a similarly severe remote code execution vulnerability in the Intel management controller. Management controllers, despite being some of the most security critical components in modern computers, are routinely subject to high severity vulnerabilities. Sometimes the vulnerabilities arise from memory corruption, other times from logic bugs, but regardless of the specifics of the vulnerability, total compromise of the entire system is a routinely common failure mode.

In the case of the AMD PSP, it was a remote code execution vulnerability resulting from a failure to parse ASN.1 correctly in C, something which has become so exceedingly routine all I can say is “ASN.1 is a serialization format which enables C programmers to turn binary data into remote code execution vulnerabilities”.

While there are ways ASN.1 could be parsed more safely in C, like using ASN.1 compilers or failing that a library, there’s just something about ASN.1 (and BER/DER in particular) which for specific small ASN.1 messages makes the problem seem simple enough to handroll, and yet the format has so many edge cases and convolutions so as to resist correct handrolled implementations in C. As it were, I tried handrolling a few Rust ASN.1 parsers this way (for similar signature verification use cases), and I can say that while Rust’s slices were immensely helpful, I managed to screw it up several times. At least the failure mode was just a panic, and not an RCE vulnerability! Also the Rust ecosystem has some good crates for writing panic-free parsers which I should really look into using.
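
For a sense of why slices help, here’s a rough sketch (not a complete DER parser, and the names are mine) of reading a short-form ASN.1 tag-length-value header: out-of-bounds access simply isn’t expressible, so the worst case is an error rather than memory corruption:

fn parse_short_form_tlv(input: &[u8]) -> Result<(u8, &[u8], &[u8]), ()> {
    // split_first returns None on an empty slice instead of reading past the end
    let (&tag, rest) = input.split_first().ok_or(())?;
    let (&len, rest) = rest.split_first().ok_or(())?;
    if len & 0x80 != 0 {
        return Err(()); // long-form lengths deliberately unsupported in this sketch
    }
    if rest.len() < len as usize {
        return Err(()); // truncated message
    }
    // split_at cannot go out of bounds because we just checked the length
    let (value, remaining) = rest.split_at(len as usize);
    Ok((tag, value, remaining))
}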

While I’m on the subject of bootloaders written in C with 2018-dated remote code execution vulnerabilities arising from a failure to parse ASN.1 correctly, yet another example of this is just such a vulnerability in the Nintendo 3DS Boot ROM loader. This vulnerability resulted in an unpatchable total compromise of the 3DS’s secure boot process. As yet another example of these sorts of vulnerabilities repeating themselves over and over, this particular vulnerability greatly resembles one of the TLS vulnerabilities I covered in my 2014 talk, BERserk, which impacted the NSS library used at the time by both Firefox and Chrome. Both the 3DS vulnerability and BERserk resulted from buggy code for parsing ASN.1 encoded RSA signatures.

Should developers of bootloaders and management controllers be looking at Rust? Absolutely. Do I think these developers are going to start widely embracing Rust? I think some eager early adopters will, but I’m not sure the rest are ready to leave the devil they know, C, and venture into the relatively unknown territory of bare metal Rust development.

2018 Case Study: Embedded Databases: SQLite #

C is the lingua franca for writing libraries which can be easily wrapped/consumed by other languages, particularly for cases where performance and memory usage matter. For these reasons, legacy reasons, and a handful of other fairly sensible reasons, SQLite is written in C. The page giving SQLite’s rationale for choosing C as an implementation language also makes an attempt to answer the question of why it isn’t written in a memory safe language:

Safe languages are often touted for helping to prevent security vulnerabilities. True enough, but SQLite is not a particularly security-sensitive library.

The SQLite web site also covers the rigorous testing the library undergoes, including its 100% branch test coverage and testing for memory safety issues:

SQLite contains a pluggable memory allocation subsystem. The default implementation uses system malloc() and free(). However, if SQLite is compiled with SQLITE_MEMDEBUG, an alternative memory allocation wrapper (memsys2) is inserted that looks for memory allocation errors at run-time. The memsys2 wrapper checks for memory leaks, of course, but also looks for buffer overruns, uses of uninitialized memory, and attempts to use memory after it has been freed. These same checks are also done by valgrind (and, indeed, Valgrind does them better) but memsys2 has the advantage of being much faster than Valgrind, which means the checks can be done more often and for longer tests.

At the tail end of 2018, CVE-2018-20346 (the “Magellan” vulnerability) was disclosed: a remote code execution vulnerability impacting SQLite’s “FTS3” fulltext search functionality. The vulnerability was fundamentally just an integer overflow, but in C integer overflows are eager to evolve into buffer overflows and eventually reach their final form as remote code execution vulnerabilities.

This vulnerability was remotely exploitable in several versions of Chrome and other Chromium-based applications. In stark contrast to the SQLite statement about security, headlines proclaimed “SQLite RCE vulnerability exposes billions of apps, including all Chromium-based browsers”. Sensationalistic as that may sound, I don’t think it’s too far off.

I don’t think it makes sense to rewrite SQLite in Rust. It’s written in C, and I don’t see any reason for that to change. So why even bring it up? In this particular case, I’d like to focus on how Rust could help someone who might be considering using a SQLite-like database, and how they might potentially be better off with one written in Rust.

Despite SQLite’s admirably rigorous test suite with 100% test coverage and memory sanitization, it was unable to catch this vulnerability. In my opinion, for applications which can’t afford a garbage collected runtime, there is no substitute for the borrow checker, and nothing short of complete formal verification (which I consider incredibly onerous, labor intensive, doable only by a limited set of experts, and suitable only for greenfield projects) will ever find all of these issues in a C program.

There’s another problem going on here though: from the perspective of the SQLite project quoted above, apps which allowed this vulnerability to be remotely exploited were using SQLite unsafely. This is something we see time and time again when determining which project is responsible for a particular vulnerability: “security is somebody else’s problem”, the “Doctor, it hurts when I use SQLite to execute attacker-controlled fulltext searches” approach to security.

Empirically we have seen that though SQLite employed a multitude of different testing strategies we hope would’ve caught that particular issue, none of them were able to. The safe subset of Rust (in particular, crates which transitively are free of unsafe, i.e. they show up clean on cargo-geiger) empowers developers of embeddable, multi-language FFI-friendly libraries to make claims about the security properties of their code which are ultimately predicated on rustc’s soundness and the correctness of the borrow checker. Or to put it another way: safe Rust makes memory safety rustc’s problem. These are the memory safety benefits we’ve seen for years from garbage collected languages with managed runtimes, but now applicable to software projects like embedded databases where a runtime + GC are a nonstarter.

As for those integer overflows, by default Rust arithmetic is checked in debug builds, but unchecked in release builds for performance reasons. That said, if you’re using Rust in any security critical application, consider adding these lines to your Cargo.toml to enable overflow checks in release builds too:

[profile.release]
overflow-checks = true

That’s right, with only two lines of TOML you can at least ensure that integer overflows in your project are at worst a DoS attack.
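
For code paths which must never panic at all, another option (which works regardless of profile settings) is to handle overflow explicitly. Here’s a small sketch with a made-up length calculation:

const HEADER_LEN: usize = 16; // hypothetical fixed header size

// checked_mul / checked_add return None on overflow instead of wrapping,
// so the caller is forced to decide what an oversized request means.
fn buffer_size(record_count: usize, record_len: usize) -> Option<usize> {
    record_count
        .checked_mul(record_len)?
        .checked_add(HEADER_LEN)
}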

That’s all well and good, but if we’re not going to rewrite SQLite in Rust, how can Rust help? Instead of rewriting SQLite in Rust, how about the following:

I think there are a number of projects considering use of a roughly SQLite-shaped database who might be potentially interested in a pure Rust alternative because they’re security critical and concerned about future RCE vulnerabilities. This is particularly true in greenfield Rust applications, but I also think users of all languages could benefit from such a project.

There are many noteworthy crates which could benefit more languages than just Rust if they had proper C bindings. And fortunately, there’s cbindgen for automatically building C bindings.

I think if the Rust ecosystem embraces adding C APIs/FFIs to noteworthy crates, then in the places where C is presently the lingua franca for embedded libraries and for reusing code across multiple languages, Rust can at least become an alternative people think about where they might otherwise reach for a C library. Can we start getting developers of all languages, when they’re about to reach for a noteworthy C library, to go “hey, I wonder if there’s a Rust library for that”?

2019 Roadmap Suggestions #

Finally, to the actual roadmap suggestions! I’ll quickly summarize what I said above for those of you who may have skipped over parts:

Tangibly, how do we improve Rust, and perceptions of Rust, to finally get the Rust-curious developers who have lingering doubts about the stability and maturity of the language to finally give it a whirl in real-world projects?

I have a few ideas, particularly around security, and looking at some of the 2019 roadmap posts others have written, it seems many of us are on the same wavelength in that regard.

But let me get “Stability & Maturity” out of the way first. Let’s start with core language features.

Core Language Features #

My top 3 Rust features I’d like to see stabilized in 2019:

  1. Const generics
  2. Pin
  3. async / await

These all seem like things a lot of people want, and from my perspective I’d hope there’s a good chance at least some if not all of them are doable in a 2019 timeframe, but perhaps that’s just wishful thinking.

That said, I can offer some insights on why these three matter for my particular use cases.

Const Generics and Cryptography #

Cryptography and const generics go together like peanut butter and chocolate. When you look at the names of cryptographic algorithms (AES-128, AES-256, SHA-256, SHA-512, ChaCha20), you start to notice a pattern.

The norm for cryptographic algorithm naming is an acronym/word describing the algorithm followed by a number. Algorithms are offered at different security levels with different tradeoffs, particularly in regard to things like key size and/or message length. Tradeoffs are everywhere in security: do we want things more secure? faster? smaller messages?

Particularly in regard to things like message sizes, digest sizes, MAC sizes, etc. having something that’s smaller on the wire (for e.g. a credential which is sent repeatedly with each request) can be helpful, or maybe you’d prefer the extra security and don’t care about message size. It’s nice to have knobs for these things.

Many algorithms can be implemented generically, such that a single generic implementation of an algorithm can be used at many different security levels. For example, internally SHA-384 is almost the same algorithm as SHA-512, but with some slight tweaks and truncated output. We can hoist those differences out into e.g. associated constants, and share a single implementation for both digest functions.

With const generics, interfaces to cryptographic primitives can be described generically as traits which operate on arrays whose size is determined by an associated constant of that trait, e.g.:

pub trait NewStreamCipher: Sized {
    const KeySize: usize;
    const NonceSize: usize;

    fn new(
        key: &[u8; Self::KeySize], 
        nonce: &[u8; Self::NonceSize]
    ) -> Self;
}

This allows for abstract traits for very general cryptographic concepts which are usable for different sized keys, messages, and security levels. The API is error-free and panic-free, because the compiler can statically assert everything we need to eliminate any potential for a failure case. The best way to avoid runtime errors is to make potentially erroneous cases unrepresentable.

Const generics are needed desperately enough for these cases that many Rust cryptography crates have adopted typenum and generic-array to express them. I use this approach in two of my cryptography crates: Signatory (a multi-provider digital signature library) and Miscreant (a misuse-resistant authenticated encryption library).

While a typenum-based approach gets the job done, it utilizes what I’ll politely call “clever hacks”. This has a number of drawbacks: it’s heavy with boilerplate, the type signatures are unreadable, and doing any sort of nontrivial arithmetic, if even possible, is incredibly unwieldy. Here’s an example of a trait where that sort of type level arithmetic would be helpful:

https://github.com/tendermint/signatory/blob/master/src/ecdsa/curve/mod.rs

pub trait WeierstrassCurve {
    type ScalarSize: ArrayLength<u8>;
    type CompressedPointSize: ArrayLength<u8>;
    type UntaggedPointSize: ArrayLength<u8>;
    type UncompressedPointSize: ArrayLength<u8>;
    type Asn1SignatureMaxSize: ArrayLength<u8>;
    type FixedSignatureSize: ArrayLength<u8>;
}

😱Aack! 7 different associated type-based constants?! Why? Well really we should only need one (i.e. ScalarSize), and the rest can be trivially computed from it using a little bit of arithmetic. But where typenum supports that arithmetic in theory, it comes at a considerable cost in regard to ease of debugging, and in attempting to even try it I couldn’t figure out the incantation.

Here is what I hope this trait will look like after const generics:

pub trait WeierstrassCurve {
    const ScalarSize: usize;
}
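
The remaining sizes could then, I’d hope, be expressed as defaulted constants computed with a little ordinary arithmetic. The following is a sketch of the shape I’d like this to take, not a claim about any particular accepted syntax:

pub trait WeierstrassCurve {
    const ScalarSize: usize;

    // everything below is (hopefully) derivable from ScalarSize
    const CompressedPointSize: usize = Self::ScalarSize + 1;
    const UntaggedPointSize: usize = 2 * Self::ScalarSize;
    const UncompressedPointSize: usize = 2 * Self::ScalarSize + 1;
    const FixedSignatureSize: usize = 2 * Self::ScalarSize;
}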

The Miscreant encryption library provides a somewhat more compelling example. Here is its Authenticated Encryption with Associated Data (AEAD) API:

pub trait Aead {
    type KeySize: ArrayLength<u8>;
    type TagSize: ArrayLength<u8>;

    fn new<K>(key: K) -> Self
    where
        K: Into<&GenericArray<u8, Self::KeySize>>;

    [...]
}

Here associated type-level constants are used to determine the size of a particular parameter, namely the key size. This means this trait can be generic over key sizes, ensures that the key is always the correct size (i.e. no chance of a panic) using the type system, and could be trivially converted to const generics:

pub trait Aead {
    const KeySize: usize;
    const TagSize: usize;

    fn new<K>(key: K) -> Self
    where
        K: Into<&[u8; Self::KeySize]>;

    [...]
}

Another noteworthy project using generic-array for cryptography like this is the Rust Cryptography Project, with its digest, block-cipher, and stream-cipher crates.

This project has developed a set of traits which are close to ones which, eventually, I would like to see get rolled in-tree as a solution to the Figure out in-tree crypto story. But that’s something I’ve been saying for years, and it always comes with the caveat “…but that doesn’t make sense until const generics land”.

Fortunately, as I hope I’ve shown in this post, it’s pretty easy to rewrite code already written using generic-array to use const generics once they hit stable. As soon as that happens, converting the existing RustCrypto crates to use them should be fairly straightforward, and once that’s done, we can hopefully ship 1.0 versions of them, as well as Signatory and Miscreant!

While I’m on the subject of this particular project, perhaps you can excuse a small sidebar for an important message:

rust-crypto is dead. Long live RustCrypto! #

NOTE: Those simply interested in roadmap items can skip this section

In Rust’s earlier years, the rust-crypto crate provided perhaps the first noteworthy effort to provide pure-Rust cryptographic implementations. I covered it in my Bay Area Rust talk I mentioned above, and at the time said it was a good effort but needed a lot more work to be considered mature and robust. To this day it is the most widely used Rust cryptography crate, with 213 downstream dependencies, which is approximately twice that of, say, ring, a much more mature cryptography crate.

The rust-crypto crate filled a bit of a different niche though: where ring presents high-level, misuse-resistant APIs, which is a fantastic thing for a cryptography crate to do, the rust-crypto crate contains what I’ll call “shoot yourself in the foot” crypto, and sometimes you need that. For example, I recently implemented the GlobalPlatform Secure Channel Protocol, a purely symmetric transport encryption protocol based on AES-CTR and AES-CMAC.

While ring is great for modern protocols, there are a lot of gross old protocols which lack a nicely designed modern equivalent. I also used these crates to implement Miscreant, which is probably the best use of “shoot yourself in the foot” crypto: using it to build high-level misuse-resistant constructions.

So what’s wrong with rust-crypto? Does it contain a horrible security vulnerability?

Worse: it’s unmaintained!

Where in my 2014 talk I suggested that rust-crypto was rough around the edges but could be a good crate with a lot of effort to polish it up, work on rust-crypto stalled shortly thereafter, and it hasn’t seen a release or even a commit in nearly 3 years. I think it’s fairly safe to say that rust-crypto is dead.

Fortunately it has a replacement - the Rust Crypto GitHub organization:

https://github.com/RustCrypto/

Rather than one monolithic “crypto” crate, this organization contains several topical subprojects covering most popular (at least symmetric) cryptographic concepts like hash/digest functions, block ciphers, stream ciphers, key derivation functions, message authentication codes, and more.

If you’ve been using the rust-crypto crate in your project, 2019 is a great year to get rid of it! Instead consider using ring (if it supports the algorithms you need), a RustCrypto crate, or Google’s new Mundane Crypto project.

Last but not least, check out curve25519-dalek (perhaps the first particularly noteworthy 1.0+ Rust cryptography crate) and its almost post-1.0 companion crate ed25519-dalek for generating digital signatures (using the Ed25519 signature algorithm) and x25519-dalek for Diffie-Hellman key exchanges.

Cryptographic applications of Pin #

Pin is a mechanism for describing immovable types, “pinning” them to a particular memory region. In many ways it’s an answer to a longstanding frustration among kernel and embedded developers: Rust’s memory model was too high level and could not express the constraints needed for certain applications, like userspace synchronization primitives.

I’ll talk very briefly about what I’d like to use Pin for: zeroing memory. In 2018 I wrote the zeroize crate. It had a simple goal: provide APIs for reliably zeroing out memory in cryptographic contexts. Originally it provided C FFI-based wrappers to OS-specific zeroing operations, but after talking with several people about the particular problem, I felt confident enough to rewrite it in Rust, leveraging Rust’s volatile semantics and compiler/memory fences to attempt to describe to rustc exactly what I wanted.

The story around all of that now is fairly good, I think, but less so is the story about making copies of data you might potentially zeroize. Calling a magic API that promises “There, it’s zeroed!” is all fine and good, except for the part where Rust made several copies behind your back and you didn’t even notice. Indeed Copy is implemented on many types that are interesting for cryptographic purposes. Most symmetric cipher keys will ultimately look like [u8; 16] or [u8; 32], both of which impl Copy, and that makes it so very easy to make lots of copies of things like cryptographic keys if you aren’t paying attention.
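
To make that concrete, here’s a contrived sketch (derive_key and send are hypothetical stand-ins) of how easily extra copies of key material pile up:

fn derive_key() -> [u8; 32] {
    [0u8; 32] // stand-in for real key derivation
}

fn send(_key: [u8; 32]) {}

fn example() {
    let key = derive_key();
    let backup = key;   // implicit bitwise copy: the key now lives in two places
    send(key);          // passing by value silently makes yet another copy
    let _ = backup;
    // zeroing `key` at this point says nothing about the copies above,
    // or about temporaries the compiler may have spilled elsewhere
}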

With Pin, things become quite interesting. Here is the core trait of the zeroize crate:

https://docs.rs/zeroize/latest/zeroize/trait.Zeroize.html

pub trait Zeroize {
    fn zeroize(&mut self);
}
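
Usage looks something like this (a sketch; the fixed-size key is just a stand-in for real key material):

use zeroize::Zeroize;

fn use_secret() {
    let mut key = [0u8; 32];
    // ... fill in and use the key ...
    key.zeroize(); // overwrite with zeroes using volatile writes + fences
}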

A ZeroizeWithDefault marker trait notes which types can be zeroed out with their Default value. The crate provides some fairly broad impls, like impl<Z: ZeroizeWithDefault> Zeroize for Z and impl<Z: ZeroizeWithDefault> Zeroize for [Z].

With Pin, these impls could be changed to only work on Pin-ed types, e.g. impl<Z: ZeroizeWithDefault> Zeroize for Pin<Z>. This would make Pin a mandatory part of the zeroize API, but it would also enforce a particularly powerful property: we can ensure that the data wasn’t moved since it was pinned. This allows the type system to ensure that we didn’t accidentally make a bunch of extraneous copies of data we tried to zero out.

Whenever Pin lands in stable Rust, I’d be excited to check this out.

Awaiting async / await #

I’ve been a huge fan of using async / await in TypeScript (I don’t really write vanilla JavaScript anymore), and having used several different approaches to concurrency over the years including “microthread” approaches in Erlang and Go as well as various future / promise abstractions in many languages and the boring old “handwritten I/O loop and thread pool” approach, I have to say that the way Rust composes async / await, futures, and tokio is among my favorites. Things are definitely still unstable and rough around the edges, but the foundation is solid.

Rust’s approach neatly wraps up the problems of multithreaded asynchronous I/O in a way which actually feels ergonomic in Rust. When we look at how runtimes like Erlang and Go handle multithreaded scheduling, particularly in regard to things like FFI calls, things are a bit complex, messy and gross. To (roughly) quote Chris Meiklejohn on Erlang:

200,000 processes are cheap but opening a file is expensive

Erlang and Go are both “stackless” languages (that is to say, they don’t use the C stack) with schedulers that are internally event loops. Things which can potentially block these event loops are verboten, and need to be scheduled to run on a separate thread pool, e.g. the Erlang async pool, or the CGo thread pool in Go.

There are ways to work around this, like trying to make inline FFI calls by constructing C stack frames and then calling the C target from the “emulated” stack frame. Erlang supports this (NIFs), and there are experimental Go projects to do this like rustgo. This approach provides a bridge between the stackless and stackful worlds, but these calls hang the language runtime’s microthread scheduler, blocking all other “microthreads” that are trying to execute on it (hence Chris’s quote above).

To avoid that problem the runtime needs to detect that a scheduler thread is stuck on an inline FFI call and that other scheduler threads need to steal work from it while it’s stuck, and now that separate thread pool is worthless and we have something that looks a lot like Tokio. I haven’t kept up on the latest Erlang developments, but I believe this is where they’re at after many decades of work. Go seems to be at the prototyping stage with this sort of work.

The result is we wind up with two completely different worlds: the stackless world where Erlang and Go code execute, and the C stack where everything else executes. Once upon a time Rust had something like this: libgreen. It wasn’t nearly as sophisticated as Erlang or Go’s schedulers: the microthreads it implemented weren’t “preemptive”. But even if they were, the divide between these two different worlds was stark, and trying to gloss over it for things like std::io was the exact opposite of the sort of “zero cost abstraction” Rust prides itself on.

With async / await, instead of having a completely different stackless world where our Rust program runs, and a far away thread pool using “real” stacks, our program runs inside of that thread pool and is free to use the C stack and make blocking FFI calls as a first-class feature. We get the benefits of an asynchronous execution model, and while we do wind up with a separate async world which is “infectious” within a codebase, all code everywhere is still using the C stack, and therefore FFI calls are “zero cost” and can block indefinitely.

The “infectious” nature of async is a common complaint with this approach: where Erlang’s receive and Go’s := <- act as a sort of “virtual blocking call” built on top of a stackless runtime, Rust supports both an async execution model and a non-async one simultaneously, so we need an annotation to describe which one we’re using. To me that’s a small price to pay to avoid having to worry about mixing two different calling conventions in the same program, especially when trying to debug. Some may lament the need to write async, but it’s worth keeping in mind that async needs a runtime to power it, and it’s nice to (still) have the option to write Rust code that doesn’t need that runtime.

Earlier I mentioned three Rust apps I wrote (or collaborated on) and deployed to production in 2018. None of them are presently using futures or tokio, and ideally all of them probably should be. One of them in particular is doing complex subprocess management and talking on lots of pipes, presently using threads and blocking I/O. But there’s a lot more asynchronous behavior and complex interactions I’d like to add, and the more complex those interactions get, the grosser that sort of thing becomes when using blocking I/O, and you start to wish there was an event loop scheduling everything.

At one point I took a look at rewriting one of these apps with tokio-process, and after some issues with the tokio-core -> tokio transition were addressed and futures was upgraded so that it even became possible to attempt the rewrite, I was unhappy with the result. After years of using nightly Rust, I have finally moved onto stable for day-to-day use, and now that these apps are in production, I don’t want to be using nightly features. And so I suffer through multithreaded blocking I/O, waiting for async / await to land on stable.

I could go on, but I think we’re all aware that async / await is a huge missing piece of the ecosystem which is a blocker for using Rust in any asynchronous application, and members of the core team have done some outstanding work on it in 2018. When it lands, I think it will unlock Rust’s true potential as a powerhouse language for high-performance asynchronous applications which are ergonomic, maintainable, and seamlessly scalable across multicore CPUs.

I am eager to embrace async / await whenever it lands on stable, but in the meantime I will probably continue to use blocking I/O rather than going back to nightly. One of the great things about the async / await approach is code written in this style can be easily adapted from code already doing blocking I/O by adding the proper await points, so I’d like to hope it will be easy to prototype with blocking I/O today, and async-ify my code tomorrow.
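
As a sketch of what I mean (the function is contrived, and I’m assuming async read/write extension traits along the lines of what tokio provides), the blocking version and its hoped-for async counterpart have essentially the same shape:

use std::io::{self, Read, Write};

// Today: blocking I/O on a thread
fn echo_blocking(stream: &mut std::net::TcpStream) -> io::Result<()> {
    let mut buf = [0u8; 1024];
    let n = stream.read(&mut buf)?;
    stream.write_all(&buf[..n])
}

// Tomorrow: the same logic with await points added
// (assumes async extension traits along the lines of tokio's)
async fn echo_async(stream: &mut tokio::net::TcpStream) -> io::Result<()> {
    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    let mut buf = [0u8; 1024];
    let n = stream.read(&mut buf).await?;
    stream.write_all(&buf[..n]).await
}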

That wraps it up for core language features. Onto Cargo!

Cargo: Embedded Rust’s Biggest Pain Point #

There are two big cargo pain points for embedded Rust developers right now, one of which I’m not sure is properly triaged within the core team:

  1. The future of xargo
  2. Bad interactions between build-/dev-dependencies and cargo features impacting no_std crates

Let me start with the latter, since it’s the more pressing of the two, and then I’ll circle back on xargo.

Bad interactions between build-/dev-dependencies and cargo features impacting no_std crates #

ATTENTION RUST CORE TEAM!

If I can sudo rust core please ... one thing in this post, it would be drawing this particular bug to your attention. It’s a pretty big problem, but conversation about it is highly disorganized.

There are several open issues about this on the cargo bug tracker, which I believe to all be the same underlying issue:

https://github.com/rust-lang/cargo/issues/2589
https://github.com/rust-lang/cargo/issues/4664
https://github.com/rust-lang/cargo/issues/4866

Of these open issues, I think the best description of the problem is whitequark’s report in rust-lang/cargo#4866, and I think it might make sense to close #2589 and #4664 as dupes of #4866 to at least move discussion to a single place.

Here’s a Reddit thread about it:

https://www.reddit.com/r/rust/comments/a91qv0/for_a_no_std_crate_libstd_is_still_included_in/

…and some organic reports I’ve seen on projects I contribute to:

https://github.com/dalek-cryptography/ed25519-dalek/issues/35

This is a bit of a tricky bug to explain, but let me give it a shot. Right now cargo feature activation applies to a build as a whole: if any crate in the dependency graph activates a particular cargo feature of a shared dependency, that feature is “on” for the entire project.

This is a big problem for no_std crates: it effectively breaks no_std builds whenever a no_std crate shares a dependency with one of its build-/dev-dependencies, and that build-/dev-dependency activates the shared dependency’s std feature. When that happens, the shared dependency links std for the target binary too, thereby breaking no_std builds.
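
To illustrate with a contrived Cargo.toml (the crate names are made up), the failure mode looks something like this:

[package]
name = "my-no-std-crate"

[dependencies]
# built with default features off so the target stays no_std
shared-dep = { version = "1", default-features = false }

[dev-dependencies]
# this test helper enables `shared-dep/std`; because feature activation is
# unified across the whole build, `shared-dep` is now built with `std` even
# for the no_std target, and the build breaks
test-helper = "1"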

This is a weird bit of spooky-action-at-a-distance from build-/dev-dependencies: requirements of the build tooling are getting mixed in with the requirements of the target. Ensuring all of the dependencies of a no_std crate are actually no_std is tricky to begin with (although cargo-nono is helpful there).

It seems to me like cargo features used by build-/dev-dependencies (or proc macros or other kinds of build-time tooling) should have no effect on what cargo features are activated when building the target. That sounds like the sort of thing that might potentially have a very small fix, if there’s some way to omit build-/dev-dependencies from the target feature calculation.

It also seems like cargo workspaces already support a unified dependency calculation across all cargo features used by a workspace, while still keeping cargo features used by independent crates within a workspace isolated from each other. If the fix turns out to be complex, perhaps it could lean on whatever workspaces are already doing. I haven’t looked at any of the relevant code, just seeing the symptoms and spitballing about fixes, so apologies if I’m way off here.

If all of that turns out to be too complicated to fix, here’s another solution I think might be simple and “good enough”: it seems this particular problem often comes up because some build-time tool is a full-blown command line application, like bindgen, which is installable via cargo install. If that’s the case, another option might be a way for crates to specify build-time, cargo install-able helper tools which are spawned as plain old child processes. This isolates these tools and their dependencies from the rest of the build, and might work well in tandem with something like cargo make to drive them.

Regardless, this is a big pain point that from my perspective seems to be coming up a lot, and I’d like to help get it fixed!

The future of xargo #

Let me begin by saying that xargo is a fantastic tool which, from my perspective, already does everything I want.

I use xargo to cross-compile to a number of different embedded targets as well as Intel SGX. For the latter I use xargo to swap a replacement std crate (i.e. sgx_tstd) into the sysroot, one which is able to call out of the SGX enclave (using an “OCALL”) into untrusted userspace, which in turn calls the “real” std function (which can then, in turn, make system calls to the kernel). It’s a complicated trampoline-like system, but the result is somewhat amazing: SGX enclaves can transparently leverage the greater std-dependent crate ecosystem (provided the desired std functionality is actually wrapped and all transitive dependencies also compile inside SGX).

The current plan, as I understand it, is to fold all of xargo‘s features into cargo. That’s a fantastic plan and I want to live in that future, but in the meantime there is no “official” fork of xargo as far as I know.

It’d be nice if there were an “official” fork of xargo under the rust-lang-nursery or perhaps rust-embedded GitHub organizations which could at least serve as a coordination point for people interested in continuing to use xargo until cargo can completely replace it. I have no idea who has time to or should “own” that project, but perhaps people in the Rust Embedded WG would be interested?

Improving Rust ecosystem security in 2019 #

Security is definitely a passion of mine. Sadly it seems I’ve written an enormous post and some of the stuff I want to say the most is buried way down here at the bottom. Whoops, guess I buried the lede.

A couple quick things before I start with some specific security improvements I’d like to see in Rust:

The newly-formed Rust Secure Code Working Group has been a great place to discuss issues relating to Rust security. Check out the open issues on GitHub and chat with us on Zulip.

While there are always security improvements to be made, Rust is not in a state of crisis, and in looking to improve its security, I personally prefer to adopt a “do no harm” approach. This means we shouldn’t be making onerous changes to existing workflows, tools, and services, or if we do, we should have very good reasons why it will improve the overall ecosystem.

Finally, some of the things I’m going to talk about below, well, I’ve tried to work on them in other languages. They are extremely difficult, almost never done well, and if we can actually pull them off they will make Rust somewhat unique. I will start with ideas I think are practical and high-value first, and then gradually work my way to the unicorn items.

Sandboxing for build.rs, proc macros, and other build-time code execution #

Rust provides a lot of features for executing code at compile time. The build.rs file is commonly used to run code at build-time by many projects, and procedural macros are very popular. Right now there are no constraints on what this code is allowed to do. Most security professionals I know who work hands on with Rust seem to agree that it would be nice to apply some form of sandboxing.
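
To make the concern concrete, here’s a contrived build.rs sketch. Nothing in it is malicious, but nothing stops it from being so: it runs as an ordinary program with the full privileges of the build user:

// build.rs runs arbitrary code at compile time with the build user's privileges
use std::process::Command;

fn main() {
    // legitimate build scripts routinely shell out to compilers and code generators...
    let _ = Command::new("cc").arg("--version").status();

    // ...but nothing prevents a script from reading credentials, phoning home,
    // or tampering with the build environment in exactly the same way
    println!("cargo:rerun-if-changed=build.rs");
}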

Figuring out exactly what that looks like will be tricky, especially given we aren’t greenfielding but instead adding constraints to a living system. Perhaps we can find an acceptable sandbox profile to begin with that actually manages to pass crater, or maybe build sandboxing will initially need to be an opt-in feature.

But perhaps I should back up a bit and talk about what benefits we’d hope to get out of sandboxing.

At first glance, it might seem a bit silly to sandbox build tooling when we’re using that to build potentially attacker-controlled code. If we’re building a malicious crate, does it really matter if the malicious code is in the build script, or in the resulting executable?

Here are some of the reasons I feel build scripts are a bit more worrisome than their targets, from the perspective of an attacker. I’ll go ahead and put on my attacker hat. As an attacker, one of my primary concerns is stealth. I want my attack to be as silent as possible (otherwise I’ll get caught!), and I want to erase my tracks as I go as much as possible.

Given these constraints, there are many reasons why I would place a malicious payload into build scripts, instead of the target code:

Less forensic evidence: when we compile binary targets from Rust code in a build system, the resulting binary is almost always first uploaded to some sort of artifact storage, be it a Docker registry, some sort of cloud storage bucket, or what have you. This means if there is a malicious payload hidden somewhere in that binary, we can at least examine it retroactively as part of a data forensics and incident response (DFIR) process to determine the nature of a malicious payload. If we find such a payload, we have the advantage of it being effectively “data at rest”, and that it won’t become actively malicious until we try to execute it.

By comparison, build scripts and proc macros are downloaded, compiled, and executed on the spot. We probably don’t keep copies of the resulting binaries. Perhaps there are logs from which a Cargo.lock can be reassembled, but where with a target binary we have the exact compiler output, unless we always save off a copy of the binaries for every build script and proc macro, it’s just going to be our best guess.

An attacker can always place the malicious payload in something innocuous-looking and difficult to audit, like a resource fetched over HTTP inside of build.rs which looks like a legitimate part of the build when fetched from a browser, but serves a malicious payload when fetched by that build.rs script. They could even blacklist IP addresses once they’ve grabbed the malicious payload, so that subsequent attempts to fetch it, even ones which perfectly impersonate the build.rs request as part of an investigation, do not reveal the payload, confusing DFIR responders.

Finally, the build script can clean up after itself and delete or falsify forensic evidence, and we’ll be none the wiser. With a target binary, we’ll have a static artifact for a human or potentially an automated auditor process to examine, potentially before it’s even deployed!

Shorter window of compromise: When compiling a target binary on a build system, the result is data-at-rest which is immediately stored somewhere, and will remain stored somewhere until deployed, which leaves a window in which a malicious payload can still be caught. By comparison, when the payload lives in a build script or proc macro, there is only a minute amount of time between when it is downloaded and when it is executed.

Lateral movement: Build systems often build a mixture of high-value and low-value executables which are deployed differently, possibly in different firewall zones. This makes them a great place to escalate privilege between these sorts of apps. A compromise of a minor dependency in a low-value app can provide access to a build server as the build user. As an attacker, I can either try to directly attack the higher value app as the build user, or attempt local privilege escalation attacks on the kernel and escalate to root. Once I have root there are many options for stealthy persistence.

Fewer eyeballs: You may read the code of your dependencies, but did you read all of the build scripts? The proc macros? The latter in particular can be quite difficult to read, especially if you’re looking for a malicious payload.

 