Fuzzing TraceMonkey

Making JavaScript faster is important for the future of computer security. Faster scripts will allow computationally intensive applications to move to the Web. As messy as the Web's security model is, it beats the most popular alternative, which is to give hundreds of native applications access to your files. Faster scripts will also allow large parts of Firefox to be written in JavaScript, a memory-safe programming language, rather than C++, a dangerous footgun.

Mozilla's ambitious TraceMonkey project adds a just-in-time compiler to Firefox's JavaScript engine, making many scripts 3 to 30 times faster. TraceMonkey takes a non-traditional approach to JIT compilation: instead of compiling a function at a time, it compiles only a path (such as the body of a loop) at a time. This makes it possible to optimize the native code based on the actual type of each variable, which is important for dynamic languages like JavaScript.
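
For example, a loop like the following (a simplified illustration of mine, not TraceMonkey's own code) is ideal for tracing: every iteration sees the same types, so the native code compiled for an int counter and a double accumulator keeps running without bailing out.

// A loop whose body sees the same types on every iteration: the trace
// compiled for an int counter and a double accumulator stays valid.
var sum = 0.5;
for (var i = 0; i < 100000; ++i) {
  sum += i * 0.25;
}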

My existing JavaScript fuzzer, jsfunfuzz, found a decent number of crash and assertion bugs in early versions of TraceMonkey. I made several changes to jsfunfuzz to help it generate code to test the JIT infrastructure heavily. For example, it now generates mixed-type arrays in order to test how the JIT deals with unexpected type changes.
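
A testcase in that spirit (a hand-written illustration, not actual jsfunfuzz output) iterates over an array whose element types change partway through the loop, forcing a trace specialized to the early elements to bail out and handle the new types:

// The element type changes mid-loop, so a trace specialized to ints
// must side-exit and handle strings, objects, and undefined as well.
var a = [1, 2, 3, 4.5, "five", {}, true, undefined];
var s = "";
for (var i = 0; i < a.length; ++i) {
  s += a[i];
}
print(s);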

Andreas Gal commented that each fuzz-generated testcase saved him nearly a day of debugging: otherwise, he would have had to tease a testcase out of a complex, misbehaving web page. Encouraged by his comment, I looked for additional ways to help the TraceMonkey team.
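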

JIT correctness

Differential testing is designed to find correctness bugs. It runs a randomly-generated script twice (with and without the JIT) and complains if the output is different.
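
Here is a rough sketch of that loop; generateRandomScript() and runScript() are hypothetical stand-ins, not the fuzzer's real helpers:

// generateRandomScript() and runScript() are hypothetical stand-ins for
// the fuzzer's script generator and for running a script in the js shell
// with the JIT on or off.
for (var trial = 0; trial < 1000; ++trial) {
  var script = generateRandomScript();
  var interpreted = runScript(script, /* jit */ false);
  var jitted = runScript(script, /* jit */ true);
  if (interpreted !== jitted)
    print("Output mismatch for:\n" + script);
}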

It quickly found 13 bugs where the JIT caused JavaScript code to produce incorrect results. These bugs range from obvious to obscure to evil.

It even found at least one security bug that jsfunfuzz had missed: an uninitialized-memory-read bug that caused output to be random when it should have been consistent. jsfunfuzz missed the bug because it ignores most output, but the differential-testing fuzzer caught it the same way it would catch a JIT-versus-interpreter difference.

JIT speed

I set up the new fuzzer to compare how long scripts take to execute with and without the JIT, and to complain whenever enabling the JIT makes a script run more slowly. It measures speed by letting the script run for 500ms and reporting the number of loop iterations completed in that time.
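
The measurement looks roughly like this (the helper name is mine, not the fuzzer's):

// Run the generated loop body repeatedly for about 500ms and report how
// many iterations completed; compare the counts with the JIT on and off.
function iterationsIn500ms(body) {
  var start = Date.now();
  var count = 0;
  while (Date.now() - start < 500) {
    body();
    ++count;
  }
  return count;
}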

So far, it has found 4 serious bugs where the JIT makes scripts several times slower. Two of these have already been fixed, but the other two may be difficult to fix.

It has also found 10 cases where the JIT makes scripts about 10% slower. Most of these minor slowdowns are due to "trace aborts", where a piece of JavaScript is not converted to native code and stays in the interpreter. Some trace aborts are due to bugs, while others are design decisions or cases for which conversion to native code simply hasn't been implemented yet.
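
For illustration, here is the shape of a loop that triggers a trace abort, assuming recursion as the unsupported construct:

// Hypothetical abort: the recursive call is assumed to be a construct the
// tracer gives up on, so the whole loop stays in the interpreter.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}
for (var i = 0; i < 100000; ++i) {
  fib(10);
}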

There is some disagreement over which trace aborts are most likely to affect real web pages. I asked members of Mozilla's QA team to scan the web in a way that can answer this question.

Interpreter speed

Mostly for fun, I also used the fuzzer to see which code the JIT speeds up the most. Here's a simplified version of its answer:

for (var i = 0; i < 0x02000000; ++i) {
  d = 0x55555555;   // d is undeclared (a global), and the value doesn't fit in a 30-bit int
  d++; d++; d++; d++; d++;
}

This code runs 250 times faster when the JIT is enabled. The JIT achieves this gigantic speedup because the interpreter is inefficient at dealing with undeclared variables and with numbers that can't be represented as 30-bit ints.

Assertions

The JavaScript engine team has documented many of their assumptions as assertions in the code. Many of these assertions make it easier to spot dangerous bugs, because the script generated by the fuzzer doesn't have to be clever enough to actually cause a crash, only strange enough to violate an assumption. This is similar to my experience with other parts of Gecko that use assertions well.

Other JavaScript engine assertions make it easier to find severe performance bugs. Without these assertions, I'd only find these bugs when I measure speed directly, which requires drastically slowing down the tests.

More ideas

One testcase generated by my fuzzer demonstrated a combination of a JIT performance bug with a minor bytecode generation bug. I might be able to search for similar bytecode generation bugs the same way I searched for decompiler bugs: by ensuring that a function does not change when round-tripping through the decompiler. In order to do that, I'll need a new patch for making dis() return the disassembly instead of printing it.
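
A sketch of that round-trip check, assuming the patched dis() and using my own helper name:

// Compare a function's bytecode before and after decompiling it with
// toString() and recompiling the result; assumes a dis() that returns
// the disassembly as a string instead of printing it.
function survivesRoundTrip(f) {
  var before = dis(f);
  var g = eval("(" + f.toString() + ")");
  var after = dis(g);
  return before === after;
}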

I should be able to find more performance bugs by looking at which trace aborts and side exits are taken. That would make problems such as repeatedly taking the same side exit easier to spot.
