Fuzzing JavaScript and DOM

Jesse Ruderman

Email: jruderman@mozilla.com

Twitter: @jruderman


Detecting bugs

If Firefox crashes as a result of some input, it's a bug. Valgrind is a memory debugger; it will tell you about things that could have crashed or could result in undefined behavior. Assertions document what the developer believes to be true, and allow finding bugs before they lead to crashes.

These are the kinds of bugs you can find without knowing what output is expected, without being a UI designer or standards expert or understanding the security policy.

JavaScript engine fuzzer

(function() {
  var x;
  eval("for (x in (gc)()) for each(e in [0]) { print }")
})()


Assertion failure: !JSVAL_IS_PRIMITIVE(regs.sp[-2]), at ../jsops.cpp:489

JavaScript engine fuzzer

jsfunfuzz is 4000 lines of randomly-recursive JavaScript. The way it generates input (functions) is pretty boring: there are functions like "makeStatement" and "makeExpression", and each picks a template at random and fills it in with calls to other make* functions.
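A toy sketch of that structure (the templates and function bodies here are invented for illustration; jsfunfuzz's real tables are far larger):

```javascript
// Toy version of jsfunfuzz's recursive structure: each make* function
// picks a template at random and fills its holes with calls to other
// make* functions.
function rndItem(arr) { return arr[Math.floor(Math.random() * arr.length)]; }

function makeExpression(depth) {
  if (depth <= 0) return rndItem(["0", "1", "x", "''", "{}"]);
  return rndItem([
    function () { return "(" + makeExpression(depth - 1) + " + " + makeExpression(depth - 1) + ")"; },
    function () { return "typeof " + makeExpression(depth - 1); },
    function () { return "[" + makeExpression(depth - 1) + "]"; }
  ])();
}

function makeStatement(depth) {
  if (depth <= 0) return makeExpression(0) + ";";
  return rndItem([
    function () { return "if (" + makeExpression(depth - 1) + ") " + makeStatement(depth - 1); },
    function () { return "{ " + makeStatement(depth - 1) + " " + makeStatement(depth - 1) + " }"; },
    function () { return "var x = " + makeExpression(depth - 1) + ";"; }
  ])();
}
```

Each generated string is then handed to eval (or a fresh shell) to see whether it crashes or trips an assertion.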

JavaScript: Detecting bugs

It can find all the normal "easy to detect" kinds of bugs. It also tends to put the engine into an infinite loop or exponential blowup by accident, so I don't really bother with hangs. But also...

JavaScript: Detecting bugs

What's more interesting is what kinds of bugs it can find. It can find assertions and crashes like any other fuzzer, but it can also find several types of "correctness" bugs. The first way of finding correctness bugs is the simplest: comparing engines.

JavaScript: Comparing engines

var v = 0;
for each (var a in [0, {}, {}, {}]) {
  v = v >>> 0;
  for each (var b in [{}, {}, new String(''), 42, new String(''), {}, 42]) { }
}

When you refactor, or introduce a new execution mode, you can feed fuzz inputs into both engines to ensure they give the same answers. Here I was trying to find bugs in the tracing JIT, which does special things when variables change type, so I instructed the fuzzer to make code where variables change type.
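A minimal sketch of that comparison step, assuming you can run the same program through two evaluators (two shells, or one shell in two JIT modes); the harness and names here are mine, not jsfunfuzz's:

```javascript
// Differential testing sketch: run one fuzz-generated program through two
// evaluators and flag any disagreement in result or thrown exception.
function runWith(evaluate, program) {
  try {
    return { ok: true, value: String(evaluate(program)) };
  } catch (e) {
    return { ok: false, value: e.name };
  }
}

function enginesDisagree(engineA, engineB, program) {
  var a = runWith(engineA, program);
  var b = runWith(engineB, program);
  return a.ok !== b.ok || a.value !== b.value;
}
```

Any input on which enginesDisagree returns true is a candidate correctness bug in one of the two engines.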

Firefox has been adding a new JIT every year!

Aside: Translation party

There's this site called Translation Party where you can see what happens when Google Translate tries to take a sentence to Japanese and back. With some inputs it detects convergence right away.

Aside: Translation party

But with other inputs, the translator has some trouble. "Isn't it ironic" became "Is it not ironic". Ok, that's not too bad. But then it turned into "Ironically, it is not", which has a very different meaning. Maybe it's a comment about the song.

I don't know Japanese, so I can't tell you exactly where it went wrong. But I think it's safe to say something did go wrong.

JavaScript: Decompilation party

So here's the same idea applied to JavaScript. When SpiderMonkey prints out a function, it's actually decompiling bytecode. In this case, the compiler turned my function into 5 bytecodes, and the decompiler turned it back into exactly the same text I had entered.

Now, I can read bytecode about as well as I can read Japanese, but seeing the round-trip work gives me some confidence that both parts worked correctly.
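The round-trip check itself is easy to sketch. (Modern engines return the original source text from toString, so this passes trivially today; SpiderMonkey's old decompiler actually regenerated source from bytecode.)

```javascript
// Round-trip sketch: compile, decompile with toString(), recompile the
// decompiled text, and decompile again. If the decompiler is consistent,
// the decompiled text is a fixed point.
function decompilesConsistently(body) {
  var f = new Function(body);          // compile
  var once = f.toString();             // decompile
  var g = eval("(" + once + ")");      // recompile the decompiled text
  var twice = g.toString();            // decompile again
  return once === twice;
}
```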

JavaScript: Decompilation party

Something went wrong here. In fact, the first one passes undefined to C, and the second one passes 0. The decompiler screwed up by not including enough parentheses.

JavaScript engine fuzzer

Seems like a lot of bugs, but it was over a period of 6 years. Some were regressions that were only in trunk for a few days. On the other hand, some were 10 years old.

The JavaScript engine team loves getting bugs from the fuzzer. Because otherwise they'll hit the bug in some complicated web page, all tangled up in application logic and jQuery.

jsfunfuzz now competes with jandem's method/object fuzzer and decoder's testcase-mutation fuzzer.

JavaScript engine fuzzer

DOM fuzzer

DOM fuzzer: mutation modules

The first part of the DOM fuzzer just picks two nodes at random in a document and performs an appendChild or insertBefore call. (In other words, it picks a node to move and a place to put it.) It quickly turns any page into a DOM soup, so I called it Stir DOM.

Very easy to implement and reasonably effective. Great bang-for-buck, 300 bugs out of maybe 30 lines of JS. This is how I got started and it's a good place to start fuzzing a new feature.
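In a browser page, the whole module can look roughly like this (function names are mine, not the fuzzer's):

```javascript
// "Stir DOM" sketch: pick a random node to move and a random place to put
// it, then call appendChild or insertBefore. Illegal moves (e.g. into a
// node's own subtree) throw, and we just try again next round.
function pick(arr) { return arr[Math.floor(Math.random() * arr.length)]; }

function allNodes(root) {
  var nodes = [root];
  for (var i = 0; i < nodes.length; ++i) {
    for (var j = 0; j < nodes[i].childNodes.length; ++j) {
      nodes.push(nodes[i].childNodes[j]);
    }
  }
  return nodes;
}

function stirDOM(doc, rounds) {
  for (var n = 0; n < rounds; ++n) {
    var nodes = allNodes(doc.documentElement);
    var mover = pick(nodes);
    var target = pick(nodes);
    try {
      if (target.childNodes.length && Math.random() < 0.5) {
        target.insertBefore(mover, pick([].slice.call(target.childNodes)));
      } else {
        target.appendChild(mover);
      }
    } catch (e) { /* invalid move; ignore and stir again */ }
  }
}
```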

After running for a while you have a "DOM soup" containing the ingredients of the original page, all mixed together. Effectiveness depends on the starting points. I used to use real web pages, but now I use crashtest and reftest files from our own regression test suite. Starting points can be made really good by choosing combinations of features that go well together (tables; image maps) and would be hard to create randomly.

What are the inputs to a layout engine?

Tests layout engine's handling of changes, not just weird DOMs

You're testing not only the final rendering but also how DOM and layout code react to dynamic changes. This code is harder to get right, more performance-sensitive, and has more room for dangling-pointer bugs.

I didn't know this when I started! I was just being lazy, and got lucky!

DOM fuzzer: Generator modules

Lists of elements, attributes, and attribute values. Compiling those lists requires reading specs, implementations, or documentation, or following bugs and commits. Generator modules are great at creating weird combinations of elements, but not so great at building meaningful combinations. Stir DOM and its starting points cover that, though.

This information is used not only for DOM but also for setting innerHTML. Without these modules, rare elements would get little testing, especially with other rare elements.
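A tiny sketch of such a module (the element and attribute tables below are a short invented subset, not the fuzzer's real lists):

```javascript
// Generator sketch: build markup for innerHTML from tables of elements,
// attributes, and attribute values. Real tables come from reading specs,
// implementations, and bug reports.
function choose(arr) { return arr[Math.floor(Math.random() * arr.length)]; }

var ELEMENTS = ["div", "span", "table", "td", "map", "area", "iframe", "marquee"];
var ATTRIBUTES = [
  ["contenteditable", ["true", "false"]],
  ["dir", ["ltr", "rtl", "auto"]],
  ["tabindex", ["-1", "0", "32767"]]
];

function makeOpenTag() {
  var s = "<" + choose(ELEMENTS);
  if (Math.random() < 0.7) {
    var attr = choose(ATTRIBUTES);
    s += " " + attr[0] + '="' + choose(attr[1]) + '"';
  }
  return s + ">";
}

function makeInnerHTML(n) {
  var s = "";
  for (var i = 0; i < n; ++i) s += makeOpenTag();  // unclosed tags are fine fuzz input
  return s;
}
```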

DOM fuzzer: other modules

There are also a bunch of other modules. I don't have time to talk about them all, but I'd like to point out that mutation events are evil.

DOM fuzzer: bug counts

DOM fuzzer example - column hang

Bug 458659. Hangs in layout.

DOM fuzzer example - accessibility crash

Bug 669228. Crashes some accessibility code.

DOM fuzzer example - SVG GC crash

Bug 639728 showed up as a crash. (Bug 686044 is similar, by the way.)

The fix, part 1

 animVal =


The fix, part 2

   if (mBaseVal) {
+    if (!aNewValue.Length()) {
+      // InternalListLengthWillChange might clear last reference to |this|.
+      // Retain a temporary reference to keep from dying before returning.
+      kungFuDeathGrip = this;
+    }

   /* Do more things using |this| */


Patterns of memory safety bugs in Firefox

Reference count of this drops to 0. Often solved with kungFuDeathGrip. Usually better to solve by having the caller hold a strong reference.

Early returns. Error paths are often untested. These bugs are often solved by adding "auto" classes to do cleanup (RAII). Out-of-memory (OOM) bugs, in particular, are the bane of my existence.

Didn't expect X to happen in the middle of Y. The web platform is full of crazy callbacks, such as mutation events. It's also full of crazy methods, such as closing the window or spinning the event loop. Adding "phase" assertions can help.

C-style casts. If you change the representation of one of the classes, you can forget to update the cast. Type-restricted casts (functions that encapsulate the cast) can prevent this problem.

Mitigations: Layout

DOM fuzzer: Detecting bugs

DOM fuzzer: Dealing with known bugs

Leaks are the trickiest. I have a hierarchy of objects to watch, with the top occupied by nsDocShell, nsDocument, and nsGlobalWindow. Unfortunately, there are known leak bugs for all of these, so at present I will only notice small leaks.

DOM fuzzer


“This seems tedious. How can I automate it?”

2005  Ad-hoc layout testing    Fuzzing
2006  Reproducing fuzz bugs    Record fuzz actions
2006  Reducing fuzz testcases  Disentangle fuzz actions
2006  Reducing fuzz testcases  Automate line reduction

One of my favorite aspects of being a programmer is that I can automate any task that becomes tedious. When ad-hoc testing starts feeling tedious, maybe it's time to automate it -- with a regression test, an exhaustive test, or a fuzzer.

Unfortunately, fuzzing has a tendency to give you gigantic testcases unsuitable for bug reports. If a fuzzer finds lots of bugs, it's worth investing in automating the pieces around it.

Automated line-by-line reduction
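The core loop of such a reducer is small (a simplified sketch in the spirit of my Lithium tool, with isInteresting standing in for "does the browser still misbehave on this testcase?"):

```javascript
// Line-by-line reduction sketch: repeatedly try deleting chunks of lines,
// keeping any smaller testcase that still reproduces the bug, and halve
// the chunk size whenever we get stuck.
function reduceLines(lines, isInteresting) {
  for (var chunk = lines.length; chunk >= 1; chunk = Math.floor(chunk / 2)) {
    for (var start = 0; start + chunk <= lines.length; ) {
      var candidate = lines.slice(0, start).concat(lines.slice(start + chunk));
      if (isInteresting(candidate)) {
        lines = candidate;   // smaller testcase still reproduces; keep it
      } else {
        start += chunk;      // these lines were needed; skip past them
      }
    }
  }
  return lines;
}
```

For a crash testcase, isInteresting would launch the browser on the candidate file and check how it exits.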

More automation

“This seems tedious. How can I automate it?”

2007  Reducing starting parts          serializeDOMAsScript
2007  Adding starting points           Crashtests
2008  Adding starting points           Crashtestify extension
2010  Remembering to remove "ignores"  after-fix
2010  Finding regressions quickly      Download per-checkin builds

It can be worth automating something even if the automation only works half the time. Automate what you can and free up your mind for the interesting cases.

Now, it is possible to automate too much. If it's an easy thing you're only going to do twice, it might not be worth it. Or if you try to automate the process of automating tedious tasks. It's really hard, but even worse, if you succeed, you might have just enough AI to bring about the technological singularity.

So now my fuzzers run continuously on ~20 computers, and I get emails when they find new bugs. I just finish the reduction, upload to Bugzilla, and add an ignore entry.

How to fuzz

How to fuzz: pick your target

Pick a component to fuzz, and try to fuzz it directly. Fuzz it if it's too complex for ad-hoc testing or normal regression test suites because there are too many combinations that could interact, if you can't rely on the web to find the bugs, or if you'd be very sad when the web finds them.

How to fuzz: types of input

"Almost valid" or "Mostly valid" is where you want to aim, so you get into deep code paths but also the error cases off of those paths.

How to fuzz: where to start

Identifying a component's inputs can be subtle. For the layout module, it's largely changes to the DOM that form input, but also user zooming and resizing. For the regular expression engine, the input is regexps AND text to match against.

How developers help