Archive for the 'Security' Category

jsfunfuzz in news and blogs

Friday, August 3rd, 2007

Before the presentation:

After the presentation:

defcon

Friday, August 3rd, 2007

I saw some DEF CON attendees playing DEFCON today. Hackers playing a game of global thermonuclear war... reminds me of a movie :)

Introducing jsfunfuzz

Thursday, August 2nd, 2007

I wrote a fuzzer called jsfunfuzz for testing the JavaScript engine in Firefox. Window Snyder, Mike Shaver, and I announced it at Black Hat earlier today, as part of Mozilla's presentation, "Building and Breaking the Browser".

It tests the JavaScript language engine itself, not the DOM. (That means that it works with language features such as functions, objects, operators, and garbage collection rather than DOM objects accessed through "window" or "document".)

It has found about 280 bugs in Firefox's JavaScript engine, over two-thirds of which have already been fixed (go Brendan!). About two dozen were memory safety bugs that we believe were likely to be exploitable to run arbitrary code.

In the presentation, I speculated as to why it has been able to find so many bugs (a minimal sketch of the approach follows the list):

  • It knows the rules of the JavaScript language, allowing it to get decent coverage of combinations of language features.
  • It breaks the rules, allowing it to find errors in syntax error handling such as bug 350415 and more generally helping the fuzzer avoid having "blind spots".
  • It isn't afraid to nest JavaScript constructs in fairly complicated ways, like when it found bug 353079.
  • It allows state to accumulate by creating and running functions in a loop. (See bug 361346 for an example of a bug that would be hard to find otherwise.)
  • It tests for correctness, not just crashes and assertions. (Since I didn't talk about this aspect much during the security-focused Black Hat presentation, I've made it a separate blog post.)
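To make those bullets concrete, here is a minimal sketch of the general approach, written for this post. It is not jsfunfuzz's actual code (which is far more elaborate; grab that from the bug report): it builds mostly-valid JavaScript from grammar-like rules, occasionally breaks the rules on purpose, nests recursively, and lets state accumulate across iterations.

// Minimal sketch of grammar-based fuzzing with deliberate rule-breaking.
function rnd(n) { return Math.floor(Math.random() * n); }

function makeExpr(depth) {
  // Base case: plain leaf expressions.
  if (depth <= 0 || rnd(10) === 0)
    return ["0", "1.5", "x", "\"s\"", "null", "{}"][rnd(6)];
  switch (rnd(5)) {
    case 0: return makeExpr(depth - 1) + " + " + makeExpr(depth - 1);
    case 1: return "(function(x) { return " + makeExpr(depth - 1) + "; })(" + makeExpr(depth - 1) + ")";
    case 2: return "[" + makeExpr(depth - 1) + ", " + makeExpr(depth - 1) + "]";
    case 3: return "typeof " + makeExpr(depth - 1);
    case 4: return ")(" + makeExpr(depth - 1);  // deliberately broken syntax
  }
}

// Run generated code in a loop so state accumulates in x across iterations.
for (var i = 0; i < 1000; ++i) {
  try {
    eval("x = " + makeExpr(rnd(8)));
  } catch (e) {
    // Syntax and runtime errors are expected; crashes and assertions
    // are what we're fishing for.
  }
}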

If you want to test it out, grab it from the bug report. I recommend running it in a standalone JavaScript Shell, as it is much faster to start and shut down than a whole browser.
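Assuming a SpiderMonkey shell binary with its conventional name, js, invoking the fuzzer looks like:

js jsfunfuzz.js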

Update (2015): newer versions are available in the funfuzz repository on GitHub.

Black Hat

Monday, July 30th, 2007

This will be my first year at the Black Hat conference in Las Vegas. I'm excited and nervous ;)

Update: I'll be sticking around for DEF CON, too.

https for www.squarefree.com

Monday, June 18th, 2007

In the past, I've complained about banks not using https for login pages and software providers not using https for downloads. Both of these practices put large numbers of users at risk of financial harm through man-in-the-middle attacks, including attacks against unsecured wireless networks.

Starting today, I'm practicing what I preach: sections of my site that offer software, such as Firefox extensions and bookmarklets, are now served using https. I'm using the following .htaccess magic in each of those directories to redirect http requests to the correct https URL:

# Turn on mod_rewrite for this directory
RewriteEngine On
# If the request didn't arrive over https...
RewriteCond %{HTTPS} !=on
# ...redirect it to the same host and path over https
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

Supporting https will cost me about $65 per year: $17.99/year for a domain validation certificate from GoDaddy and $47.40/year for a unique IP from my web host.

sg:want

Thursday, December 14th, 2006

The Mozilla security group uses status whiteboard markings in bugs to indicate how severe a security hole is. For example, bugs that allow a malicious site to take over users' computers easily are labeled [sg:critical]. The severities we use (critical, high, moderate, and low) are described on the known vulnerabilities page.

Recently, Dan Veditz and I started using a new status whiteboard marking, [sg:want], to indicate that a fix would improve security even though we don't consider the bug to be a security hole. On many bugs, we use [sg:want P1] through [sg:want P5] to indicate roughly how much a fix would improve security. Currently, only one bug has P1, and eight bugs are P2. These bugs include user-interface changes, code-level changes, and entire new features.

If you think a bug should have [sg:want], you should contact me or dveditz rather than adding it yourself.

Here's a sampling of [sg:want] bugs, along with a few long-standing bugs that we do consider to be security holes:

Tighten the same-origin policy for local files.

The same-origin policy protects web site scripts from accessing private information on other web sites or on your hard drive. But an HTML file on your hard drive is allowed to read pretty much any text, HTML, or XML file on your computer. Combined with a widespread perception that HTML files are safe to double-click on (like MP3 files and unlike EXE files), this leads to several attack scenarios.
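A sketch of why that's dangerous, with an example target path: saved as an .html file and opened from disk, a page like this can read files a web page should never see.

// Illustrative sketch: a local HTML page reading another local file.
// The target path is just an example.
var req = new XMLHttpRequest();
req.open("GET", "file:///etc/passwd", false);  // synchronous, for brevity
req.send(null);
alert(req.responseText);  // contents of a file the page has no business reading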

Microsoft's "solution" to this problem, introduced with Windows XP SP2, is kind of insane. Internet Explorer disable all scripts in local files, unless you click an information bar and then click "Yes" in a dialog. This breaks way too many pages, and anyone who clicks "Yes" in order to unbreak pages grants the page access to their entire filesystem.

Some types of mixed content don't trigger the "broken lock icon".

The broken lock icon is a way to alert https web site owners that they are using http content in a way that makes their sites less secure. But many things that should trigger a broken lock icon don't trigger it, so web site owners testing with Firefox aren't alerted.
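As a hypothetical example of the kind of mixed content that should break the lock (the host and script name are placeholders): a page served over https quietly pulling in a script over plain http, which a man-in-the-middle could replace.

// Hypothetical mixed content on an https page:
var s = document.createElement("script");
s.src = "http://example.com/analytics.js";  // http resource on an https page
document.getElementsByTagName("head")[0].appendChild(s);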

Always show the address bar.

Allowing sites to hide the address bar in pop-up windows makes it possible for sites to spoof the address bar, making it appear that you're on a different site. I think Microsoft recently made Internet Explorer always show the address bar; we should too.
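The mechanism, sketched with a placeholder URL: the page in the pop-up can then draw a fake address bar showing whatever URL it likes.

// Hypothetical: a pop-up opened with the real address bar suppressed.
window.open("http://attacker.example/fake-bank.html", "_blank",
            "width=600,height=400,location=no,toolbar=no");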

Countermeasures for Java/plugin/extension vulnerabilities (disable, warn).

Every few months we hear a report of someone getting infected with spyware while using Firefox. It usually turns out that the user had an old version of Java, and a malicious web site exploited an old Java hole. Refusing to load old versions of Java would help a lot. (Fixing this is tricky, because each plugin presents version information in a different way, and some plugins hide version information from Firefox entirely.)
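A hedged sketch of the information pages (and the browser) have to work with; note how free-form it is, which is exactly what makes reliable version checks tricky.

// Some plugins embed a version in .name or .description; others expose nothing.
for (var i = 0; i < navigator.plugins.length; ++i) {
  var p = navigator.plugins[i];
  document.write(p.name + " -- " + p.description + "<br>");
}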

Crashes on startup with XP SP2 and a processor that supports NX, aka Data Execution Prevention.

In addition to the obvious problem of preventing some people from using Firefox, this forces users to run Firefox without Data Execution Prevention, and may encourage some users to turn off DEP for all programs. DEP is a cheap measure that makes memory safety bugs harder to exploit to run arbitrary code.

Firefox should check for updates even if it doesn't have write access to itself.

Making users choose between using non-root accounts and having update notifications isn't too hot.

Safer handling of executable files with download manager.

Using "Save Link As..." followed by double-clicking the file in explorer doesn't give you any warning if the file happens to be an executable file. Unless you check the extension of every porn video you download before double-clicking it, you could get owned easily.

Explicit local root model has bad human factors.

Firefox used to have a lot of bugs called GC hazards, where newly created JavaScript engine objects would be garbage-collected prematurely, leading to dangling pointer situations. Through a combination of code auditing (mostly by Igor Bukanov) and testing with the WAY_TOO_MUCH_GC option, we've eliminated quite a few of them. But it's still too easy to make mistakes that lead to GC hazards, and there are some ideas for fixing that.

Fix all non-origin URL load processing to track origin principals.

Firefox has had quite a few vulnerabilities involving the javascript: and data: protocols, as used from chrome. This bug tracks discussion of API changes that could eliminate that class of bugs.
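Here is a hedged sketch of the risky pattern this bug is about; the helper names are hypothetical, not real Firefox APIs.

// getLinkFromContent and privilegedLoad are made-up names for illustration.
// Privileged (chrome) code loads a URL that untrusted content supplied,
// without tracking which principal the URL came from:
var url = getLinkFromContent();   // might be "javascript:stealEverything()"
privilegedLoad(url);              // the javascript: payload would then run
                                  // with full chrome privileges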

HttpOnly cookie attribute for cross-site scripting vulnerability prevention.

IE supports an HttpOnly attribute for cookies. When servers use this attribute, cross-site scripting (XSS) security holes in web sites cannot be used to steal their cookies. This makes useful attacks significantly harder to pull off, and in some cases involving subdomains, prevents useful attacks entirely.
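For illustration, a server opts in with a header like "Set-Cookie: SID=31d4d96e; HttpOnly" (the cookie name and value here are made up). Script on the page, including injected script, then can't read it:

// Hypothetical: the server set "SID" with the HttpOnly attribute.
alert(document.cookie);  // "SID" is absent from the string, although the
                         // browser still sends the cookie with requests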

Consider a policy for security bugs such that people scanning bonsai/cvs commit history can't immediately detect security bugs and work to build exploits.

As soon as we have a reviewed patch for a security hole, we check the fix into a public CVS repository, even if the bug report is to remain private until a fixed version is released. For many bugs, someone looking at the patch and checkin comment could figure out how to exploit the bug. He might then be able to exploit it quietly for weeks before a new version of Firefox is released, or exploit it widely and force a security fire drill.

It's not clear how to solve this problem. Keeping security fixes out of CVS and off of the trunk would severely limit the number of people who can test the fixes. This would cause regressions to be identified later, possibly delaying the release. Or worse, regressions might only be noticed after the release.

Security tips for Firefox users

Thursday, December 14th, 2006

I'm working on a page called Security tips for Firefox users, describing what I think Firefox users need to know in order to be secure while using the Web. It focuses on malware and phishing as the major threats.

I find it scary that users have to know so much in order to stay secure. A lot of the things users are seemingly expected to know are not at all obvious, even to people who have been using the Web for a long time. Hopefully, this page will make it clearer what kinds of changes we should make to Firefox in order to help users protect themselves against malware and phishing.

Determining whether a crash looks exploitable

Thursday, November 2nd, 2006

If you use Mac OS X 10.4, you can usually determine whether crashes you encounter are severe security holes in seconds, even if you are not a C++ developer or do not have access to the source code of the application that crashed. Here's how.

Setting up Crash Reporter

To prepare, type "defaults write com.apple.CrashReporter DialogType developer" into a Terminal window. (Or, if you have CrashReporterPrefs installed, you can do this using a GUI.) This makes several changes to the dialog that appears when any application crashes. The most important change is the addition of a partial stack trace to the dialog that appears when applications crash. The stack trace tells you which function the crash occurred in, which function called that function, and so on.

Another nice feature of "Developer" mode is that a crashing application's windows stick around until you click "Close" instead of disappearing immediately. This gives you a chance to salvage unsaved data that was visible when the application crashed.

To try out Crash Reporter, find a crash bug report in Bugzilla, such as this null dereference or this too-much-recursion crash, and point Firefox at the bug's testcase. Now, instead of seeing a Basic crash dialog, you should see a Developer crash dialog with the first ten lines of a stack trace and other debugging information.

Skimming a crash report

By looking at three things in the crash report in order, you can get a good idea of whether the crash is likely to be exploitable:

1. Look at the top line of the stack trace. If you see a hex address such as 0x292c2830 rather than a function name such as nsListBoxBodyFrame::GetRowCount at the top of the stack, a bug has caused the program to transfer control to a "random" part of memory that isn't part of the program. These crashes are almost always exploitable to run arbitrary code.

2. Look at the last line above the stack trace, which will usually be something like "KERN_PROTECTION_FAILURE (0x0002) at 0x00000000". The last number, in this case 0x00000000, is the memory address Firefox was prevented from accessing. If the address is always zero (or close to zero, such as 0x0000001c), it's probably a null dereference bug. These bugs cause the browser to crash, but they do so in a predictable way, so they are not exploitable. Most crashes fall into this category.

3. Check the length of the stack trace by clicking the "Report..." button. If it's over 300 functions long, it's likely to be a too-much-recursion crash. Like null dereferences, these crashes are not exploitable.

Any other crash where Firefox tries to use memory it does not have access to indicates some kind of memory safety bug. These crashes can often be exploited to run arbitrary code, but you can't be as certain as in the case where you see a bogus address at the top of the stack in step 1.
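For reference, here is a mocked-up excerpt (assembled from the examples above, not a real report) of the lines that matter in each case:

Codes:  KERN_PROTECTION_FAILURE (0x0002) at 0x00000000   <- address near zero: likely a null dereference
Thread 0 Crashed:
0   nsListBoxBodyFrame::GetRowCount                      <- named function on top

Codes:  KERN_PROTECTION_FAILURE (0x0002) at 0x292c2830   <- "random" address
Thread 0 Crashed:
0   0x292c2830                                           <- bare hex address on top: likely exploitable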

Reporting bugs

If you encounter a crash in Firefox that looks exploitable, please take the time to figure out how to reproduce the bug, create a reduced testcase if you can, and file a security-sensitive bug report in Bugzilla. After filing the bug, attach the crash report generated by Mac OS X, pointing out what makes the crash look like a security hole.

If a crash bug looks exploitable based on the stack trace, Mozilla's security group assumes it is exploitable. You don't have to learn machine language and construct a sophisticated demo that uses the bug to launch Calculator.app to convince us to take such a bug seriously and fix it. The same is true for Apple's Safari team in my experience.

Windows and Linux: using Talkback

If you use Windows or Linux, you can't use the Mac OS X Crash Reporter, but you can use Talkback instead if you want to see stack traces for Firefox crashes. Installing Nightly Tester Tools gives you a menu showing your recent crashes, but it's still not quite as efficient as the Mac OS X trick, and depends on the Talkback server being in a good mood.

Talkback was developed before developers knew so many types of crashes were exploitable, and its primary purpose is to determine which crashes are the most common, so it does not show you which memory address Firefox was denied access to. This prevents you from distinguishing likely null dereferences from some severe memory safety bugs (step 2 above).