Archive for the 'Mozilla' Category

Updated version of “Bug attachment source”

Wednesday, January 3rd, 2007

The Bug attachment source user script adds "source" links for each attachment, making it easier to see the source of attachments that demonstrate crash bugs. For zip attachments, it adds a "contents" link instead.

A recent Bugzilla upgrade changed the attachment table in a way that broke the script. Dan Veditz updated the script to work with the new version of Bugzilla (he got back from vacation before I did).

To use it in Firefox, you need to install Greasemonkey first. (Greasemonkey works fine with Firefox trunk, by the way.)

Porn browser wars?

Tuesday, January 2nd, 2007

Like Robert Accettura, I frequently get emails from strangers asking for Gmail invites, help with Firefox problems, or JavaScript code. But this one caught me by surprise:

From: Juliano
To: Jesse Ruderman
Date: Dec 27, 2006 8:45 AM
Subject: HI....... porn?? Firefox or Opera????

Hello,

The dillema I am facing is. I love surfing porn. I am with Windows Internet Explorer 6 and have had a NIGHTmare surfing porn with it. I dont need the stress, so need some advice.

People have recommended Firefox, as you have. But I am also hearing now that Opera is best. So I am confused, and yet feel URGENT need to actualize one of the other. For sanity sake.

Can you help?

Julian

sg:want

Thursday, December 14th, 2006

The Mozilla security group uses status whiteboard markings in bugs to indicate how severe a security hole is. For example, bugs that allow a malicious site to take over users' computers easily are labeled [sg:critical]. The severities we use (critical, high, moderate, and low) are described on the known vulnerabilities page.

Recently, Dan Veditz and I started using a new status whiteboard marking, [sg:want], to indicate that a fix would improve security even though we don't consider the bug to be a security hole. On many bugs, we use [sg:want P1] through [sg:want P5] to indicate roughly how much a fix would improve security. Currently, only one bug has P1, and eight bugs are P2. These bugs include user-interface changes, code-level changes, and entire new features.

If you think a bug should have [sg:want], you should contact me or dveditz rather than adding it yourself.

Here's a sampling of [sg:want] bugs, along with a few long-standing bugs that we do consider to be security holes:

Tighten the same-origin policy for local files.

The same-origin policy protects web site scripts from accessing private information on other web sites or on your hard drive. But an HTML file on your hard drive is allowed to read pretty much any text, HTML, or XML file on your computer. Combined with a widespread perception that HTML files are safe to double-click on (like MP3 files and unlike EXE files), this leads to several attack scenarios.

Microsoft's "solution" to this problem, introduced with Windows XP SP2, is kind of insane. Internet Explorer disables all scripts in local files, unless you click an information bar and then click "Yes" in a dialog. This breaks way too many pages, and anyone who clicks "Yes" in order to unbreak pages grants the page access to their entire filesystem.

Some types of mixed content don't trigger the "broken lock icon".

The broken lock icon is a way to alert https web site owners that they are using http content in a way that makes their sites less secure. But many things that should trigger a broken lock icon don't trigger it, so web site owners testing with Firefox aren't alerted.

Always show the address bar.

Allowing sites to hide the address bar in pop-up windows makes it possible for sites to spoof the address bar, making it appear that you're on a different site. I think Microsoft recently made Internet Explorer always show the address bar; we should too.

Countermeasures for Java/plugin/extension vulnerabilities (disable, warn).

Every few months we hear a report of someone getting infected with spyware while using Firefox. It usually turns out that the user had an old version of Java, and a malicious web site exploited an old Java hole. Refusing to load old versions of Java would help a lot. (Fixing this is tricky, because each plugin presents version information in a different way, and some plugins hide version information from Firefox entirely.)

Crashes on startup with XP SP2 and a processor that supports NX, aka Data Execution Prevention.

In addition to the obvious problem of preventing some people from using Firefox, this forces users to run Firefox without Data Execution Prevention, and may encourage some users to turn off DEP for all programs. DEP is a cheap measure that makes memory safety bugs harder to exploit to run arbitrary code.

Firefox should check for updates even if it doesn't have write access to itself.

Making users choose between using non-root accounts and having update notifications isn't too hot.

Safer handling of executable files with download manager.

Using "Save Link As..." followed by double-clicking the file in Explorer doesn't give you any warning if the file happens to be an executable file. Unless you check the extension of every porn video you download before double-clicking it, you could get owned easily.

Explicit local root model has bad human factors.

Firefox used to have a lot of bugs called GC hazards, where newly created JavaScript engine objects would be garbage-collected prematurely, leading to dangling pointer situations. Through a combination of code auditing (mostly by Igor Bukanov) and testing with the WAY_TOO_MUCH_GC option, we've eliminated quite a few of them. But it's still too easy to make mistakes that lead to GC hazards, and there are some ideas for fixing that.

Fix all non-origin URL load processing to track origin principals.

Firefox has had quite a few vulnerabilities involving the javascript: and data: protocols, as used from chrome. This bug contains discussion of API and design changes that could eliminate this class of bugs.

HttpOnly cookie attribute for cross-site scripting vulnerability prevention.

IE supports an HttpOnly attribute for cookies. When servers use this attribute, cross-site scripting (XSS) security holes in web sites cannot be used to steal their cookies. This makes useful attacks significantly harder to pull off, and in some cases involving subdomains, prevents useful attacks entirely.
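For reference, a server opts a cookie into this protection by appending the attribute to its Set-Cookie header (the cookie name and value below are made up):

```http
Set-Cookie: session=abc123; HttpOnly
```

Scripts can no longer read the cookie through document.cookie, but the browser still sends it with every request to the site.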

Consider a policy for security bugs such that people scanning bonsai/cvs commit history can't immediately detect security bugs and work to build exploits.

As soon as we have a reviewed patch for a security hole, we check the fix into a public CVS repository, even if the bug report is to remain private until a fixed version is released. For many bugs, someone looking at the patch and checkin comment could figure out how to exploit the bug. He might then be able to exploit it quietly for weeks before a new version of Firefox is released, or exploit it widely and force a security firedrill.

It's not clear how to solve this problem. Keeping security fixes out of CVS and off of the trunk would severely limit the number of people who can test the fixes. This would cause regressions to be identified later, possibly delaying the release. Or worse, regressions might only be noticed after the release.

Security tips for Firefox users

Thursday, December 14th, 2006

I'm working on a page called Security tips for Firefox users, describing what I think Firefox users need to know in order to be secure while using the Web. It focuses on malware and phishing as the major threats.

I find it scary that users have to know so much in order to stay secure. A lot of the things users are seemingly expected to know are not at all obvious, even to people who have been using the Web for a long time. Hopefully, this page will make it clearer what kinds of changes we should make to Firefox in order to help users protect themselves against malware and phishing.

Bookmarklets updated to comply with DOM 2 rule

Thursday, November 2nd, 2006

DOM 2 does not allow nodes to be moved between documents -- in fact, it requires that implementations throw an error when code tries to do so. But for years, Gecko has not enforced this rule.

It's a bit embarrassing that Internet Explorer gets this right and we get it wrong. Someone might think Gecko is trying to embrace and extend the DOM.

Soon, Gecko will start enforcing the rule on trunk. But bringing Gecko in line with this aspect of the DOM spec risks breaking Gecko-specific code, such as code in extensions and bookmarklets written for Firefox. For example, my Search Keys extension used to create some nodes in the chrome document, and some in the foreground tab, before putting them in the tab that just loaded. Search Keys 0.8 creates all elements in the correct document.

I also updated the following bookmarklets to create nodes in the correct document and/or use importNode when copying nodes between documents:

These bookmarklets previously only worked in browsers that violated the DOM spec by allowing nodes to be moved between documents without a call to importNode or adoptNode. Maybe some of them work in IE now.

If you use those bookmarklets, you should grab the new versions so they won't break when you update to next week's trunk build or to Firefox 3.

Determining whether a crash looks exploitable

Thursday, November 2nd, 2006

If you use Mac OS X 10.4, you can usually determine whether crashes you encounter are severe security holes in seconds, even if you are not a C++ developer or do not have access to the source code of the application that crashed. Here's how.

Setting up Crash Reporter

To prepare, type "defaults write com.apple.CrashReporter DialogType developer" into a Terminal window. (Or, if you have CrashReporterPrefs installed, you can do this using a GUI.) This makes several changes to the dialog that appears when an application crashes; the most important is the addition of a partial stack trace. The stack trace tells you which function the crash occurred in, which function called that function, and so on.

Another nice feature of "Developer" mode is that a crashing application's windows stick around until you click "Close" instead of disappearing immediately. This gives you a chance to salvage unsaved data that was visible when the application crashed.

To try out Crash Reporter, find a crash bug report in Bugzilla, such as this null dereference or this too-much-recursion crash, and point Firefox at the bug's testcase. Now, instead of seeing a Basic crash dialog, you should see a Developer crash dialog with the first ten lines of a stack trace and other debugging information.

Skimming a crash report

By looking at three things in the crash report in order, you can get a good idea of whether the crash is likely to be exploitable:

1. Look at the top line of the stack trace. If you see a hex address such as 0x292c2830 rather than a function name such as nsListBoxBodyFrame::GetRowCount at the top of the stack, a bug has caused the program to transfer control to a "random" part of memory that isn't part of the program. These crashes are almost always exploitable to run arbitrary code.

2. Look at the last line above the stack trace, which will usually be something like "KERN_PROTECTION_FAILURE (0x0002) at 0x00000000". The last number, in this case 0x00000000, is the memory address Firefox was prevented from accessing. If the address is always zero (or close to zero, such as 0x0000001c), it's probably a null dereference bug. These bugs cause the browser to crash, but they do so in a predictable way, so they are not exploitable. Most crashes fall into this category.

3. Check the length of the stack trace by clicking the "Report..." button. If it's over 300 functions long, it's likely to be a too-much-recursion crash. Like null dereferences, these crashes are not exploitable.

Any other crash where Firefox tries to use memory it does not have access to indicates some kind of memory safety bug. These crashes can often be exploited to run arbitrary code, but you can't be as certain as in the case where you see a bogus address at the top of the stack in step 1.

Reporting bugs

If you encounter a crash in Firefox that looks exploitable, please take the time to figure out how to reproduce the bug, create a reduced testcase if you can, and file a security-sensitive bug report in Bugzilla. After filing the bug, attach the crash report generated by Mac OS X, pointing out what makes the crash look like a security hole.

If a crash bug looks exploitable based on the stack trace, Mozilla's security group assumes it is exploitable. You don't have to learn machine language and construct a sophisticated demo that uses the bug to launch Calculator.app to convince us to take such a bug seriously and fix it. The same is true for Apple's Safari team in my experience.

Windows and Linux: using Talkback

If you use Windows or Linux, you can't use the Mac OS X Crash Reporter, but you can use Talkback instead if you want to see stack traces for Firefox crashes. Installing Nightly Tester Tools gives you a menu showing your recent crashes, but it's still not quite as efficient as the Mac OS X trick, and depends on the Talkback server being in a good mood.

Talkback was developed before developers knew so many types of crashes were exploitable, and its primary purpose is to determine which crashes are the most common, so it does not show you which memory address Firefox was denied access to. This prevents you from distinguishing likely null dereferences from some severe memory safety bugs (step 2 above).

Firefox lives up to its name

Wednesday, November 1st, 2006

Bug report of the week:

Firefox 2 runs 50 degrees F hotter than Safari or Firefox 1.5

Integer overflows

Wednesday, November 1st, 2006

"What is a string library? It's a way to pretend that computers can manipulate strings just as easily as they can manipulate numbers."

-- Joel Spolsky, The Law of Leaky Abstractions.

Most C++ code uses the integer-mod-2^32 (or 2^64) types C++ calls "int" as if they were true integers. This is great for performance -- many operations on int32 are a single CPU instruction -- but dangerous for security and correctness when the numbers can be large. This can cause security holes in at least two ways.

First, code might use int32 arithmetic to decide how much memory to allocate. Consider an image decoder that allocates width * height * 4 bytes to store RGBA pixels and then decodes the image data into the structure. But since width and height are unsigned ints, it doesn't really allocate width*height*4 bytes; it allocates width*height*4 mod 2^32 bytes. If the integer used to decide how much memory to allocate has overflowed in such a way that it comes out as a small integer, the code is likely to overflow the buffer as it writes the decoded image into the structure.

Second, code might use int32 arithmetic to decide when to deallocate an object. In code that uses reference counting, an extra call to "release" can obviously lead to a dangling pointer situation. But thanks to integer overflows, 2^32 unbalanced calls to "addref" followed by a normal "release" can have the same effect. (Luckily, you can't cause this situation by merely making 2^32 objects point to a specific object, because you'd run out of memory first. So this could be addressed by auditing for addref-without-release leak bugs rather than modifying the addref function to make it safer.)

Explicit checks

Some code in Gecko has explicit checks to prevent overflows. (This must be done carefully -- "width*height*4 > 2^32" doesn't mean anything to a C++ compiler!) If you remember to think "integer mod 2^32 or 2^64" every time you see "int", you may be able to avoid introducing new security holes due to integer overflow when you write code.

Michael Howard at Microsoft advocates this approach, at least for C code that is used near things like allocation sizes and reference counts, and provides functions to do checked arithmetic operations. These functions return a boolean indicating whether the arithmetic operation succeeded. This leads to code where it is hard to see what calculation is being done but easy to see that each step of the calculation is done safely.

Safe integer classes

Another strategy is to avoid using "int", at least in code used to compute allocation sizes, and instead use a "safe" integer class. A safe class might do correct arithmetic on large numbers, allocating extra memory when needed, but perhaps that is overkill for keeping allocations safe. A proponent of this approach might say "int is the new char *", referring to how string buffer overflows have been nearly eliminated through the use of string classes, and make fun of Joel Spolsky for the quote at the beginning of this post.

David LeBlanc, also at Microsoft, advocates a slightly different approach: using a class that treats overflow as an error and can throw exceptions. This keeps arithmetic formulas readable at the expense of having to design the function to handle exceptions correctly.

Static analysis can be used to scan for calls to malloc that use "int" and need to be converted to using SafeInt.

Will Gecko soon have a multitude of integer classes, each with different performance characteristics, signedness, and overflow behavior? Probably not, because numbers used to decide how much memory to allocate are almost always unsigned integers where overflows can be treated as errors. But I wouldn't be surprised to see different parts of the code use different strategies, with C code using the "explicit checks with helpers" strategy and XPCOM C++ code using another strategy.

Other languages

Many languages share C++'s behavior of exposing "integer mod 2^32" types as "int", but JavaScript and Python are two major exceptions. JavaScript has a hybrid "number" type that is sometimes stored as an integer and sometimes stored as a floating-point number. Overflowing integer arithmetic turns your numbers into floating-point numbers, while treating a floating-point number as a bit field tries to turn it back into an integer by computing its value mod 2^32. While JavaScript's behavior is more useful in most situations than wrapping around, you wouldn't want to use it for memory allocation.

Python instead takes advantage of its dynamic type system to make integers safe. Overflowed integers are replaced with a "long integer" type that is slower to operate on but has safe, correct behavior for integers of any size (until you run out of memory).