For those few who've followed along, I gave a talk and blogged about how to achieve "critical"-rated code execution vulnerabilities in Firefox with user-interface XSS. The end of that blog post invited people to test the sanitizer, but did not come with proper instructions. This is what this blog post is going to follow up on.
In the meantime, colleagues from Security Engineering managed to ship a strong Content Security Policy (CSP) for all our privileged internal about: pages (e.g., about:preferences, which hosts the Firefox settings). We also disallow eval() in privileged code and raise an assertion when it is used.
This means that a successful exploit has to bypass both our built-in XSS sanitizer and the Content Security Policy (CSP) to gain arbitrary code execution. This blog post does not talk about CSP bypasses. However, a bypass of the sanitizer by itself is still a security bug and probably warrants a bounty. (Typical restrictions apply. I am not the bug bounty committee. I am not a lawyer. See the bounty pages for more.)
A browser is a complex beast: we oftentimes rewrite lots of code that isn't directly related to the thing you are testing but will still have an impact on it. If you want to do meaningful security research and make sure your bug actually affects end users, test Firefox Nightly. Otherwise, the things you find in Beta or Release might have already been fixed in Nightly.
The sanitizer is available to all privileged pages, and you can test it with the Firefox Developer Tools. Just open a new tab and navigate to about:config. about:config has access to privileged APIs and therefore cannot use innerHTML (and its friends) without going through the sanitizer.
Open the Developer Tools: go to Tools in the menu bar, select Web Developer, then Web Console (Ctrl+Shift+K). Try XSSing yourself by typing:
document.body.innerHTML = '<img src=x onerror=alert(1)>'
Observe how Firefox sanitizes the HTML markup by looking at the error in the console:
“Removed unsafe attribute. Element: img. Attribute: onerror.”
You may now go and try other variants of XSS against this sanitizer (a few starting points follow below), or read on to learn how it actually works.
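If you want some inspiration, here are a few classic payload shapes to paste into the Web Console on about:config. These are illustrative probes, not known bypasses; the expectation is that the sanitizer strips the dangerous parts and logs a message like the one above.

// Illustrative probes only; expect the sanitizer to neuter all of these.
document.body.innerHTML = '<svg onload=alert(1)>';
document.body.innerHTML = '<a href="javascript:alert(1)">click me</a>';
document.body.innerHTML = '<iframe srcdoc="<img src=x onerror=alert(1)>"></iframe>';
document.body.innerHTML = '<math><style><img src=x onerror=alert(1)></style></math>';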
The sanitizer runs in the so-called "fragment parsing" step of innerHTML. Whenever someone uses innerHTML (or one of its friends, like outerHTML), the browser parses the string from JavaScript and builds a DOM tree data structure. Before that structure is appended to the existing DOM element, our sanitizer kicks in. This makes sure that the sanitizer cannot mismatch with the actual parser, because it is the actual parser. The code that triggers the sanitizer lives in nsContentUtils::ParseFragmentHTML and nsContentUtils::ParseFragmentXML. This link points to a specific source code revision to make hotlinking easier; please click the file name at the top of the page to get to a newer revision of the source code.
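Because the sanitizer hooks into fragment parsing itself, the other fragment-parsing entry points should funnel through the same code path. A quick way to convince yourself from the Web Console, assuming the behaviour described above:

// These sinks also go through fragment parsing and should hit the same sanitizer.
const el = document.createElement('div');
document.body.appendChild(el);
el.outerHTML = '<img src=x onerror=alert(1)>';                          // outerHTML
document.body.insertAdjacentHTML('beforeend', '<svg onload=alert(1)>'); // insertAdjacentHTML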
The sanitizer is implemented as an allow-list of elements, attributes and attribute values in nsTreeSanitizer.cpp. Please consult the list before wasting CPU cycles.
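A quick way to get a feel for the allow-list is to compare markup that should pass with markup that should not. The exact outcome depends on the current contents of nsTreeSanitizer.cpp, so treat this as a sanity check rather than a specification:

document.body.innerHTML = '<b title="hello">bold text</b>';
console.log(document.body.innerHTML);  // expected: element and title attribute survive
document.body.innerHTML = '<b onclick="alert(1)">bold text</b>';
console.log(document.body.innerHTML);  // expected: onclick is stripped, the element survives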
Finding a sanitizer bypass is therefore a hunt for mXSS bugs in Firefox, unless you find an element in our allow-list that has recently become capable of running script.
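For the mXSS angle, think of markup that parses one way when the string is turned into a fragment, but mutates into something else when it is serialized and re-parsed. A classic probe of that flavour (again, purely illustrative; no claim that it gets past the sanitizer):

// Classic noscript mutation probe: the same string parses differently depending on
// whether scripting is considered enabled, which is the kind of parser/serializer
// disagreement that mXSS relies on.
document.body.innerHTML = '<noscript><p title="</noscript><img src=x onerror=alert(1)>">';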
Right, so for now we have emulated a Cross-Site Scripting (XSS) vulnerability by typing the innerHTML assignment ourselves in the Web Console. That's pretty much cheating. But as I said above: what we want to find are sanitizer bypasses. This is a call to test our mitigations.
But if you still want to find real XSS bugs in Firefox, I recommend you run some sort of smart static analysis on the Firefox JavaScript code. And by smart, I probably do not mean eslint-plugin-no-unsanitized. But I'm not gonna judge you :-)
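If you do want to start with the simple approach I am half-joking about, a minimal ESLint setup for eslint-plugin-no-unsanitized could look roughly like this (a sketch; rule names as documented by the plugin, adjust to taste):

// .eslintrc.js: a minimal sketch using eslint-plugin-no-unsanitized
module.exports = {
  plugins: ['no-unsanitized'],
  rules: {
    'no-unsanitized/method': 'error',    // flags calls like insertAdjacentHTML(), document.write()
    'no-unsanitized/property': 'error',  // flags assignments to innerHTML / outerHTML
  },
};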
This blog post describes the mitigations Mozilla Firefox has in place to protect against XSS bugs that lead to remote code execution outside of the sandbox. If you intend to go bug hunting and take part in the Bug Bounty program, please consult the Bug Bounty pages.