Blog posts tagged: firefox
News and other things I find interesting
Last modified: Wednesday, February 08, 2012
Snappy is a project that aims to improve Firefox responsiveness. As part of this project I've been working on Firefox startup optimizations.
You can spend days, months, or even years trying to optimize code, but if you don't understand where to optimize, you won't make a difference. Likewise, searching for optimizations in a competitor's product is usually not the best way to get results.
It's nearly impossible to optimize code by guessing what is slow; you need to profile the code to understand the problems. Once you understand the problems you can fix them, and then finally test to make sure they are fixed.
Here are some initial Firefox startup optimizations I've done with some light profiling on Windows over the period of a few days:
- bug 724177 - 30-50ms (5%) Firefox startup speed optimization on Windows in nsLocalFileWin
- bug 724256 - Optimize move file calls on Windows, saving about 2ms per call (1 call on startup)
- bug 722225 - Firefox startup speed by ~5% (-70ms) on Windows by optimizing D3D10CreateDevice1
- bug 722315 - Firefox startup speed by ~5% (-76ms) on Windows by lazy loading CLSID_DragDropHelper
- bug 724203 - Optimize nsLocalFile::IsDirectory on Windows by 50% giving 5ms startup improvement
- bug 724207 - Save 15-20ms on startup from unused file attributes fetch when opening files
- bug 692255 - Find a way to get rid of prefetch files on Windows for faster startup
- bug 725444 - 10-15ms main thread startup optimization in Windows AudioSession
I plan to keep doing more startup optimizations for a while in between silent update work.
So how are optimizations found? There are many ways; the first I will talk about is Xperf.
Xperf is a great way to identify both IO bottlenecks and CPU usage bottlenecks on Windows.
It can tell you which files use the most IO, what the IO patterns are, which functions take the most time, and it allows you to group by different criteria to view the data in different ways. It is one of the key tools Microsoft used to make Windows 7 faster than Windows Vista. It actually does a ton more than that, but I won't focus on that in the context of this blog post.
Built-in Mozilla profiler
Another way to find optimizations is to use Benoit's profiler (SPS). Currently it works well on Mac, and Windows support is nearly complete.
I'm very excited to start using it on Windows.
It far surpasses the tools I've been using to find the bugs and fixes above.
I haven't seen how the Mozilla built-in profiler is implemented, but I suspect that my own custom profiler, described below, is a simplified version of the Mozilla profiler's pseudostack mode.
I was able to find all of the above optimizations (with the exception of the prefetch task) with a tiny class and a couple of wrapper macros.
Basically the class is an RAII class built around the Win32 API QueryPerformanceCounter and QueryPerformanceFrequency.
These functions allow you to get very detailed timing results.
For any function you want to profile you simply add the PROFILE_FUNCTION() macro to the start of the function. I usually start by adding this to each non-trivial function in a file.
Once you've found a function that takes a non-trivial amount of time, you can start to dig deeper into the functions it calls until you find the root cause.
But sometimes a function has a lot of code and you aren't sure which part of it is slow. For this I use a second macro, PROFILE_STR("FunctionName:N"), where N is a number I increment at evenly spaced points through the function. The string can be anything; that's just what I normally use.
These macros just create an object. The constructor of the object stores the start time, the destructor of the object gets the end time.
The destructor also uses the function name or string as a lookup in a global map that keeps track of the number of hits, the maximum length of time, the minimum length of time, and the average length of time for each function/string.
There is a third macro I was using to dump the results to a file at particular events I wanted to focus on, like first paint, or when the session is restored.
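To make the idea concrete, here is a minimal, portable sketch of that approach. It uses std::chrono in place of QueryPerformanceCounter/QueryPerformanceFrequency so it runs anywhere, and all names here (ProfileTimer, gStats, DumpProfile) are illustrative, not the actual patch:

```cpp
#include <chrono>
#include <cstdio>
#include <map>
#include <string>

// Aggregated timing stats for one function/label.
struct ProfileStats {
  long long hits = 0;
  double totalMs = 0.0;
  double minMs = 1e300;
  double maxMs = 0.0;
};

// Global label -> stats map (no locking; main-thread-only sketch).
static std::map<std::string, ProfileStats> gStats;

// RAII timer: the constructor records the start time, and the
// destructor accumulates the elapsed time into the global map.
class ProfileTimer {
 public:
  explicit ProfileTimer(const char* label)
      : mLabel(label), mStart(std::chrono::steady_clock::now()) {}
  ~ProfileTimer() {
    double ms = std::chrono::duration<double, std::milli>(
                    std::chrono::steady_clock::now() - mStart).count();
    ProfileStats& s = gStats[mLabel];
    ++s.hits;
    s.totalMs += ms;
    if (ms < s.minMs) s.minMs = ms;
    if (ms > s.maxMs) s.maxMs = ms;
  }
 private:
  std::string mLabel;
  std::chrono::steady_clock::time_point mStart;
};

// Drop one of these at the top of any scope you want to profile.
#define PROFILE_FUNCTION() ProfileTimer profTimerFn(__func__)
#define PROFILE_STR(s) ProfileTimer profTimerStr(s)

// Dump the accumulated results (the third macro writes these to a file).
static void DumpProfile() {
  for (const auto& p : gStats) {
    std::printf("%s hits=%lld avg=%.3fms min=%.3fms max=%.3fms\n",
                p.first.c_str(), p.second.hits,
                p.second.totalMs / p.second.hits,
                p.second.minMs, p.second.maxMs);
  }
}
```

You would then write PROFILE_FUNCTION(); as the first line of any function you want timed, and call DumpProfile() at an interesting event such as first paint.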
One thing you have to look out for with this method is putting the calls in a recursive function. In that case the function will appear to be a bigger bottleneck than it is, because the outermost call's time includes the time of every recursive call it contains, and all of the inner calls add onto that total.
The output is produced when the third macro is called to dump the results; it writes them to a simply formatted .txt file.
Verifying startup results
If you're working on Firefox, a good way to verify that your speed optimization made a difference is to use the about:startup extension.
This extension times the startup of important events like firstPaint
and sessionRestored and gives you the average up top.
To verify my results I just did 20 startups in sequence with a release build that contained my patches vs a release build that did not contain my patches.
Here is an example after one of the patches above:
Last modified: Thursday, October 06, 2011
Introducing: The Mozilla Platform Development Cheat Sheet.
Working on Firefox can be daunting when you first start as a developer.
There is an amazing amount of Mozilla specific technology you need to learn; in addition you may not have had the opportunity to work with things like mercurial patch queues (MQ).
On top of that, there is a massive collection of documentation that you have to keep looking up.
When I started as a platform developer at Mozilla 3 months ago, I started making a cheat sheet of common information I would have to frequently look up. I will continue to update this page as I continue to learn every day at Mozilla.
I highly encourage anyone looking to contribute to an open source project to look into contributing to Mozilla.
Not only will you be helping an open source non-profit organization, but you will also connect with extremely smart peers, gain extra resume flair, and learn a ton. The experience of contributing to (or working for) Mozilla will forever change you as a developer.
Last modified: Thursday, September 15, 2011
As you probably already know, Windows 8 introduces a new default Tablet interface alongside the familiar Desktop interface. It uses the new tablet interface as the startup interface even on desktops, though. For a good rundown of all of the new features, see here.
When you first boot up into Windows 8, it briefly shows the desktop interface for about half a second and then switches directly to the tablet interface. I had read previously that the desktop/explorer process would only be loaded into memory if you clicked it, but that seems not to be the case for this early pre-beta release.
The next thing you notice is that once in the Desktop interface, the Start menu button no longer brings up a menu; it brings you back to the tablet interface. Pressing the Start/Windows keyboard button toggles between the two interfaces.
Firefox on Windows 8:
The first thing I did was install Firefox.
After installing Firefox, Windows will ask you which web browser you would like to use by default. It shows you a picture of Firefox and IE and lets you pick. Nice interface. It shows you this dialog even before our process starts. If you change focus to another tab or application though, the dialog goes away forever unless you uninstall and reinstall Firefox.
After installing, Firefox automatically shows up in the tablet interface as a new tile.
But when you click on the Firefox icon in the tablet interface, it takes you directly to the old Desktop environment and loads the Firefox process as normal. I think it will be possible for Firefox to integrate directly into the tablet interface the way IE does, although the solution may have to be 'creative'.
Full screen mode in Firefox works the same as previous versions currently. If you start in full screen it will switch you first to the Desktop mode, and then launch full screen. Exiting full screen leaves you at the Desktop interface.
Internet Explorer Tablet Mode:
If you start Internet Explorer from the tablet mode you'll see a full screen app with no switch to Desktop. It has a nice interface and allows you to pin any web page to your Tablet interface as a tile.
The problem with this is that even though I set Firefox as my default browser through the Windows interface, it still launches IE for these shortcut tiles.
Here is a tile created on the far left from a pinned page in IE of the mozilla.org page:
Work to be done for Firefox on Windows 8:
There's probably a ton, but here are a few things that come to mind:
- We need to support the VS2011 developer tools for MozillaBuild
- It would be nice to not need to switch to Desktop to launch the browser.
- Platform integration for apps seems very important. We should be showing up as tiles for web applications that launch Firefox full screen.
- The Firefox tile can be leveraged to have more functionality built into it.
Last modified: Sunday, July 10, 2011
We see the http URL scheme just about every day; the http at the start of a web address is the URL scheme.
But there are also dozens of other URL schemes, including: ftp, mailto, irc, smb, chrome, about, snmp, and data. This post talks about one called the data URL scheme.
The data URL scheme, amongst other things, allows you to embed images into your HTML pages. That means that no separate HTTP request/response is needed to obtain such an image. It looks like this:
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg==" alt="Red dot" />
The first part of the URL is the scheme data, followed by the MIME type image/png, optionally followed by ;base64 (if not specified, the data is assumed to be plain ASCII with non-printable characters URL-encoded).
The last part of the URI after the comma is the content of the file in the appropriate encoding.
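To make the encoding concrete, here is a small sketch that builds such a URL from raw bytes. Base64 and MakeDataUrl are hypothetical helper names for illustration, not a browser or Gecko API:

```cpp
#include <string>
#include <vector>

// Standard base64 encoding of a byte buffer.
std::string Base64(const std::vector<unsigned char>& in) {
  static const char tbl[] =
      "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
  std::string out;
  size_t i = 0;
  while (i + 2 < in.size()) {           // full 3-byte groups
    unsigned v = (in[i] << 16) | (in[i + 1] << 8) | in[i + 2];
    out += tbl[(v >> 18) & 63]; out += tbl[(v >> 12) & 63];
    out += tbl[(v >> 6) & 63];  out += tbl[v & 63];
    i += 3;
  }
  if (i + 1 == in.size()) {             // one trailing byte -> "=="
    unsigned v = in[i] << 16;
    out += tbl[(v >> 18) & 63]; out += tbl[(v >> 12) & 63];
    out += "==";
  } else if (i + 2 == in.size()) {      // two trailing bytes -> "="
    unsigned v = (in[i] << 16) | (in[i + 1] << 8);
    out += tbl[(v >> 18) & 63]; out += tbl[(v >> 12) & 63];
    out += tbl[(v >> 6) & 63];  out += '=';
  }
  return out;
}

// Glue the scheme, MIME type, and encoded payload together.
std::string MakeDataUrl(const std::string& mime,
                        const std::vector<unsigned char>& bytes) {
  return "data:" + mime + ";base64," + Base64(bytes);
}
```

Feeding the bytes of a small PNG into MakeDataUrl("image/png", bytes) would produce exactly the kind of src attribute shown in the img example above.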
The data URL scheme was specified in 1998 in RFC 2397 and has been implemented by most major browsers since the HTML4 era; current HTML5-era browsers have pretty good coverage. IE7 and below lack support.
The benefit of using the data URL scheme is that if the image is small, the overhead is less than that of the HTTP request/response headers. It also frees up concurrent connections, since each browser has a maximum number of connections it can make in total and to each domain.
You wouldn't want to use the data URL scheme for large images, or if you require support for IE7 and below. Your image won't be cached separately either, which means it will be downloaded with each request to the parent HTML page. You can get around this last limitation by specifying your data URL inside an already cached CSS file with a CSS rule like background:url('data:image/png;base64,...');
Overall it is a good thing to use and I'd use it for social icons in HTML.
There are also many other uses of the data URL scheme mentioned below.
Uses of the data URI scheme:
You may have noticed that sometimes emails with images don't have a separate attachment. You can use the data URI scheme inside HTML email messages without having a separate image attachment.
The new HTML5 <canvas> element allows you to export your canvas to a data URL. You can do this with the canvas toDataURL method.
How it relates to me:
I recently improved the BMP and ICO decoder (refactoring plus adding support for PNG ICOs) for Firefox. I also have to implement BMP and ICO encoders so that we can have better shell integration with Windows 7.
A side effect of doing these ICO and BMP encoders is that Firefox will support BMP and ICO generation via the canvas toDataURL method.
This makes Firefox a pretty good image conversion program.
This also makes it possible for example, for a web page developer to implement a favicon creator without server side code. No other browsers currently implement BMP and ICO mime types for canvas exporting.
Last modified: Sunday, June 12, 2011
This article will cover the following topics:
- An overview of the basics of HTTP
- What is HTTP pipelining?
- What problems can appear with HTTP pipelining?
- Why should you care about HTTP pipelining?
- Which web servers support HTTP pipelining?
- Which browsers support HTTP pipelining? (And how to enable it)
- Which programming languages/libraries support HTTP pipelining?
An overview of the basics of HTTP
The HTTP protocol works by sending requests and getting responses back for those requests.
I will not get into the details of HTTP protocol syntax (headers, HTTP methods, paths, parameters, etc.), as this post would be too long. Instead I'll just cover some basics and then dive right into explaining HTTP pipelining, showing only a basic HTTP GET request and response.
A typical HTTP request looks something like this:
GET / HTTP/1.1
Host: www.brianbondy.com
User-Agent: Mozilla/5.0
Connection: keep-alive
A typical HTTP response looks something like this:
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
Server: Google Frontend
Content-Length: 12100

...content...
On a single socket, a single request is sent out, and then a single response is retrieved.
A browser or other HTTP client could create multiple sockets to a server and make multiple requests. The picture on the right shows 2 HTTP requests and responses on 2 different sockets.
Pretty much all web browsers do multiple connections per server today.
In Firefox you can adjust this amount by going to about:config and adjusting network.http.max-connections-per-server. Mine was initially defaulted to 15.
Several requests to a single server are very typical. For example an HTML file can have several referenced images.
To avoid creating several connections, HTTP 1.1 introduced persistent connections.
The picture on the right shows 3 requests and responses on a single persistent connection.
Having several connections can give better speed, but creating a new connection for each and every request uses many more resources, requires more TCP handshakes, and is susceptible to TCP slow-start.
If you look back at the example HTTP request above, the HTTP header: "Connection: keep-alive" indicates that you would like to use a persistent connection. The default is to use a persistent connection, but the server is not forced to do this, and it can send a "Connection: close" header.
What is HTTP pipelining?
HTTP pipelining is a feature of HTTP 1.1 persistent connections. It means that you can send multiple requests on the same socket without waiting for each response.
The picture on the right shows 6 requests and responses using at most 3 requests at a time.
HTTP is based on TCP, and one of TCP's guarantees is ordered delivery. This means that all of the requests sent out on the same socket, will be received in that order on the server. An HTTP server that supports HTTP pipelining will send its responses in the same order.
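At the byte level, a pipelining client simply concatenates several complete requests and writes them to one socket before reading any responses. Here is a sketch of what gets written (BuildPipelinedRequests and the host name are illustrative, not any particular client's API):

```cpp
#include <string>
#include <vector>

// Build the exact bytes a pipelining client would write to one socket:
// every request is sent before any response is read.
std::string BuildPipelinedRequests(const std::string& host,
                                   const std::vector<std::string>& paths) {
  std::string out;
  for (const std::string& path : paths) {
    out += "GET " + path + " HTTP/1.1\r\n";
    out += "Host: " + host + "\r\n";
    out += "Connection: keep-alive\r\n";
    out += "\r\n";  // blank line terminates each request
  }
  return out;
}
```

Thanks to TCP's ordered delivery, the server parses these back to back and must answer them in the same order.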
Pipelining is also possible over secure HTTPS connections, where it gives an even greater benefit because it avoids the extra SSL/TLS handshakes that additional connections would require.
What problems can appear with HTTP pipelining?
Although the HTTP 1.1 RFC indicates that HTTP implementations should support persistent connections, it is possible that they will not.
You can't be sure if an HTTP server supports HTTP pipelining before making a request.
The server may even send a "Connection: Close" header after your first request is sent indicating it does not want to use a persistent connection.
There could be proxies in between as well which cause problems, making a client-side blacklist approach to determining which servers support persistent connections not ideal.
Based on the HTTP 1.1 RFC, if a client finds that a pipelined connection is not supported, the client should re-attempt the failed requests.
To avoid problems with a server receiving the same request twice without the client knowing it, the client should only use pipelining for HTTP methods which are idempotent. In general, idempotence means that you can apply the same operation once or many times and it will have the same effect. Example: setting a variable x to the value 3 is an idempotent operation; setting a variable to one more than its last value is not.
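The variable example can be stated as a quick sanity check (setToThree and increment are illustrative names):

```cpp
// Idempotent: applying it twice gives the same result as applying it once.
int setToThree(int) { return 3; }

// Not idempotent: each application changes the result.
int increment(int x) { return x + 1; }
```

A pipelining client can safely retry the first kind of operation, but not the second.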
In terms of HTTP, PUT and DELETE are defined to be idempotent; GET, HEAD, OPTIONS and TRACE should be safe (and therefore idempotent); and POST is generally not. In practice, most browsers that do support pipelining only do so for GET and HEAD requests.
Sometimes it's hard for a client to determine if the server's response is valid or garbage.
Pipelined requests to servers which don't support pipelining need to be retried, which makes them slower.
It would be nice if servers told clients that they support pipelining, but they currently do not. If all servers did, then only the first request would need to be non-pipelined when the client didn't already know whether the server had support.
Why should you care about HTTP pipelining?
TCP/IP packets can be reduced. The typical maximum segment size (MSS) is in the range of 536 to 1460 bytes, and so several HTTP requests could fit into a single packet. It would also reduce the total number of packets. Also there are wins with the congestion control strategy, connection handshake, connection teardown and SSL handshake.
What this means is that you can get much faster page loads by using HTTP pipelining.
I've been using it in Opera and Firefox and have not run into problems.
Which web servers support HTTP pipelining?
Most modern web servers support HTTP pipelining. IIS 4.0 is said to not have support for it.
Which browsers support HTTP pipelining? (And how to enable it)
- Google Chrome: No
- Safari: No
- Internet Explorer: No
- Opera: Yes
- Firefox: Yes, but you need to enable it by following the steps below.
You can adjust HTTP pipelining settings in Firefox by changing the following settings in about:config:
For HTTP pipelining:
For HTTP proxy pipelining: (Use this if you want to try pipelining and you use a proxy server)
For HTTPS pipelining:
To adjust the number of requests to send at once: set network.http.pipelining.maxrequests, e.g. to 8. The pipelining picture above would have a value of 3 here.
The network.http.max-connections-per-server setting is clamped between 1 and 255. (This setting has nothing to do with pipelining, but you can adjust it.)
The network.http.pipelining.maxrequests setting is clamped between 1 and NS_HTTP_MAX_PIPELINED_REQUESTS, which is defined to be 8. Unless you compile your own builds, a value of 8 is the most you can try with Firefox.
Which programming languages/libraries support HTTP pipelining?
Many popular programming libraries across most programming languages support pipelining.
For example, here's a small subset list of libraries that support pipelining:
- Python: httplib2, Twisted
- .NET Framework: System.Net.HttpWebRequest
- C++: Qt's QNetworkRequest, libcurl