Around a year ago I wrote a couple of CSS performance-related posts. The first post centred on CSS selectors and the second on CSS layout methods (Flexbox v Table). Sadly, while I dispelled (at least for myself) some long-standing CSS performance myths (principally that certain CSS selectors should be avoided at all costs for performance reasons), I worry that those posts may have perpetuated belief in other CSS performance ‘rules’ that could be equally problematic in certain situations.

This post’s raison d’être

It’s one thing for me to internally believe something and satisfy that belief for myself, but when I’m vocalising those beliefs on podcasts or video interviews I do worry that some may take the sound-bites without performing due diligence.


If you read nothing more of this post, read this next paragraph and DO take it to heart:

Do not memorise rules in relation to CSS performance without checking your own ‘data’. They are largely useless, transient and too subjective. Instead, become acquainted with tools and use them to reveal relevant data for your own scenario. This is basically the mantra the Chrome Dev relations folks have been promoting for years. I believe it was Paul Lewis (more from him below) who coined the phrase ‘Tools, not rules’ in relation to troubleshooting web performance.

Nowadays I get that sentiment. Really get it.

CSS performance from browser makers

While I generally never worry about CSS selectors when authoring a style sheet (typically I just put a class on anything I want to style and select it directly), every so often I see comments from people way smarter than me that relate specifically to a certain selector. Here’s a quote from Paul Irish in relation to a post on A List Apart from Heydon Pickering which used a specific type of selector:

These selectors are among the slowest possible. ~500x slower than something wild like “.title”. Test page

That said, selector speed is rarely a concern, but if this selector ends up in a dynamic webapp where DOM changes are very common, it could have a large effect.  So, good for many use cases but keep in mind it might become a perf bottleneck as the app matures. Something to profile at that point.  Cheers

— Paul Irish, commenting on Quantity Queries for CSS

What are we to take from that? Do we try and hold that kind of selector in some ‘do not use in case of emergency’ vault in our heads?
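To make the contrast concrete, here is roughly what is being compared (the class names are invented for illustration, and the second selector is the general shape of the ‘quantity query’ pattern Heydon’s article discusses):

```css
/* My usual habit: put a class on the thing and select it directly. */
.list-item {
  flex-basis: 25%;
}

/* The kind of selector Irish is talking about: a 'quantity query',
   which styles items differently once there are six or more siblings. */
li:nth-last-child(n+6),
li:nth-last-child(n+6) ~ li {
  flex-basis: 12.5%;
}
```

Both are perfectly valid CSS; the difference is purely in how much work the engine may do to match them.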

In the months since my aforementioned posts, I have pondered attempting to create some sort of suite of tests for CSS performance but the longer I think about the problem the more I believe it would actually prove useless and maybe even do more harm than good. For example, I started to re-run those selector and layout tests and while some things have changed (Flexbox is faster than Table now in Firefox in my test for example), in all honesty, what does that really tell us? How does that help you or I author our next set of styles?

Therefore, in this post, I have opted to take a different approach. I asked the smart folks who actually work on browsers what they think we should concern ourselves with when it comes to CSS performance.

In the front-end world we are lucky that the Chrome Developer relations team are so accessible. However, I was keen for some balance, so I also reached out to people at Microsoft and Firefox, and included some great input from WebKit too.

Asking the question

The question was essentially, “Should authors concern themselves with the performance of the CSS selectors they use?”

Let’s start at the beginning, where things like the CSSOM and DOM actually get constructed. Paul Lewis, Developer Advocate for Chrome Developer Relations explains, “Style calculations are affected by two things: selector matching and the size of the invalidation. When you first load a page all the styles need to be calculated for all the elements, and that’s a function of tree size and the number of selectors.”

For more detail, Lewis quotes Rune Lillesveen on the Opera team (who does a lot of work on Blink’s style code):

At the time of writing, roughly 50% of the time used to calculate the computed style for an element is used to match selectors, and the other half of the time is used for constructing the RenderStyle (computed style representation) from the matched rules.

OK, that went a bit ‘science’ for me so does that mean we need to worry about selectors or not?

Lewis again, “Selector matching does affect performance, my own test confirms this (open the console for the results), but in my experience the tree size is the most significant factor.”

It stands to reason that if you have an enormous DOM tree, and a whole raft of irrelevant styles, things are going to start chugging. My own bloat test backs this up.

Anecdote time. If I give you two piles of 1000 cards, each with a different name on except for 5 matching ones, it stands to reason it will take longer to pair those matching names than if there were only 100, or 10. Same principle for the browser.

I think we can all agree that style bloat is a bigger concern than the selectors used. Maybe that’s one rule you can bank on?

“For most websites I would posit that selector performance is not the best area to spend your time trying to find performance optimizations. I feel your previous blog post stated this well, and I would highly recommend to focus on what is inside the braces than the selectors outside of them”, says Greg Whitworth, Program Manager at Microsoft.

What about JavaScript?

Whitworth also notes that extra diligence is required when dealing with JavaScript and dynamism in the DOM structure, “If you are using Javascript to add or replace classes on events over and over again you should think about how that will affect the overall web pipeline and the DOM structure of the box you’re touching.” This ties in with the earlier comment from Paul Irish.

Rapid invalidation of areas of the DOM thanks to class changes can occasionally show up complex selectors. So, maybe we should be worried about selectors? “There are exceptions to every rule and there are selectors that are more performant than others but we normally only see these in cases where there are massive DOM trees in tandem with Javascript anti-patterns that causes DOM thrashing and additional layout or painting to take place,” says Whitworth.

For more simplistic JavaScript changes, Lewis offers this advice, “The solution is normally to target elements as closely as possible, though increasingly Blink is smart about which elements will truly be affected by a change to a parent element.” So, practically speaking, if you need to effect a change in a DOM element, add a class directly above it in the DOM tree if possible, rather than up on the body or html node.
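As a rough sketch of why that advice matters (using a tiny stand-in for DOM nodes rather than a real page, since the markup and class name here are entirely hypothetical): toggling a state class on the element’s nearest container puts far fewer elements in scope for restyling than toggling it on the body.

```javascript
// Minimal stand-in for DOM elements — just enough to show the idea.
// In a real page these would be results of document.querySelector().
function makeEl(name, children = []) {
  return { name, children, classList: new Set() };
}

// Count the elements under a node: a crude proxy for how much of the
// tree a class change at that node can invalidate.
function subtreeSize(el) {
  return 1 + el.children.reduce((n, child) => n + subtreeSize(child), 0);
}

// A toy tree: body > main > (sidebar, article > widget)
const widget = makeEl('widget');
const article = makeEl('article', [widget]);
const sidebar = makeEl('sidebar');
const main = makeEl('main', [sidebar, article]);
const body = makeEl('body', [main]);

// Toggling on the body puts every element in scope for restyling…
body.classList.add('widget-open');
console.log(subtreeSize(body));    // 5 elements potentially affected

// …whereas toggling on the widget's direct parent scopes it tightly.
article.classList.add('widget-open');
console.log(subtreeSize(article)); // 2 elements potentially affected
```

This is deliberately crude — as Lewis says, engines are increasingly smart about which descendants a change can actually affect — but it captures why scoping the toggle close to the element you care about is the safe default.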

Dealing with CSS performance

At this point I’m happily re-concluding that CSS selectors are rarely a problem with static pages. Plus, attempting to second-guess which selector will perform well is probably futile. I’ll re-quote Benjamin Poulain of WebKit (who provided valuable insight in my earlier post) at this point, “It is practically impossible to predict the final performance impact of a given selector by just examining the selectors. In the engine, selectors are reordered, split, collected and compiled. To know the final performance of a given selector, you would have to know in which bucket the selector was collected, how it is compiled, and finally what the DOM tree looks like. All of that is very different between the various engines, making the whole process even less predictable.”

However, for large DOMs and dynamic DOMs (e.g. not the odd class toggle, we are talking lots of JavaScript manipulation) it may not be beyond the realms of possibility that CSS selectors could be causing an issue. “I can’t speak for all of Mozilla, but I think when you’re dealing with performance, you want to focus on what’s slow. Sometimes that will be selectors; usually it will be other things,” says L. David Baron, of Mozilla and a member of the W3C’s CSS working group, “I’ve definitely seen pages where selector performance matters, and I’ve definitely seen lots of pages where it doesn’t.”

So what should we do? What’s the most pragmatic approach?

“You should use profiling tools to determine where your performance problems are, and then work on solving those problems,” says Baron. Everyone I spoke to echoed these sentiments. Here’s Poulain from WebKit again, “In practice, people discover performance problems with CSS and start removing rules one by one until the problems go away. I think that is the right way to go about this, it is easy and will lead to correct outcome”.


If you’ve developed on the web for any non-trivial period of time you will know that the answer to most web related questions is ‘it depends’.
I hate that there are no simple, cast-iron rules in relation to CSS performance that can be banked upon in every situation. I’d genuinely love to write those rules out here in a nice little paragraph and believe they would be universally true. But I can’t, because there simply aren’t any universal truths in relation to performance. There can’t ever be any, because there are simply too many variables. Engines update, layout methods become optimised, every DOM tree is different, all CSS files are different. On and on ad infinitum. You get the picture.
I’m afraid the best I can offer is to not sweat things like CSS selectors or layout methods in advance. It’s unlikely they will be your problem (but, you know, they just might).
Instead, concentrate on making ‘the thing’. Then, when ‘the thing’ is made, test ‘the thing’. If it’s slow or broke, find the problem and fix ‘the thing’.

Additional Info

  • Greg Whitworth recommends a 2012 Build talk
  • CSS Triggers by Paul Lewis indicates what changes in CSS will trigger Layout, Paint and Composite operations in the Blink engine (Chrome/Opera)