The amount of work it takes to create content like this is significant. Thanks for doing this. I hope people appreciate it.
How does it take that much time?
We do !!!
We sure do!
@@Horrordelic REALLY!!!! You are fed awesome content with some good insights and conclusions in 17 minutes. Kevin spends, depending on how smooth the recording goes, probably triple that time at least. Next he has to edit all of the recorded clips together to produce this final result. I'm gonna take a guess and say that the total production time is 3 to 4 hours. Then you have to add the countless hours of researching other content, which might easily add another 4 to 8 hours.
In a best-case scenario Kevin has spent a full 7 or 8 hours on it, but it could well be more.
So in short, don't worry too much about what kind of selectors you use because we're in 2023. Great video as always!
@@AntiAtheismIsUnstoppable cool.
Great video to show that browsers improve. Harry has written awesome articles about the performance implications at the time, but as browsers have improved they are less relevant today. In fact, the Microsoft Edge team wrote an article, "The truth about CSS selector performance", in January this year which I highly recommend checking out.
What could affect performance is not so much rendering the page once (unless you have a lot of HTML), but rather updating the DOM nodes from JavaScript over time (adding/removing nodes, changing class attributes and other things that require styles to be recalculated). These are sometimes real concerns in larger apps that I have had to deal with, because they can result in really poor UX.
For example, setting a class attribute that updates a transform requires style recalculation, but setting it directly inline won't. So if you're doing animations or transitions with JS, it can sometimes be beneficial to do them with inline styles.
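As a rough sketch of that difference (the .is-moved class name is made up for illustration):

/* Toggling this class from JS (el.classList.add('is-moved')) triggers a style
   recalculation, because the browser has to re-match selectors against the element. */
.is-moved {
  transform: translateX(200px);
}

/* Writing el.style.transform = 'translateX(200px)' from JS instead applies the same
   transform as an inline style, with no selector matching involved. */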
Thank you so much for researching this and showing us how to do our own timings. Also, hope you’re blowing raspberries at the people who doubted you.
Shoutout to all the haters 🎉
I didn't know about this tool in Edge. Thank you for this Kevin! I definitely agree with your point that we are better off at optimizing other things rather than CSS.
It's so funny that people worry about CSS performance but use huge amounts of JS with all its overhead 😂
That makes me wonder whether CSS selectors have similarly efficient performance when used in JavaScript? I assume so.
I just came across this channel and it really annoys me that I'm lurking on RUclips pretty much all day and never came across it. This is probably the single most interesting content I've seen in years.
Awesome information! Maybe you could consider doing some sort of series "How CSS works under the hood"? You mentioned some things like "CSS parses the selectors from right to left" and stuff that I didn't know before. Moreover, I think many of us have no clue how CSS really works, and just know how to use it. (And you could build upon that series by another series about optimization :D)
Would love to see something like that! 😊
Love the attitude of learning instead of just discarding what others are saying. Thanks for saving us a rabbit hole. Now off to follow a different rabbit hole.
I love this type of video, i.e. a report on a rabbit hole dive. You probably spent a lot of time investigating this, so this saved us a lot of time while still getting a feel of investigating this ourselves. Awesome.
I used to have discussions with my colleagues about CSS performance, thanks for making a video on it. Didn't know about this feature in Edge dev tools :)
Thank you for your research! I'm also interested in seeing this in different renders. Also, the reminder that "some info is rooted in very long time ago knowledge" is very good.
The thing about the "*" selector being slower is that, in general, it's only called once. You can use it anywhere, of course, but I tend to see it predominantly used for a cheap reset, e.g. * { margin: 0 }, whereas things like ">" will be used constantly throughout the stylesheet. Although I will admit I was as surprised as you to see that "div > div" seems less efficient than "div div".
Yeah, the > really surprised me, and probably warrants more testing... though in most cases I don't think it'll actually be problematic either way.
Unless I had something causing some big issues, I don't think I'd avoid > though :)
Cheap reset, indeed. In theory, a more explicit reset with several element selectors would be even worse, because all of those selectors would have to plow through the DOM tree.
@@KevinPowell I think people look at it the wrong way. While the order of the definitions in the CSS file matters... in the end one single cascade is generated. The browser will have to walk the complete DOM, in your case containing 24097 elements, and then apply the cascade to each one. So things like .media-group have to be tested against every element because it means *.media-group, whereas div.media-group can be fast-rejected.
I avoid all of these bare *, .class, etc. selectors. Even then, * is a really simple test; it just always applies to any node in the DOM. However, any additional thing like class matching or attribute matching requires extra testing. Attribute testing is done after id and element matching, hence why id and element type add so much value to your specificity. You are better off having things like div[aria-label='next'] than omitting the div. Chances are you basically know the one or two elements it is on. Be more specific!
For your ".media-group a" vs. ".media-group > a", I think it is important to understand that '>' requires a direct parent, whereas the other one matches at any level. A selector like div a { ... } will even apply to an <a> nested as div > span > ul > li > a. I think browsers might optimize that, because while walking the DOM you could otherwise have to backtrack across tens of nodes; as a software developer I would immediately think to put some optimization in for that scenario.
Edit: also for your other examples, .media-group > :is(div, a) -> the problem there is that the div, a forces the specificity to 0. There too it is better to just use div.media-group > a { ... } and then also have a div.media-group > div { ... } (see the sketch below).
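A minimal sketch of that suggestion, reusing the .media-group example from the video (the declarations are just placeholders):

/* One grouped rule using :is()... */
.media-group > :is(div, a) {
  color: inherit;
}

/* ...versus two rules, each anchored to an element type, so the right-to-left
   matcher can fast-reject anything that isn't a div or an a. */
div.media-group > a {
  color: inherit;
}

div.media-group > div {
  color: inherit;
}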
Thank you! This was very informative and helpful! Love your videos! Always learning something new in your channel!
Thank you for the real performance testing. Although the difference is minimal, in today's web apps, where you might end up with several libraries each bringing their own styles, the final stylesheet can easily run to several tens of thousands of CSS lines, so it makes sense to worry about not having super complex and/or expensive selectors.
Ever since the :has() selector became available, I have wanted to see performance tests on it, because the very reason browsers were reluctant to implement it in the first place was performance. It has been discussed at least since 2006 why this selector was not a good idea. So I wonder if they found a solution, or if they're relying more on CPUs getting faster.
Excellent video, Kevin.
Thank you so much for your effort and your excellently presented research!
I ran into a CSS selector speed issue with a font app I made. I tried using a data attribute as the CSS selector. It worked fine on my computer with around 100 fonts, but it fell over on my wife's computer with around 7000 fonts. I switched it back to a class and everything worked without issue.
The star selector manages to match 24K elements in 8,000µs... that means it takes 0.333µs (333ns) per element... and that is with 4x CPU throttle... that's actually super fast when you think about it 👀👀 (on a per-element basis)
Thanks for this interesting video, instructive in more than one way.
I have always thought that avoiding HTML bloat and reducing the number of selectors was a more efficient way of optimizing our pages than avoiding some selectors.
I appreciate you taking the time to do this experiment. And thanks for the tips on using Edge. Is it possible to do a side-by-side summary of the results in order of fastest to slowest?
Thanks for this!
I always disliked the BEM paradigm and preferred advanced selectors due to their expressiveness and elegance.
Being a computer dude, I know there are methods (caching and hashing, for example) to reduce the cost of resolving complicated nested CSS constructs, and I just hoped browsers are made - or will be made - well enough.
Good to have proof now. Thanks Kevin!!!
In general, I only care about whether it is performant on desktop and doesn't use too much power on a phone. I hate when an app steals all my phone's power, and YouTube is really bad in that way. It's just a website that shows a video; there's no need to suck up 2%/minute for that.
This is awesome.. great video man. I enjoy obsessing over performance for some reason lol
Awesome content. Thanks
Even without CPU throttling, profiling alone slows this down.
Selector performance used to be an issue.
Also, I remember when attribute selectors were slow - not anymore.
Comments in the video sent you down a rabbit hole? Lol... Stay away from the comments dude... JJ.
This was actually very interesting thanks.
Really interesting video. Thanks
I think there is more to consider here with respect to complexity.
What happens when aria-label=“next” changes to “next page” or another language like “nästa”? Using selectors like that introduces a dependency between content and style. When changing the content one doesn’t want to think about CSS classes. Having that hidden dependency increases the complexity.
CSS selector performance has only ever been an issue for me once, in a huge project with deeply nested scss selectors. The "fully-qualified" paths it generates are much less efficient than trying to match the bare minimum by hand.
Premium content. Everything in this episode was new to me. I was not expecting the :not() selector to do so badly, but I was kind of expecting :is() and :where() to. :not() has been out there for quite a long time; I think if they could, they would've made it more performant.
What a privilege to live in a world where CSS performance matters.
great content
i liked the rabbit hole thumbnail
How about a test like this?
Emmet - div.class1>div.class2#id2>span.class3
.class1 > .class2 { ... }
.class1 .class2 { ... }
.class2 { ... }
#id2 { ... }
Mostly asking this since I dislike BEM because it makes the HTML seem very cluttered to me.
Nice topic, Kevin!
I spent several weeks optimizing a huge corporate project. It was 5 or 6 years ago, and it had to work under IE9-10.
Replacing attributes with classes gained me THREE TIMES faster rendering than before. Under Chrome/Firefox it was about 30-35% better, which was also significant.
Since then I usually avoid attributes in my CSS.
So we are more or less from the same era. I used to count down in JavaScript loops instead of up, gaining a huge performance boost from that, but this is no longer necessary; actually it doesn't matter which way you count. My point being, times change, optimization happens, and what was bad once is not necessarily as bad today.
I'd say it does make more logical sense to hook to a class in CSS than to an attribute like aria, for example, because aria is something by itself which has to do with accessibility. Likewise, I wouldn't style on a data- attribute either, as I see that as a hook for JavaScript, not for CSS. So it's not because of performance that I would avoid it, but because I want to logically separate what I can from each other.
@@AntiAtheismIsUnstoppable Generally yes. Now it looks like performance is the same for attributes and classes. But here in the video Kevin covered only some very limited cases, and there are many very common cases where we can shoot ourselves in the foot.
For example, every [attr] selector requires at least one extra 'get' from the DOM. Classes are usually addressed by direct names, but attributes support extra matching syntax like *=, ^= and so on (see the sketch after this comment).
Using [attr^=] could slow everything down. To speak more specifically, it's better to run some tests.
I think we could consider a case where some attributes are applied to document.body and CSS maps to those attributes.
In this case all CSS styles that depend on body will be recalculated on every "styling", and this could slow down performance.
What's more, we shouldn't forget about old devices (phones, tablets) which cannot use modern browsers.
Even Chrome is dropping support for Windows 7/8 - two of my PCs still use Win7 - and Safari is always one or two steps behind the leaders.
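For reference, a small sketch of the attribute-matching variants being referred to (the aria-label values are only examples):

[aria-label="next"]  { /* exact match: the value is exactly "next" */ }
[aria-label^="next"] { /* prefix match: the value starts with "next" */ }
[aria-label$="next"] { /* suffix match: the value ends with "next" */ }
[aria-label*="next"] { /* substring match: the value contains "next" anywhere */ }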
I, too, was surprised that the descendant selector performance was as good as it was (after having read all "the warnings"). So, if performance isn't much of an issue, why does the descendant selector seem to be so frowned upon? I've always liked using them. They seem *efficient* in the same way multiple chained selectors are. Interested in your thoughts on this, Kevin, as well as those of others.
First time I heard "YouTube comments sent me down a rabbit hole" with an actually good start.
Thanks CSS Pope.
When you said there was a common one that had bad performance, my mind immediately went to *. However, one that I expected would have awful performance, and that I was surprised you didn't test, is :has().
I did a lot more testing than what I showed here, but I didn't want the video to be too long, lol. It was slow-ish for sure, but like most of them, I wouldn't avoid it if I had a good use case for it, especially considering it does things that you can't otherwise do, whereas with :is(), you could just make a comma-separated list of selectors instead.
It is an interesting topic. I test all my stuff on capped speeds and all pages I make load fast regardless of selectors. The big hit is still images and such. The code is the least concern in terms of loading times unless you've got a million lines of pointlessness 😅
I don't think I've ever used the `*` selector on its own, but I've definitely used it after something else, e.g. `.someElement > *`
I do actually have a page with an obscene number of elements on it that suffers badly from render time, so maybe I should take another look at that and see if it can be optimised. But tbh it predates the new :is etc. pseudo-classes, so it's probably already optimal!
Kevin, you're the best!
Dang I remember a really helpful blog and talk on this but I can’t find the link :(
The tl;dr was: optimise images first if it's a heavy content site > optimise JS > CDN and caching > optimise your HTML > other resource delivery things like HTTP/3 > simplify styling and rely on inheritance more > if you still need the performance and have tried everything else, look into CSS selector performance
I should have made a bigger point of it taking 4x longer to parse my HTML than matching the selectors themselves 😅
It’s definitely still a valid point to explore (CSS selectors), but it's just the last of the rabbit holes one should dive down, haha. And to any newbs watching, there is no harm, because they will be starting their sites from scratch with this knowledge!
Just a rule of thumb: the "not" and "everything" selectors are always the slowest; then, the newer the selector, the less optimized and possibly slower it is, so avoid it if possible or use it sparingly.
I don't think that's the message here. Clarity and CSS maintainability are more important than worrying about microsecond performance savings. Hell, making an image on your site 100kB smaller has 10X the impact.
@@BauldyBoys thats why I say "if possible", using it out of convenience is not a good practice. 1 (or a few) selector in your CSS file that has 12ms execution, is not a problem, using it all the time will add up.
@@roellemaire1979 I understand you, I just disagree. Convenience and DX do matter. Kevin had to load over 12000 elements and lower his CPU speed to get demonstrable results.
@@BauldyBoys but he also used like 5 selectors in total. A real website or app will have many more selectors than that. Of course, CSS is most likely never the bottleneck on a website or an app, at least not these days. Still, it's interesting to see there is a measurable difference.
Worrying about performance can really add a lot of headache. Also, when talking about the web, you don't have good performance to begin with so don't worry about tiny things like selectors.
Kevin would be awesome if you could upload the code that you used after every video so that we can play around ourselves. Thanks !
Not to worry about selectors❤
Thanks that was really interesting to see. Comparing ID's and or HTML element identifiers could be interesting too.
IDs tend to be the fastest, and from what I did look at, element selectors tend to be similar to classes, since neither one has any potential matches that it later has to filter out.
Yep, figured as much; it makes me think of best practices for optimizing JavaScript DOM manipulation. I think having a general understanding of how the browser interprets CSS is really important in general, so again, cool video. I also think it's important to keep performance in mind, especially for items that load above the fold, for the FCP/LCP metrics. Performance matters not only for user experience but also for a crawler's bandwidth considerations and SEO.
I imagine the `:has()` is the worst.
I know Firefox had a big problem with it, which iirc is part of the reason it's still not out
@@modernkennnern that's interesting. As far as I know, Firefox has the fastest CSS selector engine. I guess :has() is a truly complex one to do.
@@rand0mtv660 The :has() selector has been discussed since at least 2006. Huge debates on why this is not a good thing to implement because of performance issues. So, either they found a solution, or they're relying more on CPUs getting faster.
It looks like the new selectors need some performance improvements; they should just expand them into the old comma-separated form in memory before matching. That should only happen if the page has a very large number of elements, as the cost of expanding might not pay off on some pages. This might be something that large web apps could use, and it might justify adding an option that forces in-memory expanding and caching of selectors that the developer knows are a bottleneck in their app.
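A sketch of the expansion being described, done by hand (the .card class and the declaration are made up):

/* Grouped form using :is()... */
.card :is(h2, h3, p) {
  margin-block: 0.5rem;
}

/* ...and the comma-separated expansion the comment suggests engines could
   generate in memory before matching. */
.card h2,
.card h3,
.card p {
  margin-block: 0.5rem;
}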
I could see the universal selector is the main part of the slowness, but in all the code I write it's the first line, for resetting margin, padding, box-sizing... Is there any workaround for this?
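For context, the reset in question, plus the well-known inheritance-based variation for box-sizing; margin and padding don't inherit, so there is no way around touching every element for those, and whether any of this measurably matters is exactly what the video tested:

/* The usual one-liner reset: the universal selector visits every element once. */
*,
*::before,
*::after {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
}

/* Inheritance-based variation for box-sizing only: set it on the root and let
   everything inherit, so a component can still opt out locally. */
html {
  box-sizing: border-box;
}

*,
*::before,
*::after {
  box-sizing: inherit;
}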
Angular uses attribute selectors for component scoping. They definitely wouldn't do it if it was too slow.
Should I use rem for margins, padding, etc.? Or use rem only for accessibility reasons?
But how does the “*” selector compare to selecting everything, but split across multiple selectors? I would guess that the “*” selector would be the fastest way to effectively select everything.
The question is: if you need to select everything, is the “*” selector the most performant option to do so or not?
Comma-separated selectors are treated as separate selectors, and that's why they are twice as fast as :is, :where and :not with two selectors inside. But the end result seems the same.
Are you on Mastodon yet? From a selfish point of view, it would be good to see you there. Cheers!
Can you please do a series on awwwwards level web design and development
For performance gains, don't use CSS, lol. I wonder how it looks when we just use inline styles, since there's nothing to select?
Hello Kevin, I'm from Indonesia and would like to say thank you very much ❤
Hi Kevin, thanks for this video. People are talking about AI's presence across all industries, especially IT. I'm a UI/UX developer. Will AI make people like me jobless?
AI will have a huge impact on most jobs, but I think it's still a long way from actually impacting job markets in meaningful ways. Lots of hype at the moment for sure, but it's more a tool that can help, just like Google, than something that's ready to replace people. I think the biggest issue might be helping individuals be more productive, which maybe cuts down on the total number of people needed?
But it's also a disruptive technology and is going to impact basically every industry, so it's not like being in role X, Y, or Z is better or worse really. If anything, more data focused stuff and backend is probably at the highest risk.
Of course, that's all just my opinions, and I'm just a random guy making YT videos, so take them with a grain of salt.
@@KevinPowell Yeah, Thank you
👍🏿
This was amazing insight into CSS performance. There are so many JS profilers out there, but I never knew I wanted a CSS profiler so much. I think browsers should start working on CSS profiler integration, because CSS is getting advanced these days.
I wonder what the performance of CSS calculations and keyframes would be, because I use them heavily.
I am also getting curious about the number of nodes generated by something like React, Vue, etc. Your example matched 24k nodes, and you said the real world is not going to have that many nodes, but to be honest, the amount of HTML nodes generated by React is mind-boggling.
this is why i generate unique ids on every single element so i get those nano seconds of my life back /s
Hi Kevin have you used Firefox Developer Edition?
I have it installed, but I'm usually using Nightly for Firefox :)
Headed to my website to try this; the lowest was 180 microseconds, and that is with the CPU slowed 4 times, so actually 45 microseconds. Guys, we're fine.
Just remember that milli means a thousandth, so 45 ms is 0.045 seconds, and you will begin to feel it when it reaches 0.2 seconds. 100 ms is 0.1 seconds, and that is really a lot. A fast website keeps below 0.2 seconds = 200 ms.
But suppose you don't use the universal selector and instead reset individually on ALL elements; I'm very curious what the difference is. I think the universal selector is slower. In addition, you'd also have to write more CSS, and it is more difficult to keep track of.
So ID selectors probably give the best performance: because they're unique, match counts are greatly reduced.
So what is the conclusion? ... Maybe I didn't get it...
How did you add the SVG sprite to your websites? Can you tell us, please?
svgsprit.es/ is a great site for creating them, and it gives you the code you need. I could do a video on it though :)
9:07 a.k.a. "your mileage may vary"
Is it the same when selecting via Javascript?
I'd feel a little dirty installing edge on a mac.
How much is Edge involved in this? Or does Chromium do all the work? I mean, I don't really care if it's optimized in Edge.
Thank you as always for the helpful videos. It would be great if Korean subtitles were supported... is there anyone capable out there? Korean subtitle support, please... 😲
I'd love to offer subtitles in more languages, but I can't afford to get them properly translated :( - The automated stuff I've seen isn't great, or requires a lot of manual work to fix up.
Aiming for those microoptimizations would make code less readable, less maintainable, the original intentions of those selectors would be less clear even to yourself a month later. Let machines do their work! 😂
People who pipe up about selector performance on Twitter are those who are just looking to pick holes in things. It’s like they want you ‘as the well known dev’ to know they have just as much knowledge as you. The difference as you’ve shown is imperceptible.
Yeah, worry first about your jQuery or Bootstrap scripts, or even your favorite JS framework's overhead, before worrying about CSS.
`:is()` is `*:is()`. Don't use star selectors ;)
Why even use attribute selectors? It seems stupid to me unless there's no other way which can be the case of course.
Check the video I did on semantic CSS 🙂
The tl;dr is that you can do a lot of state management with accessibility attributes that you should already be using anyway. It serves the dual role of being a very clear selector in its purpose, and also enforcing the correct semantics and structure for components.
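A tiny sketch of that idea (the accordion class names are hypothetical):

/* The aria-expanded attribute you already toggle from JS for assistive technology
   doubles as the styling hook, so state and semantics can't drift apart. */
.accordion__trigger[aria-expanded="false"] + .accordion__panel {
  display: none;
}

.accordion__trigger[aria-expanded="true"] + .accordion__panel {
  display: block;
}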
Universal selector is slow? Just use :not(a:is(b))
People just like to repeat things without actually knowing, and give opinions as if they were facts.
People have lost the ability to make things performant; instead, they just don't care. Hey, it's 2023, use everything, because next year a 100500-core Intel/AMD chip will be produced. They don't think about others. Developers have become more and more lazy and egotistic.
People who complain about performance in CSS selectors nowadays are those who sit down at a Starbucks with dark glasses and a scarf, use a Mac, can't tell the difference between hard drives and SSDs, and call themselves "programmers".
What you just said is dumb in so many levels that I don’t even know from where to begin…
What does that even mean? You’re not supposed to optimize performance nowadays?
@@feldinho did you take off your scarf before typing that?
This could have been a 2min video.
I know this was hard work, but more is not always better, especially when the purpose is to transmit information.
I honestly have a weird feeling after watching this video. I get the impression that you tried to justify yourself and avoid admitting your mistake at all costs. Despite the obvious data, where you can see that the attribute selector is much slower (even more than 50%), you tried to divert attention by blaming the * selector as the worst (despite the fact that the screen shows something completely different), or by saying that it's not really such a bad thing because there are things that slow down a page more. The claim that 18ms is not that much is not true. Google PageSpeed measures time in ms, not seconds, and in this case every ms is important and can affect the result. I have done a lot of page optimization, and green results on Google PageSpeed give a lot: not only does it look better in the eyes of the client ;) but it also affects SEO.
18ms is a fair bit... and the 18ms one I had was for a complex selector, with 4x throttling on, over 20,000 elements. (It was also taking 4.5 seconds to render the HTML for my page).
With that same dataset, an attribute selector selecting the same thing as a class selector was ~0.1ms slower (623 microseconds vs 744 microseconds). If you had it over, say, 10 elements, or even 100, they'd essentially be the same speed. So as far as class vs. attribute goes, I think I showed it really doesn't matter, especially since you'd only be using attribute selectors for pretty specific things while still primarily using classes.
@@KevinPowell Oh yeah, I didn't take the 20,000 elements into account :) Although I've managed to optimize pages with ~8,000 elements. I wonder what would come out if the page from the video was tested in PageSpeed Insights. Would this affect "Minimize main thread work"? 🤔
Love the experimentation here 👨🔬🧪
I wonder if Sass would make a difference for things like the * selector 🤔