Readability over micro-optimizations (in most cases). Computers will always get faster, but developers never learn to read better 😂
My 2 personal "rules"/"responsibilities" as a developer: 1) make the code work, 2) make the code maintainable. I'll always prefer a more maintainable option/syntax over performance until it is an issue.
Came for the title, leaving to go listen to Box Car Racer
DX should be important, but if performance is really a concern then opt in for workers and wasm.
yep, or do the aggregation at the DB level once things get big
@WesBos absolutely correct.
then where do we use these little optimisations?
@@shivambhatnagar9473 when you say "oh wow, this is slow" and determine it's the JavaScript slowing you down. In the few cases I've hit that in my career, I've moved the data massaging out of JavaScript and into the database, realized I could be doing the work concurrently, or found that the work happening inside the loop was the slow part.
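To illustrate moving the data massaging into the database: here's a sketch of the same aggregation done in-process versus pushed down as SQL. The table and column names are made up for illustration, and the actual driver call depends on your client library.

```javascript
// In-process version: fine for small arrays, but it pulls every row over the wire
// just to collapse them into a handful of totals.
function totalsByCustomer(orders) {
  const totals = {};
  for (const { customerId, amount } of orders) {
    totals[customerId] = (totals[customerId] ?? 0) + amount;
  }
  return totals;
}

// Database version: ship only the aggregated rows back.
// (Query text only; hypothetical schema.)
const sql = `
  SELECT customer_id, SUM(amount) AS total
  FROM orders
  GROUP BY customer_id
`;

const orders = [
  { customerId: 'a', amount: 10 },
  { customerId: 'b', amount: 5 },
  { customerId: 'a', amount: 7 },
];
console.log(totalsByCustomer(orders)); // { a: 17, b: 5 }
```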
It doesn’t matter in most cases, but also what’s faster today is not necessarily faster tomorrow.
Once a feature is more widely used, engine developers can use real world data to improve the real world performance of these functions.
Performance in JavaScript is tough. It depends on how well you understand how JavaScript handles garbage collection. A lot of the cool new toys like the spread operator, or helpers like .reduce and .map, are sometimes going to be slower than a for loop or doing it manually. It can actually make a difference: there was a performance issue in TanStack Table caused by the spread operator (that package handles large data sets in tables and uses reduce!). Also, these benchmarking sites are not always the best way to measure performance (which you mentioned a bit :D).
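The spread-in-reduce pattern that bites here looks something like the sketch below (illustrative data, not the actual TanStack code): spreading the accumulator rebuilds the whole object on every iteration, turning a linear pass into O(n²), while mutating one accumulator stays O(n).

```javascript
const rows = [
  { id: 1, kind: 'fruit' },
  { id: 2, kind: 'veg' },
  { id: 3, kind: 'fruit' },
];

// O(n^2): every spread copies all keys accumulated so far into a new object.
const slow = rows.reduce((acc, row) => ({ ...acc, [row.id]: row }), {});

// O(n): same result, reusing a single accumulator object.
const fast = rows.reduce((acc, row) => {
  acc[row.id] = row;
  return acc;
}, {});

console.log(Object.keys(slow).length, Object.keys(fast).length); // 3 3
```

On three rows the difference is invisible; on tens of thousands of rows the copying dominates.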
Don't optimize before you need to, unless you really foresee it becoming an issue. And even then, it seems you'd need to run a benchmark in your own code to determine the fastest implementation, and dynamically switch implementations based on those results, since each JS engine is different.
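A minimal sketch of that "benchmark, then pick" idea, using `performance.now()` (available in browsers and modern Node). The function and variable names are illustrative, and a real benchmark would need warm-up runs and more samples to be trustworthy.

```javascript
// Time two candidate implementations on sample data and keep whichever is
// faster in the current engine. Crude: no warm-up, single measurement each.
function pickFaster(implA, implB, sampleInput, runs = 1000) {
  const time = (fn) => {
    const start = performance.now();
    for (let i = 0; i < runs; i++) fn(sampleInput);
    return performance.now() - start;
  };
  return time(implA) <= time(implB) ? implA : implB;
}

const sumFor = (arr) => { let s = 0; for (let i = 0; i < arr.length; i++) s += arr[i]; return s; };
const sumReduce = (arr) => arr.reduce((s, n) => s + n, 0);

const sum = pickFaster(sumFor, sumReduce, [1, 2, 3, 4, 5]);
console.log(sum([1, 2, 3])); // 6
```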
People unironically arguing about this in PRs when your data has 100 items
That's a great website, I'll have to use it more often. The difference between browsers is crazy.
AFAIK classic for loops are even faster than for-of. I'll still always use reduce unless I run into perf issues, because of the better DX.
If you start chaining iteration methods, each one creates a new intermediate array, which can mean a lot of garbage for large sets, AFAIK.
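A small sketch of what that chaining allocates: the filter and map each produce a fresh array, while a single loop builds the same result with one output array and no intermediates.

```javascript
const nums = [1, 2, 3, 4, 5, 6];

// Two passes, one throwaway intermediate array (the output of filter).
const chained = nums.filter((n) => n % 2 === 0).map((n) => n * 10);

// One pass, one output array, nothing extra for the GC to collect.
const single = [];
for (const n of nums) {
  if (n % 2 === 0) single.push(n * 10);
}

console.log(chained, single); // [ 20, 40, 60 ] [ 20, 40, 60 ]
```

For a six-element array this is irrelevant; for millions of elements in a hot path, the extra allocations and GC pressure can start to show up in profiles.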
Box Car Racer
CAUSE I FEEL SOOOOO
emo devs unite
@@syntaxfm did Wes pick the background specifically for the video, or do your wallpapers rotate?
BCR's music is timeless!
Was unfamiliar with that site. Thanks for showing it!!
I normally run my own tests in an IDE by hand (I mostly work back end and my code doesn’t often run client side).
I feel like when looking at performance in multiple browsers you also need to consider market share.
After a quick search I see Chrome at about 60%. Firefox at about 2%. Safari at about 20%.
Exact details seem to differ from site to site.
I wouldn’t really worry about Firefox at all, and if you’re going to implement this with browser runtime in mind, it sure seems like Chrome is all that matters … unless you detect the browser type and THEN run code tuned to make each browser perform best for itself.
Also … I would default to forEach for this rather than reduce or a raw for loop (or the for-of loop you showed). I pretty much only use for loops when there’s a way to exit early and avoid iterating over all N items. GroupBy still seems really interesting though!
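That early-exit point is the one thing forEach can't do: there's no `break` inside a forEach callback, so it always visits all N elements. A plain for or for-of loop can bail out as soon as it has an answer, as in this small sketch (the function name is illustrative):

```javascript
// Stops iterating the moment a match is found; forEach would visit every element.
function findFirstNegative(nums) {
  for (const n of nums) {
    if (n < 0) return n; // early exit
  }
  return null;
}

console.log(findFirstNegative([3, 7, -2, 9, -8])); // -2
```

(For this specific case the built-in `Array.prototype.find` also short-circuits, so it's another readable option.)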
As well as browsers, this video would apply to backend code as well. If you were using Bun, that's implemented using the same JS engine as Safari, not Chrome.
@@simonhartley9158 I feel like the heart of what you’re saying is … be aware of the time/space impacts of your code and optimize for the environment it’s run in.
Yeah?
That’s like … first year engineer stuff. Super important.
The core of this video is … “hey, actually test the run time.”
Couldn’t agree with it more. Just like what you said, this is the kind of conversation senior engineers have with junior engineers.
It’s part of a larger conversation about optimization.
I’m sure you know all of this.
Making one thing 10% faster just isn’t a big deal in MOST situations. Making really key areas 10% faster, or making EVERYTHING 10% faster, starts to become a bigger deal. It all depends on scale and on the app’s performance as a whole.
The reason this stuff is so commonly misunderstood is that it usually has such a small impact, especially on the front end. When getting it wrong costs 15% of your users 3 milliseconds on one specific action, it just doesn’t matter much.
I think this is why readability is prioritized over raw performance a lot. You want both, but if you’re picking one over the other, readability is often the right call.
@@zero11010 There's also the philosophical point: do you optimize more for the most common case, or for the worst case. If you optimize for the common case, then those assumptions may no longer hold in a different context. If you optimize for the worst case, then you have a more reliable baseline. Of course if you over optimize the worst case, then it can become excessively detrimental in most contexts.
@@simonhartley9158 good point! I think I have a pragmatic approach. Worst case vs. best case? I try to focus on the most common case, as with the browser example.
Biggest bang for the buck.
Only so many hours a day to write code. I aim for the biggest impact for the most people.
Also a great point about over optimization, and a part of that is pre-optimization. I’m definitely guilty of these things from time to time!
The balance of time spent on a ticket vs. the impact of that ticket is challenging. It’s easy to work on things in a vacuum. But if I spend an extra half day on a ticket to optimize something mildly (but genuinely), does that mean some other ticket doesn’t happen at all this sprint? It’s hard to walk the line between tech debt and getting tickets closed. And let’s be honest, a detailed breakdown comparing multiple approaches to a complex issue is often a ticket that spans multiple days, not half a day (you can break each approach into a separate ticket, but you know what I mean).
At a certain point, if you really want to optimize for these separate browsers … you’d want to detect the browser and then run code specifically tuned for it. That would make every single ticket take … three times longer? More? And what would the end-of-day benefit be of getting a third of the work done across an entire engineering team?
The most performant site on the internet!
But who would really notice the impact? The one user in a hundred million who switches browsers regularly AND notices a performance difference between sites in different browsers? Whereas if you had just optimized for Chrome, 60% of everyone would see those benefits.
@@simonhartley9158 I replied and it was deleted. No profanity. No rudeness. Not very long.
Not typing it again.
I agree with the root of what you’re saying. 👍
It's not that these simplified methods being slower than a for loop matters in individual cases, I think. It's that they obscure the fact that they're looping. You can end up with very slow code without it being obvious why. When you teach yourself to use array methods for everything, you might find it hard to optimize once you really need to. It might be good to always measure execution time for critical stuff, even when the data sets aren't huge, just to check it's reasonable.
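Measuring that execution time can be as simple as wrapping the call, e.g. with `performance.now()` (works in browsers and modern Node); the helper name here is made up for the sketch.

```javascript
// Tiny timing wrapper: runs fn once, logs elapsed ms, passes the result through.
function timeIt(label, fn) {
  const start = performance.now();
  const result = fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(2)}ms`);
  return result;
}

const data = Array.from({ length: 100_000 }, (_, i) => i);
const total = timeIt('sum', () => data.reduce((s, n) => s + n, 0));
console.log(total); // 4999950000
```

`console.time(label)` / `console.timeEnd(label)` is an even quicker built-in alternative when you don't need the raw number.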
If a developer doesn’t know that applying an operation on each item of a list implies a loop, that’s on them, not on the tool.
Return to tradition, use nested for loops. Can't be bothered with these convoluted iteration functions. Only one I use regularly is array.map.
It's worth learning them. Object.groupBy() is a dead-simple one-liner in most cases, and gives you those sweet, sweet types for free.