I have intel so I have to watch at 1/39 times speed for it to be normal
Just press the turbo button
@@hk_8084 is that thing even wired up anymore?
🤣
This time the % increase in performance isn't really interesting because it was a regression fix. No one actually saw any function complete a thousand times faster. What I'd really like to mention is that there is no greater joy than a developer fixing awful O(n^2) algorithms and posting their million-fold improvement even if the function almost never runs and the wound was self-inflicted.
And nothing worse than a customer deciding that it's perfectly fine to increase their data flows by orders of magnitude without testing, or even asking you whether it's been tested. When the system is down and they can't do deliveries, you will feel stressed while trying to fix it.
@@phill6859 Ah yes, design what someone asks for and you'll fail to deliver what they meant. But if you actually take your time, then you're not delivering what you've been asked to do.
Could that be why I was recently bottlenecking on CachyOS and suddenly am not? My CPU was stuck around 2 GHz or so when it has a boost clock of over 4 GHz. Both it and the GPU had strangely low temperatures and clock speeds.
0:25 The only people saying that are in fact Userbenchmark's alt accounts
I optimised a piece of code by 1,000,000% and it sent me back in time to 1955. I hitched a ride back and vowed never to do it again
this is like the AVX benchmark again isn't it
Noobs. I turned a list into a set in Python and sped up my code by 1000x.
You joke, but I did this in some commercial software that was reading a huge linked list about 10 times every second, and it had a significant impact. O(1) vs O(n) can be huge
I am pretty sure lists in Python, in most cases, are faster than sets. After all, lists are just contiguous space in memory, while sets do require some hashing math to access the values
@@no_name4796 I think it depends... sets are probably faster for lookup. The fragmentation might be a downside, but idk if the hashing takes that much time
@@TailRecursive "I am pretty sure lists in Python, IN MOST CASES, are faster than sets"
congrats man, you just said what i said lol
can't you do the same thing by unrolling a for loop?
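For anyone curious, here is a minimal sketch of the difference this thread is arguing about, in Python (the numbers are illustrative only and depend on data size and interpreter version):

    import timeit

    # Membership tests: a list is an O(n) linear scan,
    # a set is an O(1) average-case hash lookup.
    data = list(range(100_000))
    as_list = data
    as_set = set(data)

    # Worst case for the list: the probed element is at the end.
    needle = data[-1]

    print("list:", timeit.timeit(lambda: needle in as_list, number=1_000))
    print("set: ", timeit.timeit(lambda: needle in as_set, number=1_000))

On typical hardware the set lookup wins by orders of magnitude for membership tests, while plain in-order iteration is where lists shine.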
Saw the article, skimmed it, saw the Intel test bot found it, thought cool, a kernel bug fixed for some workload or another, and then went on with my life. At no point did I consider whether it would or wouldn't be Intel-only; hell, there's a chance it could even affect ARM, PowerPC... and all the other architectures the kernel supports.
I game on Linux for the 3,800% performance boost
My favorite kind of dark magic programming! Days upon days of investigation, reading, and reverse engineering, only to discover the fix is a single line of code.
It's actually possible. For instance, because the same statement gets called over and over.
Man, you're ruining UserBenchmark's day :(
People commenting stupid stuff always reminds me of the DRM panic articles. People don't even read about what the feature actually brings and just resort to insults, saying we don't need a BSOD in Linux, when that's not even the main point of DRM panic. For those who don't know, DRM panic allows the kernel to draw the kernel panic screen even over a stuck GUI (X11 or Wayland) session, which is impossible without it; otherwise, in case of a kernel panic you end up with the last frame of your GUI stuck on screen and can't see the errors, making debugging difficult. The ability to show an error QR code is just an extra feature on top of that.
Kernel 6.11 also has a weird bug. On many systems (confirmed by several other users), when you compile dwm in a graphical environment, the code compiles successfully, but as soon as it is done compiling, X11 crashes and you get dropped to the tty. Fun. I just quickly asked a few people and so far I have it confirmed for at least 3 other users, on mixed hardware (AMD, Intel, Nvidia). For some weird reason a segfault gets triggered and causes a crash of X11, and this happens exclusively with kernel 6.11 and 100% of the time.
Yeah, well, not sure if it's a bug or intentional, but this happens because when dwm gets installed to /usr/local/bin/dwm the inode is updated but the kernel keeps a reference to the old one, and any program will crash under such circumstances. Try a hello world application in C as an example
As a workaround, just add an rm /usr/local/bin/dwm command to the install target.
@@kyrylmelekhin2667 I will try it out, thanks. I have no idea why this got introduced in kernel 6.11.
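For reference, a minimal sketch of that workaround, assuming the stock suckless Makefile layout (the rm -f line is the addition; the surrounding lines mirror dwm's usual install target):

    install: all
        mkdir -p ${DESTDIR}${PREFIX}/bin
        # Remove the old binary first so cp creates a fresh inode
        # instead of rewriting the one the running dwm still maps.
        rm -f ${DESTDIR}${PREFIX}/bin/dwm
        cp -f dwm ${DESTDIR}${PREFIX}/bin
        chmod 755 ${DESTDIR}${PREFIX}/bin/dwm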
That Phoronix article title _is_ clickbait. It's not wrong, but it's misleading. There was a Veritasium video some time ago about clickbait, and how there's often a tradeoff between being truthful and being clickable. I think this Phoronix title is just barely truthful, but misleading, and very clickbaity.
Phoronix has made a bit of a trademark of it. I do enjoy reading articles on Phoronix, but I found the bcachefs articles a bit too optimistic (as if it would make everything else obsolete).
Ok, what does "a 600%" decrease mean? If -100% slows it down to 0, will -600% make it run 5x as fast backwards?
Silly.
I assume they saw a 6x slowdown. That's 1/6, or 17%, of the original speed, so an 83% slowdown. NOT 600%.
yeah right? i am confused too
I imagine it is a "lower is better" benchmark, but even then, "6x slower" would be "500% worse", not 600%.
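Working the arithmetic out in Python, assuming the benchmark measures time taken:

    old_time = 1.0   # baseline runtime, arbitrary units
    new_time = 6.0   # the "6x slower" regression

    # Time increased by 500%: (6 - 1) / 1 = 5.0
    time_increase = (new_time - old_time) / old_time * 100

    # Throughput dropped by ~83%: 1 - 1/6
    speed_drop = (1 - old_time / new_time) * 100

    print(f"time increase: {time_increase:.0f}%")  # 500%
    print(f"speed drop:    {speed_drop:.0f}%")     # 83%

So "600% slower" only works if you read it loosely as "takes 6x the time"; as a percentage change it's 500% more time, or an 83% drop in speed.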
the corn bots are intel fans, apparently
Vids like this are a HUGE part of why I love your stuff, Brodie. :)
Ohh, so that's why my Intel PC booted in 0.05 ps (picoseconds) earlier today
"technically not wrong, but not even remotely relevant to most people" is pretty much the definition of a click-bait headline. I mean, there's nothing stopping them from saying "only really hit by one edge case" as part of the headline, but that wouldn't get as many clicks.
I think the pr0n bots have Intel CPUs, guys.
Cache aliasing issues are just one of those things that can have a crazy big performance impact on modern/big systems.
Not surprised the crazy high percentage was found on a system with hundreds of cores and multiple physical CPUs (high latency for cache synchronization).
They are very hard to reason about, so cache aliasing problems will just seem like black magic. And a fix for one application/benchmark is likely to cause a problem for other applications/benchmarks.
I don't read past the title, but that's why I wait for a Brodie video to tell me what opinion to have.
I saw the headline and hadn't gotten around to reading the article, but I was assuming it was probably real, but only under some circumstances most people won't care about, or was, like you mentioned, a thing that takes up so little time to begin with that a speedup is almost more trivia than actual improvement.
Hey, that almost makes up for all the performance loss due to Intel's security flaw mitigations :P
Until they find another security hole.
Some databases don't play nice with transparent huge pages enabled.
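The knob in question is the standard Linux sysfs switch; a quick way to check your own box from Python (reading it needs no special privileges, changing it does):

    # Prints something like: always madvise [never]
    # (the bracketed entry is the active mode)
    with open("/sys/kernel/mm/transparent_hugepage/enabled") as f:
        print(f.read().strip())

Several database vendors recommend setting this to madvise or never for exactly this reason.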
I've been primed not to expect any noticeable changes whenever I hear something like this
I thought this was going to be a userbenchmark cope article, not an actual article lol
Data Oriented Development skills. Data locality and alignment are as important as remembering to deallocate. But they didn't teach this in Computer Science papers (BSc); I learnt C as part of the Network Engineering curriculum. Maybe I should have actually taken that Operating Systems paper instead of AI and graphics, since C++ seems to abstract away from memory allocation and alignment (in a bad and confusing way). Allocators were only a recent discovery for me; they were mentioned (but deemed too advanced?). I feel like I missed out.
My usual reflex is to shame the big tech company simps for their stupidity, but it's hard to be mean to the Intel ones anymore. They just keep taking L's from all sides these days.
As someone who has been following Linux kernel development for a long time... a large improvement often means just a small area, which might translate to a huge improvement in real life - for a select number of people.
"Technically not wrong" - technically correct is the best kind of correct? Well, only if people know the context.
I remember Rik van Riel arriving as a new kernel developer; memory management is a really hard topic, and he has made many, many improvements/rewrites, but they can also cause issues for some people.
What always astonishes me is that people can see "4000% performance improvement" and think "Sounds legit. I have understood this correctly."
What's wrong with 192 GB on a desktop? I only have 32 threads, but I'm back up to 192GB after getting my 14900k RMA'd.
Also, if you're going to talk about Amdahl's Law, you should mention that you're talking about Amdahl's Law.
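For anyone who hasn't met it, Amdahl's Law in its standard form (not quoted from the video): if a fraction p of total runtime is sped up by a factor s, the overall speedup is

    S = \frac{1}{(1 - p) + \frac{p}{s}}

So if the affected path is only p = 0.01 of runtime, even s approaching infinity caps the overall speedup at about 1.01x, which is exactly why a huge percentage on one function can be irrelevant in practice.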
The numbers Brodie, what do they mean?! (hope you get it)
Computer nerds that skipped their math classes always make mistakes with adding and subtracting percentages.
THE NUMBERS MASON.
WHAT DO THEY MEAN.
the first thing i think when i see stuff like this is "this is definitely not what it seems". if it were it would easily make mainstream headlines. and even if it did it would still be a 50/50 chance because mainstream media likes clickbait too.
I recently removed a function from my code. This function is ∞% faster now because instead of 0.2 seconds it now runs with no seconds at all.
As the late George Carlin once said... “Think of how stupid the average person is, and realize half of them are stupider than that.”
LOL. What's next? A 100000% improvement?
Bitcoin
BogoMIPS on steroids.
My percents are max. I use Arch btw......
Wow! What a terrible clickbait title.
First real human
Love the content
I'm A Dumb Comment!
#downloadmoreram
amd is cooked, why would anyone go with amd now
😂
time to switch to intel guys!!!! amd SUCKS!!!!!
😂