Do you know any other tips that weren't mentioned in this video?
@Gilad Freidkin has provided a couple interesting ones as well.
OK so the trick is to have a longer method name!
That way, the compiler knows that it has to optimize it harder!
jk :-)
Conclusion: Try to remove branches from loops.
And maybe~ use unsafe code
Everything else was too minor to pollute a nice codebase
which tbh is kinda sad because more branches often make for cleaner code
There's a potentially faster version of your no-multiplication bit hacks:
// For n-bit integers, use a shift of (n-1)
counter += value & (value & 1) << 31 >> 31;
To explain it, I'll use 8-bit integers for brevity:
1. (value & 1): is the value odd?
2. << 31: shift that low bit up into the sign position
3. >> 31: perform a sign-extending(!) shift back to the right, essentially creating a move mask
4. value &: use the move mask to either zero out or keep the value
It'll look something like this:
Value: 5
1. (0b00000101 & 1) = 1
2. 1 << 7 = 0b10000000
3. 0b10000000 >> 7 = 0b11111111
4. 0b00000101 & 0b11111111 = 5
Value: 6
1. (0b00000110 & 1) = 0
2. 0 << 7 = 0
3. 0 >> 7 = 0
4. 6 & 0 = 0
This method eliminates not only the multiplication, but also the subtraction. Would be interested to see if it's actually faster, though
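For reference, here's that variant dropped into a complete method (my adaptation, a sketch assuming 32-bit ints; SumOdd_ShiftMask is a made-up name):
static int SumOdd_ShiftMask(int[] array)
{
    int counter = 0;
    for (int i = 0; i < array.Length; i++)
    {
        int value = array[i];
        // (value & 1) << 31 >> 31 is 0xFFFFFFFF for odd values and 0 for even:
        // the arithmetic right shift replicates the sign bit across the int.
        counter += value & (value & 1) << 31 >> 31;
    }
    return counter;
}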
The most extreme performance tip. Shut down your computer and go climb Mount Everest.
Good video, thanks. By the way, your loop will throw an error if your array has odd length. Therefore, instead of writing i < array.Length; i += 2, you should write i < array.Length - 1; i += 2.
Same issue with the last parallelization improvement where array.Length % 4 != 0
I have seen .Length - 1 and always wondered what the reason behind it was.
That would exclude the last element in the array from the calculation. Worse, for odd lengths that element couldn't be accessed by the paired loop anyway, since an exception is thrown when array[i + 1] has no element (as you pointed out).
for (int i = 0; i < array.Length; i += 2) sum += array[i];
Which would be great if not for the fact you're only using half of the elements. Did you mean sum += array[i] + array[i+1]?
@@stefanalecu9532 He was talking about only adding the odd-numbered values. That's what my code does without branching.
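One way to satisfy both concerns (a sketch of mine, not from the video): run the unrolled pair loop over the largest even-length prefix, then pick up any leftover element with a scalar epilogue.
static int SumOdd_Pairs(int[] array)
{
    int counterA = 0, counterB = 0;
    int i = 0;
    // The unrolled body only touches the largest even-length prefix...
    for (; i <= array.Length - 2; i += 2)
    {
        int a = array[i];
        int b = array[i + 1];
        counterA += (a & 1) * a;
        counterB += (b & 1) * b;
    }
    // ...and the scalar epilogue handles the last element of odd-length arrays.
    for (; i < array.Length; i++)
        counterA += (array[i] & 1) * array[i];
    return counterA + counterB;
}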
I always see "boolAsInt * something" where "-boolAsInt & something" is twice as fast. 0 or 1 times something is the same as -0 = 0x0000_0000 or -1 = 0xFFFF_FFFF AND something.
Code size and register dependencies increase (the latter doesn't really count when it replaces an operation that takes long, like multiplying ints at ~ 4 clock ticks, which also has low throughput) so that might matter. Your bit hack is slower than a multiply because bit shifting by a non compile time constant is pretty slow (up to 7 clock ticks) and it only works with ONE particular register with X86, being CL (=> no ILP).
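Applied to the video's branchless loop, the suggestion would look something like this (my sketch, not the commenter's exact code):
static int SumOdd_NegMask(int[] array)
{
    int counter = 0;
    for (int i = 0; i < array.Length; i++)
    {
        int element = array[i];
        // -(element & 1) is 0x0000_0000 for even values and 0xFFFF_FFFF for odd,
        // so the AND keeps odd elements and zeroes out even ones, with no multiply.
        counter += -(element & 1) & element;
    }
    return counter;
}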
Great performance tips.
How about:
1^2 = 1
2^2 = 1 + 3
3^2 = 1 + 3 + 5
...
Sorry if I formulate this wrong:
sum of odd numbers up to x = (x//2 + x%2)^2
(e.g. x = 5: 1 + 3 + 5 = 9 = 3^2)
Smart man!
Wow thank you so much, all solid performance tips, cheers
In fact, we rarely use arrays in the real world. Furthermore, we can use multitasking for CPU-bound tasks or asynchronous code for I/O-bound tasks to improve performance.
You should use arrays as much as possible.
We use arrays as much as possible; they're the fastest possible collection. Or ImmutableArray if we need the read-only part.
Who's this "we"? Of course programmers use a TON of arrays.
You should do it in parallel, where your total number of partitions is the same as the number of channels supported by your CPU (4 or 8 are most common, I believe), but not greater than the number of CPUs available.
@LevelUp would love to see those simd instructions and other tricks in a new video :)
Thanks for showing these tricks!
Awesome man, I learned a bunch!
Where do you learn such things? What was your learning path on this topic?
Experimentation mostly, and messing around with internals of the platform.
Perhaps branch-free is my biggest takeaway
So I'm not completely through it yet, but the very first thing I thought of was parallelizing it. Disregard, just got to it in this video, and was really great to see, so definite thanks!
How about a conditional move, via the ?: ternary operator?
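Presumably that means something like the following (a sketch; whether the JIT actually emits a cmov instead of a branch here isn't guaranteed):
// Inside the video's loop: the ternary expresses the same selection, and the
// JIT may compile it to a conditional move (cmov) rather than a branch.
counter += (element & 1) == 1 ? element : 0;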
Great, you deserve a like.
Awesome! Thanks!!!
I didn't know that Sam from LOTR knows C# XD :D
What if, instead of multiplying, you fill up the whole integer with the first bit from the & 1 result and AND that with p[x]?
Sorry for this very noob question. @6:33 if the values of oddA and oddB can both only be 1 or 0, then why do our counters need to be added by the strange values (oddA * elementA) and (oddB * elementB)? If we're just counting how many odd numbers are in the array couldn't we just write counterA += elementA & 1; and counterB += elementB & 1; ? I don't use bitwise logic in the code that I write and I also have never considered ports, registers or memory addresses, so please understand that I'm swimming in water that's over my head here, and thank you for the very interesting video. PS~ I _LOVE_ that parallelism trick and I know of at least one spot in my code base where I think I can make use of it, thanks!
We are doing sums here, not counting how many odd or even elements we have; this is a sum of elements.
He does it because if oddA or oddB equals 0, then that means the number at that index is even. That will make it be multiplied by zero so its not added to the final sum of the function.
I am not sure, but this could be a use case for SIMD intrinsics.
Yes that would be much faster.
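For anyone curious, a minimal sketch using System.Numerics.Vector<T> (my assumption of how it could look; the video doesn't show SIMD code and SumOdd_Vector is a made-up name):
using System.Numerics;

static int SumOdd_Vector(int[] array)
{
    var ones = new Vector<int>(1);
    var sums = Vector<int>.Zero;
    int i = 0;
    // Process Vector<int>.Count lanes per iteration (e.g. 8 ints with AVX2).
    for (; i <= array.Length - Vector<int>.Count; i += Vector<int>.Count)
    {
        var v = new Vector<int>(array, i);
        // -(v & 1) is all-ones per lane for odd values and zero for even,
        // so the AND keeps odd lanes and zeroes out even ones.
        sums += -(v & ones) & v;
    }
    int counter = Vector.Dot(sums, Vector<int>.One); // horizontal sum of the lanes
    for (; i < array.Length; i++)                    // scalar tail
        counter += (array[i] & 1) * array[i];
    return counter;
}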
Thanks for the awesome video. I would love to see an artful graph at the end, especially as you have "code | art" as your motto.
Amazing! Next level knowledge
Yep!
It's basically applying assembler (x86) knowledge to C# programming.
Sounds crazy, but it works! :-)
@LevelUp Would you please create video series on data structures and algorithms????
Which ones would you like to see?
Bartosz Adamczewski: all data structures in C#, with world-class examples that utilize them, plus the most-used algorithms and how to design new ones, and as an extra bonus machine learning and AI, which use them heavily 😍
Is using Span<T> similar to using a pointer?
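For what it's worth, a span-based version might look like this (my code, not the video's): Span<T> gives pointer-like, allocation-free access, and iterating up to span.Length lets the JIT elide the per-access bounds checks, similar in effect to the fixed-pointer version but without unsafe code.
static int SumOdd_Span(ReadOnlySpan<int> span)
{
    int counter = 0;
    for (int i = 0; i < span.Length; i++)
    {
        int element = span[i];
        counter += (element & 1) * element;
    }
    return counter;
}
An int[] converts to ReadOnlySpan<int> implicitly, so SumOdd_Span(array) just works.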
Don't most optimizing compilers already do most of this stuff (like unrolling for loops)?
Not in dotnet
@@LevelUppp sorry I was thinking about C++, I’ve been working with it a lot lately. I wonder if modifying the optimization in build settings can do some of these optimizations though.
@@JJCUBER Sadly, C# only has one optimization option (Optimize code - true/false).
But you can still use raw pointers and references, so you can optimize it a little bit more (unlike in Java, as far as I know).
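For reference, a sketch of the raw-pointer version (mine, not from the thread; it requires AllowUnsafeBlocks in the project file):
static unsafe int SumOdd_Pointer(int[] array)
{
    int counter = 0;
    // fixed pins the array so the GC can't move it while we hold a raw
    // pointer; p[i] then compiles to a plain load with no bounds check.
    fixed (int* p = array)
    {
        for (int i = 0; i < array.Length; i++)
            counter += (p[i] & 1) * p[i];
    }
    return counter;
}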
What performance profiling tool did you use?
The .NET 6 compiler will do the first optimization, along with many others, automatically.
For this entire lecture, it will just handle the first case; many other trivial cases are still left unsolved :( The compiler will never solve all of your problems for you.
After spending time in the LeetCode community: always force a HashMap onto the problem 🤣🤣
Thanks for the video.
- Isn't it a waste of time to use var even though you know the type of the variable? (Doesn't it waste time finding the type?)
- What will happen if your array has 7 elements? Won't the parallelism in your loop run past the end of the array?
'var' doesn't actually waste any time at runtime, as the type is determined at compile time. That's why you can only use it when the type is known. So its only use is if you're lazy and don't want to write out a big type name.
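In other words, these two declarations compile to identical IL (trivial example):
var total1 = 0;  // the compiler infers int at compile time...
int total2 = 0;  // ...so both declarations produce exactly the same IL.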
nice
Ahh, back to C, yeah, good.
But how are you sure the instructions are run in parallel when you did not specify that? It looks like CUDA for C to me, but there I knew it was parallel; this looks like synchronous CPU code, so how did it simply run in parallel for no reason?
CPU instructions can run on multiple ports and each instruction has a set of ports that it can run on.
@@LevelUppp nice to know! I actually never heard of CPU ports although studying computer science. I thought there is 1 instruction per thread and it only can predict instructions or do some special vector operations but I didn't know that you can do multiple operations in 1 thread simultaneously
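The key point is that the two counters form independent dependency chains, so an out-of-order CPU can issue both additions in the same cycle on different execution ports: instruction-level parallelism within a single thread, no extra threads involved. Schematically, in the video's paired loop:
// counterB's addition doesn't depend on counterA's result (and vice versa),
// so a superscalar core can execute both lines at once on different ports.
counterA += (elementA & 1) * elementA;
counterB += (elementB & 1) * elementB;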
Will p += 4; be more expensive than p = p + 4;? What do you think?
There should be no difference
@@LevelUppp I watched a video about the expense of the += statement. I will share it with you.
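For what it's worth, both forms compile to the same IL for an int local (quick sketch):
int p = 0;
p += 4;    // IL: ldloc.0, ldc.i4.4, add, stloc.0
p = p + 4; // IL: the exact same sequence; += is pure syntax sugar here.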
Doesn't the compiler do most of this when you run in release mode?
No
No, the compiler is a dummy 🙂
Is this just for C#? Because in C++, for example, the optimizing compilers have become quite sophisticated.
@@TheMusterionOfRock Correct, it's for C#. C++ has much better compilers, both GCC and Clang.
Please talk about the stack in C#. How does it work?
Sure I'll make a video about the stack.
My boss says my brain don't work too good. He has replaced me with a gorilla. An actual gorilla. We'll see how that works out. Anyways, good video. I'm also a bit concerned: are these optimizations dependable? Like, will they yield the correct results every time? Is there performance overhead?
To be honest, the majority of the difference is made solely by removing array bounds checks (40%) and removing branching (80%). The rest are cool, but not as spectacular.
Subscribed. Your channel seems an amazing place to start being more aware of what our code is actually doing.
Awesome.
Hello. I have been following you for a while.
I have a YouTube channel too. Can I translate this video to my language and give a reference to it (like scientific papers :))?
You can reference the video
The results for each tip you run seem to be different. Why is that?
I'm testing one thing at a time with each tip, so I'm not re-running old tips.
You're ruining readability, but at least it's a second faster.
How to achieve high performance in C# :
Rewrite it in C++
Most developers will end up with worse performance in C++ because they can't even perform fundamental optimizations in C#.
Is this source code posted anywhere?
Here is the source (a little bit improved):
using System;
using System.Diagnostics;
class Program
{
static void Main()
{
int[] array = new int[40000000];
Random r = new Random();
for (int i = 0; i < array.Length; i++)
array[i] = r.Next(int.MinValue, int.MaxValue);
int count;
Stopwatch sw = new Stopwatch();
sw.Start();
// Debug = 462 ms; Release = 218 ms
//count = SumOdd(array);
// Debug = 294 ms; Release = 123 ms
//count = SumOdd_Bit(array);
// Debug = 111 ms; Release = 19 ms
//count = SumOdd_Bit_Branchless(array);
// Debug = 85 ms; Release = 30 ms
//count = SumOdd_Bit_Branchless_Parallel(array);
// Debug = 83 ms; Release = 65 ms
//count = SumOdd_Bit_Branchless_Parallel_NoMult(array);
// Debug = 55 ms; Release = 28 ms
//count = SumOdd_Bit_Branchless_Parallel_NoChecks(array);
// Debug = 41 ms; Release = 16 ms
count = SumOdd_Bit_Branchless_Parallel_NoChecks_4Ports(array);
// Debug = 43 ms; Release = 17 ms
//count = SumOdd_Bit_Branchless_Parallel_NoChecks_4Ports_BetterPorts(array);
// Debug = 46 ms; Release = 19 ms
//count = SumOdd_Bit_Branchless_Parallel_NoChecks_4Ports_BetterPorts_NoMult(array);
sw.Stop();
Console.WriteLine($"{count} it took {sw.ElapsedMilliseconds} ms");
Console.ReadKey();
}
static int SumOdd(int[] array)
{
int counter = 0;
for (int i = 0; i < array.Length; i++)
{
int element = array[i];
if (element % 2 != 0)
counter += element;
}
return counter;
}
static int SumOdd_Bit(int[] array)
{
int counter = 0;
for (int i = 0; i < array.Length; i++)
{
int element = array[i];
if ((element & 1) == 1)
counter += element;
}
return counter;
}
static int SumOdd_Bit_Branchless(int[] array)
{
int counter = 0;
for (int i = 0; i < array.Length; i++)
{
int element = array[i];
int odd = element & 1;
counter += odd * element;
}
return counter;
}
static int SumOdd_Bit_Branchless_Parallel(int[] array)
{
int counterA = 0;
int counterB = 0;
for (int i = 0; i < array.Length; i+=2)
{
int elementA = array[i];
int elementB = array[i + 1];
int oddA = elementA & 1;
int oddB = elementB & 1;
counterA += oddA * elementA;
counterB += oddB * elementB;
}
return counterA + counterB;
}
static int SumOdd_Bit_Branchless_Parallel_NoMult(int[] array)
{
int counterA = 0;
int counterB = 0;
for (int i = 0; i < array.Length; i += 2)
{
int elementA = array[i];
int elementB = array[i + 1];
counterA += (elementA << 31 >> 31) & elementA;
counterB += (elementB << 31 >> 31) & elementB;
}
return counterA + counterB;
}
static unsafe int SumOdd_Bit_Branchless_Parallel_NoChecks_4Ports(int[] array)
{
// Sketch assuming the techniques the method name describes: a fixed
// pointer to skip bounds checks plus four independent counters so the
// additions can issue on four different CPU ports (requires /unsafe).
int counterA = 0, counterB = 0, counterC = 0, counterD = 0;
fixed (int* p = array)
{
for (int i = 0; i < array.Length; i += 4)
{
int a = p[i];
int b = p[i + 1];
int c = p[i + 2];
int d = p[i + 3];
counterA += (a & 1) * a;
counterB += (b & 1) * b;
counterC += (c & 1) * c;
counterD += (d & 1) * d;
}
}
return counterA + counterB + counterC + counterD;
}
}
Any use of this in a real-world use case?
Also, if you really want performance with this, you can use the arithmetic-series closed form:
var sum = n / 2 * (2 * a + (n - 1) * d);
(For the first n odd numbers, a = 1 and d = 2, so this reduces to n^2; it only applies when the data is an actual arithmetic sequence, though, not a random array.)
I have no idea what any of this means; clearly I'm still too green.
There's a point where readability is worth more than a tiny bit of performance
the point of writing high performance code is to flex in front of your teammates.
Why not just use C for performance? Code readability is more important than extra 30 milliseconds