Simple yet straight to the point, instant sub. Thanks mate.
That was the goal, glad it was helpful!
Dude read off of Wikipedia
Can you make a similar video for GPU vs Integrated GPU? Is there any difference in their architectures?
The main differences are the number of cores, processor speed, available power, and memory.
Integrated GPUs are part of another system component, usually the CPU nowadays. That limits the number of cores the GPU part can have, as well as their speed and power. They also have to use part of the system memory for their own tasks. Nonetheless, they're more energy efficient and more than powerful enough for most common tasks.
Discrete GPUs have their own dedicated processor and high-speed memory. Higher end GPUs typically require more power, sometimes even more than the rest of the system.
Don't forget the compute units.
@@RationalistRebel This is well explained.
Also, as you scale up any processor, there are diminishing efficiency gains. GPUs are often scaled up so much that they produce huge amounts of heat.
Often you need more cooling capacity for the GPU than the rest of the system put together.
@@HorizonOfHope Yep, power consumption == heat production.
An important thing to pay attention to when scaling up dedicated GPUs is your power supply. They consume more power, but that power has to come from somewhere. Never neglect your power supply when upgrading.
finally an explanation that I completely understand and not trying to sell me on anything!
Easiest explanation I've ever heard... there can't be a simpler explanation than this. Brilliant stuff.
Thank you! That's the goal!
Brilliant video! I started my IT Forensics study last week and will share this with the other students in my class!
Glad it was helpful!
Beautifully done. The best explanation I came across. Understood the core concepts you explained. Again, beautifully executed
Thanks for the kind words!
GPUs also have a higher tolerance for memory latency than CPUs.
Modern CPUs have some parallelism built in. Features such as branch prediction and out-of-order execution are common in almost all CPUs.
As an example... consider Pac-Man.
A grid of pixels is a sprite. Pac-Man has 2 basic sprite images: mouth open, and mouth closed. You need 4 copies of mouth open, for up/down/left/right, so 5 sprites total.
Movement is created by deleting the old sprite, then copying ONE of these from memory to some screen location. A source, a target, and a simple data copy give you animation. That's what the GPU does.
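For anyone who wants to see that concretely, here's a minimal sketch of the copy-a-sprite-into-the-screen step. It's plain host-side code with made-up sizes and names, not how any particular GPU actually implements it:

```cuda
#include <cstdint>
#include <vector>

// Minimal host-side sketch of sprite "blitting": copy a small pixel grid
// (the sprite) into a larger pixel grid (the screen) at position (x, y).
// All sizes and the transparent-colour value are made up for illustration.
constexpr int SCREEN_W = 320, SCREEN_H = 200;
constexpr int SPRITE_W = 16,  SPRITE_H = 16;
constexpr uint8_t TRANSPARENT = 0;               // palette index meaning "no pixel"

void blit_sprite(std::vector<uint8_t>& screen,   // SCREEN_W * SCREEN_H pixels
                 const uint8_t* sprite,          // SPRITE_W * SPRITE_H pixels
                 int x, int y)                   // caller keeps this in bounds
{
    for (int row = 0; row < SPRITE_H; ++row)
        for (int col = 0; col < SPRITE_W; ++col) {
            uint8_t p = sprite[row * SPRITE_W + col];
            if (p != TRANSPARENT)                // leave the background visible
                screen[(y + row) * SCREEN_W + (x + col)] = p;
        }
}

int main() {
    std::vector<uint8_t> screen(SCREEN_W * SCREEN_H, 0);
    std::vector<uint8_t> mouth_open_right(SPRITE_W * SPRITE_H, 3);  // dummy image
    // "Animation" as described above: erase at the old spot (redraw background),
    // then blit one of the pre-made images at the new spot, once per frame.
    blit_sprite(screen, mouth_open_right.data(), 100, 80);
    return 0;
}
```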
Not entirely correct. Some architectures can handle sprites, some can't. Some handle matrix multiplication and/or rotozooming, some don't.
What you described is what an IBM PC with MS-DOS does. No sprite handling, no rotozoom, no framebuffer. So you need four copies of the open mouth sprite, but you can generate them at run-time. And you need to delete and redraw the sprite with every frame.
Other systems, and I mean practically every other system, make that easier. You only need one open-mouth sprite, you can rotate it as needed, and you don't need to delete it if the sprite is handled in hardware, you just change its position. This is simple if the image is redrawn with each vertical scan interrupt.
But that was before GPUs. Modern GPUs use frame buffers. You have the choice of blanking the buffer to a chosen colour and then redrawing the scene with each iteration, or just drawing over it. The latter may be more efficient: 3D animation often draws environments, and 2D often uses sprite-based graphics, two cases where everything gets painted over anyway. And yes, for sprites that means copying the sprite pattern to the buffer, in a way that makes rotozooming trivial, so you don't need copies for different sizes and orientations either.
The mouse pointer is usually handled as a hardware sprite, which means it gets drawn each frame and is not part of the frame buffer.
What you also neglected to mention are colour palettes. There are, in essence, four ways of handling colour: true or indexed, with or without alpha blending. True colour just means using the entire colour space, typically RGB, which is usually 24 bits, that is 8 bits for each colour channel. 32 bits if you use alpha blending, too. This has implications for how a sprite is stored.

If you use a colour palette, you can either use a different sprite for each colour, or store the colour information with the sprite. Usually you would use the former, because it makes combining sprites and reusing them with different colours easier. With the latter, you can get animation effects simply by rotating the colour palette. If you use true colour, you can use a different sprite for each colour channel, but you typically wouldn't, especially if you use alpha blending.
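To make the indexed-colour / palette-rotation point concrete, here's a tiny sketch of how it can work. The 256-entry palette and the helper names are assumptions for illustration, not any specific hardware's scheme:

```cuda
#include <algorithm>
#include <array>
#include <cstdint>

// Sketch of indexed colour: each pixel stores a small index, and a separate
// palette maps that index to an actual RGB value. Rotating part of the
// palette changes how existing pixels look without touching the pixels
// themselves, which is the "animation by palette rotation" trick above.
struct RGB { uint8_t r, g, b; };

std::array<RGB, 256> palette{};           // 8-bit indexed colour: 256 entries

RGB resolve(uint8_t index) {              // conceptually what the display does
    return palette[index];
}

void rotate_palette(int first, int last) {
    // Cycle the entries in [first, last) by one slot, e.g. once per frame,
    // to animate water, fire or blinking lights without redrawing pixels.
    std::rotate(palette.begin() + first,
                palette.begin() + first + 1,
                palette.begin() + last);
}

int main() {
    for (int i = 0; i < 256; ++i)                     // fill a dummy palette
        palette[i] = RGB{uint8_t(i), uint8_t(i), uint8_t(i)};
    rotate_palette(16, 32);                           // animate entries 16..31
    return resolve(200).r == 200 ? 0 : 1;
}
```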
With the dawn of AI PCs, which always have a CPU, a GPU and an NPU:
can you make a similar video on the differences between a GPU and an NPU?
It's on the list!
A good follow-up video would be a similar short summary explainer on SoCs. And maybe one on the differences between the other key components, like media engines, NPUs, onboard vs. separate RAM, etc. Thanks. 🙏🏽
Great idea, I'll put it on the list!
Thank you... Simple and short....
Thank you!
Hey nice video!
Is it just that GPUs have more ALUs per cache and CU?
Or are the GPUs' ALUs different in structure?
Similar for the CUs and caches?
Very powerful and concise explanation. Keep up the good work.
Thank you!!
Straight to the point and explained perfectly 👍🏽
That was the goal thank you!
Thanks for this Great Explanation!!! 👍😁
Glad you found it helpful!
Beautifully explained. Thank you
Do you have a video talking about cores? In this example you show a core with 4 ALUs, and I don't quite understand how a single control unit can handle that.
Thank you so much! Simple, brief and easy to understand!! Awesome
Glad you enjoyed!
Excellent and well put together. Thank you.
Glad you liked it!
Very clear and helpful - thanks!
Thank you!!
How big is the latency difference between a CPU and a GPU, since it's stated that CPUs focus on latency rather than throughput?
Very clear, thank you
I subscribed to your channel to see more videos like this.
Thank you!!
The Intel Core 2 architecture can execute up to 4 integer instructions in parallel on each single core.
From my experience, unless I am running a game, my GPU is typically only very lightly utilised while the CPU might often be highly utilised. So when we are not running games (or similarly graphics-intensive applications), why do computers not send some of the (non-graphics) processing to the GPU to help out the CPU? Or does it already do this and I just don't realise?
That's already the case in some programs. The main reason why you don't see it that often is that a program must be designed to run (sub)tasks on the GPU. This can make a program a lot more complex really fast, since most programmers don't decide on their own which part of the program is calculated on which part of the computer. That's done by underlying frameworks (in most cases for good reasons).
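To give a feel for why this has to be designed in, here's a minimal CUDA sketch of offloading one trivial (sub)task. Error handling is omitted and the sizes are arbitrary; the point is just that the copies and the kernel launch are explicit, deliberate steps the programmer (or a framework) must write:

```cuda
#include <cuda_runtime.h>
#include <vector>

// Doubling every element of an array on the GPU. The point is not the maths:
// it is that someone has to allocate device memory, copy data across, launch
// a kernel, and copy the results back -- none of which happens by itself.
__global__ void double_all(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one GPU thread per element
    if (i < n) data[i] *= 2.0f;
}

int main() {
    std::vector<float> host(1 << 20, 1.0f);          // 1M floats, arbitrary size
    float* dev = nullptr;
    size_t bytes = host.size() * sizeof(float);

    cudaMalloc(&dev, bytes);                                       // device buffer
    cudaMemcpy(dev, host.data(), bytes, cudaMemcpyHostToDevice);   // copy over
    double_all<<<(host.size() + 255) / 256, 256>>>(dev, (int)host.size());
    cudaMemcpy(host.data(), dev, bytes, cudaMemcpyDeviceToHost);   // copy back
    cudaFree(dev);
    return 0;
}
```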
As the video made it a point to highlight, they excel at different things. Forcing a GPU to do a task it is not optimised to do would be less efficient and vice versa.
Which means Intel's fall is not that bad: if the GPU can't fully replace the CPU, then Intel will remain in the computing and personal PC market. Thanks for the explanation.
Excellent explanation. Thanks!.
Thank you!
Thank you so much 🤯❤️
You're welcome!
nice explanation
Thanks!!
Very good. I am subscribing. Thank you.
Thanks for the sub!
You are very welcome. You more than earned it. That may be the best explanation of CPUs and GPUs on YouTube. Please keep it up.
very simple and nice
Thank you 😊
Great!
Excellent!
Thank you!
helpful. thank you!
Thanks!
Holy Cow I’m ready to be a computer engineer. 👍🏾👌🏾🤓
When you stress the system by forcing a DOS game to play the intro at very high CPU cycles, you also force the system to turn on secret accelerating features that remain on. One is the sound acceleration of the sound device; it offloads the CPU from decompressing audio. The other is the floating point unit, which is off by default; when it is on, some games will become harder. Intel also has an autopilot for car games, which is off by default.
With the GPU the secret seems to be the diamond videos I am experimenting with; many games show diamonds as a reward, and they're hard to get.
Diamonds stress the VGA card, they're so complex to draw for the video chip. Also, the tuning will consume the battery faster.
Great! Thank you, but if you could slow down a little in your explanation... 😅
Yep will try!!
So let me ask a stupid question: why can't they combine the two and get the best of both worlds? Will doing so negate some functions, rendering it useless? Or... lol, just curious 🤔 😊
CPUs have few cores, which is why parallel processing is hard on them, but each individual core is extremely powerful.
GPUs have thousands of cores, making them good at parallel processing, but each individual core is comparatively not very powerful.
You could make a processor which has as many cores as a GPU with each core as powerful as a CPU's, but that would cost a huge amount of money.
The question is stupid. You can either get high performance in non-parallel tasks with low latency, or high performance in parallel tasks with high latency. If you extend a normal core with GPU features, it will instantly become large, power hungry and slow for normal tasks; that is why modern desktop CPUs have built-in but separate GPU cores. Also, you would need a lot of memory channels to make use of that: memory for GPUs is very slow, but has a lot of internal channels.
Can you combine a school bus with a Formula One car?
There, you have your answer.
APUs do that.
What do you think will happen if the heat of the CPU and a separate GPU were combined under just one cooling fan and heatsink? Plus greater voltage needs, resulting in more heat?
2:34 lol, *core* differences
If possible, explain why a GPU cannot replace a CPU. Great vid, but my old vic64 brain (yeah, I'm old) doesn't get this. Anyway, cheers from a new subscriber.
What about the Threads?
A thread is simply a logical sequence of instructions that gets mapped to a core by a scheduler (modern systems usually use both hardware and software scheduling these days), whether that's on a GPU or a CPU. The primary difference between GPU and CPU threads is that GPUs usually execute the same "copy" of each instruction across all threads running on the device (single instruction, multiple data, aka SIMD), whereas on a CPU the different threads can very easily execute all kinds of different instructions simultaneously (multiple instruction, multiple data, aka MIMD). In addition, more than one thread can be mapped to each core, whether on a CPU or a GPU, and when that happens, simultaneous multithreading (SMT) hardware is used to execute those threads at once.
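Here's a small sketch of that contrast, assuming a CUDA-capable toolchain. The kernel is the SIMD-style case (every GPU thread runs the same code on different data), while the two CPU threads run completely unrelated functions; the function names are made up purely for illustration:

```cuda
#include <cuda_runtime.h>
#include <thread>
#include <cstdio>

// GPU side: every thread in a launch runs the *same* kernel; the threads
// differ only in the data element their index points at (the SIMD/SIMT model).
__global__ void same_code_everywhere(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];
}

// CPU side: two std::threads can execute completely unrelated functions at
// the same time (MIMD) -- something a single GPU kernel launch does not do.
void parse_config()  { std::puts("parsing config on one CPU thread"); }
void serve_request() { std::puts("serving a request on another CPU thread"); }

int main() {
    std::thread a(parse_config), b(serve_request);
    a.join(); b.join();
    // Launching same_code_everywhere<<<blocks, threads>>>(...) would instead
    // put thousands of GPU threads to work on one identical instruction stream.
    return 0;
}
```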
No, you didn't explain anything, nor do you understand it; all you did was read a Wikipedia page. GPUs don't have "more cores", that's a marketing lie; what they have are very wide SIMD lanes. CPUs have them too, but theirs are smaller in exchange for bigger caches, higher frequencies and less heat.
Honestly, despite oversimplifying a bit (likely due to time constraints), this video isn't entirely incorrect. There are in fact more cores on average in a GPU; it's just that the architecture of each GPU core is completely different from a CPU core. Companies like Nvidia and AMD aren't lying when they talk about their CUDA core/stream processor counts; it's just that each core in a GPU serves a somewhat different purpose than each core in a CPU. Also, what you're saying about cache isn't really correct: AMD RDNA 2+ cards have a pretty big pool of last-level Infinity Cache that contributes a significant amount to overall GPU performance.
@@mattslams-windows7918 The problem is, speaking as a graphics programmer, that if I were learning this again, this video would tell me nothing. It's literally a presentation a high school student would do for homework: "CPUs do serial, GPUs do parallel". The funny thing is that the main differences are literally shown in the first image he showed, where you can see the number of ALUs (the SIMD lanes; they also have FPUs) in each, yet he didn't explain that because he has no clue what those images mean.
Now, GPUs do have slightly more cores than their CPU counterparts, but it's usually 1.5x-2x higher, not thousands of cores.
If you want the correct terminology:
a CPU core is equivalent to a streaming multiprocessor in Nvidia terms, a compute unit in AMD terms (funnily enough, AMD also refers to them as cores in their integrated graphics specifications), and a core in Apple terms;
a CPU thread is a warp in Nvidia terms and a wavefront in AMD terms;
a CPU SIMD lane is a CUDA core in Nvidia terms and a stream processor in AMD terms.
Now, for the cache thing you mentioned: you are probably using a gaming PC or console for the comparison. Those will usually have 4-8 core CPUs with 16-32 core GPUs, and in games single-core performance matters more (usually because most gamedevs don't know how to multithread, haha).
If you want a more even comparison, take the 12-core Ryzen 9 5900X and the 16-core RX 6500, which have roughly similar power consumption: L3 cache is 64 MB on the CPU and 16 MB on the GPU, L2 cache is 6 MB on the CPU and 1 MB on the GPU, and L1 cache is 768 KB on the CPU and 128 KB on the GPU. If you get a GPU with a higher core count, you will notice that L3 cache increases a lot but L1 cache stays the same; this is because L3 cache is a shared memory pool for all of the cores within the GPU or CPU, while L2 and L1 cache are local to the core.
Anyway, that was a long reply; hopefully that answered your questions xD.
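If you have the CUDA toolkit installed, you can see the core-vs-lane distinction for yourself. A minimal sketch; note that the 128 lanes-per-SM figure is an assumption that only holds for some recent NVIDIA architectures, not a universal constant:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Print the number of streaming multiprocessors (the unit the reply above
// equates with a "CPU core") versus the marketing "CUDA core" count, which
// is roughly SMs multiplied by the SIMD lanes per SM.
int main() {
    cudaDeviceProp prop{};
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::puts("no CUDA device found");
        return 1;
    }
    int assumed_lanes_per_sm = 128;   // true for several recent GPUs, not all
    std::printf("SMs (core-like units): %d\n", prop.multiProcessorCount);
    std::printf("approx. 'CUDA cores': %d\n",
                prop.multiProcessorCount * assumed_lanes_per_sm);
    return 0;
}
```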
@@mattslams-windows7918 lol my reply was removed
@@Theawesomeking4444 it wasn't?
@@JosGeerink Nah, I had another reply in which I explained the technical details, but you can't state facts with proof here, unfortunately.
I'm just worried they set him up for failure. Those investors gutted the company and sold the most valuable asset, the land they owned. He has my support; I'll make sure to either stop by there with a group of friends or start getting some healthy takeout from them. Those bankruptcy investors should be stopped. Just look at how they plundered Toys R Us.
Can a CPU have something like CUDA?
Depends on your definition of "have": if Nvidia makes a GPU driver that supports the CPU in question, then technically one can combine an Nvidia GPU and that CPU in the same computer to run CUDA stuff. But executing GPU CUDA code on the CPU itself is something that Nvidia probably doesn't want people to do, since Nvidia likely wants to keep making money on their GPU sales, so executing GPU CUDA code on a CPU will likely not be a thing anytime soon.
Still not clear to me: what component is the GPU missing so that it cannot replace a CPU?
Ah, just checked with Perplexity AI: the instruction set that a GPU accepts is too limited to make a GPU a replacement for a CPU.
The GPU is like a PhD holder, while the CPU is the jack of all trades.
Look at the names: the GPU is the Graphics Processing Unit, the CPU is the Central Processing Unit.
A GPU can outperform a CPU's computing power in a singular task like graphics computing or any other given task; while the CPU can't compute as fast as the GPU, the CPU can compute different tasks simultaneously.
Most of the things in this video are BS. A GPU could replace a CPU, but it would work many times slower for most tasks while taking more power. Only on highly parallel tasks is it efficient and fast. Also, it is missing a lot of features needed to control hardware, like proper interrupt management etc.
@@trevoro.9731 Why is the GPU slower than the CPU for most tasks?
@@avalagum7957 It is optimized to consume a minimal amount of energy and perform multiple calculations per cycle, but each calculation takes much longer to finish, up to 100 times slower. All those parallel operations go to waste if you don't need to perform the exact same operation on multiple entries.
Also, its memory is way slower than that of the CPU (although CPU memory is also not very fast; it merely got ~30% faster over the last 20 years, but gained a lot of internal channels). Because it has so many internal channels, it is efficient at processing large amounts of data that do not need high performance for each individual dataset.
What is it about AI that requires intense parallel computation?
In neural networks it's all about matrix multiplications, and we also want to pass multiple inputs through the network (each pass also performs a number of matrix multiplications). With a GPU we can perform the passes of different inputs through the neural network in parallel, instead of doing just one input at a time, which speeds up the computations.
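Here's a naive sketch of that workload in CUDA: one fully connected layer applied to a whole batch of inputs at once, with one GPU thread per output element. The layer sizes are made up, the buffers are left uninitialised, and real frameworks would call tuned libraries like cuBLAS/cuDNN rather than a hand-written kernel; this only shows why the work parallelises so well:

```cuda
#include <cuda_runtime.h>

// out = in * W for a whole batch of inputs, one GPU thread per output value.
__global__ void layer_forward(const float* in,   // batch x n_in
                              const float* W,    // n_in  x n_out
                              float*       out,  // batch x n_out
                              int batch, int n_in, int n_out)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;  // which input in the batch
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // which output neuron
    if (row < batch && col < n_out) {
        float acc = 0.0f;
        for (int k = 0; k < n_in; ++k)
            acc += in[row * n_in + k] * W[k * n_out + col];
        out[row * n_out + col] = acc;
    }
}

int main() {
    const int batch = 64, n_in = 512, n_out = 256;    // made-up layer sizes
    float *in, *W, *out;
    cudaMalloc(&in,  batch * n_in  * sizeof(float));
    cudaMalloc(&W,   n_in  * n_out * sizeof(float));
    cudaMalloc(&out, batch * n_out * sizeof(float));
    dim3 block(16, 16);
    dim3 grid((n_out + 15) / 16, (batch + 15) / 16);
    // Every one of batch*n_out outputs is independent, so thousands of GPU
    // threads can compute them simultaneously.
    layer_forward<<<grid, block>>>(in, W, out, batch, n_in, n_out);
    cudaDeviceSynchronize();
    cudaFree(in); cudaFree(W); cudaFree(out);
    return 0;
}
```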
CISC vs RISC plz
It's on the list!
@@TechPrepYT yay~!!!
You are wrong about many things.
Modern GPUs aren't actually good at performing operations on each single pixel. They are far behind the CPU on that, but they can work with large groups of pixels more efficiently.
No, modern high-end GPUs have 32-64 cores (top ones like the 4090 have 128 cores). The marketing core count is a lie.
No, the thread count is a lie too; the actual number is hundreds of times lower. Those fake threads are parallel execution units; they are not threads, they are the same code working over a very large array. Each single core can run 1 or 2 actual threads, therefore the number of threads for high-end GPUs is usually limited to 128 or so.
Only because of repetitive operations are GPUs faster in some tasks; in general they are much slower than even a crappy processor.
A CPU can never be fully replaced by a GPU, so what happened now between Intel and Nvidia!?
Does this mean that the GPU is just a better version of the CPU, or say a faster version of the CPU which can do many calculations at a time? Then why don't we use two GPUs instead of a CPU and a GPU?
A GPU only performs better if the task being performed can be parallelized, but the majority of tasks can't be parallelized or don't need parallel computation, so they would be slower on a GPU. The main power a GPU gives us is parallelization; if a task can't exploit it, the overhead will make the GPU even slower than the CPU.
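A minimal CUDA sketch of that overhead, with made-up sizes and no error handling: for a task this tiny, the allocation, the two copies and the kernel launch cost far more than the work itself, which the CPU loop finishes almost instantly:

```cuda
#include <cuda_runtime.h>

// Adding 1.0 to just 100 numbers: the CPU loop below is essentially free,
// while the GPU path still pays for a device allocation, two copies across
// the bus, and a kernel launch -- the fixed "overhead" mentioned above.
__global__ void add_one(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
}

int main() {
    const int n = 100;
    float host[n] = {};

    // CPU version: a trivial loop, done in nanoseconds.
    for (int i = 0; i < n; ++i) host[i] += 1.0f;

    // GPU version of the same thing: every step below is pure overhead
    // relative to the size of the work.
    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    add_one<<<1, n>>>(dev, n);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    return 0;
}
```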
No, a GPU is like a factory with 100 workers. A CPU is like a medical practice with 4 doctors. Neither can do the other's job.
Since each core is similar to a CPU, can we say that it has multiple CPU units?
👍
Am I the only one who sees dots in the GPU drawing?
No you're not! It's an optical illusion
Still can't understand why the GPU can do what it does.
Bad video.
You have a low IQ, so you struggle with basic information; it's not your fault. Listen again at 0.5x speed and try to comprehend the essence of what he is saying. Take notes as he lists what each of them does, and then highlight the differences between the two. Make sure you make an effort to understand the words he is using, ask yourself "what does he mean when he says this", and try to formulate it in your own words.
I think pretty much everyone knows the difference between a GPU and a CPU; the most useful information here would be the "why": why the GPU cannot be used as a CPU.
Thanks for the foolization.
We in Russia don't have enough of our own Russian foolization, so we need to be foolized by American trainers.
RAM actually stands for “Read And write Memory” … 😉
It doesn’t - it’s random access memory
stupid 😂
Imagine trying to correct someone whilst having the IQ of a carrot.
nice ⛑⛑helpful
Thanks!
In the future maybe we will have only one thing... a CGPU.
central graphics processing unit?
CPU/GPU combo units exist with ARM CPUs.
@@thebtm also with Intel & AMD's APUs