CPU vs GPU | Simply Explained

  • Published on Nov 29, 2024

Comments • 128

  • @AungBaw
    @AungBaw 2 months ago +74

    Simple yet short and to the point, instant sub, thanks mate.

    • @TechPrepYT
      @TechPrepYT  2 months ago +5

      That was the goal, glad it was helpful!

    • @chikenadobo
      @chikenadobo 19 days ago

      Dude read off of Wikipedia

  • @rajiiv00
    @rajiiv00 3 months ago +107

    Can you make a similar video for GPU vs Integrated GPU? Is there any difference in their architectures?

    • @RationalistRebel
      @RationalistRebel 2 months ago +39

      The main differences are the number of cores, processor speed, available power, and memory.
      Integrated GPUs are part of another system component, usually the CPU nowadays. That limits the number of cores the GPU part can have, as well as their speed and power. They also have to use part of the system memory for their own tasks. Nonetheless, they're more energy efficient and more than powerful enough for most common tasks.
      Discrete GPUs have their own dedicated processor and high-speed memory. Higher end GPUs typically require more power, sometimes even more than the rest of the system.

    • @mohd5rose
      @mohd5rose 2 months ago +2

      Don't forget the compute units.

    • @HorizonOfHope
      @HorizonOfHope 2 months ago +6

      @@RationalistRebel This is well explained.
      Also, as you scale up any processor, there are diminishing efficiency gains. GPUs are often scaled up so much that they produce huge amounts of heat.
      Often you need more cooling capacity for the GPU than the rest of the system put together.

    • @RationalistRebel
      @RationalistRebel 2 months ago +3

      @@HorizonOfHope Yep, power consumption == heat production.

    • @mauriciofreitas3384
      @mauriciofreitas3384 2 months ago +2

      An important thing to pay attention to when scaling up to a dedicated GPU is your power supply. They consume more power, but that power has to come from somewhere. Never neglect your power supply when upgrading.

  • @kimdunphy2009
    @kimdunphy2009 2 months ago +19

    Finally an explanation that I completely understand and that isn't trying to sell me anything!

  • @anotherfpsplayer
    @anotherfpsplayer 2 months ago +3

    Easiest explanation I've ever heard... there can't be a simpler explanation than this... Brilliant stuff...

    • @TechPrepYT
      @TechPrepYT  2 months ago +1

      Thank you! That's the goal!

  • @samoerai6807
    @samoerai6807 2 months ago +4

    Brilliant video! I started my IT Forensics study last week and will share this with the other students in my class!

    • @TechPrepYT
      @TechPrepYT  2 months ago +1

      Glad it was helpful!

  • @Zeqerak
    @Zeqerak 2 months ago +8

    Beautifully done. The best explanation I've come across. Understood the core concepts you explained. Again, beautifully executed.

    • @TechPrepYT
      @TechPrepYT  2 months ago +1

      Thanks for the kind words!

  • @technicallyme
    @technicallyme 2 months ago +2

    GPUs also have a higher tolerance for memory latency than CPUs.
    Modern CPUs have some parallelism built in. Features such as branch prediction and out-of-order execution are common in almost all CPUs.

  • @Soupie62
    @Soupie62 2 months ago +1

    As an example... consider Pac-Man.
    A grid of pixels is a sprite. Pac-Man has 2 basic sprite images: mouth open and mouth closed. You need 4 copies of mouth open, for up/down/left/right, so 5 sprites total.
    Movement is created by deleting the old sprite, then copying ONE of these from memory to some screen location. A source, a target, and a simple data copy gives you animation. That's what the GPU does.
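    As a toy sketch of that copy-based animation (NumPy arrays standing in for video memory; the sprite patterns and sizes here are made-up placeholders):

    ```python
    # Animation as a plain memory copy: erase the old sprite, blit the new one.
    import numpy as np

    H = W = 8                                    # sprite size (arbitrary)
    frame = np.zeros((32, 32), dtype=np.uint8)   # tiny 1-byte-per-pixel "screen"

    mouth_open   = np.ones((H, W), dtype=np.uint8)     # placeholder pixel art
    mouth_closed = np.full((H, W), 2, dtype=np.uint8)

    def erase(x, y):
        frame[y:y+H, x:x+W] = 0                  # "delete" = copy background over it

    def draw(sprite, x, y):
        frame[y:y+H, x:x+W] = sprite             # source, target, simple data copy

    erase(0, 0)
    draw(mouth_open, 1, 0)                       # Pac-Man shifts one pixel right
    erase(1, 0)
    draw(mouth_closed, 2, 0)                     # next step: mouth closes as he moves
    ```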

    • @davidwuhrer6704
      @davidwuhrer6704 2 months ago

      Not entirely correct. Some architectures can handle sprites, some can't. Some handle matrix multiplication and/or rotozooming, some don't.
      What you described is what an IBM PC with MS-DOS does. No sprite handling, no rotozoom, no framebuffer. So you need four copies of the open mouth sprite, but you can generate them at run-time. And you need to delete and redraw the sprite with every frame.
      Other systems, and I mean practically every other system, make that easier. You only need one open mouth sprite, and you can rotate it as needed, and you don't need to delete it if the sprite is handled in hardware, you just change its position. This is simple if the image is redrawn with each vertical scan interrupt.
      But that was before GPUs. Modern GPUs use frame buffers. You have the choice of blanking the buffer to a chosen colour and then redrawing the scene with each iteration, or just drawing over it. The latter may be more efficient: 3D animation often draws environments, and 2D often uses sprite-based graphics, two cases where everything gets painted over anyway. And yes, for sprites that means copying the sprite pattern to the buffer, in a way that makes rotozooming trivial, so you don't need copies for different sizes and orientations either.
      The mouse pointer is usually handled as a hardware sprite, which means it gets drawn each frame and is not part of the frame buffer.
      What you also neglected to mention are colour palettes. There are, in essence, four ways of handling colour: true or indexed, with or without alpha blending. True colour just means using the entire colour space, typically RGB, which is usually 24 bits, that is 8 bits for each colour channel; 32 bits if you use alpha blending, too. This has implications for how a sprite is stored. If you use a colour palette, you can either use a different sprite for each colour or store the colour information with the sprite. Usually you would use the former, because it makes combining sprites and reusing them with different colours easier. With the latter, you can get animation effects simply by rotating the colour palette. If you use true colour, you can use a different sprite for each colour channel, but you typically wouldn't, especially if you use alpha blending.
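      To make the palette idea concrete, here is a minimal indexed-colour sketch (NumPy only; the 4-entry palette and image size are arbitrary examples):

      ```python
      # Indexed colour: pixels store palette indices, not RGB values, so rotating
      # the palette animates every pixel without touching the image data.
      import numpy as np

      palette = np.array([(0, 0, 0), (255, 0, 0),
                          (255, 128, 0), (255, 255, 0)], dtype=np.uint8)
      image = np.random.randint(0, 4, size=(16, 16))    # each pixel = an index

      def render(image, palette):
          return palette[image]                 # index lookup -> (16, 16, 3) RGB

      def rotate_palette(palette):
          # Shift the non-background entries by one position.
          return np.concatenate([palette[:1], np.roll(palette[1:], 1, axis=0)])

      frame0 = render(image, palette)
      frame1 = render(image, rotate_palette(palette))   # same image, new colours
      ```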

  • @kartikpodugu
    @kartikpodugu 2 months ago +11

    With the dawn of AI PCs, which always have a CPU, a GPU and an NPU,
    can you make a similar video on the differences between GPU and NPU?

    • @TechPrepYT
      @TechPrepYT  2 months ago +1

      It's on the list!

  • @AyoHues
    @AyoHues 2 months ago +6

    A good follow-up video would be a similar short summary explainer on SoCs. And maybe one on the differences between the other key components like media engines, NPUs, onboard vs separate RAM, etc. Thanks. 🙏🏽

    • @TechPrepYT
      @TechPrepYT  2 months ago +3

      Great idea, I'll put it on the list!

  • @gibiks7036
    @gibiks7036 3 months ago +7

    Thank you... Simple and short....

    • @TechPrepYT
      @TechPrepYT  3 months ago +1

      Thank you!

  • @GigaMarou
    @GigaMarou 23 days ago

    Hey nice video!
    Is it just that GPUs have more ALUs for each cache and CU?
    Or are the GPU's ALUs different in structure?
    Similarly for CUs and caches?

  • @JimStanfield-zo2pz
    @JimStanfield-zo2pz 5 months ago +13

    Very powerful and concise explanation. Keep up the good work.

    • @TechPrepYT
      @TechPrepYT  2 months ago +1

      Thank you!!

  • @simonpires6184
    @simonpires6184 2 months ago +1

    Straight to the point and explained perfectly 👍🏽

    • @TechPrepYT
      @TechPrepYT  2 months ago

      That was the goal, thank you!

  • @markonar140
    @markonar140 1 month ago +2

    Thanks for this great explanation!!! 👍😁

    • @TechPrepYT
      @TechPrepYT  1 month ago +1

      Glad you found it helpful!

  • @VashdyTV
    @VashdyTV 6 months ago +7

    Beautifully explained. Thank you

  • @juanclopgar97
    @juanclopgar97 1 month ago

    Do you have a video talking about cores? In this example you show a core with 4 ALUs, and I don't quite understand how a single CONTROL UNIT can handle that.

  • @atleast2minutes916
    @atleast2minutes916 2 months ago +1

    Thank you so much! Simple, brief and easy to understand!! Awesome.

    • @TechPrepYT
      @TechPrepYT  2 months ago

      Glad you enjoyed!

  • @olliehopnoodle4628
    @olliehopnoodle4628 1 month ago +1

    Excellent and well put together. Thank you.

    • @TechPrepYT
      @TechPrepYT  1 month ago

      Glad you liked it!

  • @lodgechant
    @lodgechant 2 months ago +1

    Very clear and helpful - thanks!

    • @TechPrepYT
      @TechPrepYT  2 months ago

      Thank you!!

  • @natcuber
    @natcuber 2 months ago

    How big is the latency difference between a CPU and a GPU, since it's stated that CPUs focus on latency rather than throughput?

  • @dinhomhm
    @dinhomhm 6 months ago +5

    Very clear, thank you
    I subscribed to your channel to see more videos like this.

    • @TechPrepYT
      @TechPrepYT  5 months ago +1

      Thank you!!

  • @maxmuster7003
    @maxmuster7003 2 months ago +1

    The Intel Core 2 architecture can execute up to 4 integer instructions in parallel on each single core.

  • @ruan13o
    @ruan13o 2 months ago

    From my experience, unless I am running a game, my GPU is typically only lightly utilised while the CPU might often be highly utilised. So when we are not running games (or similarly intensive graphics applications), why don't computers send some of the (non-graphics) processing to the GPU to help out the CPU? Or does it already do this and I just don't realise?

    • @christophandre
      @christophandre 2 months ago

      That's already the case in some programs. The main reason you don't see it that often is that a program must be designed to run (sub)tasks on the GPU. This can make a program a lot more complex really fast, since most programmers don't decide on their own which part of the program is calculated on which part of the computer; that's done by underlying frameworks (in most cases for good reasons).
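      For instance, with an array framework like CuPy (assuming the cupy package and a CUDA-capable GPU; the array size is an arbitrary example), the offload is an explicit decision in the code:

      ```python
      # The same reduction on the CPU (NumPy) and on the GPU (CuPy). Note that
      # placing work on the GPU, and copying data there and back, is explicit.
      import numpy as np
      import cupy as cp   # assumes a CUDA GPU and the cupy package

      x_cpu = np.random.rand(10_000_000)
      result_cpu = float(np.sqrt(x_cpu).sum())   # runs on the CPU

      x_gpu = cp.asarray(x_cpu)                  # explicit copy: host RAM -> GPU
      result_gpu = float(cp.sqrt(x_gpu).sum())   # same maths, now on the GPU
      ```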

    • @Trickey2413
      @Trickey2413 2 months ago

      As the video made it a point to highlight, they excel at different things. Forcing a GPU to do a task it is not optimised for would be less efficient, and vice versa.

  • @samychihi6317
    @samychihi6317 2 months ago

    Which means Intel's fall is not that bad: if the GPU can't fully replace the CPU, then Intel will remain in the market of computing and personal PCs. Thanks for the explanation.

  • @StopWhining491
    @StopWhining491 2 months ago +1

    Excellent explanation. Thanks!

    • @TechPrepYT
      @TechPrepYT  2 months ago

      Thank you!

  • @gfmarshall
    @gfmarshall 2 months ago +1

    Thank you so much 🤯❤️

    • @TechPrepYT
      @TechPrepYT  2 months ago

      You're welcome!

  • @chandru_
    @chandru_ 1 month ago +1

    nice explanation

    • @TechPrepYT
      @TechPrepYT  1 month ago +1

      Thanks!!

  • @waynestewart1919
    @waynestewart1919 2 months ago +1

    Very good. I am subscribing. Thank you.

    • @TechPrepYT
      @TechPrepYT  2 months ago

      Thanks for the sub!

    • @waynestewart1919
      @waynestewart1919 2 months ago +1

      You are very welcome. You more than earned it. That may be the best explanation of CPUs and GPUs on YouTube. Please keep it up.

  • @mohamadalkavmi4932
    @mohamadalkavmi4932 2 months ago +1

    very simple and nice

    • @TechPrepYT
      @TechPrepYT  2 months ago

      Thank you 😊

  • @buckyzona
    @buckyzona 6 months ago +4

    Great!

  • @PhillyHank
    @PhillyHank 2 months ago +1

    Excellent!

    • @TechPrepYT
      @TechPrepYT  2 months ago

      Thank you!

  • @abhay626
    @abhay626 4 months ago +2

    helpful. thank you!

    • @TechPrepYT
      @TechPrepYT  4 months ago

      Thanks!

  • @siriusbizniss
    @siriusbizniss 2 months ago

    Holy Cow I’m ready to be a computer engineer. 👍🏾👌🏾🤓

  • @jozsiolah1435
    @jozsiolah1435 2 months ago

    When you stress the system by forcing a DOS game to play the intro at very high CPU cycles, you also force the system to turn on secret accelerating features that remain on. One is the sound acceleration of the sound device, which offloads the CPU from decompressing audio. The other is the floating point unit, which is off by default; when it is on, some games will become harder. Intel has an autopilot for car games, which is also off by default.
    With the GPU, the secret seems to be the diamond videos I am experimenting with; many games show diamonds as a reward, and they are hard to get.
    A diamond stresses the VGA card, as it's so complex for the video chip to draw. Also, the tuning will consume the battery faster.

  • @maquestiauartandmore
    @maquestiauartandmore 2 months ago +1

    Great! Thank you, but if you could slow down a little in your explanation... 😅

    • @TechPrepYT
      @TechPrepYT  1 month ago

      Yep will try!!

  • @illicitryan
    @illicitryan 2 months ago +2

    So let me ask a stupid question: why can't they combine the two and get the best of both worlds? Will doing so negate some functions, rendering it useless? Or... lol, just curious 🤔 😊

    • @riteshdobhal6381
      @riteshdobhal6381 2 months ago +6

      CPUs have few cores, which is why parallel processing is hard on them, but each individual core is extremely powerful.
      GPUs have thousands of cores, making them good at parallel processing, but each individual core is comparatively not very powerful.
      You could make a processor which has as many cores as a GPU, with each core as powerful as a CPU's, but that would cost a huge amount of money.
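      As a loose analogy in code (NumPy's bulk operations standing in for the many-simple-cores model, a plain Python loop for the one-strong-core model; the array size is arbitrary):

      ```python
      # Rough analogy only: one element at a time vs the whole array per step.
      import time
      import numpy as np

      data = np.random.rand(5_000_000)

      t0 = time.perf_counter()
      total = 0.0
      for v in data:                    # "few powerful cores": element by element
          total += v * v
      t1 = time.perf_counter()

      total_vec = float((data * data).sum())   # "many simple cores": all at once
      t2 = time.perf_counter()

      print(f"loop: {t1-t0:.2f}s  vectorised: {t2-t1:.4f}s")
      ```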

    • @trevoro.9731
      @trevoro.9731 2 months ago

      The question is stupid. You can either get high performance in non-parallel tasks with low latency, or high performance in parallel tasks with high latency. If you extend a normal core with GPU features, it will instantly become large, power hungry and slow for normal tasks; that is why modern desktop CPUs have built-in separate GPU cores. Also, the problem is that you would need a lot of memory channels to make use of that: memory for GPUs is very slow, but has a lot of internal channels.

    • @zoeynewark9774
      @zoeynewark9774 2 months ago +1

      Can you combine a school bus with a Formula One car?
      There, you have your answer.

    • @boltez6507
      @boltez6507 2 months ago +1

      APUs do that.

    • @keithjustinevirgenes7387
      @keithjustinevirgenes7387 2 months ago

      What do you think will happen if the heat of the CPU and a separate GPU is combined with just one cooling fan and heatsink? Plus greater voltage needs, resulting in more heat?

  • @mostlydaniel
    @mostlydaniel 2 months ago +3

    2:34 lol, *core* differences

  • @mrpappa4105
    @mrpappa4105 2 months ago

    If possible, explain why a GPU cannot replace a CPU. Great vid, but my old vic64 brain (yeah, I'm old) doesn't get this. Anyway, cheers from a new subscriber.

  • @lanceorventech6129
    @lanceorventech6129 2 months ago

    What about threads?

    • @mattslams-windows7918
      @mattslams-windows7918 2 months ago

      A thread is simply a logical sequence of instructions that gets mapped to a core by a scheduler (modern systems usually use both hardware and software scheduling), whether on a GPU or a CPU, for execution. The primary difference between GPU and CPU threads is that a GPU usually executes the same "copy" of each instruction across all threads running on the device (single instruction, multiple data, aka SIMD), whereas on a CPU the different threads can very easily execute all kinds of different instructions simultaneously (multiple instruction, multiple data, aka MIMD). In addition, more than one thread can be mapped to each core, whether CPU or GPU, and in that case simultaneous multithreading (SMT) hardware is used to execute all of those threads at once.
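      A toy illustration of that SIMD/MIMD distinction (NumPy plays the SIMD role, Python threads the MIMD role; the functions are made up for the example):

      ```python
      # SIMD flavour: one instruction applied across many data elements at once.
      import threading
      import numpy as np

      data = np.arange(8)
      simd_result = data * 2            # the same operation hits every element

      # MIMD flavour: independent threads executing different instructions.
      results = {}
      def square(): results["square"] = [x * x for x in range(8)]
      def negate(): results["negate"] = [-x for x in range(8)]

      threads = [threading.Thread(target=square), threading.Thread(target=negate)]
      for t in threads: t.start()
      for t in threads: t.join()
      ```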

  • @Theawesomeking4444
    @Theawesomeking4444 2 months ago +36

    No, you didn't explain anything, nor do you understand it; all you did was read a Wikipedia page. GPUs don't have "more cores", that's a marketing lie. What they have are very wide SIMD lanes; CPUs have them too, but theirs are smaller in exchange for bigger caches, higher frequency and less heat.

    • @mattslams-windows7918
      @mattslams-windows7918 2 months ago +11

      Honestly, this video, despite oversimplifying a bit (likely due to time constraints), isn't entirely incorrect. There are in fact more cores on average in a GPU; it's just that the architecture of each GPU core is completely different from a CPU core. Companies like Nvidia and AMD aren't lying when they talk about their CUDA core/stream processor counts; it's just that each core in a GPU serves somewhat different purposes than each core in a CPU. Also, what you're saying about cache isn't really correct: AMD RDNA 2+ cards have a pretty big pool of last-level Infinity Cache that contributes a significant amount to the overall GPU performance.

    • @Theawesomeking4444
      @Theawesomeking4444 2 months ago +1

      @@mattslams-windows7918 The problem is, as a graphics programmer, if I were to learn it again, this video tells me nothing; it's literally a presentation a high school student would do for homework: "CPUs do serial, GPUs do parallel". The funny thing is that the main differences are literally shown in the first image he showed, where it shows the number of ALUs (the SIMD lanes; they also have FPUs) of each, yet he didn't explain that because he has no clue what those images mean.
      Now, GPUs do have slightly more cores than their CPU counterparts, but it's usually 1.5x-2x higher, not thousands of cores.
      If you want the correct terminology:
      a CPU core is equivalent to a streaming multiprocessor in Nvidia, a compute unit in AMD (funnily enough, AMD also refers to them as cores on their integrated graphics specifications), and a core in Apple;
      a CPU thread is a warp in Nvidia and a wavefront in AMD;
      a CPU SIMD lane is a CUDA core in Nvidia and a stream processor in AMD.
      Now, for the cache thing you mentioned: you are probably using a gaming PC or console for the comparison; they will usually have 4-8 core CPUs with 16-32 core GPUs, and in games single core performance matters more (usually because most gamedevs don't know how to multithread haha).
      If you want a more even comparison, take the Ryzen 9 5900X with 12 cores and the RX 6500 with 16 cores, at roughly similar power consumption: L3 cache is 64MB on the CPU and 16MB on the GPU, L2 cache is 6MB on the CPU and 1MB on the GPU, L1 cache is 768KB on the CPU and 128KB on the GPU. Now, if you get a GPU with higher core counts, you will notice that L3 cache increases a lot but L1 cache stays the same; this is because L3 cache is a shared memory pool for all of the cores within the GPU and CPU, while L2 and L1 cache are local to the core.
      Anyway, that was a long reply; hopefully that answered your questions xD.

    • @Theawesomeking4444
      @Theawesomeking4444 2 months ago +2

      @@mattslams-windows7918 lol my reply was removed

    • @JosGeerink
      @JosGeerink 2 months ago

      @@Theawesomeking4444 It wasn't?

    • @Theawesomeking4444
      @Theawesomeking4444 2 months ago +3

      @@JosGeerink Nah, I had another reply in which I explained the technical details, but you can't state facts with proof here, unfortunately.

  • @zoemayne
    @zoemayne 1 month ago +1

    I'm just worried they set him up for failure. Those investors gutted the company and sold the most valuable asset: the land they owned. He has my support; I'll make sure to either stop by there with a group of friends or start getting some healthy takeout from them. Those bankruptcy investors should be stopped. Just look how they plundered Toys R Us.

  • @jlelelr
    @jlelelr 2 months ago

    Can a CPU have something like CUDA?

    • @mattslams-windows7918
      @mattslams-windows7918 2 months ago

      Depends on your definition of "have": if Nvidia makes a GPU driver that supports the CPU in question, then technically one can combine an Nvidia GPU and that CPU in the same computer to run CUDA stuff. But executing GPU CUDA code on the CPU itself is something that Nvidia probably doesn't want people to do, since Nvidia likely wants to keep making money on their GPU sales, so executing GPU CUDA code on a CPU will likely not be a thing anytime soon.

  • @avalagum7957
    @avalagum7957 2 months ago +1

    Still not clear to me: what component is the GPU missing so that it cannot replace a CPU?
    Ah, just checked with Perplexity AI: the instruction set that a GPU accepts is too limited to make a GPU a replacement for a CPU.

    • @nakkabadz6443
      @nakkabadz6443 2 months ago

      The GPU is like a PhD holder while the CPU is a jack of all trades.
      Look at the names: GPU is Graphics Processing Unit, CPU is Central Processing Unit.
      A GPU can outperform a CPU's computing power on a singular task like graphics computing or any other given task, while the CPU, although it can't compute as fast as the GPU, can handle different tasks simultaneously.

    • @trevoro.9731
      @trevoro.9731 2 months ago +1

      Most of the things in this video are BS. A GPU can replace a CPU, but it will work many times slower for most tasks while taking more power. Only on highly parallel tasks is it efficient and fast. Also, it is missing a lot of features for controlling hardware, like proper interrupt management, etc.

    • @avalagum7957
      @avalagum7957 2 months ago

      @@trevoro.9731 Why is the GPU slower than the CPU for most tasks?

    • @trevoro.9731
      @trevoro.9731 2 months ago

      @@avalagum7957 It is optimized to consume a minimum amount of energy and perform multiple calculations per cycle, but the calculations take much longer to finish, up to 100 times slower. All those parallel operations go to waste if you don't need to perform the exact same operation on multiple entries.
      Also, its memory is way slower than that of the CPU (albeit CPU memory is also not very fast; it merely got ~30% faster over the last 20 years), but it contains a lot of internal channels, so it is efficient at processing large amounts of data which do not need high performance for each dataset.

  • @Norman-z3s
    @Norman-z3s 2 months ago

    What is it about AI that requires intense parallel computation?

    • @undercover4874
      @undercover4874 2 months ago +3

      In neural networks it's all about matrix multiplications, and we also want to pass multiple inputs through the network (each pass also performs a number of matrix multiplications). With a GPU we can perform the passes of different inputs through the neural network in parallel instead of doing just one input at a time, which can speed up the computations.
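      Concretely (a minimal NumPy sketch; the layer shapes and batch size are arbitrary examples), a whole batch of inputs goes through one layer as a single matrix multiplication:

      ```python
      # One neural-network layer applied to a whole batch in one matmul:
      # every row of `batch` is an independent input, processed together.
      import numpy as np

      batch   = np.random.rand(64, 784)    # 64 inputs, 784 features each
      weights = np.random.rand(784, 128)   # the layer's weight matrix
      bias    = np.random.rand(128)

      activations = np.maximum(batch @ weights + bias, 0)   # (64, 128), ReLU
      ```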

  • @grasz
    @grasz 2 months ago +1

    CISC vs RISC plz

    • @TechPrepYT
      @TechPrepYT  2 months ago +1

      It's on the list!

    • @grasz
      @grasz 2 months ago

      @@TechPrepYT yay~!!!

  • @trevoro.9731
    @trevoro.9731 2 months ago +3

    You are wrong about many things.
    Modern GPUs aren't actually good at performing operations on each single pixel. They are far behind the CPU on that, but can work more efficiently with large groups of pixels.
    No, modern high-end GPUs have 32-64 cores (top ones like the 4090 have 128 cores). The marketing core count is a lie.
    No, the thread count is a lie too; the actual number is hundreds of times lower. Those fake threads are parallel execution lanes, not threads: they run the same code over a very large array. Each single core can run 1 or 2 actual threads, therefore the number of threads for high-end GPUs is usually limited to 128 or so.
    Only because of repetitive operations are GPUs faster at some tasks; they are generally much slower than a crappy processor.

  • @meroslave
    @meroslave 2 months ago

    A CPU can never be fully replaced by a GPU, so what happened now between Intel and NVIDIA!?

  • @aorusgaming5913
    @aorusgaming5913 2 months ago

    Does this mean that the GPU is just a better version of the CPU, or a faster version of the CPU which can do many calculations at a time? Then why don't we use two GPUs instead of a CPU and a GPU?

    • @undercover4874
      @undercover4874 2 months ago +1

      A GPU only performs better if the task being performed can be parallelized, but the majority of tasks can't be, or don't need, parallel computation, so they would be slower on a GPU. The main power a GPU gives us is parallelization; if a task can't exploit it, the overhead will make it even slower than on a CPU.
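      One way to put numbers on that is Amdahl's law: if only a fraction p of a task can be parallelized, n workers give a speedup of at most 1 / ((1 - p) + p / n). A quick sketch (example values only):

      ```python
      def amdahl_speedup(p: float, n: int) -> float:
          """Upper bound on speedup when a fraction p of the work runs on n units."""
          return 1.0 / ((1.0 - p) + p / n)

      print(amdahl_speedup(0.95, 1000))   # ~19.6x: highly parallel task, big win
      print(amdahl_speedup(0.10, 1000))   # ~1.1x: mostly serial, GPU barely helps
      ```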

    • @pear-zq1uj
      @pear-zq1uj 2 months ago +2

      No, a GPU is like a factory with 100 workers; a CPU is like a medical practice with 4 doctors. Neither can do the other's job.

  • @sauravgupta5289
    @sauravgupta5289 2 months ago

    Since each core is similar to a CPU, can we say that it has multiple CPU units?

  • @marcopo06
    @marcopo06 2 months ago +1

    👍

  • @matteoposi9583
    @matteoposi9583 2 months ago +1

    Am I the only one who sees dots in the GPU drawing?

    • @tysonblake515
      @tysonblake515 2 months ago

      No you're not! It's an optical illusion

  • @JuneJulia
    @JuneJulia 2 months ago

    Still can't understand why the GPU can do what it does.
    Bad video.

    • @Trickey2413
      @Trickey2413 2 months ago

      You have a low IQ, so you struggle with basic information; it's not your fault. Listen again at 0.5x speed and try to comprehend the essence of what he is saying. Take notes as he lists what each of them does, and then highlight the differences between the two. Make sure you make an effort to understand the words he is using; ask yourself "what does he mean when he says this" and try to formulate it in your own words.

  •  2 months ago +1

    I think pretty much everyone knows the difference between GPU and CPU; the most useful information here would be the "why" the GPU cannot be used as a CPU.

  • @goodlifesavior
    @goodlifesavior 1 month ago

    Thanks for the foolization.
    We in Russia don't have enough of our own Russian foolization, so we need to be foolized by American trainers.

  • @THeXDesK
    @THeXDesK 2 months ago

    .•*

  • @johnvcougar
    @johnvcougar 2 months ago

    RAM actually stands for “Read And write Memory” … 😉

    • @pgowans
      @pgowans 2 months ago +6

      It doesn’t - it’s random access memory

    • @sauceman2924
      @sauceman2924 2 months ago

      stupid 😂

    • @Trickey2413
      @Trickey2413 2 months ago +2

      Imagine trying to correct someone whilst having the IQ of a carrot.

  • @Mrmask68
    @Mrmask68 5 months ago +4

    nice ⛑⛑ helpful

    • @TechPrepYT
      @TechPrepYT  5 months ago +2

      Thanks!

  • @mrgran799
    @mrgran799 2 months ago +1

    In the future maybe we will have only one thing... a CGPU.

    • @nel_tu_
      @nel_tu_ 2 months ago

      central graphics processing unit?

    • @thebtm
      @thebtm 2 months ago

      CPU/GPU combo units exist with ARM CPUs.

    • @a-youtube-user
      @a-youtube-user 2 months ago

      @@thebtm Also with Intel & AMD's APUs.