CPU vs GPU (What's the Difference?) - Computerphile

  • Published 27 Sep 2024

Comments • 560

  • @scheimong
    @scheimong 7 years ago +391

    One thing worth mentioning: GPUs actually do four-dimensional matrix calculations rather than three-dimensional, because with three dimensions, rotation and magnification require matrix multiplication while translation requires matrix addition. By adding an extra dimension, the GPU is able to unify all three key transformations under a single multiplicative architecture.
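
    The trick described above can be sketched in a few lines of NumPy (an editor's illustration, not anything from the video): with homogeneous 4x4 matrices and points carrying w = 1, translation and scaling both become plain matrix multiplications that compose freely.

```python
import numpy as np

# Illustration of the comment above: with 4x4 homogeneous matrices,
# translation becomes a matrix multiplication, just like rotation/scale.
def translation(tx, ty, tz):
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M

def scale(s):
    M = np.eye(4)
    M[:3, :3] *= s
    return M

point = np.array([1.0, 2.0, 3.0, 1.0])        # w = 1 marks a position
moved = translation(10, 0, 0) @ scale(2) @ point
# scale first -> (2, 4, 6), then translate -> (12, 4, 6, 1)
```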

    • @artofgameplaying
      @artofgameplaying 7 years ago +7

      I was scrolling down just hoping somebody would mention this!

    • @abhishekgy38
      @abhishekgy38 4 years ago +6

      Yes... affine transformations

    • @prezadent1
      @prezadent1 3 years ago +15

      @@abhishekgy38 homogeneous transformations, not affine

    • @grn1
      @grn1 3 years ago +1

      That's actually talked about in one of the Triangles and Pixels videos (playlist linked in the description).

    • @Cccoast
      @Cccoast 3 years ago

      Yes, he mentioned this with transparency.

  • @tomasotto8980
    @tomasotto8980 4 years ago +870

    CPU - 10 Ph.D guys sitting in a room trying to solve one super hard problem.
    GPU - 1000 preschoolers drawing between lines.

    • @evenprime1658
      @evenprime1658 4 years ago +21

      At the end of the day GPUs are taking over though... they use GPUs for general computation nowadays anyway

    • @DS-Pakaemon
      @DS-Pakaemon 4 years ago +77

      @@evenprime1658 Nothing is taking over anything. GPU is also a CPU, CPU is also a GPU. Now let's enjoy TV.

    • @ber2996
      @ber2996 4 years ago +7

      Explained in the simplest form

    • @ekrem_dincel
      @ekrem_dincel 4 years ago +4

      @@DS-Pakaemon isn't a GPU a subset of a CPU?

    • @mistakenmeme
      @mistakenmeme 4 years ago +15

      @@DS-Pakaemon they are both PUs.

  • @magikarpusedsplash8881
    @magikarpusedsplash8881 7 years ago +733

    "you've never seen a triangle that isn't flat."
    I just came from the non-euclidean geometry video.

    • @Street_Cyberman
      @Street_Cyberman 5 years ago +8

      A triangle made of straight lines...

    • @pablo_brianese
      @pablo_brianese 5 years ago +56

      @@Street_Cyberman Triangles in non-euclidean geometry are made of straight lines.

    • @IronicHavoc
      @IronicHavoc 5 years ago +22

      You've still only seen a flat representation.

    • @idkidk9204
      @idkidk9204 4 years ago

      Lol

    • @Xomsabre
      @Xomsabre 4 years ago +3

      @@IronicHavoc Nah, it was drawn on an anti-sphere, so it wasn't flat... it was curved... and its angles were all 90 degrees.

  • @lawrencedoliveiro9104
    @lawrencedoliveiro9104 7 years ago +127

    Here’s a key acronym to remember about GPUs: “SIMD”. That’s “Single-Instruction, Multiple-Data”. It has to do with the fact that a GPU can operate on a hundred or a thousand vertices or pixels at once in parallel, but it has to perform exactly the same calculation on all of them.
    Whereas a single CPU core can be described as “SISD” -- “Single-Instruction, Single-Data”. With multiple CPU cores, you get “MIMD” -- “Multiple-Instruction, Multiple-Data”, where each instruction sequence can be doing entirely different things to different data. Or in other words, multithreading.
    So even with all their massive parallelism, GPUs are still effectively single-threaded.
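
    The SIMD/SISD distinction above can be sketched in NumPy (an editor's illustration; NumPy vectorization is not literally GPU SIMD, but the programming model is analogous):

```python
import numpy as np

# SIMD style: one "instruction" (the vectorized add) applied to many
# data elements at once, vs. a scalar SISD-style loop below.
a = np.arange(8, dtype=np.float32)
b = np.ones(8, dtype=np.float32)

simd_style = a + b                      # one operation, eight lanes

sisd_style = np.empty_like(a)
for i in range(8):                      # one element per "instruction"
    sisd_style[i] = a[i] + b[i]

assert np.array_equal(simd_style, sisd_style)
```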

    • @tazogochitashvili6514
      @tazogochitashvili6514 4 years ago +7

      Don't modern CPUs have SIMD instructions like AVX though?

    • @lawrencedoliveiro9104
      @lawrencedoliveiro9104 4 years ago +9

      Yes, but the vectors that your typical present-day CPU operates on are short ones, with something like 4 or 8 elements at most.

    • @hathawayamato
      @hathawayamato 3 years ago +6

      Thank you, best explanation I've seen

    • @solaokusanya955
      @solaokusanya955 2 years ago +2

      I am an autodidact, and I believe in learning things from "first principles"...
      I want to understand this intuitively...
      Is there any video that can properly put everything you say into perspective, in such a way that a little child could understand it?

    • @solaokusanya955
      @solaokusanya955 2 years ago +1

      @@lawrencedoliveiro9104 I am an autodidact, and I believe in learning things from "first principles"...
      I want to understand this intuitively...
      Is there any video that can properly put everything you say into perspective, in such a way that a little child could understand it?...
      How do these vectors work?
      How are they represented, and how can all of this not look abstract? Human beings made these things, so if someone has built it, it's no longer abstract... I want to understand all of it intuitively...
      It shouldn't have to be inside the four walls of a school; I believe if a man seeks, he will find... I want to know these things... hence my everyday search through all these topics...
      I just intuitively understood how binary numbers can be used to represent any other number, but how does that translate to videos? Audio and everything we see digitally?... How do just 0 and 1 define everything we experience that seems completely unrelated to 1 and 0?

  • @f4z0
    @f4z0 8 years ago +480

    wobbly Computerphile camera XDDD nice one hahaha.

    • @U014B
      @U014B 8 years ago +17

      Wibbly-wobbly camery-wamery

    • @legostarwarsrulez
      @legostarwarsrulez 8 years ago +3

      Stuff

    • @U014B
      @U014B 8 years ago +3

      shut your face! Don't forget Things.

    • @oxey_
      @oxey_ 7 years ago +2

      Yes xD

    • @sakracliche
      @sakracliche 3 years ago

      Feels like watching the office

  • @matsv201
    @matsv201 8 years ago +357

    He missed one of the most important things: why they are more efficient.
    Running a GPU load in parallel doesn't by itself make it more power efficient, and it doesn't use less die area.
    What happens is that the GPU has one very narrow set of instructions, all the same width and length. Typically it might be a 128-bit instruction that can run as 2x64-bit, 4x32-bit or 8x16-bit. And it can only run a very limited number of instructions.
    Also, there is no branching, and no branch prediction. This cuts away a load of pipelines. In a GPU, every pipeline does, say, 128 bits of work every clock regardless of the instruction. If you only need 16 by 16 bits, tough luck, you still have to fill up the whole 128-bit register.
    Also, a CPU has an FPU, an integer unit, logic, and SIMD pipes. And every pipe has duplicates for branch prediction. Some have 4-way branch prediction, so they have 4 pipelines for one instruction. So a normal CPU might have 4 pipes for integer, 4 for floating point, 4 for SIMD and 4 for general logic (often combined with integer): a total of 16 pipes just to calculate one value. Why? Well, to get the speed up (the clock speed, that is) and to keep branch-prediction errors down.
    A GPU has just one type of pipeline (in the older days there were two, one for transform and one for rendering, but in most modern ones they are integrated into the same pipeline). Also, there is one instruction for every 128- or 256-bit set of data. So if you use 32-bit values on a 128-bit pipe, you get a 4x reduction in instructions, because most data uses the same instruction.
    Typical 128-bit data might be an HDR rendering scene where two colours are mixed: one RGBA value blended with another RGBA value. Instead of running first the R, then the G, then the B and so on, the whole load goes into the pipe and all the data is calculated with the same instruction. Almost everything in graphics comes in sets of four: a polygon, well, triangle, has 3 corners plus an additional value for the polygon, so it still has 4 values.
    Some graphics loads use half precision. Then you can do two pixel or two polygon calculations at the same time in the same pipe. Actually, CPUs have had that capability since the Pentium 3, but they still have just one output per core (even if it's a 128-bit SIMD output; most modern CPUs even have 256-bit SIMD).
    The other part is the absence of branching. This removes a ton of problems for the chip designer. Firstly, you don't need any branch prediction. Secondly, you don't need branching instructions and pipes, removing the whole logic pipe. You can still do branch-like calculations by multiplying with a matrix that gives a fixed 0 or 1 value in the result matrix, but the GPU can never make a decision about it.
    This is also the main drawback of the GPU: it will continually calculate the same set of work orders over and over, and it will only calculate the work orders it is given; it can't create work orders. This makes the load very predictable. The GPU knows what it will be doing tens of thousands of clocks ahead, which helps parallelism very much. But it can't issue an instruction.
    So the CPU still has to issue instructions for the GPU, but the CPU's load can be very low. For example, the CPU can tell the GPU to render a tree of data. The tree in turn is a given list of objects, each with a given list of sub-objects, each with a given list of vertices, each with a given list of textures and so on. This way the CPU can give the GPU a very limited amount of information that produces quite a lot of work. This has not always been the case: GPUs prior to 2000 needed specific lists of textures and vertices directly from the CPU, giving the CPU quite a lot of work.
    The problem nowadays is that game devs want the world to be "living". They therefore want as little data as possible pre-fixed into a set tree of objects, so the CPU can end up feeding the GPU thousands and thousands of objects every frame. In DX11 this is a problem, because only one CPU core can do the GPU feeding. Someone just never thought this would be a problem; it has been in DX since the first version. Finally, in DX12, it is being fixed.
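
    The branch-free 0/1 trick mentioned above can be sketched like this (an editor's NumPy illustration, computing an absolute value without a per-element branch):

```python
import numpy as np

# Instead of an if/else per element, compute a 0/1 mask and blend both
# possible results arithmetically; every lane does identical work.
x = np.array([-3.0, 1.5, -0.5, 4.0])

mask = (x >= 0.0).astype(x.dtype)       # 1.0 where true, 0.0 where false
branchless_abs = mask * x + (1.0 - mask) * (-x)

assert np.array_equal(branchless_abs, np.abs(x))
```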

    • @Friedeggonheadchan
      @Friedeggonheadchan 8 years ago +20

      +matsv201 Modern GPUs do have branching, albeit very primitive, but even that's getting better due to AMD's asynchronous shaders, which allow for much better efficiency and work around some inherent weaknesses of SIMD. I've read that some modern architectures are even considering adopting branch prediction quite soon.

    • @QuantumFluxable
      @QuantumFluxable 8 years ago +47

      +matsv201 thanks for putting that much effort into a youtube comments section :O
      still, do you think that much detail would've made a general explanation of GPU vs. CPU easier to understand? :D

    • @matsv201
      @matsv201 8 years ago +20

      QuantumFluxable Well. i had little to do.. but no problem, stop reading when you get bored.

    • @Henrix1998
      @Henrix1998 8 years ago +69

      You wrote so much that I assume you are right

    • @matsv201
      @matsv201 8 years ago +7

      Henrix98 Yeah, that's the tactic I usually go for :)... actually, I'm more likely to be wrong the more I write....
      On the other side... people who don't understand how things work usually oversimplify them... It's really annoying when someone forces you to explain something complicated in a simple way.

  • @smaklilu90
    @smaklilu90 8 years ago +4

    It is extremely easy to complain about a tiny glitch in a game while sitting in front of a console without any clue how everything works. When you think about the math involved in making it, it is mind-blowing what the human mind is capable of.

    • @MelBrooksKA
      @MelBrooksKA 8 years ago +2

      +sami aklilu When you think about thinking about the human mind, or any animal brain for that matter, your mind should be blown at the fact that you can think about that...

    • @lort6022
      @lort6022 7 years ago

      It is extremely easy to fix issues before release, but they rush it. Creating a game is not hard; creating a game engine is very hard, but game devs often don't make the engine anyway, and 99.9% of the bugs come not from the engine but from level designers.

  • @HaniiPuppy
    @HaniiPuppy 8 years ago +18

    One of the features of OS X a few versions back was the ability to off-load repetitive, non-graphical tasks to the GPU rather than the CPU to operate faster.

  • @michaelgeorgoulopoulos8678
    @michaelgeorgoulopoulos8678 8 years ago +37

    2:45 - Nearly bringing up the 'w' coordinate and quick fix by baptizing it "transparency" :P

  • @MacBeach
    @MacBeach 8 years ago +4

    This is one of the better simplified explanations of 3D graphics I've seen. Particularly the part about why triangles are used.

  • @MrDajdawg
    @MrDajdawg 8 years ago +81

    An entire episode with no explanations on how stuff actually works. Well done.

    • @vicscalletti6427
      @vicscalletti6427 5 years ago +5

      MrDajdawg , yea seriously. This video was super annoying.

    • @Awgolas
      @Awgolas 5 years ago +13

      What did you think this video was missing? I thought he explained how a GPU in general worked pretty well, it's essentially just a linear algebra machine.

    • @minepro1206
      @minepro1206 5 years ago +9

      It wasn't easy for my professor to explain it during a whole semester, why do you think it would be easy explaining it in a 6 minute video?

  • @rjfaber1991
    @rjfaber1991 8 years ago +71

    Watching this video on one monitor, seeing my GTX970 busy folding away on the other monitor. How appropriate...

    • @rjfaber1991
      @rjfaber1991 8 years ago +3

      B Snacks Not sure VRAM does a lot for folding, but if you wish...

    • @MelBrooksKA
      @MelBrooksKA 8 years ago +7

      +Robert Faber Thank you for mentioning folding, I learned something new today

    • @ck88777
      @ck88777 7 years ago +1

      Folding? as in protein folding?

  • @FoxDren
    @FoxDren 8 years ago +52

    Am I the only one who finds the way he says "pixel" weird? The stress seems to be in completely the wrong place.

    • @jdgrahamo
      @jdgrahamo 8 years ago +11

      +Ascdren
      Not if you consider what it stands for.

    • @theinsanitypenguin
      @theinsanitypenguin 8 years ago +3

      +Ascdren he's saying it the way you would if you derive it from "picture element": pic*e*l

    • @allend433
      @allend433 8 years ago +2

      his accent seems of England. what language are we speaking again?

    • @ann.samuel
      @ann.samuel 4 years ago

      pick-SELL 😂

  • @TheStevenWhiting
    @TheStevenWhiting 8 years ago +128

    Password cracking with a GPU is also faster than the CPU.

    • @Roxor128
      @Roxor128 8 years ago +54

      +Steven Whiting That's because when you're running through billions of permutations of input passwords and trying to find which one matches your stolen hash, you have a situation where each run is independent, so can be easily converted to a parallel form.
      Chances are that with a typical high-end GPU you'd be calculating over 1000 hashes at once and if each core can do 200k per second, that's a total throughput of 200M per second.
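
      Each candidate check is independent, which is what makes this embarrassingly parallel; a tiny sketch (editor's illustration, using the comment's assumed throughput numbers):

```python
import hashlib

# Minimal sketch of why cracking parallelizes: each candidate check is
# independent of all the others, so lanes never have to communicate.
target = hashlib.sha256(b"hunter2").hexdigest()

candidates = [b"password", b"123456", b"hunter2", b"qwerty"]
matches = [c for c in candidates if hashlib.sha256(c).hexdigest() == target]
# Every iteration above could run on its own GPU lane.

# The comment's throughput arithmetic, with its assumed numbers:
assert 1000 * 200_000 == 200_000_000    # ~200M hashes per second
```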

    • @SirCutRy
      @SirCutRy 8 years ago +1

      +Roxor128 Do you think we'll need something like MD5-256 in the future?

    • @Vulcapyro
      @Vulcapyro 8 years ago +19

      +SirCutRy MD5 is effectively dead cryptographically. It's flawed in its design, not because its digest is too small.

    • @SirCutRy
      @SirCutRy 8 years ago +1

      Vulcapyro What are the flaws? Isn't MD5 still used in password hashing?

    • @SirCutRy
      @SirCutRy 8 years ago

      Zhiyuan Qi How can it be fixed? More scrambling?

  • @zegzezon5539
    @zegzezon5539 7 years ago +2

    *GPU:* Graphic objects can generally be broken into pieces and rendered in *parallel* at the machine level.
    *CPU:* Generally sequential by design, to take care of problems with many *_dependencies_* which *cannot be solved in parallel* by their very nature.
    Ex.
    Seq1: *A = b + c;*
    Seq2: *D = A + E;*
    Seq3: *y = D - A;*
    Obviously, you have to do these in *sequence* in order to get the value of *y.*
    However, the *task* of *_deciding_* which unit handles which problem (i.e. gaming or any other app) at *execution time* is mainly a *function of the OS.* This is because a *gaming app* is not *all* *_graphics rendering_* but also a combination of *logic, rules, and some AI.* Those are the reasons why you need *both.* Kind of like *_specialization_* of tasks.
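
    The dependency chain above, written out as runnable code (each statement needs the previous result, so no two can run in parallel):

```python
# The Seq1..Seq3 chain from the comment, with example inputs.
b, c, E = 2, 3, 10

A = b + c        # Seq1: depends on b, c
D = A + E        # Seq2: depends on A
y = D - A        # Seq3: depends on D and A
print(y)         # 10: y = (A + E) - A, so it always equals E
```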

  • @Verrisin
    @Verrisin 6 years ago +10

    When you tell a joke to an engineer and he takes it seriously and fixes it instead. XDXD
    - 6:16

  • @sanisidrocr
    @sanisidrocr 8 years ago +10

    Nice shout-out to Bitcoin, but for clarification: Bitcoin mining on GPUs is now obsolete, and most of it is done on specialty ASICs. Mining is so competitive that if you attempt to mine with a GPU or even an older-generation ASIC, you will lose money unless you have access to free electricity. The general point of mentioning mining on GPUs is important though, as hashing double rounds of SHA256 is much more efficient on a GPU than on a CPU, although ASICs are far superior to both.
    For mining: ASIC > FPGA > GPU > CPU

    • @TheKlaMike
      @TheKlaMike 8 years ago

      +sanisidrocr ASIC is to GPU as GPU is to CPU.

    • @sanisidrocr
      @sanisidrocr 8 years ago

      TheKlaMike ASIC's can be designed in any way and be less optimal than CPUs for mining if not designed for parallel throughput.

    • @GelebFlamebringer
      @GelebFlamebringer 7 years ago

      what about quantum computers?

    • @yasinomidi7525
      @yasinomidi7525 6 years ago

      sanisidrocr HAH! Little did they know what would happen to GPUs because of mining one year later

  • @tarcal87
    @tarcal87 7 years ago +1

    Perfect. I just wish he had actually said that a GPU physically has lots of tiny processors (compute units). Yes, most people know this, but given how entry-level this explanation was (on purpose), the target audience might not know it.
    Very clear answer, I'm impressed :O

  • @Croz89
    @Croz89 8 years ago +9

    I've been told that for computer modelling, GPUs aren't always a good solution. For certain kinds of Finite Element Analysis, you can generally only calculate one step at a time, one node at a time, as the result of one node can affect the calculation required for all the other nodes it influences, and the same is true for those nodes at each time step. So these still need high-speed CPUs to do the grunt work, as the task isn't easy to parallelize.
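
    A minimal sketch of that dependency issue, using 1D Laplace smoothing as a stand-in for an FEA-like update (editor's illustration): a Jacobi-style sweep reads only the previous iteration, so every node can update in parallel, while a Gauss-Seidel-style sweep reads neighbours already updated this sweep and so imposes an order.

```python
import numpy as np

u = np.array([0.0, 0.0, 0.0, 0.0, 100.0])   # fixed boundary values at the ends

def jacobi_sweep(u):
    v = u.copy()
    v[1:-1] = 0.5 * (u[:-2] + u[2:])        # reads only old values: parallel-friendly
    return v

def gauss_seidel_sweep(u):
    v = u.copy()
    for i in range(1, len(v) - 1):          # reads v[i-1] updated this sweep: sequential
        v[i] = 0.5 * (v[i - 1] + v[i + 1])
    return v
```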

    • @forloop7713
      @forloop7713 6 years ago

      I'm interested in this too

    • @gregorymalchuk272
      @gregorymalchuk272 4 years ago

      Does anybody have any more information about this? I always wondered how they can parallelize finite difference and coupled sets of differential equations. Because the results of one computation are the input to the equations of the next element in the mesh.

  • @TheHrabik
    @TheHrabik 4 years ago +2

    Thank you so much for the content and the people on the screen for their willingness to share knowledge! Bravo good sir, it takes a special kind of talent to explain something complicated this well and understandably. Exactly the kind of people you'd want to have tea/coffee/beer with!

  • @unvergebeneid
    @unvergebeneid 8 years ago +99

    1:27 That's a suboptimal way of tessellating circles. All the triangles meeting in the center will increase the chance of visible artifacts occurring.
    (YouTube smartass answer mitigation disclaimer: I know it's not a circle and I saw that the triangles don't meet. Leaving a smaller copy of the same polygon for the center doesn't solve the problem though, does it?)

    • @Computerphile
      @Computerphile  8 years ago +111

      Must I put 'for illustration only' on every animation I do? :) >Sean

    • @unvergebeneid
      @unvergebeneid 8 years ago +24

      ***** Alright, fair point. But it still gives people the wrong idea. I had to learn this the hard way. It would be nice if future generations of fools who tessellate their own polygons were unknowingly led to have a better intuition about this. But I guess it's just much easier to do it the way you did, which, again, is a fair point.

    • @borissman
      @borissman 8 years ago +5

      +Penny Lane There is also a point where, if you had to learn stuff the hard way, you probably would be the one teaching those who learned it the easy way.
      Not being a smarty-pants or anything.. just a bright side :)

    • @QuantumFluxable
      @QuantumFluxable 8 years ago +4

      +Penny Lane Who on earth still tesselates their own polygons? Most 3D modeling programs either have support for N-gons or can clean up this kind of mess automatically.

    • @StereoBucket
      @StereoBucket 8 years ago +14

      +Penny Lane Can you explain to me why? You got me curious and I want to know. And what would be a proper way of doing this?

  • @EnglishTeacherBerlin
    @EnglishTeacherBerlin 8 years ago +3

    I adore his crystal-clear step by step explanations. Great vid!

  • @talideon
    @talideon 8 years ago +6

    The spice must flow!
    (If you don't get the joke straight away, rewatch the video.)

    • @RustyTube
      @RustyTube 8 years ago

      +Cíat Ó Gáibhtheacháin That was my first reaction, too. :-)

    • @darthkynreeve
      @darthkynreeve 8 years ago

      Yes!!!

  • @wouldntyaliktono
    @wouldntyaliktono 8 years ago +2

    Nicely done. I use nVidia GPUs to do huge matrix operations in statistical modeling to speed up model estimation. Bayesian MCMC isn't really practical on large scales without the parallelism offered by GPU computing.

  • @letsgoBrandon204
    @letsgoBrandon204 8 years ago

    When you describe what the GPU is basically doing, it really puts into perspective how complicated modern graphics are.

  • @paulthy1496
    @paulthy1496 8 years ago

    Wow, so well explained: as simple as possible but no simpler. It's such a big area, I can't believe he explained it so compactly and clearly. He is a real genius.

  • @smacman68
    @smacman68 7 years ago +1

    I run very, very large AutoCAD files and they would take forever and a day to load. My supervisor has the exact same machine I do and he loads the same files in a fraction of the time. I asked him how and he said that he installed a graphics card. Just for displaying 2D images, the GPU makes all the difference. I got one and it is like magic.

  • @jameswhyte1340
    @jameswhyte1340 8 years ago

    He is excellent at explaining these things. Please have him on more.

  • @Slarti
    @Slarti 6 years ago

    Really nice simple explanation of a complicated area!

  • @kryler8252
    @kryler8252 8 years ago +38

    CUDA: the best thing ever for the advancement of machine learning research.

    • @Kneedragon1962
      @Kneedragon1962 8 years ago +1

      +Dylan Cannisi CUDA, the best thing for lots of things... It helps gamers, but it also helps high performance computing people. The stuff you need to model the Big Bang, is the same as what you need to play Quake at 8k and 200 fps.

    • @kryler8252
      @kryler8252 8 years ago +7

      +Kneedragon1962 CUDA is a programming API for C/C++, Fortran, etc. You wouldn't use it to program graphics. It stands for Compute Unified Device Architecture. It unlocks the ability to do mathematics on vectors and other small computations. I'm just saying what I use it for: it allows me to build complicated models, mostly SVMs, that I wouldn't be able to build on a CPU because it would take too long.

    • @Kneedragon1962
      @Kneedragon1962 8 years ago +2

      Dylan Cannisi CUDA is a language and a suite of tools that allow you to do programming on the video hardware, which is not usually accessible to the programmer for general-purpose computing. It provides access to hardware that is not normally used this way or accessible to a general software programmer. And it provides some tools for very wide parallelisation of code. Many programming languages support parallel programming, multiple threads for example, but they anticipate a fairly small number of threads. CUDA supports a whole different view of this. I was at TAFE in '95 when nVidia started talking about it, and the Linux people were starting to use WolfPack to do distributed processing. We have a blanket term for this stuff today: we call it cloud computing...

    • @HiAdrian
      @HiAdrian 8 years ago +15

      *+Dylan Cannisi* Yes, Nvidia did an amazing job with CUDA. It's unfortunate that it's not a hardware agnostic standard though. Doesn't seem right to have vendor lock-in for something of this nature.

    • @Kneedragon1962
      @Kneedragon1962 8 years ago +3

      Adrian Right back at day one, nVidia tried very hard to sell the concept and make it an industry standard, they were shouting it from the rooftops while I was at college, but the various other manufacturers were lukewarm, and they all wrote their own proprietary versions, which all faded into the past like Flash player...

  • @dannygjk
    @dannygjk 6 years ago

    It isn't strictly necessary to use floating-point operations, but it is done with floating-point because there is hardware that does floating-point operations very well, so they make use of that. An example to clarify what I mean: square roots. You can do square roots without floating-point ops, using only integer calculations, and at the end you just place the decimal point in the correct position.
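
    For example, an integer-only square root via Newton's method (a standard routine, shown here as an editor's illustration of the comment's point):

```python
# Integer square root without any floating-point arithmetic:
# Newton's method using only integer division.
def isqrt(n):
    if n < 2:
        return n
    x = n
    y = (x + n // x) // 2
    while y < x:                 # iterates converge monotonically downward
        x, y = y, (y + n // y) // 2
    return x

print(isqrt(144))   # 12
print(isqrt(150))   # 12 (floor of the true root)
```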

  • @DLCSpider
    @DLCSpider 8 years ago +1

    2:30
    Games actually don't move the camera around. Everything else gets rotated around the camera.

    • @MrSlowestD16
      @MrSlowestD16 8 years ago

      +DLC Spider Well, not only is it implementation-dependent, but that's mostly inaccurate.
      In the vast majority of scenes, be they games or otherwise, you make a scene description. Whether the camera moves coordinates or the scene moves coordinates is pretty much irrelevant to the end result, but moving MANY vertices of ALL objects in the scene takes MUCH longer than moving 1 point (the camera) in the scene and updating the POV vector (a second point). From there the culling and rendering happen the same regardless of what moved.
      Moving vertices is expensive, moving the camera is cheap, though either will net you a valid result.
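
      In matrix terms (an editor's sketch, not from the video): a renderer can fold the camera's position into a single view matrix, the inverse of the camera's transform, rather than literally moving every vertex.

```python
import numpy as np

# Translating the camera by t is equivalent to translating every vertex
# by -t; the renderer bakes that into one world-to-camera view matrix.
def translation(tx, ty, tz):
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M

camera_pos = (5.0, 0.0, 0.0)
view = np.linalg.inv(translation(*camera_pos))   # world -> camera space

vertex = np.array([7.0, 0.0, 0.0, 1.0])
print(view @ vertex)    # [2. 0. 0. 1.]: 2 units from the camera along x
```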

  • @Criticofus
    @Criticofus 8 years ago

    D-Wave made a quantum computer, get on it Computerphile. I am kind of super hyped right now.

  • @prashanthb6521
    @prashanthb6521 4 years ago +2

    I am currently doing "high throughput computing" :)

  • @BHFJohnny
    @BHFJohnny 8 years ago

    Our professor on star clusters told us GPUs are far more effective at calculating cluster evolution. Since it is not possible to analytically derive the evolution for more than two bodies, it has to be done numerically. "It's all just about data crunching," he said.

  • @The_Oracle
    @The_Oracle 6 months ago

    This video has aged well. Can we have an update please.

  • @jerseyjacket100
    @jerseyjacket100 8 years ago

    I've been always curious of this, thanks for a video to explain it!

  • @m3ttwur5t
    @m3ttwur5t 8 years ago +48

    Normalize your audio.

    • @U014B
      @U014B 8 years ago +9

      Normalize your face.

    • @m3ttwur5t
      @m3ttwur5t 8 years ago +3

      Nopiw
      My speakers are fine.

    • @WalnutSpice
      @WalnutSpice 8 years ago +3

      Nopiw Nah

  • @mage1over137
    @mage1over137 8 years ago +7

    They also use GPUs in Lattice QCD calculations.

  • @fredweasleylives00
    @fredweasleylives00 8 years ago

    very well explained, thanks for the video!

  • @TheGodlessGuitarist
    @TheGodlessGuitarist 7 years ago

    Triangles can exist on curved surfaces. You can even have a triangle with 3 right angles when placed on the surface of a sphere.

  • @PhilDaw
    @PhilDaw 8 years ago

    that feeling when you're explaining to people that it's a $1000 card that just draws virtual triangles.
    I guess they're pretty good at adding and subtracting floats and doubles, but the triangle drawing is where the action is.

  • @markwilliams5654
    @markwilliams5654 8 years ago

    thanks for adding your info so all humans can learn
    thanks again

  • @imisinjan
    @imisinjan 7 years ago

    Graphics are just one possible application of GPUs; remember that GPUs are designed as better number crunchers than CPUs. Possible uses are password cracking, calculating pi, chaos theory, stock market prediction, or anything that requires highly parallel computing. The guy being interviewed seemed to be focused only on graphics work, which is only the tip of the iceberg.

    • @davidmella1174
      @davidmella1174 4 years ago

      Just use a quantum computer instead when they get more common.

    • @samwelndonga8795
      @samwelndonga8795 2 years ago

      With chaos theory, is it fair to state that a machine will one day be alive by exploiting that field? Look at insects like the bedbug: so small, yet alive. Could the principle of life be all about neurons or atoms arranged to achieve chaos in oscillation, with signals of data in the respective body?
      Putin once said the first to build AI will rule the world. What he didn't figure out was that once it has been understood/mastered by anyone, that person will rule the AI; therefore the first one to understand or figure out AI will actually rule the world. It's going to be easy to manipulate AI even without those expensive servers. Just access to AI will be more than enough; with the Internet of Things you can imagine how powerful a one-man army will look.

  • @rikschaaf
    @rikschaaf 8 years ago

    I'm using a Tesla GPU at my university for accelerating a certain physics simulation. Works quite well, I'm getting a speedup of around 10, compared to normal CPU processing.

  • @Fux704
    @Fux704 8 years ago +1

    Cool video.

  • @theondono
    @theondono 8 years ago

    Numberphile people will have realized that not "all triangles are flat"

  • @sooryanarayanan4273
    @sooryanarayanan4273 2 years ago

    thank you so much

  • @heudevil
    @heudevil 4 years ago

    you have just got a follower

  • @MrJohnweez
    @MrJohnweez 8 years ago

    This is why my Minecraft animations take a while to render.

  • @EyesOfByes
    @EyesOfByes 4 years ago

    Watching this in June 2020. Nvidia DGX2 was announced. That thing is from another Universe.

  • @sukhoy
    @sukhoy 8 years ago +1

    The white pixel moving over the turned off monitor on the left is getting on my nerves... D:

  • @johnallen3783
    @johnallen3783 2 months ago

    It is very simple: a CPU is designed to handle any computational algorithm you can throw at it, provided its processing capability is adequate, e.g. the number of cores and the frequency (overall speed) of those cores, as well as their general features and capabilities, which usually far outweigh those of a single GPU.
    A GPU has a different architecture from a CPU because of its purpose: it is designed for a specific type of processing, e.g. to compose, process and output a video or image you can view on, for instance, a monitor.
    However, the one thing to bear in mind is that a GPU cannot perform its duties without the aid of a host processor. In other words, it cannot function without a CPU running alongside it, which is why, when you build your own PC, you usually need to install a CPU AND a graphics card too.
    For the GPU to function properly as intended, it requires the central processing unit (CPU) to tell it what to do in the first place. Without this set-up, your GPU, or graphics card, would be pretty much a novelty doorstop, useless on its own . . . 😀😀

  • @PauloConstantino167
    @PauloConstantino167 4 ปีที่แล้ว

    "You've never seen a triangle that isn't flat."
    You've never seen non-Euclidean geometry :D

  • @SproutyPottedPlant
    @SproutyPottedPlant 8 ปีที่แล้ว +1

    Computerphile seem to be on a boat?

  • @Chiken1
    @Chiken1 4 ปีที่แล้ว +1

    When it comes to 3D game development in Unity, do you need a powerful GPU?

  • @AishaDracoGryph
    @AishaDracoGryph 8 ปีที่แล้ว

    You should tweak your audio and reduce the heavy bass more often, half of the videos on this channel want to blow my woofer.

  • @123philimo
    @123philimo 8 ปีที่แล้ว +2

    And why do programs like C4D use the Processor for rendering a 3 dimensional space?

    • @Roxor128
      @Roxor128 8 ปีที่แล้ว +4

      +DevilsCookies Not sure what program you're talking about, but some rendering techniques don't work that well on GPUs. They suck at recursion and branching, which are key components of ray-tracing. Not to say that GPUs can't do recursion or branching. They can. They're just much slower at it than the CPU is. Enough so that when writing code for the GPU, you want to avoid branching whenever possible.
      If you're rendering a sphere in a ray-tracer, you need an if() statement in the intersection test to check if the quadratic formula has a negative discriminant, which indicates a miss.
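      That discriminant test can be sketched in Python (a toy CPU-side sketch, not GPU code; the function name is made up):

```python
import math

def ray_hits_sphere(ox, oy, oz, dx, dy, dz, cx, cy, cz, r):
    # Ray origin o, direction d, sphere centre c, radius r.
    # Substituting the ray equation into the sphere equation gives a
    # quadratic a*t^2 + b*t + c = 0 in the ray parameter t.
    lx, ly, lz = ox - cx, oy - cy, oz - cz
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - r * r
    disc = b * b - 4.0 * a * c
    if disc < 0.0:   # the branch in question: negative discriminant = miss
        return None
    # Distance along the ray to the nearest intersection.
    return (-b - math.sqrt(disc)) / (2.0 * a)
```

For example, a ray from the origin along +z hits a unit sphere centred at (0, 0, 5) at t = 4, while a sphere far off to the side produces a negative discriminant and a miss.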

    • @123philimo
      @123philimo 8 ปีที่แล้ว

      C4D = Cinema 4D
      and well, i thought 3D Animations is rendered faster by a GPU, because it's graphics you are rendering, and the GPU is a graphics processing unit.

  • @BariumCobaltNitrog3n
    @BariumCobaltNitrog3n 8 ปีที่แล้ว

    A light field camera will help your focus problems. And a tripod.

  • @goodmanEnt
    @goodmanEnt 8 ปีที่แล้ว

    Would it be easier to say that the table top is a circle, rather than a bunch of triangles?

  • @saminazehra456
    @saminazehra456 7 ปีที่แล้ว

    can u please explain what is the difference between computer machine and computer device??

  • @mrembeh1848
    @mrembeh1848 8 ปีที่แล้ว +4

    But the only thing you didn't say is how the GPU achieves that ability to do parallel computing. Does it just have 10,000 CPUs in it?

    • @putinstea
      @putinstea 8 ปีที่แล้ว

      +Embeh I suppose he means it works in parallel with the CPU

    • @marcschlensog7439
      @marcschlensog7439 8 ปีที่แล้ว +2

      +Embeh Yes, modern GPUs have many simple cores (faster ones in the 1000s), far more than a CPU has.

    • @liquidminds
      @liquidminds 8 ปีที่แล้ว +2

      +Embeh It has to do with how the command is structured.
      You send commands to your CPU like "multiply 3 5", "multiply 1 2", ...
      When you send it to your GPU, you can send "multiply 3 5 1 2 ..." and it will run the multiplication loop for all the values in that one request.
      A CPU is built so it can do many different calculations. A GPU is built so it can do the same type of calculation over and over again, saving a lot of time.
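      That batching idea as a toy Python sketch (the function name is illustrative, not a real API):

```python
def simd_multiply(pairs):
    # One "request" carrying many operand pairs; the same multiply
    # operation runs for every pair, GPU-style, instead of one
    # "multiply a b" command per call.
    return [a * b for a, b in pairs]

# CPU-style would be one multiply per command;
# GPU-style handles the whole batch in one request:
batch_result = simd_multiply([(3, 5), (1, 2), (4, 4)])
```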

    • @dixie_rekd9601
      @dixie_rekd9601 8 ปีที่แล้ว

      +Embeh see CUDA cores :D

    • @Flehwah
      @Flehwah 8 ปีที่แล้ว +12

      +Embeh Not really. A typical CPU core has an ALU (arithmetic logic unit: the part that does the calculations) and a control unit which tells the ALU what to do.
      In a GPU core, there is only one control unit per dozens of ALUs, which lets them all do the exact same calculation, but on different data. (SIMD)
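      A minimal Python sketch of that one-control-unit, many-ALUs arrangement (purely illustrative):

```python
def simd_step(instruction, lanes):
    # One control unit issues a single instruction; each "ALU" (lane)
    # applies that same instruction to its own data element.
    return [instruction(x) for x in lanes]

# Every lane executes the same add, each on different data:
stepped = simd_step(lambda x: x + 1, [10, 20, 30])
```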

  • @ispeakforthebeans
    @ispeakforthebeans 5 ปีที่แล้ว

    So basically GPUs use multithreading very efficiently?

  •  6 ปีที่แล้ว

    I thought that triangles were flat only in Euclidean space? As 2+2 is only 4 in decimal.

  • @petereiso5415
    @petereiso5415 4 ปีที่แล้ว

    In what situations are objects modelled? Would it only be in CAD situations ?

  • @tidemover
    @tidemover 8 ปีที่แล้ว

    I'm not sure what you mean by a triangle always being flat, since the four sides of a pyramid are made of triangles and they aren't on a planar surface? The bottom of course isn't a triangle.

    • @scotthorning354
      @scotthorning354 8 ปีที่แล้ว +4

      If you divide a rectangle or square in half diagonally, you get 2 triangles. Do that in certain ways for other shapes, and you will still get triangles. As for them always being flat, it means they always lie in a single plane. A triangle has the smallest number of vertices possible for a 2D figure, so it is always planar. For a rectangle, 3 of the vertices could be on one plane, but the fourth could be on a different one, making it not coplanar.
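      That coplanarity claim can be checked numerically with a scalar triple product (a small Python sketch; the helper name is invented):

```python
def coplanar(p0, p1, p2, p3, eps=1e-9):
    # Four points are coplanar iff the scalar triple product of the
    # three edge vectors from p0 is (near) zero.
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    u, v, w = sub(p1, p0), sub(p2, p0), sub(p3, p0)
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    return abs(cross[0] * w[0] + cross[1] * w[1] + cross[2] * w[2]) < eps
```

Any three points pass trivially (a triangle is always planar); lifting a fourth vertex of a quad off the plane makes the test fail.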

    • @calfischer1149
      @calfischer1149 8 ปีที่แล้ว

      pyramids are made of multiple triangles though. One triangle will always fit in one plane.

  • @christopherg2347
    @christopherg2347 ปีที่แล้ว +1

    A GPU is just a lot of really weak CPUs, doing stuff that requires heavy parallel operation.

  • @4Gehe2
    @4Gehe2 8 ปีที่แล้ว

    Soo... Cpu is one fast flowing river, while GPU is many smaller brooks, and both can carry the same amount of water but in different ways?

    • @matsv201
      @matsv201 8 ปีที่แล้ว +1

      +Henri Hänninen Well...
      If talking about a GPU, the river analogy works quite well. But a CPU... well, no. It's more like a water pump... well... hmm... the water analogy doesn't work on a CPU. The whole point is that it does a bunch of different things: not only calculations, but also decisions.

    • @overwrite_oversweet
      @overwrite_oversweet 8 ปีที่แล้ว

      The GPU carries more water, definitely, if all of the brooks are flowing.

  • @TechyBen
    @TechyBen 8 ปีที่แล้ว

    "Specialist" versus "General" computing?
    PS, after watching the video, the parallel computing is also a lot more important in a GPU. :)

  • @okboing
    @okboing 4 ปีที่แล้ว

    Recently I made the mistake of trying to produce an entire 3D graphics engine from scratch in PJS, all starting with the following function:
    var raster = function(x, y, z) {
        return [x / z, y / z];
    };
    This function allows me to take any point (x, y, z) and map it to the screen. It is the most vital, core component of the entire program. I eventually added cameraPos to the function so I could move anywhere. But there was one fatal problem: I could not rotate anything. I attempted to produce a function that took in a point (x, y, z) and three basis vectors (newx, newy, newz) to produce a new imaginary grid that treats newx, newy, newz like x, y, z and transforms the point. I then tried to produce a function that generates those three vectors from pitch, yaw, and roll, but I have yet to get it working.
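    One standard way out of that rotation problem is to apply a rotation matrix before the perspective divide. A minimal Python sketch of the same math (not the commenter's actual code; names are illustrative):

```python
import math

def rotate_y(x, y, z, yaw):
    # Rotate a point about the Y axis by `yaw` radians
    # (each output component is one row of the 3x3 rotation matrix).
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x + s * z, y, -s * x + c * z)

def raster(x, y, z):
    # Perspective projection: divide by depth.
    return [x / z, y / z]

# To simulate a camera turned by +yaw, rotate the world by -yaw first:
# raster(*rotate_y(x, y, z, -camera_yaw))
```

Pitch and roll work the same way with rotations about the X and Z axes, composed in a fixed order.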

  • @Knightfire66
    @Knightfire66 5 ปีที่แล้ว +1

    i think i have to study a couple years longer to understand the words he says. then maybe i can understand the difference between gpu and cpu...

  • @Kneedragon1962
    @Kneedragon1962 8 ปีที่แล้ว

    The other thing you can do extremely well on graphics hardware, is model a scene in 3d, with a large number of units, like sugar cubes. Astrophysicists want to model the universe after the big bang, or look at the growth of galaxies, or model dark matter. Meteorologists want to predict the weather 10 days out. Climate people want to know about global warming. Economists want to anticipate world trade next year. Boeing want a more efficient wing tip shape on the next airliner. The navy want a submarine with is 600 ft long and capable of travelling at over 30 mph while making almost no sound. The way a graphic processor treats a scene, as a series of data points which are pixels, works extremely well when you try to model the behaviour of other systems, like the ones I listed. This is what is behind CUDA and other efforts to use graphics hardware. It's very good at doing the stuff you need for large scale simulations.

  • @LazyOtaku
    @LazyOtaku 8 ปีที่แล้ว

    So I need a better gpu for gaming computer?

  • @redsunrises8571
    @redsunrises8571 8 ปีที่แล้ว

    can Computerphile do an episode on algorithmic art/music?

  • @everope
    @everope 4 ปีที่แล้ว

    The audio is so low it's only using 2 bits of dynamic range

  • @Knightfire66
    @Knightfire66 5 ปีที่แล้ว

    i didnt understand anything... why are gpus better for 3d? or matrixes? i dont get it...

  • @mrrabbit6680
    @mrrabbit6680 8 ปีที่แล้ว

    Why not just mount the camera?

  • @lukekdavis6227
    @lukekdavis6227 6 ปีที่แล้ว

    Triangles in non euclidean geometry aren’t flat

  • @vibe3d
    @vibe3d 8 ปีที่แล้ว

    So what I'm getting here is that GPU processors are weaker individually compared to a CPU processor but as a group they can outperform a CPU.

  • @newmansan
    @newmansan 8 ปีที่แล้ว

    Cool video everyone, but did you forget to pack a tripod? I feel like this was shot at sea.
    Keep up the good work otherwise.

  • @Monsmak
    @Monsmak 8 ปีที่แล้ว

    THE SPICE MUST FLOW!

  • @awaisraad
    @awaisraad 7 ปีที่แล้ว +2

    Still at a higher level of abstraction to me. :p

  • @andrewrice9362
    @andrewrice9362 4 ปีที่แล้ว

    Has he been on the spice?

  • @anatolesokol
    @anatolesokol 8 ปีที่แล้ว

    so whats the difference ?

  • @soraaoixxthebluesky
    @soraaoixxthebluesky 7 ปีที่แล้ว +106

    "triangles"? illuminati confirmed !

    • @shadowshadow2724
      @shadowshadow2724 5 ปีที่แล้ว

      This is third comment, illuminati confirmed!!!

  • @freecrac
    @freecrac 6 ปีที่แล้ว

    Most modern display devices provide a secondary display output for using a second monitor. But how do you display the contents of the secondary linear framebuffer on the secondary monitor, using a different resolution than the primary monitor, from a self-made boot manager (no display driver loaded), with and without a UEFI BIOS?

  • @peedowaqataivuya1568
    @peedowaqataivuya1568 4 ปีที่แล้ว

    Very soon QPU will be out Quantum Processing Unit....👍👍👍

  • @olegg1082
    @olegg1082 4 ปีที่แล้ว

    I would have expected him to talk more about what the CPU does. He only talked about what the GPU does, so the difference still isn't clear.

  • @CoorDaLoor
    @CoorDaLoor 6 ปีที่แล้ว

    this guy is wrong. the gpu doesn't handle the camera. cameras are something high-level and engine-dependent. what really happens at the low level is that the "cam" is fixed in place, always facing the same direction. you just move your opengl objects around collectively to create the sense of a moving camera.

  • @tnvmadhav2442
    @tnvmadhav2442 5 ปีที่แล้ว

    circle = a bunch of triangle_fans
    MIND BLOWN
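    That triangle-fan idea in a few lines of Python (illustrative geometry only, not actual OpenGL calls):

```python
import math

def circle_fan(cx, cy, radius, segments):
    # Approximate a circle as a triangle fan: every triangle shares the
    # centre vertex and two consecutive rim vertices.
    rim = [(cx + radius * math.cos(2 * math.pi * i / segments),
            cy + radius * math.sin(2 * math.pi * i / segments))
           for i in range(segments)]
    return [((cx, cy), rim[i], rim[(i + 1) % segments])
            for i in range(segments)]
```

More segments means a smoother-looking circle, at the cost of more triangles for the GPU to rasterise.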

  • @MorrisonWebDesign
    @MorrisonWebDesign 8 ปีที่แล้ว

    Maybe I missed a thing. But: if a GPU does "3D" and a CPU does "1D" (or 1D over 4-8 threads etc.), why haven't CPUs become GPUs? Is this really a firmware issue?

    • @overwrite_oversweet
      @overwrite_oversweet 8 ปีที่แล้ว

      No. It's because the CPU doesn't have a few thousand cores, so while the GPU can do a whole lot of work all at the same time, the CPU does 1 (or two, or 8, or 20, or maybe 80 if you're lucky and get a ton of money) at a time.

  • @rebelalternator8662
    @rebelalternator8662 4 ปีที่แล้ว

    Nobody noticed his finger?

  • @burnzy3210
    @burnzy3210 8 ปีที่แล้ว +2

    PICK CELLS?

  • @daidabus
    @daidabus 8 ปีที่แล้ว

    Isn't the big difference the architecture between those two? (Kinda forgot, but in school they taught something like this.) They were named after the architects who designed them. I know one was cheap to make because it could not take and give information at the same time, and the other could, but it was much more expensive to produce.
    Harvard and von Neumann, I think. The slower one is von Neumann; the expensive one is Harvard.

  • @digvijayjambhale3598
    @digvijayjambhale3598 8 ปีที่แล้ว

    Computerphile has a similar layout design to The most extreme program on animal planet

  • @tomaspuodziukynas5361
    @tomaspuodziukynas5361 8 ปีที่แล้ว

    I am still always wondering... Who the hell dislikes videos like this!?

  • @mareksajner8567
    @mareksajner8567 4 ปีที่แล้ว

    "fundamentally we are here to put pixels on the screen," oh boy is somebody wrong here...

  • @martinbean
    @martinbean 4 ปีที่แล้ว

    What’s a pick-zell?

  • @JobvanderZwan
    @JobvanderZwan 8 ปีที่แล้ว +3

    I love how he pronounces "pixels" as "pic-cells", which I guess is etymologically correct!

    • @jackb9045
      @jackb9045 5 ปีที่แล้ว +1

      picture element

  • @KingFredrickVI
    @KingFredrickVI 8 ปีที่แล้ว +7

    2:30 Almost all 3D engines have the camera static and they move everything around the camera.
    I really feel like this video only gave surface level differences and didn't talk about the true hardware differences :/

    • @Dolkarr
      @Dolkarr 8 ปีที่แล้ว +8

      +KingFredrickVI I feel like that's actually a myth. The matrix comes out exactly the same when you do it either way, so it's really just a matter of your point of view.

    • @overcunning
      @overcunning 8 ปีที่แล้ว

      +KingFredrickVI Sounds false. How do you do multicam rendering, like stereoscopic, this way? And how do you animate the camera? That would be tedious.

    • @klaxoncow
      @klaxoncow 8 ปีที่แล้ว +4

      +Dolkarr You're right.
      Am I moving the camera - and then working out where the points are in relation to the camera - or am I moving the scene - working out where the camera is in relation to the points?
      Same difference.
      Am I adding 3? Or am I subtracting minus 3? Or am I subtracting the subtraction of positive 3 from zero, from zero?
      Relativity. It's really all the same thing.

    • @KingFredrickVI
      @KingFredrickVI 8 ปีที่แล้ว

      +Ahmed M.AbdElMoteleb (SAIKO) The topic of culling isn't relevant to the discussion about whether or not the objects are transformed or if the camera is transformed. Culling is the process of deciding which objects that are invisible so that you do not have to fetch, transform, rasterize and shade them and therefore are not considered when rendering each object onto the camera.

    • @ryandean3162
      @ryandean3162 8 ปีที่แล้ว +2

      +KingFredrickVI This is absolutely true. It's to deal with the problem of floating point inaccuracies at large scales, and it's called the Floating Origin method. Any game over a certain seamless world size pretty much has to do it this way. Otherwise, as the player gets farther and farther away from 0,0,0 in world coordinates, objects in the world will start to move away from their positions as the amount of precision you can use decreases over distance, and they basically snap to the closest possible precision coordinate they can use.
      So, if you want to simulate say an entire universe or galaxy or a solar system, like Celestia, or Elite, or KSP or whatever, or an entire real sized planet or even just a largish region, you have to use floating origin where the world moves around the player, and keep track of the player location and object locations in a separate, more precise manner (usually a large fixed point precision number), and translate between the positions in this overall world/universe model and positions in the game view model, using scaling tricks and what not to make far away objects appear far away even though they're actually relatively close by.
      The problem with making an engine with higher precision numbers for the vector locations is the more precise you make the numbers, the harder it is on processor and the slower the math is performed, until you get to the point that you simply can't simulate a large world without really keeping the number of objects which calculations need to be performed on to an absolute minimum.
      I know for single precision floating point numbers, things start getting screwy over about 10000 units away from the origin. A double gets significantly larger precision, but still, won't work very well on scales much above a solar system.
      Usually, rather than constantly moving the world around the player, which would be heavy on the processing power since you have to individually relocate each object in the coordinate system, you set a boundary where the player moves around in the coordinates but when they reach a certain distance from the origin, the player snaps back to the origin and everything in the world moves with them.
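      The precision loss behind that floating-origin trick is easy to demonstrate by round-tripping coordinates through 32-bit storage (a Python sketch):

```python
import struct

def to_f32(x):
    # Round-trip a Python float through 32-bit storage, the way a
    # single-precision world coordinate would be stored.
    return struct.unpack('f', struct.pack('f', x))[0]

# The same 0.0001-unit step survives near the origin...
near = to_f32(1.0 + 0.0001) - to_f32(1.0)
# ...but vanishes entirely 100000 units away: a float32 there is spaced
# about 0.0078 apart, so the step rounds away and objects "snap".
far = to_f32(100000.0 + 0.0001) - to_f32(100000.0)
```

This is why engines with large seamless worlds recentre the coordinate system on the player instead of letting positions grow without bound.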