Thx for this nice GPU overview grandfather with your WWII microphone.
lol
Lmfao
AbdooMs EG no it’s funny. Learn to take a joke ❄️.
Grampa is fighting the good fight against the allied forces of Nvidia & Intel. Thanks grampa!
@@MicCheckMemoirs Learn to respect other people!
It still fascinates me when I try to imagine the speed of execution on a CPU/GPU. Especially on a GPU. Just imagine, these CUs working at gigahertz speeds. And now, with all these fancy APIs, it's all available at your fingertips.
Awesome times to be alive!
All this recent marketing effort put into ROCm has me pretty hopeful that CDNA/"Arcturus" is actually going to be something pretty special. I.e., a GPGPU platform (both hardware & software) that can FINALLY take the fight to Nvidia's CUDA outside of the macOS realm (where OpenCL already dominates thanks to it being the foundation of Apple's "Metal" API).
Which is all particularly relevant to this video, as they are discussing GCN, NOT RDNA, & everything so far suggests that CDNA will be an evolution of GCN/Vega with the graphics & gaming hardware ripped out, rather than a compute-focused offshoot of RDNA. (This is because GCN already has much higher TFLOPs per mm² of die space + TFLOPs/watt vs the very gaming-focused RDNA arch.)
Especially as it's a 100% GPGPU-dedicated design, not an attempted "best of all worlds" hodgepodge like Vega was (which was admittedly an absolute monster at raw compute, but terrible almost everywhere else & with notoriously bad power efficiency, thanks to having to include all the gaming-focused hardware as well).
It is really hard for AMD to fight the CUDA ecosystem, which has already become the gold standard for GPGPU. Even OpenCL development itself was quite problematic; AnandTech recently discussed the trouble with OpenCL in a long article. Even if Apple's Metal has its foundation in OpenCL, Apple decided to go the proprietary route because that way they can develop the API to suit their needs without being held back by an open community like the Khronos Group. And as for AMD themselves, I still don't see them willing to spend the kind of money on ROCm that Nvidia did with CUDA.
Someone didn't read the whitepaper... nor notice that there were Navi Pro cards released not too long ago...
@@MaddJakd Can't say I have read them; my work is not typically GPU-accelerated, this is more just for interest. But I will point out there are many pro workflows that need graphics horsepower. Not all pro workloads are GPGPU/compute heavy. There is room for pro Navi cards even if they are explicitly graphics-pipeline focused and not compute optimized.
OK, no gaming, more coding.
@OLDSKOOL978 "AMD is fine, Nvidia has more developers on payroll, hence they've been able to saturate the market with proprietary routines (code). Nvidia GPUs are black boxes: devs get binary blobs and work from there. AMD has a much better system in that it doesn't hide things from the developers; you can poke at things."
It depends on what you do. What you're talking about is more the driver side of things. For operating system developers (like Linux), having access and documentation for the GPU hardware/software is important. But when we talk about CUDA and ROCm, such things don't really matter, because these are developers trying to run their own software (like machine learning), not trying to figure out how a specific GPU works with the operating system. The more you can ease their job (less tinkering on the hardware side) the better, because that means they can do their actual job faster instead of wasting hours just setting everything up. And CUDA itself at this point is no different than ROCm or OpenCL when it comes to openness. That's why AMD was able to build some of the base foundation of ROCm itself: AMD pretty much has access to all the CUDA specifics, and there is no obstruction from Nvidia, nor will anything they do with CUDA make Nvidia take the issue to court.
"The way I see it, devs are being forced to work closely to the metal with this new generation of console. eventually developers will learn to get the most out of their architecture."
Not just for the upcoming generation; game developers have been doing this forever in the console space. The developers of The Last of Us said they were able to push the game graphically (on the original PS3) because they had known the hardware for a very long time. This is something that cannot be done on PC, even with DX12 and Vulkan.
Much appreciation for the hard work on this; the results are tremendously fabulous.
Are the workgroup managers programmable elements, like an MCU running firmware, or fixed-function hardware?
He is wrong at 4:43. He says "GPU submits packages to command queue", but he should have said CPU instead of GPU.
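For what it's worth, that matches how the programming model works: it is host (CPU) code that builds and submits command packets, and the GPU's command processors consume them. A minimal HIP sketch of that flow, assuming a working ROCm/HIP install; the kernel, sizes, and names are just illustrative:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// Trivial kernel: each work-item writes its global index.
__global__ void fill_ids(int* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = i;
}

int main() {
    const int n = 1 << 20;
    int* buf = nullptr;
    hipMalloc(&buf, n * sizeof(int));

    hipStream_t stream;
    hipStreamCreate(&stream);   // a command queue from the application's point of view

    // The CPU is the one submitting work: the kernel launch below is enqueued
    // here on the host, and the GPU's command processor picks it up and runs it.
    hipLaunchKernelGGL(fill_ids, dim3((n + 255) / 256), dim3(256), 0, stream, buf, n);
    hipStreamSynchronize(stream);

    hipStreamDestroy(stream);
    hipFree(buf);
    printf("done\n");
    return 0;
}
```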
Big Navi waiting room
LDS is per compute unit; is it visible to multiple wavefronts or to multiple compute units? Can you clarify a bit? It's a bit confusing.
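For anyone else wondering: LDS is physically per compute unit, and in the programming model it is scoped to a single work-group, so only the wavefronts of that work-group see the same allocation; other work-groups (even ones running on the same CU) and other CUs do not. A hedged HIP sketch of typical LDS use, with illustrative names and sizes:

```cpp
#include <hip/hip_runtime.h>

// Each work-group gets its own 256-float LDS allocation ('tile').
// Wavefronts within that work-group share it; other work-groups do not.
__global__ void block_sum(const float* in, float* out) {
    __shared__ float tile[256];        // lives in LDS
    int lid = threadIdx.x;
    int gid = blockIdx.x * blockDim.x + lid;

    tile[lid] = in[gid];
    __syncthreads();                   // synchronizes only this work-group's wavefronts

    // Simple tree reduction within the work-group.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (lid < stride) tile[lid] += tile[lid + stride];
        __syncthreads();
    }
    if (lid == 0) out[blockIdx.x] = tile[0];
}
```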
So, when will we be able to write in general-purpose languages and get compute performance for free when run on an APU instead of a CPU?
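Partial answers already exist via directive-based offload: with an OpenMP-offload-capable compiler (e.g. AMD's AOMP/clang builds), a plain C++ loop can be dispatched to the GPU part of an APU without any vendor API. A minimal, hedged sketch; whether it actually offloads, and how fast it is, depends entirely on the toolchain and hardware:

```cpp
#include <cstdio>

int main() {
    const int n = 1 << 20;
    float* a = new float[n];
    float* b = new float[n];
    float* c = new float[n];
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // With OpenMP offload support this loop can run on the GPU/APU;
    // without it, it simply runs on the CPU. The source stays general purpose.
    #pragma omp target teams distribute parallel for map(to: a[0:n], b[0:n]) map(from: c[0:n])
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];

    printf("c[0] = %f\n", c[0]);
    delete[] a; delete[] b; delete[] c;
    return 0;
}
```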
The audio might need RTX voice... hahahahaha...
Hey, can someone help me? I really wanna understand all this. What kind of stuff should I study to be able to follow along? Do I need like a full compsci or computer engineering degree? Or are there just a few classes I should try and learn? I have an aerospace engineering degree with some programming experience and a bit of Arduino stuff, but I know virtually nothing about computation and computer architecture :0
So is he talking about OpenCL? Because he uses terminology similar to CUDA's.
Secondly, why are these development-related videos posted here? Shouldn't they be on a separate development channel?
AMD has OpenCL and also translation layers for CUDA... today there is very very little reason for your software to not support AMD GPUs.
No he's not talking about OpenCL.
He's talking about the actual _hardware_ .
OpenCL/CUDA are _software_ abstractions.
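On the translation-layer point above: AMD's HIP is the usual route, and the reason porting is often mechanical is that the API mirrors CUDA nearly one-to-one (hipify-perl/hipify-clang do the renaming automatically). A minimal sketch, assuming ROCm/HIP is installed; the kernel and sizes are only illustrative:

```cpp
#include <hip/hip_runtime.h>   // the CUDA version would include <cuda_runtime.h>

// Kernel body is identical to what you would write in CUDA.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    hipMalloc(&x, n * sizeof(float));   // CUDA: cudaMalloc
    hipMalloc(&y, n * sizeof(float));   // hipify tools do this rename for you
    hipMemset(x, 0, n * sizeof(float));
    hipMemset(y, 0, n * sizeof(float));

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    hipDeviceSynchronize();             // CUDA: cudaDeviceSynchronize

    hipFree(x);
    hipFree(y);
    return 0;
}
```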
How did AMD get Joe Peralta to do this?
Could anyone help explain the stream processors in AMD GPUs?
Let's see the related APIs and developer tools.
Let's hope that, "like always", AMD's weak point isn't the API and the software.
If you have incredible hardware but the development cost is too high, it's useless.
64 double-precision ops per cycle is really sweet (quick back-of-the-envelope below).
PS: it's clear that the Nvidia haters commenting on this video are not developers and don't understand anything about this AMD improvement.
Game/graphics developers must be able to code for both brands (and not hate); they just don't like working more because one brand doesn't provide the necessary modern tools to program its hardware. Being a brand hater on the internet just makes you look deeply incompetent.
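The back-of-the-envelope promised above: peak FP64 throughput is just CUs × FP64 ops per cycle × clock. A tiny sketch with placeholder numbers; the CU count and clock below are hypothetical, not a real SKU, and it assumes the 64-ops figure is per CU per cycle:

```cpp
#include <cstdio>

int main() {
    // Hypothetical inputs: purely illustrative, not a real product.
    const double compute_units  = 64;    // CUs on the die
    const double fp64_per_cycle = 64;    // FP64 ops per CU per cycle (figure from the talk)
    const double clock_ghz      = 1.5;   // sustained clock in GHz

    // FLOPs/s = CUs * ops/cycle * cycles/s; divide by 1000 to go from GFLOPS to TFLOPS.
    const double tflops = compute_units * fp64_per_cycle * clock_ghz / 1000.0;
    printf("peak FP64 ~ %.1f TFLOPS\n", tflops);   // ~6.1 TFLOPS with these numbers
    return 0;
}
```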
So basically, technically they have the "same" power but Nvidia GPUs are easier to use?
Sorry, I'm not that good at this type of thing.
AMD IS ❤️
In before people here complain about AMD's software again. :P
lol
It is a legitimate complaint. I had an RX 480 that would blackscreen in anything remotely intensive; I tried numerous drivers that I believed to be stable. Nothing seemed to work. I RMA'd the card. A few months go by, the issue comes up again. I RMA'd that card. Another few months go by and the issue happens again. Further research into the issue showed me it was a problem with the Polaris architecture as a whole and the drivers not being able to cover for whatever power-delivery issues the card had. I have an AMD CPU, and before the RX 480 I had an R9 390X, and before that a Radeon HD 7850. My CPUs have all been AMD as well: Phenom II X4 965, FX-8350, Ryzen 7 1800X.
I am someone who has been mostly an AMD "fanboy" so to speak as I've bought nothing but AMD components for the most part (for budget-related reasons, not because I was just fanboying for the sake of doing so). I switched and bought a 1080Ti 2 years ago and haven't looked back, it's been nothing but smooth sailing. The complaints about AMD software and drivers are valid complaints imo. There can be objective complaints about the issues with AMD GPUs.
@@asteroiddropper Okay...
Yeah, RX 5700 XT, 4 crashes in the last 2 days. Two of them in Rocket League... no OC or UV.
@@asteroiddropper My RX 5700 had the same issue; nothing fixed it. Although my RX 580 never gave any issues like that.
AMD YES!
no
Curious to see how this performs stacked against NVIDIA's Ampere architecture.
Competitive enough to challenge the Ampere cards into lowering prices, but not enough to take the performance crown.
@@50H3i1 Let's see if that ages well.
Curious to see how NVIDIA's Ampere stacked against NVIDIA's Turing architecture.
Cool upload
*Oh I see the fanbois aren't here yet*
They didn't even include error handling in the presentation, hahaha.
Fortunately GPU compute doesn't seem to draw the fanboys as much. Also, they mention that this is GCN, whereas the fanboy battles are nearly all RDNA[2] vs Turing/Ampere. Hopefully they will stay away.
I just hope there aren't rival factions among cryptocoin miners, but even if they all showed up there can't be many of them.
Still, I'm a big fan of AMD, but I'd still have to recommend doing any work like this in CUDA; Nvidia's software ecosystem (regardless of hardware power) is that much better. You'll probably want OpenCL if you are releasing software to the public (not to specialized compute racks).
Sounds good. How about showing us the performance in some modern AAA games?
Does Radeon run ray tracing using compute units?
Yes, they can, if the necessary APIs are developed to translate and calculate BVH data on a GPGPU. In fact, that's what even Nvidia does, except their proprietary RT cores are specialized to process BVH data efficiently.
@@kishaloyb.7937 The API already exists in the form of DXR. Vulkan also has the required API. RT cores are not needed to run DXR; hence why the first DXR showcase was done on Volta, and later Nvidia enabled DXR on Pascal-based GPUs. Any AMD GPU that is capable of using DX12 should be able to run DXR; they just need to enable it in their drivers.
They don't right now; we will get mesh shaders and BVH calculation blocks in RDNA 2 and the new-gen consoles, with the DXR/DX12 Ultimate APIs.
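On the DXR discussion above: whether a GPU exposes ray tracing at all is something the driver reports through the D3D12 feature-support query, and the reported tier says nothing about whether BVH traversal happens on dedicated RT hardware or in compute shaders underneath. A hedged C++ sketch; it assumes Windows, the D3D12 headers, and an already-created ID3D12Device, with error handling trimmed:

```cpp
#include <d3d12.h>

// Assumes 'device' was created elsewhere; returns true if the driver exposes any DXR tier.
bool SupportsDXR(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;

    // TIER_NOT_SUPPORTED means the driver exposes no DXR path at all;
    // TIER_1_0 / TIER_1_1 say nothing about whether the implementation is
    // fixed-function RT hardware or a compute-shader fallback underneath.
    return opts5.RaytracingTier != D3D12_RAYTRACING_TIER_NOT_SUPPORTED;
}
```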
❤️ AMD
Could you make deep learning available at a reasonable price?
Quadro cards charge a hefty fee.
Interesting video but what about making stable drivers?
Well, if the GPU is as good as the audio on this clip, then I would avoid it at all costs. I would expect a company at AMD's level to have someone overseeing the official publishing process. The bar must be set before the first word is recorded, as it is supposed to reflect the quality of the actual product; it ought to be a point of pride for the organization. AMD is a big company, so why not have a dedicated audio/video studio and a couple of engineers to guarantee a high-quality media publishing process? Then send everyone there to record their content in a nice environment whenever it is needed.
I think you need alien technology and another math genius to push the boundary.
How about a good microphone...?
Can you repeat that once more? I wasn't paying attention 🤔 😁😆🤣
Repeat button
Good bye Nvidia
not so soon
NVidia watching this video: Interesting.
AMD's GPU department does not have the slightest respect for consumers. Thankfully, Nvidia decided to change its approach this time, because I am tired of having problems with every GPU I've had from AMD: the RX 580, Vega 56, and RX 5700 XT. Given the experience I had with these 3 products, I guarantee I will never buy an AMD GPU again.
Xd
They should bring back the ATI team. AMD axed them from day one! ATI was the only competitive force against Nvidia, and AMD disbanded it. ⭐️
The initial idea behind acquiring ATI was to kill the discrete GPU anyway. They intended to create the APU as a specialized processor able to tackle problems that neither the CPU nor the GPU could tackle alone; in other words, they wanted to create a processor that would make the standalone CPU and GPU obsolete, hence the new processor being called an APU. But in the end those APUs still ended up being CPUs with an integrated GPU. AMD never reached the stage of creating the specialized software that would only be "accelerated" when using an APU.
Nope, ATI and AMD were basically merged.
You should buy hardware based on performance, price and features, not based on the logo, the font or the color of the logo.
Intel has been defeated. Now it is Nvidia's turn.
Big navi
Add CUDA compatibility mode to GPUs
and get slapped with a lawsuit by nvidia, great idea
1 dislike by Nvidia
the other is intel
DLSS?
DLSS is Nvidia's proprietary technology.
At the moment AMD is a 127-billion-dollar company, yet their presenter's microphone sounds like sandpaper.
Hey AMD, can you give me the cheapest AMD build possible, not for gaming but for coding and stuff?
In India. Should be decent.
It says GCN, not RDNA, because RDNA is just GCN 6 and RDNA 2 is GCN 7. Everyone said I was crazy. Joke's on y'all.
This isn't right AT ALL! The reason he's talking about GCN here is that CDNA (their upcoming explicitly GPGPU architecture) is a direct GCN derivative, NOT RDNA. RDNA 2 in particular has abandoned almost all of its major GCN DNA, whereas CDNA is essentially next-gen Vega (because GCN ALREADY has superior TFLOPs/mm² AND TFLOPs/watt vs the gaming-focused RDNA) with all the graphics & gaming hardware stripped out entirely in favor of more CUs/stream processors.
RDNA is a different implementation of the GCN ISA for gaming workloads; it's very different (hardware-wise) compared to GCN 1/2/3/4/Vega.
@@Cakerpie So what do you think would be a good name for the architecture?
This is worse than Nvidia announcement
This is just educational, not meant for consumers to care about.
It wasn't an announcement for any new product at all nor was it intending to be.
Not everything is meant for the basic consumer.....
This is the worst comment I've seen today.
Found the inshill nvidiot. LOL
AVOID AMD GPUs !!