How to Choose an NVIDIA GPU for Deep Learning in 2021: Quadro, Ampere, GeForce Compared


Comments • 182

  • @HeatonResearch
    @HeatonResearch  2 years ago +1

    I updated this video for 2023! th-cam.com/video/F1ythHjdWI0/w-d-xo.html

    • @yor.senpai._7233
      @yor.senpai._7233 23 days ago

      Sir, can you please advise me? I am building a PC for my deep learning tasks, and I may possibly run LLMs as well. I am not on a big budget, but I am considering an MSI 4060 Ti 16GB. Will it be good, or can I get better performance for the same or a cheaper price? For the processor I have a Ryzen 7 7900X, and the motherboard is a Gigabyte B650M XAX.
      Can you please give a solution, sir? It's kind of urgent 😢

  • @HeatonResearch
    @HeatonResearch  3 years ago +139

    Between cryptominers and scalpers I feel like I should rename this video to "What GPU to Dream about Getting in 2021" 😞

    • @TheExard3k
      @TheExard3k 3 years ago +2

      Mining is very strong competition for the more scientific applications of modern GPUs. But prices seem to be crashing at the moment, so I might actually get my hands on one of the higher GeForce cards for all those juicy CUDA cores. It's really a shame the manufacturers don't offer cards with more/less memory on board like they did "in ages past". A 3070/80 with an extra 4GB could be a game-changer for deep learning workloads.

    • @Jibs-HappyDesigns-990
      @Jibs-HappyDesigns-990 3 years ago +1

      😶‍🌫🤩

    • @zhoupact8567
      @zhoupact8567 3 years ago +2

      I feel like I am poking at stuff way beyond my grade. I have been looking a bit at the Nvidia Tesla V100, but I pretty much have no idea how it would hold up compared to anything else.
      I do not even know if I would be capable of figuring out a meaningful use for it. I want to do some deep learning stuff and make my own AI text adventure model nonsense. But I already got stumped at the basic coding stuff. And I am still unsure how I managed to screw up my GTX 1080 so that it now lags when trying to play in VR. I know it probably is because I tried to make my poor 8GB card run some basic text adventure stuff, and something among the drivers and other things I had to install to get that going probably messed up something one way or another... if it was not something related to the coding environment. :/
      I guess what I want at this point is probably a combination of AI text processing, VR, bitcoin mining? Something? ... I have no idea how bitcoin mining even works; is it even possible to turn such a process on and off? I mean, bitcoin is money, right? If I bought such an expensive setup, it would not hurt if it could pay back at least a tiny bit of itself over time, during the 98% of the time it's not in use.
      ... I checked out some of your other videos. I am absolutely subscribing! You cover a lot of interesting topics.

    • @cezarstietot2417
      @cezarstietot2417 3 years ago +1

      still relevant to this date ;0

    • @jeremymoon9088
      @jeremymoon9088 2 years ago +1

      NOT NO MO!

  • @3monsterbeast
    @3monsterbeast 4 years ago +9

    I don't think I've seen a better explanation of this. Thanks a lot!

  • @ironmonger4498
    @ironmonger4498 4 years ago +30

    For people dipping their toes into ML and DL, and those who are not 100% sure that this is their future, Colab/Kaggle notebooks and even AWS/GCP/Azure instances are excellent options in my opinion, since the time required to earn back the investment is pretty long.
    Also, Prof. Heaton, could you compare the performance increase between the 20 and 30 series? I am interested in knowing whether the price hike (and general unavailability) of the 30 series is worth it.

    • @HeatonResearch
      @HeatonResearch  4 years ago +8

      I agree 100%, I own higher-end GPUs and often use CoLab myself.

    • @denysivanov3364
      @denysivanov3364 3 years ago +2

      @@HeatonResearch Colab was nice, but last month they cut the hours per month very significantly :( GPU prices have also gone crazy. Kind of two problems hitting at the same time.

  • @HeatonResearch
    @HeatonResearch  4 years ago +55

    Okay, looks like NVIDIA will be providing me with a RTX A6000! Thank you NVIDIA! Look for an unboxing coming soon! Subscribe, so you do not miss it. Also, one correction to the video, there is no NVLINK on the RTX 3080.

    • @Garycarlyle
      @Garycarlyle 4 years ago

      For free? How if I may ask? I guess because you are a known academic.

    • @HeatonResearch
      @HeatonResearch  4 years ago +7

      @@Garycarlyle It's a number of factors. I am an influencer, which is a combination of academic and social media.

    • @Garycarlyle
      @Garycarlyle 4 years ago +2

      @@HeatonResearch One good thing about tech is most people will have a GPU even better than that soon enough. I mean that in a nice way. Looking forward to more of your videos. :)

    • @lineage13
      @lineage13 3 years ago

      Can you talk about tensor cores!

    • @AndrianHendrawan
      @AndrianHendrawan 3 years ago

      Congratulations

  • @tpocomp
    @tpocomp 4 years ago +13

    Great video, Jeff. Thank you! Trying to add some info to your great video:
    I'm using NVLink on 2x RTX 8000. I have done lots of experiments in TensorFlow with NVLink on and off.
    In summary, with the same model resolution, settings, and batch size:
    2x 8000 with NVLink: around 700 ms vs. 2x 8000 without NVLink: around 1050 ms (so you get a 30-40% speedup when using NVLink).
    Ampere vs. Turing:
    Limiting the model to use only 24GB of VRAM and comparing the 3090 vs. the 8000:
    You can only get around the same speed as 1x 3090 by using 2x 8000 with NVLink. Without NVLink, 2x 8000 is around 25% slower than Ampere.

  • @projectbravery
    @projectbravery 3 years ago +21

    What do you think about the new RTX 3060s for deep learning? They have 12 GB of VRAM; would that outperform a 3080? My guess is no, since it also depends on the CUDA cores. Could you do a video talking about this? I am not sure if you have.

  • @benjamintrathen6119
    @benjamintrathen6119 3 years ago +27

    I would've liked to see an RTX 3060 thrown into the mix.
    I would like to know whether bus width matters for deep learning applications, since a 3060 has 12GB of VRAM.

    • @thomassieghold2698
      @thomassieghold2698 2 years ago +1

      Yeah, definitely good price efficiency with the regular RTX 3060 for deep learning

    • @HaunterButIhadNameGagWtf
      @HaunterButIhadNameGagWtf 2 years ago

      @@thomassieghold2698 but not so many CUDA cores

  • @eckee
    @eckee 3 years ago +27

    I have the answer, to save you some time: you don't choose a GPU in 2021.
    You just wait for the new generation. Not only that, you'll probably need to wait until summer 2022, because people and scalpers will panic-buy again.

  • @ichallengemydog
    @ichallengemydog 3 years ago +6

    What do you think about the new Quadro RTX A4000? A lot fewer CUDA cores than, say, the RTX 3080, but that 16GB of RAM will fit bigger batch sizes. Also, the lower-end Quadro cards are easier to find.

  • @eduardogonzales6945
    @eduardogonzales6945 3 years ago +3

    Very useful; most of my tech channels focus on gaming performance for this kind of hardware and completely disregard ML, DL, and other data science uses. I was looking for a compromise between gaming and productivity, thanks.

  • @pusj61
    @pusj61 4 years ago +8

    11:15 I don't think that 3080 supports NVLink

    • @medasani
      @medasani 4 years ago +3

      Yeah, there's no NVLink support on the 3080

    • @HeatonResearch
      @HeatonResearch  4 years ago +5

      Good point, you could still run two of them, though. But yes, no NVLink.

  • @joliver1981
    @joliver1981 4 years ago +34

    I agree with your take on the 3090, but good luck finding one for $1500. They don't exist.

  • @navarclaudio
    @navarclaudio 2 years ago

    Thank you very much for this video. It is probably the best one on this topic, and I'm now very clear about what I need for my PhD!!

  • @MarcelloNesca
    @MarcelloNesca 3 years ago

    Wow!!! Thanks a ton!! Fantastic video!!

  • @tericgreenan8816
    @tericgreenan8816 2 years ago

    Super informative video!

  • @ProjectPhysX
    @ProjectPhysX 1 year ago

    It's generally the same set of considerations for computational physics GPU simulations. VRAM capacity is the most important factor, and VRAM bandwidth comes second. I'm using 8x Instinct MI200 to pool their 512GB of VRAM for super-large-resolution fluid simulations.
    The TFLOPS of the GPU itself is irrelevant, and so is the GPU generation. If multi-GPU is supported by the software, then it's always model parallelism, pooling/combining the VRAM of multiple GPUs. Looking at affordable options, the 3090 (non-Ti) is super appealing here.

  • @hernanmartinez1258
    @hernanmartinez1258 3 years ago +1

    This is pure gold!

  • @ibrahimadiallo7444
    @ibrahimadiallo7444 3 years ago +1

    Best explanation on TH-cam, thank you!

  • @11folders
    @11folders 3 years ago

    Thanks for the thorough guide!

  • @TheExard3k
    @TheExard3k 3 years ago

    The vendor at 16:45 is System76, just if you want to know where to get really good Linux systems. They are the Linux Apple, but with open-source firmware and exchangeable parts. No Quadros yet :(

  • @shamsulazhar
    @shamsulazhar 3 years ago

    Thanks for the info; I was looking for something for deep learning (and also some deep mining on the side... hehe)

  • @diegomauriciomejiasegura2788
    @diegomauriciomejiasegura2788 4 years ago +1

    Thanks Jeff. Hi from Colombia 🤙

  • @techzung
    @techzung 2 years ago

    What are your thoughts with the M1 Max chips?

  • @ShubhamChawla
    @ShubhamChawla 3 years ago

    How do the AMD Radeon 6600 GPUs perform? Are they as good as Nvidia?

    • @HeatonResearch
      @HeatonResearch  3 years ago +1

      They are an order of magnitude less compatible with machine learning libraries and not available in the mainstream clouds. These are the main reasons I've shied away from AMD. Not on the CPU side; there I am a major AMD "fanboy".

  • @minma02262
    @minma02262 3 years ago

    Thank you for your explanation and for sharing!

  • @ianh5537
    @ianh5537 4 years ago +1

    Thanks, Jeff!

  • @mercadolibreventas
    @mercadolibreventas 3 years ago +1

    Would it be better to buy 4x K80 or M40 24GB cards, or to buy the overpriced RTX 3090? What's your professional suggestion?

  • @ashwinjayaprakash7991
    @ashwinjayaprakash7991 2 years ago

    Can we have the link to the GitHub page that mentions the config details?

  • @sbpdev5908
    @sbpdev5908 4 years ago +4

    Thank you for sharing this! Apart from the RAM, how do the clock speed and number of CUDA cores affect the choice? Is it true that more CUDA cores = more parallelization, and hence a speedup? Thanks again!

    • @fnorgen
      @fnorgen 4 years ago +6

      If you're comparing cards based on the same architecture, then yes. Just beware that exactly what a CUDA core is tends to change slightly from generation to generation.
      For example, the RTX 3000-series cards have simpler cores than the RTX 2000 series, so despite the 3000 cards having roughly twice as many cores as their 2000-series equivalents, they're far from twice as powerful in most workloads. The 3000 cards are really good, but not overwhelmingly better than the older cards, especially at current prices.
      You're generally better off comparing benchmark scores rather than core count and frequency.

    • @sbpdev5908
      @sbpdev5908 4 years ago

      @@fnorgen Thank you for the explanation

    • @sumitbali9194
      @sumitbali9194 4 years ago

      @@fnorgen Hi, should I consider getting a used Quadro card instead of a new 3000-series card to save some money, or would it be better to spend a little more and get the new-architecture GPU?

    • @fnorgen
      @fnorgen 4 years ago +1

      @@sumitbali9194 I suppose it is worth looking into. It depends on what kind of prices you find. If a used Quadro gets you significantly better performance per $ for your intended use, then I say go for it. If performance per $ is about the same, then you are probably better off buying one of the new cards if it is still within your budget. If you can find one for sale.

  • @rdwnilyas
    @rdwnilyas 3 years ago

    Hi Heaton. The RTX 2060 has more tensor cores than the RTX 3060, but the RTX 3060 has more CUDA cores and memory. Which one do you think is better to buy for deep learning work?

  • @nullpointerexception1685
    @nullpointerexception1685 3 years ago

    Would you choose a Quadro P6000 over a 2080 or a Titan Xp? Pascal Quadros are much cheaper now. The 24GB of VRAM is really pulling me towards a P6000. 🤔

  • @arwo1143
    @arwo1143 2 years ago

    For the price of one RTX 3090 you can buy between 5 and 8 used Tesla M40 cards.
    They have 24GB as well.

  • @stuartsherman5975
    @stuartsherman5975 2 years ago

    Very informative videos, thanks. Would you please point me to your sources showing that RTX 3080s can use NVLink? The info I see shows only the RTX 3090 having NVLink capability. Any light you could shed would be greatly appreciated.

  • @cinematic_monkey
    @cinematic_monkey 2 years ago

    Would NVLinking two 3090s work as having 48GB of VRAM for training?

  • @HaunterButIhadNameGagWtf
    @HaunterButIhadNameGagWtf 2 years ago

    I am not fully knowledgeable about deep learning yet, but you are talking only about image processing. What if I want to train my model on something with, e.g., 20 inputs? That means my matrix is, e.g., 20x100, so RAM is not so important for me and I should look at the number of cores? Right now I'm thinking about a 3060 Ti because of its ~5k CUDA cores, but we'll see what changes with the new GPUs on the market in December.

  • @patricecarbonneau6444
    @patricecarbonneau6444 3 years ago +1

    Any thoughts on the new Quadro RTX A5000? Or, more generally, on the benefits of tensor cores? I ask because the RTX A5000 actually has fewer tensor cores than the older RTX 5000. Which would you get if you were building a TensorFlow workstation?

    • @HeatonResearch
      @HeatonResearch  3 years ago +1

      I like the A5000, I would get along with it just fine. I rarely use the full 48GB that my A6000 has, and the A5000 has around 80% of the cores of an A6000. Not everything always scales to using all of those cores at 100%.

    • @chriskorfmann
      @chriskorfmann 3 years ago

      @@HeatonResearch Similar question: what are your thoughts on an A4000? In an ideal world I would aim for a 3080 at that price point, but in the 2021 world I have only found the A4000

  • @Lukrafiveman
    @Lukrafiveman 3 years ago

    What's the difference between the M40 and the P40, besides the price?

  • @gercohassink4159
    @gercohassink4159 3 years ago +1

    I am still a bit confused after watching your videos. In my research group we have a PC with 24GB of RAM and a 1080 8GB for deep learning using TensorFlow/Keras. What would make more sense to increase speed: buy another 1080 (8 or 11GB), or will (almost) any newer (and cheaper) higher-spec NVIDIA card make a larger contribution?

    • @HeatonResearch
      @HeatonResearch  3 years ago

      Depending on the structure of the neural network being trained, 2 GPUs does not always mean a speedup, or the ability to process twice as large a neural network. Generally a 2nd GPU will nearly double the speed of training, if there is enough data that it overcomes the overhead of two GPUs. Using the RAM of both GPUs for a single neural network requires customization of the neural network code. So it is hard to say for sure which option is best without trying the hardware. I tend to go for the newer, bigger GPUs, but that is not always necessary.
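
The reply above can be sketched numerically. In data-parallel training each GPU computes the gradient on half of the batch, and averaging the two half-batch gradients reproduces the full-batch gradient, which is why a 2nd GPU can nearly double throughput once the data outweighs the per-step overhead. A minimal pure-Python illustration (a hypothetical one-weight linear model; none of this is the video's code):

```python
import random

def grad(w, batch):
    """Mean gradient of the squared error (w*x - y)**2 with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

random.seed(0)
data = [(x, 3 * x + random.uniform(-0.1, 0.1)) for x in range(1, 9)]  # toy y ~ 3x

w = 0.5
# "GPU 0" and "GPU 1" each compute the gradient on half of the batch
# (in a real framework these two calls run concurrently, one per device).
g0 = grad(w, data[:4])
g1 = grad(w, data[4:])
g_parallel = (g0 + g1) / 2   # the all-reduce step: average the per-GPU gradients
g_serial = grad(w, data)     # what a single GPU computes on the whole batch

print(abs(g_parallel - g_serial))  # ~0: same update, computed in half the time
```

This only shows why splitting a batch is mathematically safe; the overhead the reply mentions (launching kernels, synchronizing the averaged gradients) is what eats the gain on small workloads.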

    • @gercohassink4159
      @gercohassink4159 3 years ago

      @@HeatonResearch Thanks for your answer, and sorry for my late reply; due to circumstances I was out of the office for a while. Ok, let's assume we keep our data set under 8GB and use the cards in simple parallel. Any NVIDIA card equal to or faster than the 1080, with 8GB or more, will show some improvement, I assume, or should I use exactly identical cards? The reason I am asking is that we first want to get some cheap experience with parallel GPUs before we start building a larger cluster with newer hardware.

    • @gercohassink4159
      @gercohassink4159 3 years ago

      I guess you answered this in "5 Questions about Dual GPU for Machine Learning (with Exxact dual 3090 workstation)" (th-cam.com/video/_d3xs1L4jeA/w-d-xo.html). So a 1080 won't work together with a 2060 or 3090; well, that limits the choices. Many thanks!

  • @muhammadammarmm7361
    @muhammadammarmm7361 2 years ago

    My GTX 1050 has a compute capability score of 6.1, and the GTX 1060 has a score of 6.1 too. Does that mean the GTX 1050 has the same performance as the GTX 1060 for ML training?

  • @lakshminarasimmanv
    @lakshminarasimmanv 3 years ago

    Can you please make an updated video?

  • @HannesZietsman
    @HannesZietsman 2 years ago

    update please for 2022

  • @mp-xt2rg
    @mp-xt2rg 3 years ago

    What about the 3060? It has 12GB of RAM

  • @pubudutharaka9604
    @pubudutharaka9604 3 years ago

    Hello Sir,
    What do you think about using the Nvidia GTX 1650 Super for deep learning?

  • @mikefalls2405
    @mikefalls2405 3 years ago

    So, I'm looking at getting a Vision 3090 OC to pair with my 3080 Ti. I have Gigabyte's B550 Vision D-P and was wondering: if I use the 3090 in the first PCIe 4.0 x16 slot and connect the 3080 Ti to the 4.0 x8 slot, would cutting the available PCIe lanes in half to one GPU make a significant difference in training time?

  • @NEWLOCALBUS
    @NEWLOCALBUS 3 years ago

    What is the actual difference between a Quadro RTX card and a GeForce RTX 3090? What do you think of the RTX 5000 16GB vs. the GeForce RTX 3090 24GB?

  • @sebastiangabriel3392
    @sebastiangabriel3392 3 years ago

    What do you think about the RX 6900 XT for deep learning using Python, R, and Octave for financial analysis? Do I really need an NVIDIA card for it? As I've just started getting into ML, I lean towards the RTX 3090, RTX 3080, and RX 6900 XT, as I noticed it has 16 GB of VRAM.

  • @yahyakhattak2661
    @yahyakhattak2661 2 years ago

    I'm working on an NLP-related deep learning project on a 6 GB RTX 3060 laptop GPU. I frequently run into errors like "CUDA out of memory". Any suggestions? Should I change the hardware, or is there a workaround?

  • @DeepFrydTurd
    @DeepFrydTurd 2 years ago

    What would be cheaper in the long run: buying your own $10,000 GPU, or just outsourcing the hardware to the cloud?

  • @iroshanijayawardene8647
    @iroshanijayawardene8647 3 years ago

    Hi, I already have a GeForce RTX 2080 Ti and am planning to expand by adding an RTX 3080/3090. Is it possible to link the two GPUs with NVLink?

  • @SnippetBucketTechnologies
    @SnippetBucketTechnologies 3 years ago

    To set up a server with 2x Intel Xeon Silver 4114 (10 cores) in a local data center, which NVIDIA GPU would be best?

  • @flyerzy
    @flyerzy 3 years ago

    What about tensor cores and FP16/FP32/INT8...?

  • @BradHelm
    @BradHelm 4 years ago

    I had hoped to pick up a 30-series GPU at launch, but that didn't happen. Right now I'm waiting to see what kind of availability we will see when the new SKUs come to market featuring different RAM/core count combinations. Like you, I prefer the larger RAM sizes in order to work with larger models and I'll trade off core counts for better RAM. Unfortunately, 20-series cards aren't an option right now because of the lack of availability of 30-series cards for the general market.

    • @HeatonResearch
      @HeatonResearch  4 years ago

      Have similar thoughts myself on this.

    • @bigdreams5554
      @bigdreams5554 4 years ago

      Same here... thinking about the A6000; overkill for me at the moment, but it actually seems to be available for purchase. I had hoped to get two 3090s for my multi-GPU setup, but that seems like a pipe dream at this point.

  • @TheThunderSpirit
    @TheThunderSpirit 3 years ago

    I have the budget for either an RTX 2060 or a GTX 1660 Ti, both 6 GB, but the RTX has 'tensor cores'.
    How much performance gain can I expect from the RTX over the GTX at a memory size as small as 6GB? Is it worth spending more on the RTX 2060?
    Edit: for laptops only

  • @theblackariel5957
    @theblackariel5957 2 years ago

    I am a student; is the RTX 3060 6GB okay for machine learning?

  • @joaobarreira5221
    @joaobarreira5221 3 years ago

    Hi,
    I'm working in computer vision and I need to improve my setup; what about an eGPU?
    I like the portability of working with a laptop, but I have some doubts about the performance of using a laptop with an external RTX (eGPU).
    I would like to know your thoughts on doing deep learning with a laptop + eGPU.
    Best regards

  • @Darkev77
    @Darkev77 3 years ago

    Dr. Jeff, thoughts on the GeForce RTX 3070 in the new Razer Blade 14?

    • @HeatonResearch
      @HeatonResearch  3 years ago

      That is actually one of the computers I am considering for my own laptop, when I upgrade my Windows machine. It looks like a solid machine.

  • @ofrylivney367
    @ofrylivney367 2 years ago

    Hey Jeff! Awesome content; I really enjoyed the video.
    I've got a question about the StyleGAN training table: what do they mean by 25000 kimg? Is it 25 thousand or 25 million images?
    Thanks!

  • @mkloppel
    @mkloppel 2 years ago

    Correct me if I am wrong, but typical software requires consistency across similar hardware that's more meaningful than just the brand (e.g., 2 MSI 3090s or 2 ASUS 3090s; it can't be one of each, or two different brands/models).
    This is what I've come to understand. I've never run SLI; I own a 3090 and owned a 1080 before this. But I thought software such as "OpenMPI" requires system consistency when combining. (I have another PC I would bring into the mix.)
    I honestly haven't confirmed this; I've just seen it in documentation at various times, either implied or stated, and am curious whether this is *technically* not the case.

  • @natsukidaichi4438
    @natsukidaichi4438 4 years ago +1

    Are quadro cards good for machine learning?

  • @Mobay18
    @Mobay18 4 years ago +1

    I hope more people start developing for Ampere using TensorFlow 2.x. I wish I could train StyleGAN2-ADA on my Ampere card, but the Nvidia code only uses the old CUDA 10 and TensorFlow 1.x

    • @HeatonResearch
      @HeatonResearch  4 years ago

      Check out the new StyleGAN2-ADA, now for PyTorch. It uses Ampere. th-cam.com/video/BCde68k6KXg/w-d-xo.html

    • @Mobay18
      @Mobay18 3 years ago

      @@HeatonResearch Thank you! Going to try this out!

  • @Dpower
    @Dpower 3 years ago

    Hi Jeff! Thanks for the useful information. Sorry if there has already been a question about gaming laptops: I saw the MSI GE76 Raider model with an Nvidia 3080 16GB. Can it compare with desktop cards, or do mobile cards never compete with the PC versions? Thank you

  • @shadowmist1246
    @shadowmist1246 3 years ago

    Supermicro has workstations that carry 4x double-width GPUs.

  • @babushkabunny2767
    @babushkabunny2767 3 years ago

    Sir, I am about to start my master's program in AI. Could you please recommend which workstation I should buy on a limited budget (5-6k)?
    I used Google Colab, but it is difficult to do computer vision projects with it. Thank you!

  • @boquangdong
    @boquangdong 3 years ago

    I have enough money to buy 1 RTX 3090, but I could buy 2 RTX 3080s with the same money. Which would you choose?

  • @Handlefv
    @Handlefv 4 years ago

    The Nvidia 3060 (non-Ti) has been announced, and it will come with 12 GB ($329). This is the lowest tier so far in the 3060 series. Does it make sense to get a slower GPU if it has more memory, compared to the 3080 and 3070?
    I will be starting my learning now and am not sure whether I should invest in something less powerful now and upgrade next year to a high-end 4000-series card.

    • @bigdreams5554
      @bigdreams5554 4 years ago +1

      RAM matters more. What type of model are you training, and how much RAM does it need? 12GB should be great to start.

    • @tclf90
      @tclf90 3 years ago

      If you just want to try out and learn ML, then yes: more RAM means more options to choose from and to experience.

  • @eketiger
    @eketiger 4 years ago

    Hi Jeff, what do you think about Jetson? Can you make a video comparing Nano, NX, AGX and TX2? Regards from Argentina!

    • @HeatonResearch
      @HeatonResearch  4 years ago

      I really have not gotten into Jetson at this point. Comparison videos can be difficult unless I own all of the things compared, so I generally focus on how to use technology.

  • @bralis2
    @bralis2 4 years ago

    What is the Quadro P4000's FP64 performance?

  • @JacekBrauner
    @JacekBrauner 4 years ago

    You could make a video on how to deal with big data, for example 100,000,000 rows. That would be interesting. At this size, even a simple SQL count takes forever in PostgreSQL

    • @kharthigeyan
      @kharthigeyan 3 years ago

      Run VACUUM, or use Teradata/Spark/Redshift

  • @stelluspereira
    @stelluspereira 3 years ago

    Mr. Heaton, THANK YOU. I work mostly on a MacBook Pro 2020 (Thunderbolt). What is your recommendation for an external Nvidia GPU?

    • @colombelle
      @colombelle 3 years ago

      If it's an M1, I think they're not compatible with eGPUs.

    • @stelluspereira
      @stelluspereira 3 years ago

      Thank you, Sir

    • @denysivanov3364
      @denysivanov3364 3 years ago

      MacBooks don't work with Nvidia... There are tricks, but generally Apple does not support Nvidia drivers.

    • @habasch51
      @habasch51 3 years ago +1

      @@denysivanov3364 To my knowledge, they do if you install Windows on them using Boot Camp and if they have an Intel CPU...

    • @denysivanov3364
      @denysivanov3364 3 years ago

      @@habasch51 Maybe it's just a macOS thing.

  • @nukulkhadse5253
    @nukulkhadse5253 3 years ago

    Nvidia does not recommend using the RTX 30 series for DL, saying it's a consumer-grade card for gaming and cannot be used for DL. For DL they recommend the Tesla and Quadro series cards, which are much costlier with almost the same amount of RAM and CUDA cores. They say you can use consumer cards for DL, but their performance will drop over a period of time. If this is true, it might be due to the form factor or other parameters of the circuit, but I'm not sure. I'd appreciate it if you could explore the feasibility of consumer cards for DL and make a video on it.

    • @denysivanov3364
      @denysivanov3364 3 years ago

      Good luck with a $14k A100 and a $7k used V100.

    • @boquangdong
      @boquangdong 3 years ago

      @@denysivanov3364 Haha, Nvidia cuts performance on the 3090 and 3080. I have used a 3080 and a Tesla K80.

    • @RAVIRAJPRAJAPATmech
      @RAVIRAJPRAJAPATmech 3 years ago

      I'm also looking for clarity on this. Please do mention it here if anyone has an answer to this question.

    • @eckee
      @eckee 3 years ago

      @@RAVIRAJPRAJAPATmech Nvidia software-locks GeForce cards for certain tasks in order to be able to sell the pro series at stellar prices. However, for 90% of tasks you won't notice.
      But I also want to point out that if you're going to use the GPU intensively, non-stop, always working, etc., you may want to go for the pro-series cards, because they are built to last longer under stress in tight and hot spaces for a long time. They are easier to stack, and you'll get better customer support in emergencies. GeForce is for consumers, who aren't keeping their PCs running high-end tasks non-stop.

  • @Meat_City
    @Meat_City 3 years ago +5

    Thanks for the video. As someone who's trying to get in on a budget, with no end in sight to the GPU shortages, what do you think about picking up an old Tesla K80? They can be had for a pittance on eBay right now (under 200 bucks)

    • @fangchen6805
      @fangchen6805 3 years ago +1

      Newer PyTorch doesn't support the K80 anymore, but you can use an older version; it is painful, though.

  • @firefoxmetzger9063
    @firefoxmetzger9063 3 years ago +1

    There is a problem with choosing an RTX 3080 over an RTX 3070: the additional RAM is nice, but it comes at the cost of bus width. To this day I don't know why the 3080 has a lower bus width than the 3070 or 3090, but I do think this is worth considering when buying a GPU. It is one of the (if not the) main bottlenecks for throughput in optimized pipelines.

  • @quantum3712
    @quantum3712 3 years ago

    Hello, great content. I am getting into data science and aim to go into deep learning. I have heard that cloud services like Google Colab give nice GPU performance for deep learning work. I am on a budget and can't afford an RTX-class Nvidia card that is actually able to do deep learning work. I am getting a good deal on a non-GPU laptop, a Lenovo ThinkPad E series Gen 3 with an AMD Ryzen 7 5800U and Vega 8 onboard graphics. Will it be sufficient to learn data science and do machine learning? Thanks in advance.

  • @loaialoamro8199
    @loaialoamro8199 4 years ago

    Thank you for the nice explanation.
    Is it possible to use an eGPU over Thunderbolt 3/4 to train deep learning models on a laptop?

    • @HeatonResearch
      @HeatonResearch  4 years ago +1

      That is not something I've tried. Though I am thinking of trying an eGPU with my previous GPU.

    • @loaialoamro8199
      @loaialoamro8199 4 years ago

      @@HeatonResearch Thank you so much

  • @MrSteve-hy9yo
    @MrSteve-hy9yo 3 years ago

    So, coming at this as a gamer and AI cloud engineer, I am curious. I was thinking of a setup with 2x 3090 NVLinked, plus a 3rd GPU, an RTX A6000 (this would be my main GPU for all deep/machine learning duties). My main OS will be Linux, but I'm curious to know if anyone has a similar setup and what their experience has been.

    • @HeatonResearch
      @HeatonResearch  3 years ago

      That would be an interesting setup, and I guess in theory it might work. I've never tried a heterogeneous multi-GPU setup.

    • @johntennant2428
      @johntennant2428 3 years ago

      Following... I was wondering the same thing (but running on Win 10)

    • @evolvere9021
      @evolvere9021 3 years ago

      Steve, were you able to complete your build with 2x 3090 NVLinked together and a standalone A6000? I'm asking because I am on that path, currently with 2x 3090 NVLink, and contemplating getting an A6000, primarily for DL. Also, what motherboard would you recommend (Intel processor)? Thanks

    • @MrSteve-hy9yo
      @MrSteve-hy9yo 3 years ago

      @@evolvere9021 Not yet. I actually built most of it, but have not settled on local storage. I am using the AMD Ryzen Threadripper Pro 3995WX (I require lots of cores and a high amount of RAM). The motherboard is the ASUS Pro WS WRX80E-SAGE SE (extended); I needed a large motherboard (and large case) so it can fit the 3 GPUs (2x 3090 NVLinked) and the A6000. I got it all running on a single M.2 NVMe SSD, but with all the work I will be doing, I am looking for a hardware RAID card that can take up to 4 NVMe SSDs (Samsung Pro 2TB) in RAID 10. On the single drive, the setup works pretty well. I haven't really benchmarked it, but it screams, and I was able to run some decently fast machine learning instances. I will be looking to run some neural net cases on the A6000, but so far so good. The NVLinked 3090s work well, but keep in mind I only use them for video editing and watching some 8K movies (not a lot so far), and the cards work fine. I put this project on hold as I am waiting on the 3090 Ti cards as well as the hardware RAID card for the NVMe drives (RAID 10). Also, I am waiting to get 4x 8K monitors; I am currently using a friend's and it's beautiful. My contacts at AMD are also telling me to wait another 6-12 months, as the new Threadripper Pro CPU is coming out with double the cores. I need a lot, as I also run multiple VMs (Windows and other Linux distros) for other purposes, as well as some Docker clusters for various container-related work I do. I am still thinking about secondary storage (8x 4TB SATA SSDs in RAID 10); this is where I will store my VMs and container-related work. If all goes well, hopefully it will be done by the end of 2022. Given the high cost, I am treading carefully here to make sure I spec out the right high-end workstation.

  • @rakon8496
    @rakon8496 3 years ago

    In gaming, SLI has been basically dead for some years. The differences between the 3090 and 3080 are marginal, until the 3080 runs into memory limits, as it does with Microsoft Flight Simulator more often than not... THX! 💙

  • @nasimhuq1726
    @nasimhuq1726 3 years ago

    Which GPU to choose for Laptop - Quadro 5000 16GB or RTX 3080 - 16GB?

    • @HeatonResearch
      @HeatonResearch  3 years ago +3

      I've been considering that myself. I would go with 3080, so you have the new Ampere architecture.

  • @JuanCarlosGarfiasTovar
    @JuanCarlosGarfiasTovar 4 years ago

    What do you think about eGPUs?

    • @HeatonResearch
      @HeatonResearch  4 years ago

      They seem very interesting; however, it is not something that I have worked with.

  • @adipramono1666
    @adipramono1666 2 years ago +2

    If you get the 3090, don't forget to "copper mod" your memory. GDDR6X can run at around 110 °C, and after applying copper it can go as low as 85 °C or so.

  • @HeavyMetal56339
    @HeavyMetal56339 3 years ago

    Great video, unfortunately GPU prices in 2021 are very high :(

  • @hoaxuan7074
    @hoaxuan7074 4 years ago

    Can't buy a GPU card? Try Continuous Gray Code Optimization (PDF online) on a CPU cluster. Each CPU has the full neural model and part of the training set (which can be local and private to the CPU.) Each CPU is sent the same short sparse list of mutations to make and returns the cost for its part of the training set. The costs are summed; if there is an improvement, an accept message is sent to each CPU, else a reject message. Very little data moves around per second, hence the CPUs can be anywhere on the internet. The mutation operator is a random plus or minus
    a * exp(-p * rnd())
    If the weights are constrained between -1 and 1, then a = 2 to match the interval. The so-called precision p is a problem-dependent positive number. The function rnd() returns a uniform random number between 0 and 1. If a mutation is too big, it is simplest just to reject it.
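The mutate/evaluate/accept loop described above can be sketched in a single process (the distributed version would just split cost_fn across CPUs and sum). This is my own minimal reading of the comment, not the referenced paper's code; the parameter values are illustrative.

```python
import math
import random

def cgo_minimize(weights, cost_fn, steps=1000, a=2.0, p=25.0, n_mut=5):
    """Mutation search in the style described above, single process.

    Each step perturbs a short sparse list of weights by a random
    +/- a*exp(-p*rnd()) amount and keeps the change only if the
    total cost improves -- the accept/reject message of the
    distributed scheme, collapsed into one process.
    """
    best = cost_fn(weights)
    for _ in range(steps):
        idx = random.sample(range(len(weights)), min(n_mut, len(weights)))
        saved = [(i, weights[i]) for i in idx]
        for i in idx:
            delta = random.choice((-1.0, 1.0)) * a * math.exp(-p * random.random())
            # clamp instead of rejecting outright when a mutation leaves [-1, 1]
            weights[i] = max(-1.0, min(1.0, weights[i] + delta))
        cost = cost_fn(weights)
        if cost < best:
            best = cost            # accept: broadcast "keep it"
        else:
            for i, v in saved:     # reject: roll the mutation back
                weights[i] = v
    return weights, best
```

With a = 2 and a large p, most perturbations are tiny and a few are large, which is what lets the search both fine-tune and escape.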

  • @nguyenngocly1484
    @nguyenngocly1484 4 years ago

    Dude, ReLU is a switch. f(x)=x connect, f(x)=0 disconnect. The dot product of a number of dot products is still a dot product. Go figure things out from there.
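The switching view in this comment can be shown with a toy two-weight chain: while an input keeps the ReLU "switched on", the whole network collapses to a single linear map (weights here are arbitrary illustrative values).

```python
def relu(x):
    # f(x) = x when the switch is connected, 0 when disconnected
    return x if x > 0.0 else 0.0

# Toy chain: weight -> ReLU -> weight.
w1, w2 = 0.7, -1.3

def net(x):
    return w2 * relu(w1 * x)

# For x > 0 the switch stays closed, so net(x) == (w2 * w1) * x exactly;
# for x < 0 the switch opens and the output is 0.
```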

  • @davidcmpeterson
    @davidcmpeterson 3 years ago

    Would you choose an NVIDIA Jetson over, say, a cheap RTX 3060 Ti if you're just a student?

    • @HeatonResearch
      @HeatonResearch  3 years ago +1

      Maybe, the thought really has crossed my mind. However, I am not really familiar with the Jetson and have never owned one.

  • @KodandocomFaria
    @KodandocomFaria 4 years ago +1

    Analysing the price... it only makes sense to buy an expensive graphics card if you can earn that money back working with the GPU, because the depreciation is too high. Take a look at the TITAN RTX: it has 24 GB of memory and 4608 CUDA cores and cost $2499 at release in 2018. Then, after 2 years, the RTX 3090 arrived with the same amount of memory but more than twice the CUDA cores (10496) for $1499. So it is too much money to spend if you don't get that return back. If it were cheaper, it would move the market more, making deep learning environments more accessible, and people would upgrade their graphics cards more frequently. But that depends on Nvidia.
    In summary: buy the RTX 3090 only if you can earn that money back, or if you have money to burn.
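The depreciation the comment describes is easy to put in numbers as dollars per CUDA core, using only the launch MSRPs and core counts quoted above (a rough metric -- it ignores architecture differences between Turing and Ampere):

```python
# Launch MSRP and CUDA core counts as quoted in the comment above.
cards = {
    "TITAN RTX (2018)": (2499.0, 4608),
    "RTX 3090 (2020)": (1499.0, 10496),
}

per_core = {name: price / cores for name, (price, cores) in cards.items()}
for name, dollars in per_core.items():
    print(f"{name}: ${dollars:.3f} per CUDA core")
# TITAN RTX works out to about $0.542 per core, the 3090 to about $0.143,
# i.e. roughly a 3.8x drop in price per core over two years.
```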

    • @bigdreams5554
      @bigdreams5554 4 years ago +1

      I think of it this way: will my cloud compute costs exceed the price of a GPU? If not, then just use the cloud. If yes, just get the GPU. You own it, not rent it, and you can sell it used later. With cloud, you could spend thousands and have no equipment afterwards. I tend to keep computers for many, many years, so it's not like I buy a new one every year.
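The break-even logic in this reply is a one-line division; here is a sketch where both inputs are assumptions you plug in yourself (the $1.10/hour rate below is hypothetical, not any particular provider's price):

```python
def breakeven_hours(gpu_price, cloud_rate_per_hour):
    """Hours of rented GPU time that cost as much as buying the card.

    Resale value, electricity, and cloud storage/egress fees are
    deliberately ignored, so this is only a first approximation.
    """
    return gpu_price / cloud_rate_per_hour

# e.g. a $1499 RTX 3090 versus a hypothetical $1.10/hour cloud GPU:
# breakeven_hours(1499, 1.10) is roughly 1363 hours of training time.
```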

  • @neuron8186
    @neuron8186 3 years ago

    totally worth it

  • @sevdev9844
    @sevdev9844 3 years ago +6

    Amusing and sad at the same time, watching this at the end of April. These prices, lol.

    • @hamzaabdikadir2657
      @hamzaabdikadir2657 3 years ago

      They dropped 😁

    • @rmt3589
      @rmt3589 3 years ago

      @@hamzaabdikadir2657 Yus! Let me tell all my dogecoin friends! Been waiting to get into Cryptomining!!! XD

  • @typeer
    @typeer 3 years ago

    Thanks for taking the trouble to make this

  • @orthodoxNPC
    @orthodoxNPC 3 years ago

    no mention of AMD's SSG

  • @wiredrabbit5732
    @wiredrabbit5732 2 years ago

    I have been looking at your videos to learn about Nvidia GPUs. I recently upgraded from a Quadro P2000 and Quadro M1200 to a 3090. It is a serious jump, backwards it seems. It may be because I use remote desktop and Matlab for deep learning, and I can't take advantage of parallelization since I use large LSTMs. Anyway, the new system is slower, sometimes half as fast in training. Did I make a mistake by not buying an A5000? (I had originally ordered an A5000 but gave up on getting it delivered.) Can I recover if I go to the office? I'd love to have your expert advice, or that of any Nvidia contacts, please. Thanks in advance.

  • @landofloss6075
    @landofloss6075 2 years ago

    How do I make money from this

  • @indylawi5021
    @indylawi5021 3 years ago

    Thx for the info. I am just starting ML/deep learning and recently ordered a notebook with an RTX 3050 Ti due to my limited budget. I guess with this RTX 3050 I am pretty limited in doing DL. Bummer!

    • @hoewai1415
      @hoewai1415 3 years ago

      Why do you say so? Is it due to the VRAM? Then what do you think about my GTX 1650?

  • @BeKindPlox
    @BeKindPlox 3 years ago

    Man, I wish I could afford an A100 rn

  • @pjslon9219
    @pjslon9219 3 years ago

    In my test the 1080 Ti is 5 times FASTER than the 3080: 64 sec per epoch on the 1080 Ti, 324 sec per epoch on the 3080. Same RNN model, same code and data. It seems like the software (Conda, TensorFlow) is lagging badly. I configured the system following this video guide and left a comment there as well: th-cam.com/video/nMmCu3Nm1xA/w-d-xo.html
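When comparing per-epoch numbers like these across two cards, a crude wall-clock wrapper is enough; the `train_one_epoch` callable below is a hypothetical stand-in for the commenter's RNN training pass, not their actual code:

```python
import time

def mean_epoch_seconds(train_one_epoch, n_epochs=3):
    """Average wall-clock seconds per epoch over a few runs.

    train_one_epoch is any zero-argument callable that performs one
    full pass over the training data; timing several epochs smooths
    out first-epoch warm-up (kernel compilation, data caching).
    """
    times = []
    for _ in range(n_epochs):
        start = time.perf_counter()
        train_one_epoch()
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)
```

Running the same wrapper on both machines, with the same model and data, gives a like-for-like seconds-per-epoch figure.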

  • @DariusAlexanderMusic
    @DariusAlexanderMusic 3 years ago +1

    Came to the same conclusion as you to get 2x 3090's... now... has anyone seen one???

    • @HeatonResearch
      @HeatonResearch  3 years ago

      If I were building (and paying for) the GPUs myself, that is likely what I would get.

    • @sohailshaikh7454
      @sohailshaikh7454 3 years ago

      Only if we can find one :-(

    • @timonix2
      @timonix2 3 years ago

      @@sohailshaikh7454 3090s aren't so bad to find. It's the lower-end cards that are impossible

  • @vladg1252
    @vladg1252 2 years ago

    RTX 3060 Ti, $399?!😭😭😭😭😭😭

  • @ozziejin
    @ozziejin 3 years ago +2

    MSRP...

  • @bozidarprotulipac3771
    @bozidarprotulipac3771 4 years ago

    No NVLink on the RTX 3080 :)

  • @fnorgen
    @fnorgen 4 years ago

    Well, it's all unobtainium now, so those prices are purely hypothetical. Crypto-miners and scalpers have bought all the good cards. Desperate gamers have bought the rest.

  • @billykotsos4642
    @billykotsos4642 4 years ago

    Mastery of Colab I believe is paramount