Before you buy Google Colab

  • Published on 4 Oct 2024
  • What mystical GPUs do they offer? That's what I explain in this video, along with a general overview of how Colab Pro and Pro+ work, as someone who has bought it himself previously.
    Get educated about coding GPTs: payhip.com/b/u...
    Twitter: @starsaicompany
    #machinelearning #ml #ai #chatgpt #bard #programming #coding #computerscience #computer #chatbot
  • Entertainment

Comments • 33

  • @acromos
    @acromos 7 months ago +2

    Very informative, thanks bro.

    • @stars_ai
      @stars_ai 6 months ago

      I wouldn't suggest Colab, now that I've looked more into it. It isn't only me whose A.I. can't train on it - many people across the world can't either.

  • @franklinfache
    @franklinfache 2 months ago

    Thank you! I'm not gonna waste my money and get disappointed.

  • @lucasquemelli4142
    @lucasquemelli4142 5 months ago +2

    What would be the best option for training LLMs? A PC with excellent hardware or Colab+?

    • @stars_ai
      @stars_ai 3 months ago

      Neither. Having a PC with OP specs sounds better, since Colab+ is not great, but then you would need a lot of money for that type of PC (buying multiple A100s, or T4s, or whatnot) - if you're training an LLM, that is. A smaller model may work fine, but an LLM by definition is a Large Language Model.
      TL;DR: I'd recommend Kaggle (free). I've gotten better results with it than with any other service.

  • @abdumoez2077
    @abdumoez2077 7 months ago

    I suggest you set limits on CPU and GPU usage when using TensorFlow. While you can control TensorFlow's use of OpenMP threads, doing so may result in a trade-off with processing time.

    • @stars_ai
      @stars_ai 7 months ago

      Why would I set limits on a system already failing even at maximum capacity?

    • @abdumoez2077
      @abdumoez2077 7 months ago +2

      @stars_ai Sometimes you need to reduce resource usage during model training, because TensorFlow tries to use the maximum memory available for fast training. With a 3-4 TB dataset, it will definitely hit an OOM (Out of Memory) error, because it tries to load all the data into memory and copy it to the GPU, which leads to a crash. I also suggest running the garbage collector to remove unnecessary variables and release memory, alongside OpenMP limits, so data is loaded according to the given conditions (see the sketch below this thread).

    • @stars_ai
      @stars_ai 7 months ago +2

      Alright, I'll try @abdumoez2077
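
To put the thread above in concrete terms: the idea is to cap TensorFlow's CPU threads (via OpenMP and the threading config), let the GPU allocator grow on demand instead of grabbing all memory up front, stream the dataset from disk in batches, and run the garbage collector between epochs. Below is a minimal sketch under those assumptions; the thread counts, file pattern, and batch size are placeholders, not anything from the video or the comments.

```python
# Sketch of the suggestions above, assuming a TensorFlow/Keras training loop.
import gc
import os

# OpenMP and TensorFlow thread limits must be set before TensorFlow initializes.
os.environ["OMP_NUM_THREADS"] = "4"

import tensorflow as tf

tf.config.threading.set_intra_op_parallelism_threads(4)
tf.config.threading.set_inter_op_parallelism_threads(2)

# Let the GPU allocator grow on demand instead of reserving all memory up front.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# Stream the dataset from disk in batches rather than loading it all into RAM.
files = tf.data.Dataset.list_files("data/*.tfrecord")  # hypothetical path
dataset = (
    tf.data.TFRecordDataset(files)
    # a .map(parse_fn) step would decode each record here, depending on the format
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)

class FreeMemory(tf.keras.callbacks.Callback):
    """Run Python's garbage collector after each epoch to release stale objects."""
    def on_epoch_end(self, epoch, logs=None):
        gc.collect()

# model.fit(dataset, epochs=3, callbacks=[FreeMemory()])
```

Whether this helps depends on the workload; as noted at the top of the thread, capping threads trades some training speed for stability.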

  • @laavanayadhawan6681
    @laavanayadhawan6681 8 months ago +1

    I bought Colab Pro and I got the SAME OOM error. I really felt it was too expensive and not worth the price.

    • @stars_ai
      @stars_ai 7 months ago

      Thinking the same too, I can't lie.

  • @Chadpritai
    @Chadpritai 28 days ago

    What would you recommend to a student who is just starting out in deep learning 😅?

  • @rossinidesigner1
    @rossinidesigner1 3 months ago

    Thanks for the information. What is your opinion of Microsoft Azure?
    Another thing: do you think Copilot Studio is worth $200?

    • @stars_ai
      @stars_ai 3 months ago

      I honestly don't know about Azure.
      Research more about it. Look for reviews on YouTube and everywhere else.
      Don't trust it too easily; it is a lot of money, after all.

  • @AdarshSingh-rm6er
    @AdarshSingh-rm6er 5 months ago +1

    Thanks bro, I'm stuck in the same situation. I am currently using Colab Pro for training transformers and LLMs, and I've exhausted all the compute limits. I don't know what the best long-term solution is. Are cloud services worth it, or is there anything else I can do?

    • @stars_ai
      @stars_ai 5 months ago +1

      I have been recommended Kaggle Kernels. They say it gives you something like 30 free hours of TPU usage per week. I have tried to load it up, in vain so far. But try it - if it works, perfect. I'm still trying to get it running myself.

    • @AdarshSingh-rm6er
      @AdarshSingh-rm6er 5 months ago

      @stars_ai thanks bro

  • @kami_AS
    @kami_AS 6 months ago

    Thank you, I won't waste my money on it then.

    • @stars_ai
      @stars_ai 6 months ago +1

      Yes, please don't. I did, and it was pretty bad.

  • @AI-Stories54
    @AI-Stories54 a month ago

    Brother, how many GB of GPU memory do you get on the Pro plan?

  • @najbighouse
    @najbighouse 7 months ago

    Bought Colab Pro+ to run processes in the background as advertised, and it was pretty useless...

    • @stars_ai
      @stars_ai 7 months ago

      Honestly, I agree. As I said near the end of the video, my A.I. system failed on their best A100 system. I thought it was only me with these issues, but it turns out that Colab is just a bad service.

  • @abujessica
    @abujessica 8 months ago

    Nice.
    What are you training exactly? An LLM?

    • @stars_ai
      @stars_ai 8 months ago +1

      Specifically, what am I training? I'm training a language model, though it's not that large, since I can't get such a huge dataset to work with the limited computational resources that I have.

  • @abdelrhmanmouse_1917
    @abdelrhmanmouse_1917 6 months ago

    Colab Pro 9.9 US$ increase disk size??

    • @stars_ai
      @stars_ai 5 months ago

      Form a coherent sentence first

  • @uazik-kamazik
    @uazik-kamazik 5 months ago

    You have good English, bro.

  • @davifernandeszk
    @davifernandeszk 4 months ago

    I can't use dynamic parallelism on the GPU in Colab free! Is it possible in Colab Pro?

    • @stars_ai
      @stars_ai 3 months ago

      You want parallel processing? That means multiple GPUs, and Colab only offers one GPU, unfortunately. I would not recommend Google Colab at all. Go for Kaggle, honestly.