Before you buy Google Colab
- Published 4 Oct 2024
- What mystical GPUs do they offer? That's what I explain in this video, along with a general overview of how Colab Pro and Pro+ work, as someone who has previously bought them myself.
Get educated about coding GPTs: payhip.com/b/u...
Twitter: @starsaicompany
#machinelearning #ml #ai #chatgpt #bard #programming #coding #computerscience #computer #chatbot
Very informative, thanks bro.
I wouldn't suggest Colab, now that I've looked more into it. It isn't only my problem that my A.I. can't train on it - many people around the world have the same issue.
Thank you! I'm not gonna waste my money and get disappointed.
What would be the best option for training LLMs? A PC with excellent hardware or Colab+?
Neither. Having a PC with OP specs sounds better since Colab+ is not great, but then you would need a lot of money for that type of PC (buying multiple A100s, or T4s, or whatnot) - if you're training an LLM, that is. A smaller model may work fine, but LLM by definition means a Large Language Model.
TL;DR: I'd recommend Kaggle (free). I've gotten better results with it than with any other service.
I suggest you set limits on CPU and GPU usage when using TensorFlow. While you can control TensorFlow's use of OpenMP threads, doing so may result in a trade-off with processing time.
Why would I set limits on a system already failing even at maximum capacity?
@stars_ai Sometimes you need to reduce resource usage during model training, because TensorFlow tries to use the maximum memory available for fast training. When training with a 3-4 TB dataset, it will definitely hit an OOM (Out of Memory) error, because it tries to load all the data into memory and copy it to the GPU, which leads to a crash. I also suggest using the garbage collector to remove unnecessary variables and release memory, alongside OpenMP limits so data is loaded according to the given constraints.
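A minimal sketch of the limits described above, assuming TensorFlow is the framework in use. The thread counts are hypothetical values to tune, not recommendations, and `free_unused` is an illustrative helper name, not a TensorFlow API:

```python
import gc
import os

# OpenMP / TensorFlow thread caps must be set BEFORE TensorFlow is
# imported; the values here are hypothetical and workload-dependent.
os.environ["OMP_NUM_THREADS"] = "4"
os.environ["TF_NUM_INTRAOP_THREADS"] = "4"  # threads inside a single op
os.environ["TF_NUM_INTEROP_THREADS"] = "2"  # threads across independent ops

try:
    import tensorflow as tf

    # Grow GPU memory on demand instead of grabbing it all up front,
    # which reduces the chance of an immediate OOM on a shared GPU.
    for gpu in tf.config.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)
except ImportError:
    tf = None  # the sketch still runs without TensorFlow installed

def free_unused(*big_objects):
    """Drop references to large intermediates and force a GC pass."""
    del big_objects
    gc.collect()
```

Calling `free_unused(activations, grads)` between training stages mirrors the "garbage collector" suggestion above; whether it helps depends on what else still holds references to those objects.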
Alright, I'll try that @abdumoez2077
I bought Colab Pro and I got the SAME OOM error. I really feel it's too expensive and not worth the price.
Thinking the same too, I can't lie.
What would you recommend a student who is just starting out in deep learning 😅?
Thanks for the information. What is your opinion of Microsoft Azure?
Another thing: do you think $200 is worth it for Copilot Studio?
I honestly don't know about Azure.
Research more about it. Look for reviews on YouTube and everywhere else.
Don't trust it so easily, it is a lot of money after all.
Thanks bro, I'm stuck in the same situation. I am currently using Colab Pro for training transformers and LLMs, and I've exhausted all the compute limits. I don't know what the best long-term solution is - are cloud services worth it, or is there anything else I can do?
I have been recommended Kaggle Kernels. They say it gives you something like 30 free hours of TPU usage per week. I have tried to load it up, in vain so far. But try it - if it works, perfect. I'm trying to load it myself too.
@stars_ai thanks bro
Thank you, I won't waste my money on it then.
Yes, please don't. I did, and it was pretty bad.
Brother, how many GB of GPU memory do you get in the Pro plan?
Bought Colab Pro+ to run processes in the background as advertised, and it was pretty useless...
Honestly, I agree. As I said near the end of the video, my A.I. system failed even on their best A100 instance. I thought it was only me with these issues, but it turns out that Colab is just a bad service.
nice
What are you training exactly? An LLM?
Specifically what am I training? A language model, though not that large, since I can't get such a huge dataset to work with the limited computational resources I have.
Colab Pro 9.9 US$ increase disk size??
Form a coherent sentence first
Your English is good, bro.
I can't use dynamic parallelism on the free Colab GPU! Is it possible in Colab Pro?
You want parallel processing? That means multiple GPUs, and Colab only offers one GPU, unfortunately. I would not recommend Google Colab at all. Honestly, go for Kaggle.