💓Thank you so much for watching guys! I would highly appreciate it if you subscribe (turn on the notification bell), like, and comment what else you want to see!
📅Book a 1-On-1 Consulting Call With Me: calendly.com/worldzofai/ai-consulting-call-1
🔥Become a Patron (Private Discord): patreon.com/WorldofAi
🚨Subscribe to my NEW Channel! www.youtube.com/@worldzofcrypto
🧠Follow me on Twitter: twitter.com/intheworldofai
Love y'all and have an amazing day fellas. ☕ To help and support me, buy a coffee or donate to support the channel: ko-fi.com/worldofai - Thank you so much guys! Love y'all
Hyperwrite: Create and Deploy Personalized AI Agents on Your Operating System! - th-cam.com/video/VLBBeqhZmjc/w-d-xo.htmlsi=B0GJER43rFZJG1yr
Mistral Large: NEW Mistral Model Beats GPT-3.5 and Llama2-70B on EVERY Benchmark!: th-cam.com/video/EgYWzwhHc_k/w-d-xo.html
Any affordable monthly packages?
How can an A100 be "overkill" but also "more efficient"?
Are there any cloud platforms where I can run LLMs for free?
You don't really need it. ChatGPT, Claude, Gemini, Huggingface Chat, etc. are all good.
We would only need to do this if those servers got destroyed or they got turned off, during WW3 or whatever.
This would be a good backup solution but very costly.
You can just run LLMs on your PC and then connect to them by IP through port forwarding.
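As a rough sketch of that idea, assuming you run something like Ollama on the PC (it listens on port 11434 by default) and have forwarded that port on your router; the IP address, model name, and prompt below are just placeholders:

import requests  # query an Ollama server exposed through port forwarding

resp = requests.post(
    "http://203.0.113.10:11434/api/generate",  # placeholder public IP + forwarded Ollama port
    json={"model": "llama3", "prompt": "Say hello", "stream": False},
    timeout=120,
)
print(resp.json()["response"])  # Ollama returns the generated text in the "response" field

If you do expose the port to the internet, putting it behind a VPN or SSH tunnel is the safer way to "IP in."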
You could do it with Google Cloud for free, but the GPU at your disposal is more on the lower tier.
You can use LM Studio or Ollama to run LLMs on your local computer offline.
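For instance, LM Studio can expose an OpenAI-compatible local server (default port 1234) that you can script against; this is a minimal sketch assuming that server is running and a model is loaded in the app (the model identifier and prompt are placeholders):

from openai import OpenAI  # pip install openai; used here only as a client for LM Studio's local server

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key can be any string for a local server
reply = client.chat.completions.create(
    model="local-model",  # LM Studio serves whichever model you have loaded in the app
    messages=[{"role": "user", "content": "Summarize why running LLMs locally is useful."}],
)
print(reply.choices[0].message.content)

Everything stays on your machine, so once the model is downloaded it keeps working fully offline.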