6 Ways to Run ChatGPT Alternatives on Your Machine (Including Llama3)
- Published May 11, 2024
- Open-source AI and large language models are getting better and better. If we can replace ChatGPT or Bard with them, we gain a great deal of privacy and can use these models in cases we previously couldn't (like when dealing with sensitive or proprietary data).
In this video, we'll learn 6 ways to run various open-source large language models locally.
🔗 Useful links:
I tried 7 ChatGPT alternatives: • I Tried 7 ChatGPT Alte...
How to train a model using CI/CD: • CI/CD Essentials for M...
Blog post with examples: semaphoreci.com/blog/local-llm
HuggingFace: huggingface.co
LangChain: www.langchain.com
Llama.cpp: github.com/ggerganov/llama.cpp
Llamafile: github.com/Mozilla-Ocho/llama...
Ollama: ollama.ai
GPT4ALL: gpt4all.io/index.html
================================================
Timestamps:
0:00 Intro
0:54 Hardware requirements
2:04 (1) How to use HuggingFace 🤗 and Transformers 🤖
8:35 (2) LangChain
11:58 (3) Llama.cpp
17:21 (4) Llamafile
20:39 (5) Ollama.ai
23:43 (6) GPT4ALL
27:10 Conclusion
================================================
#llm #ai #localllm #llama2 #openai #chatgpt #nlp #machinelearning #ml #development #programming #devops #tutorial #llama3 - Science & Technology
Learn how to run Llama3 on your machine. Running a ChatGPT alternative locally can be cheaper and more secure. You can ask it about private information without worrying about what happens to the data.
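Of the options covered in the video, Ollama (20:39) is one of the easiest to script against: once `ollama serve` is running and a model has been pulled (e.g. `ollama pull llama3`), it exposes a local HTTP API on port 11434. The sketch below uses only the Python standard library to call Ollama's `/api/generate` endpoint and stitch its newline-delimited JSON stream back into one answer. The endpoint, port, and payload fields follow Ollama's published API; `ask()` will of course only succeed with a server actually running locally.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(prompt: str, model: str = "llama3") -> bytes:
    """Encode a generation request for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": True}).encode()

def join_stream(lines):
    """Ollama streams one JSON object per line; concatenate the 'response' chunks."""
    return "".join(json.loads(line).get("response", "") for line in lines if line.strip())

def ask(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally running Ollama server (requires `ollama serve`)."""
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return join_stream(resp.read().decode().splitlines())
```

Note that the request never leaves localhost, which is exactly the privacy benefit the video is about.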
Arrived here after feeling confused by all the links and approaches floating around on the internet. After your video, I finally feel like I understand this space; this is the best resource I've found on this topic by far.
Man, thank you so much for this walkthrough. It feels like multiple hours of my own browsing, research, and getting lost, neatly packed into a 30-minute video.
Glad to hear it!
I appreciate that this video is a nice, well-rounded, high-level description. The llama.cpp explanation was very helpful. Thank you very much for sharing.
Glad you enjoyed it!
An excellent explanatory video with valuable and useful information for using LLAMA models without the Internet to ensure complete privacy.
I appreciate you !! I look forward to seeing your channel grow
Thank you so much 🤗
Excellent breakdown of user-friendly options to run LLMs.
Thank you. The space is moving so fast it's hard to keep track of everything. Exciting times.
Thanks for clarifying my long-listed doubts. Now I am able to connect the dots.
Glad it was helpful!
Thanks for the summary! There were a lot of things in your video which I didn't know before. ♥
Thank you! I'm happy it helped
Thank you so much. Clear concise beautiful presentation. I look forward to engaging with more of your content.
Awesome, thank you!
This is one of the best videos I've ever seen about running LLMs! Specifically, it's very user-friendly for non-coders! Maybe some tweaks to the title/description would help non-coders find this more easily.
Thank you! It's a good suggestion. I'm glad you enjoyed it.
Thank you for presenting this... very helpful for understanding the options.
Thank you for watching. I'm happy it helped!
Excellent presentation! Thank you
Glad you enjoyed it!
Thanks very much for your time and awesome video ❤🎉
Thanks for watching!
Thank you for sharing, I will try some of your suggestions for open-source products. :-)
Please do!
@@SemaphoreCI I managed to get llama.cpp running with phi-2-uncensored, love it... fast, and the answers are not bad. I showed it to my 10-year-old; he wanted a 1000-page SA on cats, lol.
Like some other commenters have said, this video was really useful to unpack the different options available for working with Llama models. Thank you so much!
Aside: Your pronunciation of the “ll” sounds like the way it’s done in Argentinian Spanish…are you in Argentina?
Yes, I am. That "ll" sound is what we use to refer to the actual llamas, which we have plenty of. I understand that in English it's more like "lama," but it's hard to remember that when I'm recording.
Thank you for the kind words.
Appreciate your good work!
Great video!
Thank you!
It was very useful.
Thank you!
Good video. Thanks
Glad you liked it! Thank you!
This is fantastic stuff man - quick question - have you considered using LM Studio?
Thank you. I considered it, but I preferred to highlight open-source tools first.
Where exactly did you run the gh repo clone of llama.cpp at 13:04? Thank you
Just subbed. Which model would you recommend for training on a specific Golang repo, for example a project repo? Would it allow me to train locally by pointing to a directory? Thanks
Ah, I should have continued watching your video to the end, where you mention that GPT4ALL allows you to point to a directory which it indexes. Do you think this would work if I point it to a repo?
In my experience, GPT4ALL models are not great at coding. YMMV, but I think something like Copilot or AWS CodeWhisperer would be better for your use case.
Wow, I'm not here yet. See you in...
Good luck!
LocalAI in Docker?
They provide a docker image: localai.io/basics/getting_started/
@@SemaphoreCI thank you
Why are there zero models under "conversational" now?
Sorry, what do you mean?
Hugging Face has no conversational models now.
He's right. The filter option for "conversational" under Natural Language Processing on Hugging Face is gone. Maybe the "question answering" filter covers that now?
@@jesskrikra Maybe