Learn how to run Llama 3 on your machine. Running a ChatGPT alternative locally can be cheaper and more secure: you can ask it about private information without worrying about what happens to your data.
Arrived here after feeling confused by all the links and approaches floating around on the internet. After your video, I finally feel like I understand this space. This is the best resource I've found on this topic by far.
Man, thank you so much for this walkthrough. It feels like multiple hours of my own browsing, research, and getting lost, safely packed into a 30-minute video.
Glad to hear it!
I appreciate that this video is a nice, well-rounded, high-level overview. The llama.cpp explanation was very helpful. Thank you very much for sharing.
Glad you enjoyed it!
Excellent breakdown of user friendly options to run llms.
Thank you. The space is moving so fast it's hard to keep track of everything. Exciting times.
This is one of the best videos I've ever seen about running LLMs! Specifically, it's very user-friendly for non-coders! Maybe some tweaks to the title/description would help non-coders find it more easily.
Thank you! It's a good suggestion. I'm glad you enjoyed it.
An excellent explanatory video with valuable and useful information for using Llama models without the Internet to ensure complete privacy.
Thanks for clarifying my long-listed doubts. Now I am able to connect the dots.
Glad it was helpful!
Like some other commenters have said, this video was really useful to unpack the different options available for working with Llama models. Thank you so much!
Aside: Your pronunciation of the “ll” sounds like the way it’s done in Argentinian Spanish…are you in Argentina?
Yes I am. That "ll" sound is what we use for the actual llamas, which we have plenty of. I understand that in English it's more like "lama," but it's hard to remember that when I'm recording.
Thank you for the kind words.
Thanks for the summary! There were a lot of things in your video which I didn't know before. ♥
Thank you! I'm happy it helped
Thank you so much. Clear, concise, beautiful presentation. I look forward to engaging with more of your content.
Awesome, thank you!
I appreciate you!! I look forward to seeing your channel grow.
Thank you so much 🤗
This is fantastic stuff man - quick question - have you considered using LM Studio?
Thank you. I considered it, but I preferred to highlight open-source tools first.
Thank you for presenting this... very helpful for understanding the options.
Thank you for watching. I'm happy it helped!
Thank you for sharing, I will try some of your suggestions for open-source products. :-)
Please do!
@@SemaphoreCI I managed to get llama.cpp running with phi-2-uncensored, love it... the fast answers are not bad. I showed it to my 10-year-old; he wanted a 1000-page SA on cats... lol
Excellent presentation! Thank you
Glad you enjoyed it!
You've saved me hours of research. Extremely high quality video. Well done!
BTW, if you don't mind me asking, is your mother tongue Latin-based? Spanish speaker here and I hear some similarities :D
edit: you let an "eshe" (ll) slip in there, haha, so you must be Argentinian or Uruguayan. Greetings from Peru!!
Thank you. Glad it helped. You got it right, I'm from Argentina.
Where exactly did you run the `gh repo clone` of llama.cpp at 13:04? Thank you
Great video!
Thank you!
Thanks very much for your time and awesome video ❤🎉
Thanks for watching!
It was very useful.
Thank you!
Just subscribed. Which model would you recommend for training on a specific Golang repo, for example a project repo? Would it allow me to train locally by pointing it to a directory? Thanks
Ah, I should have continued watching your video to the end, where you mention that GPT4All allows you to point to a directory which it indexes. Do you think this would work if I point it to a repo?
In my experience, GPT4All models are not great at coding. YMMV, but I think something like Copilot or AWS CodeWhisperer would be better for your use case.
LocalAI in Docker?
They provide a Docker image: localai.io/basics/getting_started/
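A minimal sketch of what that looks like, assuming the `localai/localai` image name and an all-in-one CPU tag (check the getting-started page linked above for the current tags):

```shell
# Pull and run the LocalAI all-in-one CPU image in the background,
# exposing its OpenAI-compatible API on port 8080
docker run -d --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu

# Once it's up, you can query it like the OpenAI API, e.g. list the models:
curl http://localhost:8080/v1/models
```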
@@SemaphoreCI thank you
Why are there 0 models under "conversational" now?
Sorry, what do you mean?
Hugging Face has no conversational models now.
He's right. The filter option for "conversational" under Natural Language Processing on Hugging Face is gone. Maybe the "question answering" filter is covering that now?
@@jesskrikra maybe
Wow, I'm not here yet. See you in...
Good luck!
appreciate your good work