Spring AI - Run Meta's LLaMA 2 Locally with Ollama 🦙 | Hands-on Guide |
- Published Jul 4, 2024
- #JavaTechie #SpringAI #Ollama #LLaMA2
👉 In this tutorial I will walk you through the steps to run an LLM locally using Ollama & Spring AI. We will also build a hands-on project with the popular LLM, Llama 2.
What You Will Learn:
👉 What is Spring AI?
👉 What is Ollama?
👉 What is LLaMA 2?
👉 Hands-on project with the popular LLM, Llama 2
Spring documentation : docs.spring.io/spring-ai/refe...
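To give a feel for what the hands-on part boils down to, here is a minimal sketch that talks to a locally running Ollama directly over its documented REST API, using only the JDK (in the video this is wired up through Spring AI instead; the model name `llama2` and the default port 11434 are assumptions based on Ollama's defaults):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaDemo {

    // Build the JSON body for Ollama's /api/generate endpoint.
    // "stream": false asks for one complete response instead of streamed chunks.
    static String buildBody(String model, String prompt) {
        return "{\"model\": \"" + model + "\", \"prompt\": \"" + prompt + "\", \"stream\": false}";
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate")) // default Ollama port
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(buildBody("llama2", "Why is the sky blue?")))
                .build();
        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            // The response is JSON; the generated text is in its "response" field.
            System.out.println(response.body());
        } catch (java.net.ConnectException e) {
            System.out.println("Ollama is not running on localhost:11434 - start it first (e.g. `ollama serve`).");
        }
    }
}
```

Note that `/api/generate` (like `/api/chat`) is a POST endpoint, which comes up in the comments below.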
🧨 Hurry up & register today! 🧨
DevOps for Developers course (live class) 🔥🔥:
javatechie.ongraphy.com/cours...
COUPON CODE : NEW24
Spring Boot microservices premium course launched with 70% off 🚀 🚀
COURSE LINK :
javatechie.ongraphy.com/cours...
PROMO CODE : SPRING50
GitHub:
github.com/Java-Techie-jt/spr...
Blogs:
/ javatechie4u
Facebook:
/ javatechie
Join this channel to get access to perks:
th-cam.com/users/javatechiejoin
🔔 Guys, if you like this video, please subscribe now and press the bell icon so you don't miss any updates from Java Techie.
Disclaimer/Policy:
📄 Note: All content uploaded to this channel is mine and is not copied from any community. You are free to use the source code from the above-mentioned GitHub account. - Science & Technology
Thanks for starting a series on AI. This is the need of the hour. Thanks for accepting our request for AI demos
Great job. Looking forward to Vector database with AI integration.
Thanks for posting this sir, I have learned a lot from you. Developers should be the highest paid group of any field because we do more learning than any other field. We never stop learning; there is always something new to learn.
"Developers should be highest paid group in any field".... exactly the opposite should happen.
Thanks Basant sir, as a developer I like watching your videos; whenever new learning is required I follow your videos.
Thanks Basant. Appreciate your efforts. God bless you!
Loved it... Yes you can create more.
Thanks for this, JT. Another great video. I'm going to take a look at the ETL framework for data engineering. Looking forward to more content.
Thanks for the video. I hope more will come on Spring AI soon.
Very nice ❤❤
Great start to the Java AI world bro,
Will be waiting for the demos in this playlist on RAG applications using a Vector DB, and function calling.
Take your time and plan a video on E2E real-world projects, deploying them on a server.
Very good video on Spring AI. Your content (videos) is like a Java magnet; it attracts very fast.
Brilliant explanation
Thank you
Thanks
Thanks buddy 🙂👍
Please share different model videos...they are very helpful
Thank you Basant, can you please take us through Spring AI with Google Cloud (like VertexAiGemini), and also Vertex AI with Java?
Great video. Thank you for always making the effort to update us with the latest technologies. I have one issue though: after implementing the generate REST API, my response takes too long, sometimes up to 7 minutes. I also noticed that you used a GET request while llama's "/api/chat" expects a POST request. Is there a particular reason you used a GET request?
2. Is it possible to train the llama model to recognize and provide responses based on the trained data?
Thanks bro.. You are a good motivation for all developers to keep learning ❤
😎👍🏻💯🙏🏻
Here comes Spring boot to challenge Python
Great video. Can you do a RAG solution using the Weaviate Vector DB? I had one with AI 0.8 running, but it has changed so much in 1.0...
Sure I will give a try as currently I am learning it 😊
Great start in the AI world. I had a question: if I have a Spring Boot application and I want answers related to my application data only, how can we achieve that?
I am not sure will check and update you
Hey Basant, nice video. Can you please teach us how to use RAG functionality in locally running LLMs?
I will do this
@@Javatechie thank you as always 🙂
I got below issue
Error: model requires more system memory (8.4 GiB) than is available (3.9 GiB)
docker exec -it ollama ollama run llama2
Hi JT, I am running both ollama and my spring boot app using docker compose, but app is getting 500 response when hitting ollama api. This is fixed only when I manually run ollama run llama2 or ollama pull llama2 in the ollama container. Is there any way to automatically pull the model while starting from docker compose? I tried command: ["ollama", "pull", "llama2"] in docker compose file with no luck :(
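One common workaround for the question above (an untested sketch, not from the video — the service name, volume name, and sleep duration are all assumptions) is to override the container's entrypoint so it starts the server in the background, pulls the model, then keeps the server in the foreground:

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama        # cache pulled models across container restarts
    # Override the entrypoint: start the server, give it a moment to come up,
    # pull llama2, then wait on the background server process.
    entrypoint: ["/bin/sh", "-c"]
    command: ["ollama serve & sleep 5 && ollama pull llama2 && wait"]

volumes:
  ollama:
```

The Spring Boot service would still want a `depends_on` plus a retry or healthcheck, since the pull itself takes a while on first start.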
not sure buddy, I will check and update you
Sir, are you launching any latest course spring boot & microservices?
The course which is listed on your site, is it Live?
It's pre-recorded!
It was a recorded session of a past live class. Currently I don't have any plan for a new batch, but if I have any plan in future then I will definitely update first on my YouTube channel for my audience 😀
Do we need tokens to generate the response? Also are these models for free?
No tokens required; yes, these models are open source.
@@Javatechie thanks
Getting this error:
Error: llama runner process has terminated: signal: killed
At what step are you getting this error?
@@Javatechie while running
docker exec -it ollama ollama run llama2
Does it require a higher machine spec?
No specification required
getting same error
I got below issue
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: model requires more system memory (8.4 GiB) than is available (2.9 GiB)
after running -> docker exec -it ollama ollama run llama2
how to solve this ?? please help me..
I got the same issue. Did you get a solution?
@@atulgoyal358 I did not touch it after this issue.
I think you need to increase the Docker memory size.
@@2RAJ21 Need to check how to increase docker memory size
@@atulgoyal358 no idea bro..
I am confused with ram or memory..
@@2RAJ21 You need to configure the .wslconfig file in %userprofile% and increase the RAM and processors, then it will work.
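For anyone hitting the "model requires more system memory" error on Docker Desktop with WSL 2, the file mentioned above looks roughly like this (the values are examples — the error reported llama2 needing ~8.4 GiB, so give WSL more than that if your machine allows):

```ini
# %UserProfile%\.wslconfig — applies to all WSL 2 distros
# After editing, run `wsl --shutdown` and restart Docker Desktop.
[wsl2]
memory=10GB
processors=4
```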
As usual, great job ❤
Please, if possible, work on your microphone voice quality recording
Hello buddy, thanks for your suggestion. Actually the mic quality is good but things are echoing, so I will definitely try to improve it.