Keep sharing such knowledgeable content, ma'am.
Thank you, I will
Finally, we got LLM lectures from you. OI would like it so much if you start making in-depth LLM lectures just like you did for Computer Vision. I am excited and looking forward to them.
I'll definitely make in-depth LLM lectures. In this playlist, I will cover such topics.
Very nicely explained.
Thank you! 🙂
Great job Madam, Very informative! Ollama simplifies local LLM setup effectively.👍
Thank you! 🙂
This is really helpful !
Glad it was helpful!
Very useful tutorial, very good 👍. One query on this: can we integrate Ollama and Streamlit together to get output from other LLMs like Gemma or Phi-3?
great video please make more videos related to gen ai
Sure
thank u
Welcome
Amazing. I hope you could also make a video on how to train LLMs on a custom dataset, like custom PDFs, and then build prompts for it.
Sure, I will make videos on LLMs with custom datasets. In this playlist, I will cover all the topics related to Generative AI.
Hi, I have great respect for this video and I'm interested. First, can I ask for your laptop/desktop specifications? I'm interested in AI, especially object detection, and I tried to build a program and run it on my laptop, but the results were bad. The training results per epoch in Colab are good, but when I plug the model into my program, the results are bad. Do the specifications of our device really have that much influence on the performance of the programs we create?
Please make videos on all the GenAI architectures (a dedicated one for each specific model), or on how to build a GPT model from scratch, plus agents and RAG.
Sure, Soon
I get following error:
(env_langchain1) C:\Requirements_LLM\Generative_AI-main\Generative_AI-main\L-6>ollama run llama3.1:405b
'ollama' is not recognized as an internal or external command,
operable program or batch file.
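A likely cause of this error is that the Ollama install directory is not on the terminal's PATH, or the terminal was opened before the installer finished. A quick check (Windows cmd; the install path shown is the installer's usual per-user default and may differ on your machine):

```shell
:: Reopen the terminal first - the installer only updates PATH for new shells.
:: Then check whether Windows can find the binary:
where ollama

:: If it is not found, add the default install directory for this session
:: (assumption: Ollama's usual per-user install location):
set "PATH=%LOCALAPPDATA%\Programs\Ollama;%PATH%"
ollama --version
```

Note also that `llama3.1:405b` needs hundreds of gigabytes of memory; a smaller tag such as `llama3.1:8b` is far more realistic on a laptop.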
Well done, Aarohi!
Thanks :)
Dear Madam, I have always benefited a lot from your YouTube lectures. My laptop runs Windows 10, and when I try to download Ollama it won't download. I don't know if there is something else I can do; I need your help. It's a really good lecture you gave us. Looking forward to hearing from you.
You can email me your query at aarohisingla1987@gmail.com
@CodeWithAarohi Thanks, it works with German, but when I put Chinese or Japanese in place of German it doesn't work. Are those languages not supported in Gemini AI?
Why would I use this if I can go to Meta AI or Gemini and run it there?
Can we make a chatbot application using Streamlit with Ollama? If possible, please make a video on that part.
Yes, you can do that. Check this video: th-cam.com/video/6ExFTPcJJFs/w-d-xo.html
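For readers who want the shape of that code, here is a minimal sketch of the pattern, assuming Ollama is serving its default REST API on localhost:11434 (the model name `gemma:2b` is just an example). The Ollama call uses only the standard library; the Streamlit wiring is shown in comments so the file stays self-contained.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_payload(model, history):
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {"model": model, "messages": history, "stream": False}

def ask_ollama(model, history):
    """Send the chat history to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, history)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Streamlit wiring (save as app.py, run with: streamlit run app.py) - sketch only:
# import streamlit as st
# if "history" not in st.session_state:
#     st.session_state.history = []
# if prompt := st.chat_input("Ask something"):
#     st.session_state.history.append({"role": "user", "content": prompt})
#     reply = ask_ollama("gemma:2b", st.session_state.history)
#     st.session_state.history.append({"role": "assistant", "content": reply})
# for msg in st.session_state.history:
#     st.chat_message(msg["role"]).write(msg["content"])
```

Keeping the history in `st.session_state` is what makes the bot conversational: Streamlit reruns the whole script on every interaction, so anything not stored there is lost between turns.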
I have a doubt, ma'am. When we say "run": we already use ChatGPT on our devices, and it processes on cloud servers. How is that different from running models locally?
When you use ChatGPT, it processes your requests on cloud servers. Running models on your own device means the processing happens on your device instead of the cloud. This can be faster and give you more control, but it might need more power and storage from your device.
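The difference above can be made concrete: the request body is the same either way, and what changes is where it is sent. A minimal sketch (the local URL is Ollama's default endpoint; the cloud URL is a hypothetical placeholder, not a real service):

```python
import json
import urllib.request

# Local inference: the prompt goes to a server running on *this* machine,
# so the text never leaves the device (assumption: Ollama's default port).
LOCAL_URL = "http://localhost:11434/api/generate"

# Cloud inference: the prompt travels over the internet to the provider's
# servers (hypothetical placeholder URL for illustration).
CLOUD_URL = "https://api.example.com/v1/generate"

def endpoint_for(mode):
    """Pick where the prompt is processed: on-device or in the cloud."""
    return LOCAL_URL if mode == "local" else CLOUD_URL

def make_request(mode, prompt, model="llama3"):
    """Build the HTTP request; the only difference between modes is the URL."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        endpoint_for(mode), data=body, headers={"Content-Type": "application/json"}
    )
```

With `mode="local"` the request never leaves localhost, which is the privacy and control benefit of running models yourself; the trade-off is that your own CPU/GPU and disk now do the work the cloud would otherwise do.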
Can you please share your PC configuration?
64 GB RAM, NVIDIA RTX 3090 24 GB
Madam, can you implement person re-identification using ResNet in your next YOLO video, using Google Colab?
I will surely make it after finishing the videos in my pipeline.
Please make a video using Python as well.
Sure. In this playlist, I will cover different ways to use LLMs.