The AI field is becoming really dynamic. A lot of change is happening, from traditional machine learning to generative AI. This field is evolving quickly, and we need to keep updating ourselves as we go.
Join my Telegram group, where I post and discuss this kind of content. Happy learning!!
Make sure you have Telegram installed.
t.me/+V0UeLG8ji-F8ThNb
Hi, do we have an open-source model that can help us select something from an available list based on requirements, or is there any other way?
Sir, we don't want to use a pre-trained model as-is; we want to fine-tune these models on our own custom data.
Sir, I have a basic question about prompting. Does any learning (with model weight updates) happen during prompting? If not, then how does the model learn from few-shot prompting?
Your videos are awesome. If you could make a video on RLHF with the code, it would be greatly helpful.
Hi sir, could you please help me with the error I'm facing when running the model?
  File "C:\Users\tarun\anaconda3\envs\venv\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\tarun\llm\app.py", line 57, in <module>
    st.write(getllamaresponse(input_text,no_words,blog_style))
  File "C:\Users\tarun\llm\app.py", line 28, in getllamaresponse
    response=llm(prompt.format(style=blog_style,text=input_text,n_words=no_words))
  File "C:\Users\tarun\anaconda3\envs\venv\lib\site-packages\langchain_core\prompts\prompt.py", line 132, in format
    return DEFAULT_FORMATTER_MAPPING[self.template_format](self.template, **kwargs)
  File "C:\Users\tarun\anaconda3\envs\venv\lib\string.py", line 161, in format
    return self.vformat(format_string, args, kwargs)
  File "C:\Users\tarun\anaconda3\envs\venv\lib\site-packages\langchain_core\utils\formatting.py", line 18, in vformat
    return super().vformat(format_string, args, kwargs)
  File "C:\Users\tarun\anaconda3\envs\venv\lib\string.py", line 165, in vformat
    result, _ = self._vformat(format_string, args, kwargs, used_args, 2)
  File "C:\Users\tarun\anaconda3\envs\venv\lib\string.py", line 205, in _vformat
    obj, arg_used = self.get_field(field_name, args, kwargs)
  File "C:\Users\tarun\anaconda3\envs\venv\lib\string.py", line 270, in get_field
    obj = self.get_value(first, args, kwargs)
  File "C:\Users\tarun\anaconda3\envs\venv\lib\string.py", line 227, in get_value
    return kwargs[key]
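For anyone hitting this: the traceback ends inside `str.format`'s `get_value`, which raises a `KeyError` when a `{placeholder}` in the prompt template has no matching keyword argument. A minimal stdlib sketch (the template text and placeholder names below are assumptions for illustration; check them against your own template in app.py):

```python
# Hypothetical template; the real one lives in app.py.
template = "Write a blog for a {blog_style} audience on {input_text} within {no_words} words."

# Keyword names that don't match the placeholders trigger the KeyError
# seen at the bottom of the traceback (string.py, get_value).
try:
    template.format(style="researchers", text="AI", n_words=100)
except KeyError as exc:
    print("missing placeholder:", exc)

# Fix: pass keywords whose names match the placeholders exactly.
filled = template.format(blog_style="researchers", input_text="AI", no_words=100)
print(filled)
```

The same rule applies to LangChain's `PromptTemplate`: its `input_variables` and the names you pass to `.format()` must match the `{...}` placeholders in the template string.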
Eagerly waiting for a video on fine-tuning such a model on our own data.
Your videos deserve more than 100000000 comments and likes.
Thank you for making this amazing video on the open-source Llama 2; it's really helpful for programmers looking for a free GPT alternative.
Please keep going with this format for other models as well. Thanks.
Sir, it's a very good video for learning about LLMs and how to build end-to-end projects from these models.
It's a really good initiative. You are doing very well.
Thank you for sharing this valuable knowledge
I followed the steps in the video and downloaded the requirements.txt file, but I’m encountering errors related to library versions. Could you please provide the specific versions of the libraries that work with this project, or update the requirements.txt file? It would be really helpful if you could share the versions you're using in the video.
I hope to see future video tutorials on AI-based chatbots using Python. Thanks.
This is nice. However, could you create an end-to-end video project like network log analysis or DB log analysis for enterprise customers, so that it is more meaningful?
I want to understand the future of LLMs and GenAI for developers. What kind of work will we get? Using other AI models to create chatbots, images, videos?
Superb video. Thanks, Krish. Just a question: how do you use a fine-tuned model adapter to build the application? A video on this would be greatly helpful.
Thank you for uploading an end-to-end project. Can you make videos on other LLMs like Falcon, Jurassic, and LlamaIndex?
Hi Krish! I learned data science through your videos and am now working as a data scientist. Big thanks! Can we use QA models without OpenAI, or anything free of cost?
Thank you Krish for the amazing video. But what does it mean that the training time is 1,720,320 hours? That's about 196 years; is that what they reported in the paper? I am confused about that.
Thanks for the video sir .....❤
Great, sir. Thanks for this wonderful video. Please continue reading ----
I have built chatbots using Llama 2 13B via API only. I built 2 bots: one for normal text generation, and another for uploading PDF and text documents (you have made a video on this using OpenAI) and asking QnA on the document. Both bots perform well. 😊
But I tried to build the same document QnA for a CSV file, and it does not perform well on CSV (when I use the CSV agent it works great). If possible, please make a video on QnA over a CSV file.
Also, please make a video that uses the API instead of downloading the model and running it locally (I don't have the capability to run it locally).
Thanks again
How do you build this chatbot using the API only?
Lets go Krish❤
It's a fantastic session, Krish. Please help clarify the following:
Now we are crafting the input to extract what we need in a specific way. That's a kind of prompt engineering technique, correct?
Hi sir, thank you so much for this helpful video. But I'm having a problem running the model. I did the same thing as you showed, but when I click the Generate button it doesn't show anything: neither an answer nor an error. Can you please help me with that?
Bro, did you get the result? How does it show the output?
Same here. Were you able to resolve it?
You forgot the "f" in the formatting string:
    template = f"""
    Text
    """
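A note on the f-string suggestion above: with LangChain's `PromptTemplate`, the template string should usually *not* be an f-string, because an f-string interpolates immediately at definition time, while the template needs to defer substitution until `.format()` is called. A quick stdlib illustration (variable names are hypothetical):

```python
style = "technical"

# f-string: interpolated immediately, at definition time.
eager = f"Write a {style} blog."

# Plain template string: substitution is deferred until .format() is
# called, which is the behavior LangChain's PromptTemplate relies on.
deferred = "Write a {style} blog."

print(eager == deferred.format(style="technical"))  # True
```

So if the Generate button fails with a KeyError, the more likely cause is a mismatch between the template's `{placeholder}` names and the keywords passed to `.format()`, not a missing "f".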
Sir, we also need a video on using RAG with these models.
Kindly prepare a video on the Flan-T5 model.
Hi there! You are amazing. Do you by any chance know Dr. Tejesh Sivasubramani
Hi Krish, thanks for the excellent demo. Is there a way to extend the code to use a GPU if available?
Thank you
❤❤
Great video, what's your local system spec?
nice content
Hi Krish
Thank you for this video.
How do we run it if we have an AWS endpoint for this model? Please suggest.
How do we call the Llama model directly from Hugging Face?
Could you please make a project using an LLM for AI agents?
Love from Pakistan
On running the code, the app opens but does not give any response; it just shows a running status.
What is a token, and how much do they cost?
Sir, I want to use Llama 2 13B for video (mp4) in VS Code, but how?
Sir, please do a video on this.
How do I resolve this error? Please help me.
The error message says something like "repository not found" for the URL,
and it shows "username or password invalid".
OSError: TheBloke/Llama-2-7B-Chat-GGML does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
Is accuracy calculated for the LLM after fine-tuning the model on our custom PDF text dataset?
Sir, I am using this model in Colab but it gives me an error: 'TheBloke/Llama-2-7B-Chat-GGML' is not working. Please help.
I am working on Llama 2 7B to build a fine-tuned model in the finance domain. I am currently collecting a dataset for this domain. Can you tell me in which format I should structure my dataset for good performance of this model?
I can't believe this is available for free
Hello sir, PW student here. First of all, I would like to thank you for the amazing ML classes you took for PW. They were really amazing, and the way you taught was very easy and very simple. Looking forward to learning the rest from you here.
What is PW?
@@riyayadav8468😮 Physics wallah
@@riyayadav8468, Physics Wallah
It is taking a lot of time to execute. Can somebody tell me a solution for that?
This is a Great video.
But please make a video about how to connect an NVIDIA GPU with conda or Python.
Hello sir, thank you for such a quick and concise tutorial on Llama.
I watched till the end and coded side by side myself, BUT the output in the Streamlit app is taking forever to generate the required text.
Make a video on how to train Llama 2 on your custom data on a local machine, build a FastAPI app, and deploy it on Azure.
Can you point me to the video where you deploy this to AWS, as you mentioned in the video? I created a similar streamlit app for my use case but want to deploy it to AWS to reduce the latency of the generation. Looking forward to hearing from you!
Sir, kindly give the machine specifications for this project.
Hello sir...
I have been waiting for more than 15 minutes; the output still hasn't come, and no error has occurred either.
My system has 8 GB of RAM.
Can you guide me?
Bro, did you get the result? How does it show the output?
@@Bgmiupdates03 Actually, the model was 8 GB and my memory was the same size, so I switched to a model about 50% of my memory size. Then it gave me results.
@@project_maker how did you reduce the size?
@@mainakseal5027 I took a different model of size 4 GB.
You can watch this video for assistance
th-cam.com/video/3ykVBe5Ph7Y/w-d-xo.html
@@project_maker please tell us how you reduced the size.
Currently there is an opportunity at a well-known cross-border e-commerce company in China developing its own AI LLM. The company is looking to hire talented algorithm experts. The position allows for remote work, and the salary is competitive. PM me if you are interested.
Could you please tell me which laptop you are using?
I am planning to buy a laptop that can support running this LLM locally. Please let me know your system configuration / laptop model name / link to buy.
Thanks
Hi @krishnaik06 sir, I have followed the same steps, but I am getting an empty response (multiple blank lines). What might be the issue?
Hey, I need help. After running the app, it successfully takes me to the app link in the browser, but after filling in the topic name and blog size, it keeps running and never gives the information asked for. Please help me.
Sir, could you suggest an innovative project that covers a current research gap, or is innovative enough to grab attention, and can be done on Windows, for an 8th-semester college project? Our professors, as expected, are particular about creativity, but due to constraints we don't have access to amazing hardware. We thought about a blog-generation project using LLaVA so that images would be produced as well, but even if we retrain it using different techniques, a lot of such websites are already available. Second, a video-analysis project: initially using YOLO, RetinaNet, or other object-detection models, plus LLaVA on the frames to extract image info, then tracking objects/motion and analyzing the temporal relationships from the information obtained in the earlier phases. Since we are quite new to the field, we don't have much info, so we wondered if it is possible to use Ollama and have LLaVA give results about the frames and then use that info. But will this idea be feasible? Otherwise, could you suggest something feasible? We only have access to Windows, so Linux dependencies will lead to issues.
Could you do something about your face circle? Sometimes it covers useful information when you're typing or displaying something.
Hello sir, can I run this project on my basic i5 laptop without a GPU and with 4 GB RAM?
How do you manage to know everything? I have a feeling that you were born in another galaxy and they put you on a rocket to Earth because you were very naughty!! Whatever the truth is, I salute you, Teacher Extraordinaire, SIR!
I built the project, but my output is not showing up.
There is no error in the terminal; it shows only the localhost URL link. But on the web page, when I enter the blog topic, it just keeps running and doesn't give any output.
If anyone knows how to fix it, please help me.
Can the project be done on a MacBook?
I've been trying to run the same code for the last 2 hours on 8 GB RAM with no GPU as such. Can someone help me out? I'm kinda new to LLMs.
Can I get your system specs? I am planning to build my workstation, so it would be helpful.
I faced an error:
RepositoryNotFound. Please help me solve it.
I have a doubt about this video: where is the generated output coming from?
When running the .py file, I get the localhost interface in my browser, but after entering the topic and number of words, no response is generated. I waited for an hour, but it still shows running.
Have you found any solution?
No, are you facing the same problem ?
I am facing the same problem
@@justchill2199 is it still showing, or did you resolve it?
I am getting "module object is not callable". Any idea how to resolve this?
Could you please create a video on the AlpacaEval model as well, which is a fine-tuned model of Llama 2?
I have downloaded Ollama and it is an .exe. I want to add this Ollama model to my configuration file; how can I proceed?
Thank you, Krish. Please continue to create content like this!
Hi, I am not able to download Llama-2-7B. Could you please help me with this?
Thanks a lot 🤍
Make one for Mistral.
Hi Krish - can't we run this code in a Jupyter notebook?
How can I download Llama 3 and use it in VS Code like you did with Llama 2?
Where did you learn this generative AI? If you have any resources, please tell me.
Hi Krish, how do I deploy this in Azure AI models? Is there any video for that?
Hello sir ji, I could switch to AI, but then won't my hair end up like yours?
Please make a video on language translation using Llama 3.
Hi Krish, thank you so much for uploading all the best resources to keep us up to date. I have been your follower for many years. Because of your resources, I got into a data science job 2 years back, and I keep upgrading myself through your videos and encouragement. Thank you.
that's great 😊
Due to a change in the LangChain documentation, I am getting the following error. How do I resolve it?
LangChainDeprecationWarning: The function __call__ was deprecated in LangChain 0.1.7 and will be removed in 0.2.0. Use invoke instead.
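For this warning: LangChain deprecated calling an LLM object directly (`llm(...)`) in favor of `llm.invoke(...)`. A sketch of the change; `TinyLLM` below is a stand-in class (an assumption for illustration) so the snippet runs without LangChain installed:

```python
class TinyLLM:
    """Stand-in for a LangChain LLM object (illustration only)."""
    def invoke(self, prompt: str) -> str:
        return "response to: " + prompt

llm = TinyLLM()

# Deprecated style (warned in LangChain 0.1.7, removed in 0.2.0):
#     response = llm(prompt_text)        # relies on __call__
# Current style:
response = llm.invoke("Write a blog on AI")
print(response)
```

In the blog app, that means changing `llm(prompt.format(...))` to `llm.invoke(prompt.format(...))`; the rest of the code should not need to change.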
Okay, I was facing the same issue, but it's not an error :) Yes, you heard that right: you just have to wait longer for the response from the model, since we are running it on CPU, and depending on your CPU's capability it will take time to generate a response. Try a smaller number of words so it generates quicker.
Sir, could you do a video on how to deploy a fine-tuned Llama model?
Can you please make a tutorial on using LLMs to augment textual data on private data, where no data will be reused to train LLM models? That would be a great help!
How do I fix this error: "module object is not callable"?
I have downloaded it, but I want to do this in Node.js. How do I do that?
Doubt:
What is a GGML model?
Give me ONE use case where people will PAY money to use the final product!!!
womp womp
The model is not available now; what should I do?
It's been so long waiting for the model to give me a response, something like 5 minutes. I know my compute has a lot to do here, but how much exactly, and what open-source alternative would be faster than this?
Dude, I am not able to use the model repository. What could be the reason?
For more than 100 comments, I'm committed to it.
Please create a RAG system using Llama 2 as well.
Please keep making such videos!! Kudos to your work... and patience.
Very informative, and that too for free.
Good project🎉
How do I deploy it on a server?
Dear Krish Naik,
Very nice video on LLMs, especially using open-source LLMs. One question:
Can we create a custom LLM chat application using the Llama 2 open-source model, with some customization so that it searches specific websites (8-10 different websites) and PDFs (custom PDFs we will prepare and upload to a specific location)? So it would search the specified websites and our own PDFs and generate a custom, human-like response?
Thank you so much for your efforts on TH-cam. Your videos are very useful and easy to understand. Waiting for some more practical demos on LLMs (open source).
Regards,
Bhavesh
The response the model generates does not have a proper ending. How do I achieve this?
Bro, did you get the result? How does it show the output?
I am getting a runtime error.
Hi Krish, this is a really amazing video. The application works fine; however, while using this app, for some reason the number of words generated is 200. Not sure if anyone else is observing the same. It would be great if you could advise.
Make more project videos please!