Definitely worth the sub! Was lookin for exactly this
Edit: also, thank you!
You're welcome!
PandasAI is brilliant. Thanks for putting this video together!
I asked the same question but only got the table. If I change it a bit, I get a bar graph. If I rephrase the question again, I get an error. Any suggestions for getting both the table and the chart, like in your video?
Prompt engineering is important for getting good results. I got this response after trying a few prompts. You may want to write more detailed prompts.
Thanks Bro. Very Helpful.
Glad it helped
Thanks for this great video! One question from me: since classical pandas-style data analysis already performs well, what is the potential advantage of using an LLM? Is it possible that an LLM can provide more intelligent analysis than Excel?
The advantage of this tool is that you can use it to explore, clean, and analyze your data using generative AI. All you need to do is talk to your data. If you put in good prompts, you get good outputs. You can also use this library to analyze your Excel data.
@@TirendazAI Yes, a user-friendly QA is definitely important. Thanks! But do you think we can find a scenario where Excel fails because it is not intelligent enough, while an LLM has a chance to outperform it?
LLMs have huge potential for data analysis. True, they are not intelligent enough yet; we can think of them as a black box. But you can help them return good outputs by using agents or RAG techniques.
Great video! What are the dataset size limitations? I get an answer 30% of the time and errors the rest of the time.
Large models like Llama-3:70b and GPT-4 respond better.
Nice. Thanks
You're welcome!
Thanks for the video! I tried this using Ollama in Docker with different models like llama3, llama2, and mistral on a network at my workplace, but I get a gateway timeout. Why do I get this error? (Ollama is running without errors.)
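A gateway timeout in a Docker setup usually means the client container can't reach the Ollama service at the address it's using. One way to narrow it down (the service name "ollama" here is an assumption; substitute whatever your compose file or network uses):

```shell
# From inside the container that runs your Python code, check that
# Ollama's API is reachable. "ollama" is a hypothetical service name.
curl -s http://ollama:11434/api/tags
```

If this call times out, the problem is networking rather than the model, and the LLM client's base URL needs to point at that same reachable host and port instead of localhost.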
Great video!
How accurate will this be if your dataset is about 100mb in filesize?
The tutorial video was truly amazing, with very clear subtitle translations.
However, I'd like to know how to use PandasAI + Ollama in VS Code like you demonstrated.
Also, how can I implement Ollama + Llama3 + LlamaIndex?
What are the minimum computer specifications required for such usage?
Additionally, how should I write the Python code to ensure that consecutive queries are processed in the same thread?
Best regards.
For example, you need roughly 7 GB of RAM to work with a 7B-parameter LLM. Unfortunately, the responses generated by LLMs are not stable; you may need to try several prompts to get the output you want.
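The memory figure above can be sanity-checked with a back-of-the-envelope calculation. The bytes-per-parameter values below are rough assumptions (around 2 bytes at fp16, around 0.5 bytes with 4-bit quantization, which is what Ollama typically serves by default), ignoring context-cache overhead:

```python
# Rough memory estimate for a 7B-parameter model.
# Assumption: ~2 bytes/param at fp16, ~0.5 bytes/param at 4-bit quantization.
params = 7_000_000_000
fp16_gb = params * 2 / 1e9    # full half-precision weights
q4_gb = params * 0.5 / 1e9    # 4-bit quantized weights
print(f"fp16: ~{fp16_gb:.0f} GB, 4-bit: ~{q4_gb:.1f} GB")
```

So a quantized 7B model can fit comfortably in 8 GB of RAM, while the unquantized weights alone would not.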
@@TirendazAI Thank you very much. However, my current computer setup includes an i7 processor, an RTX 2060 6GB graphics card, and 16GB RAM. Can I run a local LLM with these specifications?
Yes, you can run a local LLM such as Llama3:8b or mistral:7b.
@@TirendazAI Thank you for your response. Originally, I planned to spend about $120 to upgrade my laptop's standard RAM from 16GB to 64GB.
Additionally, could you tell me what software you used in your video demonstration? The interface looks very similar to VS Code, which I am currently trying to learn on my own.
However, I'm not sure how to set up the environment. My command prompt shows Python 3.10.6.
Do you have any videos that explain how to install Python and use related editors? Setting up the Python environment seems a bit complex, as it appears you have to set the environment before running the program.
Thank you. I've got an error like this:
NameError Traceback (most recent call last)
Cell In[30], line 2
1 from pandasai import SmartDataframe
----> 2 df = SmartDataframe(data, config={"llm": llm})
NameError: name 'data' is not defined
You had a traceback error and your first thought was to put it in a YouTube comment rather than something like ChatGPT? ...Wow
@@Lifes_Student I asked ChatGPT before and it didn't give me a solution. I am a beginner, and I think I'm free to ask questions on any platform. Is that a problem for you? Who are you, the thought police?
@@sorgulabiraz3161 lol ...I'm just surprised someone's willing to wait for a solution in a comment rather than instantly debugging it with a language model these days. You should also know that you'll need a powerful graphics card to run the Mistral model locally. I recommend an RTX 3060.
And just an FYI, "data" is a variable. You need to define it first, i.e.:
data = yourdata
Then you can reference 'data' or pass it to a function, e.g. function(data).
@@Lifes_Student Thank you.
Thanks very much ❤❤❤
I've been waiting for this video for a long time 😊
Q: What is the best open-source LLM from Hugging Face for PandasAI & SQL data analysis?
You're welcome. Choosing the best model varies from task to task. I love working with Mixtral for PandasAI.
I'm facing this error:
'Unfortunately, I was not able to answer your question, because of the following error: No code found in the response'
When a prompt does not work, I try again with a modified prompt.
Hi, unfortunately I'm not able to install pandasai in the terminal; it shows 'Could not find a version that satisfies the requirement pandasai'.
Hi, did you create a virtual environment? If yes, you can also use this command: poetry add pandasai
Is poetry add pandasai used in the terminal or in a cell?
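For what it's worth, both install commands run in the terminal, not in a notebook cell (a sketch; a 'could not find a version' error is often caused by running a Python version outside the range the package supports, so checking the interpreter version first is worthwhile):

```shell
# Run these in a terminal, not in a notebook cell.
python --version              # confirm a Python version pandasai supports
pip install pandasai          # plain pip, inside an activated virtual environment
poetry add pandasai           # or, inside a Poetry-managed project
```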
It does not work for me. I got the following error message: 40 if llm is None or not isinstance(llm, LLM or LangchainLLM):
---> 41 raise LLMNotFoundError("LLM is required")
42 return llm
LLMNotFoundError: LLM is required
Make sure you install Ollama and download a model
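For readers hitting the same LLMNotFoundError, here is a minimal sketch of one way to wire a local Ollama model into PandasAI through langchain-community. This assumes `ollama pull mistral` has been run, Ollama is serving on its default localhost:11434, a file named population.csv exists, and a compatible pandasai/langchain version pair is installed (the version sensitivity is exactly what this thread is about):

```python
# Sketch: wiring a local Ollama model into PandasAI via LangChain.
# Assumes Ollama is running locally with the mistral model pulled.
from langchain_community.llms import Ollama
from pandasai import SmartDataframe

llm = Ollama(model="mistral")
sdf = SmartDataframe("population.csv", config={"llm": llm})
# sdf.chat("Which country has the largest population?")
```

If the isinstance check in the traceback still rejects the LangChain object, mismatched pandasai/langchain versions are the most likely culprit.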
Hello @tirendazakademi,
Same Error for me.
I have ollama version 0.1.28
Used: ollama pull mistral
Tried with this dependencies (the ones that were latest 1 week ago when you uploaded) and got the very same error about: LLMNotFoundError("LLM is required")
pandasai==2.0
langchain==0.1.10
langchain-community==0.0.25
Could you please verify which exact packages you used?
Thanks and keep the great content!😁
I have the same error. The LangChain LLM works; I can call:
llm.invoke("Tell me a joke")
and get a valid response, but SmartDataframe does not recognize Ollama as a valid LLM. I have also tried installing the GitHub package.
The only difference is my Ollama version is 0.1.28.
@@JAlcocerTech With Miniconda I have your same versions and the same error; with Visual Studio Code it seems to work.
These are the library versions I have:
pandas 1.5.3
pandasai 2.0.8
langchain 0.1.11
langchain-community 0.0.27
langchain-core 0.1.30
langchain-text-splitters 0.0.1
I hope this can help
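If it helps, the combination reported above can be pinned explicitly. These are just the versions this commenter says worked, not an official requirement:

```shell
# Pin the versions reported as working earlier in this thread.
pip install "pandasai==2.0.8" "langchain==0.1.11" "langchain-community==0.0.27"
```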
'Unfortunately, I was not able to answer your question, because of the following error: No code found in the response'
I changed population.csv's Country column to CountryName and Population to populations, and it works.
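Renaming ambiguous column headers, as in the comment above, sometimes helps the model generate valid code. A small sketch of that rename (the sample rows are made up, standing in for population.csv):

```python
import pandas as pd

# Hypothetical sample standing in for population.csv
df = pd.DataFrame({"Country": ["USA", "Germany"], "Population": [331, 83]})

# Apply the rename described in the comment above
df = df.rename(columns={"Country": "CountryName", "Population": "populations"})
print(list(df.columns))  # ['CountryName', 'populations']
```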
👍
My genai environment refused to activate in conda.
Make sure you have the Anaconda platform installed on your computer.
@@TirendazAI I do... But whenever I try to activate genai in the terminal, it doesn't work... Do you think it's a problem with my VS Code?
Teacher, why don't you speak Turkish?
Our Turkish channel is separate; you can reach it here: @tirendazakademi
Waiting ~1 minute even for trivial questions. Try using bigger files and you will get errors. AI is not ready for this.
It's not working all the time. It's handy but not great. On the other hand, LangChain dataframe agents work better than this.
If you are using a smaller model like Llama-3 8B, you may sometimes need to try a few prompts to get a good response.
@@TirendazAI I am using the same model as shown in the video.