excited for this whole channel!
What if the information is distributed across multiple tables within the SQL database? Would LlamaIndex be able to join the relevant tables without the user explicitly citing them? This question is essential, especially if the tables are not named intuitively, e.g. a table of cities labelled/named "Lctin"!
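In principle, yes: the text-to-SQL engine injects the schemas of every included table into the prompt, so the LLM can write the JOINs itself, although cryptically named tables make that much less reliable. A rough sketch of how that can look with LlamaIndex's NLSQLTableQueryEngine (this is not the video's exact code; the import paths depend on your llama_index version and the table names here are made up):

```python
# Minimal sketch: expose several tables to the text-to-SQL engine and let the
# LLM decide which to join. "city_stats" and "country_stats" are placeholders.
from sqlalchemy import create_engine
from llama_index.core import SQLDatabase
from llama_index.core.query_engine import NLSQLTableQueryEngine

engine = create_engine("sqlite:///example.db")

# Every included table's schema is put into the prompt, so the model can
# infer join keys on its own. Adding table/column descriptions helps a lot
# when the raw names are cryptic.
sql_database = SQLDatabase(engine, include_tables=["city_stats", "country_stats"])

query_engine = NLSQLTableQueryEngine(
    sql_database=sql_database,
    tables=["city_stats", "country_stats"],
)

response = query_engine.query("Which country has the city with the highest population?")
print(response)
```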
Cheer~~~add (comments or supplementary information) at the end of a speech or text.😊
Hi, could you share more about how the selector is implemented? Can I use the LLM itself to do the selection?
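If this refers to the router that picks between the SQL engine and the vector engine: yes, LlamaIndex can use the LLM itself as the selector. A hedged sketch below; it assumes the two sub-engines were already built (as in the video) and uses the import paths from recent llama_index versions:

```python
# Sketch: LLM-based selector routing between two existing query engines.
# `sql_query_engine` and `vector_query_engine` are assumed to exist already.
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector
from llama_index.core.tools import QueryEngineTool

sql_tool = QueryEngineTool.from_defaults(
    query_engine=sql_query_engine,
    description="Answers questions that need the structured SQL tables.",
)
vector_tool = QueryEngineTool.from_defaults(
    query_engine=vector_query_engine,
    description="Answers questions over the unstructured documents.",
)

# LLMSingleSelector asks the LLM to read the tool descriptions and pick one;
# there is also a PydanticSingleSelector variant for function-calling models.
router = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=[sql_tool, vector_tool],
)

print(router.query("Which city has the highest population?"))
```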
Is it basically an NL2SQL method?
Why use Pinecone exactly? I know llama_index already has a VectorStoreIndex etc. Is it because Pinecone does it better? I don't understand that part.
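As far as I understand it, Pinecone isn't required: VectorStoreIndex defaults to an in-memory SimpleVectorStore, and Pinecone is just a hosted, persistent backend you can swap in. A rough sketch of both, assuming a recent llama_index plus the Pinecone integration package (the index name and API key are placeholders):

```python
# Default: VectorStoreIndex keeps embeddings in an in-memory SimpleVectorStore,
# which is fine for demos but not persistent or shared across sessions.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

docs = SimpleDirectoryReader("data").load_data()
local_index = VectorStoreIndex.from_documents(docs)

# Optional: swap in Pinecone as a hosted, persistent vector store.
# Requires the `llama-index-vector-stores-pinecone` integration.
from pinecone import Pinecone
from llama_index.core import StorageContext
from llama_index.vector_stores.pinecone import PineconeVectorStore

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
vector_store = PineconeVectorStore(pinecone_index=pc.Index("quickstart"))
storage_context = StorageContext.from_defaults(vector_store=vector_store)
hosted_index = VectorStoreIndex.from_documents(docs, storage_context=storage_context)
```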
And is natural language to SQL possible with Llama 2?
Thank you very much, very nice work. Could you please share your notebook? Thanks again.
Instead of using OpenAI, can we do this using a Llama or Mistral model?
Yes, you can run your own server with something like the Oobabooga text-generation-webui.
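For what it's worth, one way to point LlamaIndex at a locally hosted open-source model is through an OpenAI-compatible endpoint (which text-generation-webui, Ollama, vLLM and similar tools can expose). The sketch below is only an assumption about a typical setup; the base URL, model name, and the OpenAILike import depend on your server and llama_index version:

```python
# Sketch: use a local open-source model via an OpenAI-compatible endpoint
# instead of OpenAI. URL and model name are placeholders for your own server.
from llama_index.core import Settings
from llama_index.llms.openai_like import OpenAILike

Settings.llm = OpenAILike(
    model="local-model",                  # whatever name your server reports
    api_base="http://localhost:5000/v1",  # your server's OpenAI-style endpoint
    api_key="not-needed",                 # most local servers ignore the key
    is_chat_model=True,
)
```

If you want to avoid OpenAI entirely, you also need to set a local embedding model, since embeddings default to OpenAI as well.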
Is there any JS version of this? At least for the structured part, because I think it'd still be fine for the unstructured part to just use Python.
Can we implement the same thing in LangChain?
There's very little reason to implement it yourself with LangChain when LlamaIndex already exists for this. Technically, you could recreate all of it with raw LangChain if you wanted to.
@@vgtgoat I understand that, but I'm having some issues with this query engine. I asked on their Discord channel and they said they won't be fixing the bug any time soon. I thought maybe I'd implement my own version.
There are other examples of combining LangChain and LlamaIndex; I'm working on figuring that out right now. glhf.
@@vgtgoat LlamaIndex isn't open source and is a commercial product, right?
🔥🔥🔥
Can something similar be done with SPARQL instead of SQL?
Yes, if the model has been pretrained on enough SPARQL and has developed a solid understanding of it over time.
How can we use LlamaIndex with Node.js?
So it's basically some sort of agent.
Hi, what if I don't have unstructured data and, instead of a vector store, I just want to query the LLM itself or the general internet? Also, how do I implement chat functionality so I can ask follow-up questions based on previous answers?
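If you just want to chat with the LLM directly (no documents, no vector store) while keeping follow-up context, LlamaIndex has a SimpleChatEngine that wraps the LLM plus a chat-history buffer. A hedged sketch, assuming a recent llama_index and an OpenAI key in the environment; querying the live internet would need a separate search tool or agent and isn't shown here:

```python
# Sketch: chat with the LLM directly (no vector store) while keeping history,
# so follow-up questions can refer back to earlier answers.
from llama_index.core.chat_engine import SimpleChatEngine
from llama_index.llms.openai import OpenAI  # any supported LLM works here

chat_engine = SimpleChatEngine.from_defaults(llm=OpenAI(model="gpt-3.5-turbo"))

print(chat_engine.chat("Who wrote 'Pride and Prejudice'?"))
print(chat_engine.chat("What else did she write?"))  # follow-up uses the stored history
```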
Can you please share the code?
How can I use it without an OpenAI API key? I don't want to use OpenAI; I want to use an open-source LLM.
There was a Pinecone API key. This is Llama, which can be run locally.
@@dennissdigitaldump8619 Pinecone is a vector database and has nothing to do with foundation-model LLMs, though... one way is to use a Hugging Face Hub API token.
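A rough sketch of that route, using the Hugging Face Inference API through LlamaIndex instead of OpenAI; the model name is just an example and the exact import path varies between llama_index versions:

```python
# Sketch: swap OpenAI for a model served via the Hugging Face Inference API.
# Requires the llama-index-llms-huggingface integration and an HF Hub token;
# the model name below is only an example.
import os
from llama_index.core import Settings
from llama_index.llms.huggingface import HuggingFaceInferenceAPI

Settings.llm = HuggingFaceInferenceAPI(
    model_name="mistralai/Mistral-7B-Instruct-v0.2",
    token=os.environ["HUGGINGFACEHUB_API_TOKEN"],
)
```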
Hi, what if the SQL database has 10 million records? Would this code still work?
Unlikely without a pro plan that removes the token-size limitations; you will face challenges.
You stated that the Jupyter Notebook was linked in the description. I don't see it. 🫨
And folks... he put the QR code in the video.😅😅😅
TORONTOTOKYO ?