AI RoundTable
Canada
Joined Mar 26, 2023
I help you design industrial AI applications using LLMs and generative AI.
My name is Farzad and I help firms enhance their decision-making and operational processes by integrating AI into their core functions. Since 2019, I have concentrated on AI design and applications, working in Asia, Europe, and North America, and I am currently based in Canada.
Subscribe for cutting-edge AI applications, fun projects, industrial tips, tutorials, and much more.
Chat with Your Database Using AskYourDatabase and LLM agents (A Review)
In this video, I review AskYourDatabase (AYD), a no-code and low-code app that allows you to chat with various databases, including MySQL, PostgreSQL, MongoDB, SQL Server, and Snowflake. This chatbot uses LLM agents built with GPT-3.5 and GPT-4 to interact with databases using natural language.
00:00 General Overview
02:54 Test prep (Sakila DB)
04:59 Connecting our DB to AYD
07:04 Adding DB documentation and extra examples to the LLMs
09:15 Testing AYD desktop app
11:46 Performance in the presence of a typo in the query
15:17 Performance in case the answer is too long
17:54 Performance with complex queries containing a chain of questions
21:34 Asking AYD to generate images
26:18 Integrating the AYD chatbot into a custom app and testing it
Resources:
AYD website: www.askyourdatabase.com/
AYD documentation: www.askyourdatabase.com/docs
AYD chatbot dashboard: www.askyourdatabase.com/dashboard/chatbot
MySQL Workbench: dev.mysql.com/downloads/workbench/
Sakila DB installation: dev.mysql.com/doc/sakila/en/sakila-installation.html
ngrok installation: ngrok.com/download
#LLM #GPT #chatbot #agent #database #ai
Views: 3,071
Videos
Chat and RAG with Tabular Databases Using Knowledge Graph and LLM Agents
12K views · 2 months ago
In this video, together we will go through all the steps to construct a #knowledgegraph from Tabular Datasets and design a ChatBot APP to interact with the Knowledge Graph using natural language. For this purpose, we will use Knowledge Graph LLM agents and the GPT model. We will design a Chatbot that can: 1. Chat with Graph DB using an improved LLM agent 2. Chat with Graph DB using a simple LLM...
Chat with SQL and Tabular Databases using LLM Agents (DON'T USE RAG!)
34K views · 2 months ago
In this video, together we will go through all the steps necessary to design a ChatBot APP to interact with SQL and Tabular Databases using natural language, SQL LLM agents, and GPT 3.5. We will design a Chatbot that can: 1. Chat with SQL DB that we create from SQL files 2. Chat with SQL DB that we create from CSV and XLSX files 3. Chat with SQL DB that we create by uploading documents while us...
All-In-One Chatbot: RAG, Generate/analyze image, Web Access, Summarize web/doc, and more...
2.8K views · 3 months ago
HUMAIN V1.0 is a Multi-Modal, Multi-Task chatbot project, empowered with 4 Generative AI models and was built on top of RAG-GPT and WebRAGQuery. Features: - Can act similar to ChatGPT - Has 3 RAG capabilities: RAG with processed docs, upload docs, and websites - Can generate images - Can summarize documents and websites - Connects a GPT model to the DuckDuckGo search engine (the model uses sear...
Open Source RAG Chatbot with Gemma and Langchain | (Deploy LLM on-prem)
4.5K views · 4 months ago
In this video, I show how to serve your open-source LLM and Embedding model on-prem for designing a Retrieval Augmented Generation (RAG) chatbot. For this purpose, I take RAG-GPT chatbot and instead of the GPT model, I use *Google Gemma 7B* as the LLM and instead of text-embedding-ada-002, I use *baai/bge-large-en* from Huggingface. I use Flask to develop a web server that will serve the LLM fo...
Nvidia's Free RAG Chatbot supports documents and youtube videos (Zero Coding - Chat With RTX)
4K views · 5 months ago
Chat With RTX is a free chatbot released by Nvidia. This chatbot can be used as an AI chatbot, for RAG with documents, and for RAG with YouTube videos. In this video, I show how to install and use the chatbot. I also test its inference time, accuracy, and hallucination with both the Mistral 7B and Llama2 13B parameter Large Language Models (LLMs). Link to download the chatbot: www.nvidia.com/en-us/ai-on-rtx...
Langchain vs Llama-Index - The Best RAG framework? (8 techniques)
14K views · 5 months ago
Curious about which RAG technique suits your project best? Here I compare two chatbots and examine eight techniques from #Langchain and #llamaindex, and I'll guide you through designing a pipeline to evaluate your RAG system's performance efficiently. And that's just the beginning! We'll analyze their effectiveness across various documents and tackle over 40 questions, putting these techniques to a...
Fine-tuning Large Language Models (LLMs) | w/ Full Code
1.8K views · 6 months ago
We use a fictional company called Cubetriangle and design the pipeline to process its raw data, #finetune 3 large language models (#LLM) on it, and design a #chatbot using the best model. 00:00:17 Presentation (Key concepts) 00:02:39 #llama2 Pre-trained vs llama2-chat (fine-tuned) models 00:13:22 Fine-tuning schema 00:14:22 CubeTriangle data description 00:17:21 Data processing (first step) 00:...
ChatGPT v2.0: Chat with Websites, Search the Web, and Summarize Any Website | Chainlit Chatbot
1.2K views · 6 months ago
#WebRagQuery is a powerful #chatbot, built with #OpenAI #GPT model in #chainlit user interface, that harnesses the power of GPT #agents, #functioncalling, and #RAG to offer an enhanced conversational experience. Here's how you can make the most of its diverse functionalities: *Normal ChatGPT Interaction:* Engage in natural conversations as you would with a regular ChatGPT app, experiencing seam...
Connect GPT Agent to Duckduckgo Search Engine | Streamlit Chatbot
1.2K views · 7 months ago
*WebGPT* is a powerful chatbot, designed with #streamlit, that enables users to pose questions that require internet searches. Leveraging #GPT models: * It identifies and executes the most relevant given #Python functions in response to user queries. * The second GPT model generates responses by combining user queries with content retrieved from the #websearch engine. * The user-friendly interf...
RAG-GPT: Chat with any documents and summarize long PDF files with Langchain | Gradio App
26K views · 7 months ago
RAG stands for Retrieval Augmented Generation and RAG-GPT is a powerful chatbot that supports three methods of usage: 1. *Chat with offline documents:* Engage with documents that you've pre-processed and vectorized. These documents will be integrated into your chat sessions. 2. *Chat with real-time uploads:* Easily upload documents during your chat sessions, allowing the chatbot to process and ...
RAG explained: A Step-by-Step Guide to Vector Search and Content Retrieval
1.4K views · 7 months ago
This is the first video covering the fundamentals that we need for designing the Chatbots. We start by breaking down text embedding and its role in Retrieval-Augmented Generation (RAG) systems. Then, we put theory into practice by building a simple RAG system from scratch. 📘 Get the tutorial notebooks here: github.com/Farzad-R/LLM-Zero-to-Hundred/tree/master/tutorials/text_embedding_tutorial. 📚...
LLM-Zero-to-Hundred Introduction
1.7K views · 7 months ago
Welcome to the premiere episode of our "LLM-Zero-to-Hundred" series! Dive into the world of Large Language Models (LLMs) as we explore a variety of applications and techniques. 📚 Explore the full GitHub repo: github.com/Farzad-R/LLM-Zero-to-Hundred/tree/master 🔗 Connect on LinkedIn: www.linkedin.com/in/farzad-roozitalab/ 🌐 Visit my website: farzad-r.github.io/ AI image in the thumbnail is from:...
What would be the difference if I used pandas instead of SQL?
Great Explanation 😍
Afarin (Persian for "bravo")
Thanks!
I only asked a few questions and was told to subscribe. How come you were continuously asking questions without being stopped?
I am using an active account in AYD. If you have activated your account, I am not sure where that subscribe request came from in your test
Great video! Question: I want to use a graph DB to combine my unstructured knowledge in PDFs with the structured user data in MongoDB. My aim is that the LLM can retrieve both the necessary data and the knowledge needed to solve the problem when approaching a user request. The only problem is that the user data is changing. Is there a way to update my graph DB every time I add/change something in my MongoDB (which my application is essentially running on)?
Thanks! Yes, you can achieve this. You should create an automated pipeline that monitors changes in your MongoDB database and updates the GraphDB accordingly. Instead of recreating the GraphDB from scratch each time, you should implement a mechanism that reflects only the recent changes from MongoDB. To do this, you need logic that detects changes in MongoDB and updates the GraphDB appropriately. For example, you can set up multiple checks, and for each check, use different Cypher queries to add or modify content in GraphDB based on the changes detected in MongoDB.
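To make the suggestion above concrete, here is a minimal sketch of such a sync pipeline using MongoDB change streams and the Neo4j Bolt driver. The connection details, the `users` collection, and the `User` node label are hypothetical placeholders, and this is one possible design rather than a definitive implementation (it requires the third-party `pymongo` and `neo4j` packages):

```python
def change_to_cypher(event):
    """Map a MongoDB change-stream event to a (Cypher, params) pair.

    Only insert/update/delete are handled; drops, renames, etc. are ignored.
    """
    op = event["operationType"]
    if op not in ("insert", "update", "delete"):
        return None
    doc_id = str(event["documentKey"]["_id"])
    if op == "insert":
        return ("MERGE (u:User {mongoId: $id}) SET u += $props",
                {"id": doc_id, "props": event["fullDocument"]})
    if op == "update":
        updated = event["updateDescription"]["updatedFields"]
        return ("MATCH (u:User {mongoId: $id}) SET u += $props",
                {"id": doc_id, "props": updated})
    # delete
    return ("MATCH (u:User {mongoId: $id}) DETACH DELETE u", {"id": doc_id})


def sync_forever(mongo_uri, neo4j_uri, neo4j_auth):
    """Watch MongoDB for changes and mirror each one into the graph DB."""
    from pymongo import MongoClient          # third-party: pip install pymongo
    from neo4j import GraphDatabase          # third-party: pip install neo4j

    coll = MongoClient(mongo_uri).mydb.users  # hypothetical db/collection
    driver = GraphDatabase.driver(neo4j_uri, auth=neo4j_auth)
    with coll.watch() as stream, driver.session() as session:
        for event in stream:                  # blocks until a change arrives
            mapped = change_to_cypher(event)
            if mapped:
                session.run(mapped[0], mapped[1])
```

In a real deployment you would also want retry handling and a resume token so the stream can recover after a restart, but the mapping step above is the core of "reflect only the recent changes".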
very very good tutorial in a topic which confuses many people, thanks.
I am glad that it was helpful!
I have a realtime database. Can this still work? Thanks for the awesome video..
Thanks! I haven't tested it in that scenario but I think it should work. As long as the DB that is getting updated is the one that is passed to the agent, the agent should be able to interact with it in real-time.
Thank you so much for this. This is gold!!!
I am glad the contents were helpful!
Is it possible to use a local LLM without using an API key, e.g. using Ollama with Mistral?
Yes, you can use local LLMs. But the LLM must be very powerful and able to handle long context lengths. Plus, the agent itself needs to be adjusted and modified to work with the open-source LLM.
So +/- 60% accuracy from graphRAG with tabular info? Not there yet!
I didn't understand the +/-60% accuracy. But if you are referring to the accuracy of this technique, I should say this approach is probably on the front line of retrieval techniques and it is very new. So, I expect it to become better and better over time with the advancements in the field. I should also say that between GraphRAG with tabular data and using Graph Agents to directly query the DB, I prefer the second approach and I would use agents.
Well done
This is very awesome. I need this type of series. Now waiting for the next video on function calling?
Thanks. I am glad you liked the video. I removed the function calling video because OpenAI updated the functions and they changed the way the models were called. But you can have access to the code here: - github.com/Farzad-R/LLM-Zero-to-Hundred/tree/master/tutorials/LLM-function-calling-tutorial plus, I am using function calling in the following two projects: - th-cam.com/video/KoWjy5PZdX0/w-d-xo.htmlsi=DN1Gt6sA8W-E4C2l - th-cam.com/video/55bztmEzAYU/w-d-xo.htmlsi=kMV5ZtPaGugVckP6 And in the next video I will explain the new ways of function calling and how to design LLM agents from scratch. That project is almost ready!
Thank you so much for such a valuable and unique series.
Thanks!
Hi, I wonder, does this system work on macOS? I saw in your repository that it works on Linux and Windows.
This system works on macOS. If the provided requirements.txt file doesn't run, install the necessary libraries manually. Once you have them, you're good to go.
@@airoundtable Thank you so much! I have a question regarding RAG: I am currently working on a survey data with open-ended questions and mostly user type-in answers (which means they are unstructured and may vary by personal writing/spelling/Abbreviation habits, like ai/AI/A.I, etc), and I wonder what approach would be the best for me to analyze and do Q&A with the data, especially grouping and pairing users by their answers? I know sql query is no use due to the nature of unstructured responses, and traditional RAG that don't use the model's own knowledge but solely the context (I remember seeing this in one of your video, correct me if I am wrong...) may not be helpful because I need the model to read these answers like a human not a string matching machine.
@@黄迪宏 You're welcome! I need to know more about the source of the data. When users ask their question, what type of data do you want to search to get the answer? Is it tabular, semi-structured, or fully unstructured data?
@@airoundtable Its an excel file of google survey results, each column is the answer to one question and each row refers to one respondent. Most of the questions require typing instead of selecting a option, so the answers will be all unstructured strings. I would like to have the model answer my questions like who and who have similar interests in what (grouping), whose skills seems to be able to solve whose problem (pairing), who and who know the same people, etc.
@@黄迪宏 Interesting... I never worked on such a problem. But to quickly brainstorm here a bit: if the type of questions that you want to ask of the DB is clear and you know what questions you are looking for, you can use an LLM to construct a knowledge graph from the unstructured data and use a GraphAgent to query the graph DB that you created (I have a video on knowledge graphs, and in the final part I explain a similar problem for a medical chatbot). With this approach you can at least answer all the questions that you mentioned in your previous reply. However, if the type of questions is not clear and it is an open-ended problem where users can ask literally anything of the DB, that would make it a much harder problem to solve, because not only do you want to search the text, you also want to connect the dots between multiple possible answers and draw a conclusion from them. I am not sure what would work for that scenario at the moment.
Really well done, probably the most complete RAG I have seen to date... I have a lot to learn from you here, so thank you. One question: do you have anything explaining the use of local LLMs with a tool such as this?
Thanks. I am glad you liked the video. Yes, I have a video explaining how to design a RAG chatbot using open-source LLMs, which might interest you. Here is the link: th-cam.com/video/6dyz2M_UWLw/w-d-xo.htmlsi=XvWfkFAEL4Jt4CsX
Thanks!
Very interesting, in your opinion what are the steps to take in order to convert the sql query results into a visualized chart?
Thanks. Add agents specifically designed for data visualization to the system. That would be my first choice, since current agents are very good at retrieving data but were just not designed to visualize it.
Damet garm (Persian for "well done"), thank you, it was useful for me.
Thanks! I am glad you liked the video
The video was great and amazing. There is one issue: you only use some paid LLMs. Please also use open-source LLMs; this will help us more.
Thanks for the suggestion. I will try to include more applications with open-source LLMs.
How about adding some few-shot examples for the agent and a bit of table reference to shrink the context window and make it more fine-tuned?
That is a great idea. Adding few shot examples and extra content can improve the performance
@@airoundtable Today, we achieved excellent results by incorporating a few-shot learning approach, enhancing them with dynamic vectors, so eventually dynamic few-shots. We also developed a concise table description aligned with the few-shot prompts. These results are highly optimized, significantly reducing calls to OpenAI and token usage. Even with GPT-3.5-turbo, we've used fewer than 3,000 tokens so far.
@@debarghyadasgupta1931 It is great to hear! I really like the strategy, and it is very interesting to me that the number of calls to OpenAI was reduced as well. Great job and thanks for sharing the results with me!
@@debarghyadasgupta1931 Hi, good to hear your great progress! Would you mind sharing some insights about how to do it? I am interested in how to incorporate few-shot and dynamic vectors.
@@黄迪宏 The few-shot examples were optimized using vectors. First, you need to embed all your few-shot examples. Then, when a user poses a question, you perform a vector similarity search exclusively on these few shots. Typically, Approximate Nearest Neighbors (ANN) is used for this purpose. I prefer using k=5, so among the 100 or possibly 1000 few-shot examples in your collection, you will identify the top 5 that are most relevant. Adding context from the table can further refine the query. This approach not only reduces the number of calls to the LLM API but also decreases the token usage.
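A minimal sketch of the dynamic few-shot selection described above, using an exact cosine-similarity search over toy 2-D vectors. In a real system the vectors would come from an embedding model (e.g. text-embedding-ada-002), and at the scale of hundreds or thousands of examples the exact search would be replaced by an ANN index such as FAISS; the example texts below are made up for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def pick_few_shots(query_vec, examples, k=5):
    """Return the k example texts whose embeddings are closest to the query.

    examples: list of (embedding, example_text) pairs.
    """
    scored = sorted(examples, key=lambda e: cosine(query_vec, e[0]), reverse=True)
    return [text for _, text in scored[:k]]

# Toy 2-D embeddings standing in for real embedding-model output.
examples = [
    ([1.0, 0.0], "Q: total sales? A: SELECT SUM(amount) FROM sales;"),
    ([0.0, 1.0], "Q: list staff? A: SELECT name FROM staff;"),
    ([0.9, 0.1], "Q: sales by year? A: SELECT year, SUM(amount) FROM sales GROUP BY year;"),
]
# A sales-related query vector pulls in the two sales examples first.
print(pick_few_shots([1.0, 0.1], examples, k=2))
```

The selected examples would then be spliced into the system prompt alongside the table description, which is what keeps the context window small.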
Excellent idea and project! I am doing the same project in my college as part of a mini project. Could you guide me on how to accomplish this? The main problem is that the OpenAI API key is a paid one; aren't there any others to use?
This is the abstract of my project: "An Intelligent Chatbot for Attendance & Academic Support in Education." This study introduces an AI-driven chatbot designed for educational institutions, focusing on attendance management and academic support. Leveraging natural language processing (NLP) and machine learning (ML), the chatbot simplifies attendance tracking for educators and provides personalized academic guidance for students. It offers real-time updates on attendance records and performance metrics, and suggests tailored improvements. Additionally, serving as a communication and data visualization platform, it offers dynamic representations of statistics, enhancing decision-making. Overall, this intelligent chatbot streamlines administrative tasks and fosters student success in education.
Thanks! OpenAI models are not free. But you can try to build this project using open-source models from Hugging Face. Search for LangChain agents using Hugging Face models and you will find some guides on how to design agents using open-source LLMs.
It is an interesting project. With a good database, you can design an agent to accomplish the goals that you mentioned in the project description.
@@airoundtable Yeah, thanks. Many discouraged me, saying that since I am studying in 2nd year at a 3rd-tier BTech college I couldn't do it, but I am willing to and will see where it goes. Thanks again 🙂.
@@LIVELOVEASH645 Don't let anyone discourage you. If you think you can, you can.
Can the project generate plots of the data, such as histograms, scatter plots, line plots, etc., and also answer questions about the data in a comprehensive and informative way, OR can it only perform statistical analysis like mean, median, standard deviation, etc.?
It can perform simple statistical analysis like mean, median, standard deviation, etc. I am not sure what you meant by "answer questions about the data in a comprehensive and informative way". But since the agent has access to the database schema and can also be provided with further information such as examples and details of the DB, it can have a very good understanding of the DB and would be able to answer general questions as well. It cannot plot the data and for that, you need to improve the agent's capabilities and possibly add a new agent designed for visualization tasks to the system.
I wanted to know, out of the 3 methods you showed, which one consumes the fewest tokens and is the most cost-effective?
Firstly, it's important to note that selecting strategies based solely on token usage isn't practical, as each strategy serves a different purpose. Generally, over the long term, LLM agents tend to consume more tokens compared to standard RAG approaches. This is particularly true when multiple agents collaborate or require frequent debugging. In RAG and similar approaches, the primary cost arises during initial data vectorization. After that, it functions as a straightforward chatbot with lower token consumption compared to agents.
Hi, I have a CSV extract of 8k rows. I want to ask it questions and get responses. Which will be the best among the methods you explain? And I assume I am on the right video ☺️
Hi Harry. Most probably you asked it on the right video :)). In general, the approach that works best very much depends on the type of questions that you would like to ask of the database. As a simple rule, if your questions can be answered by querying the CSV file, then SQL agents would be the best choice. If your questions can be answered by vector search (answer and question have a semantic relationship) rather than a logical query, then RAG can be the better approach. If you are not sure, test the approaches on a subset of the data and evaluate their performance. Although, since my guess is that you would get the answers to your questions using logical queries, agents would be my first suggestion.
@@airoundtable is agent more token consuming as I am using openai
@@harrysharma3435 Yes, in general agentic systems are more token-consuming than RAG approaches. The number of agents collaborating together, the amount of data they have to process in every query, and the number of times they have to correct themselves to get the answer directly impact the cost of agentic systems.
Could you explain to me how to replace azure with rocama for this example? thank you
I didn't understand what rocama is. I might be able to help if you can give me more info.
This video is so inspiring and wonderful! I really appreciate the effort you put into creating such an amazing tutorial. I was particularly impressed with the All-In-One Chatbot capabilities. I have also watched your video about Chat with SQL and Tabular Databases using LLM Agents. I do really admire your work! I believe it will be more fascinating if you could please make a tutorial on how to build a chatbot that can display images generated from a code interpreter? For example, it would be awesome to see how to create graphs or charts from prompt + csv files or SQL or tabular databases, similar to the features available in OpenAI's GPT-4. Being able to visualize data directly in a chatbot would be incredibly helpful. Thanks again for the fantastic content!
Thanks for the kind words. I am very happy to hear the content was useful for you. I might make another video to improve the SQL agent and add more features to it. But at the moment I am working on two other videos so that one will happen somewhere down the road. In the meantime, there are some articles that you can check to get an idea of how to make agents for data visualization. Here are some examples. I hope they help dev.to/ngonidzashe/chat-with-your-csv-visualize-your-data-with-langchain-and-streamlit-ej7 medium.com/@nageshmashette32/automate-data-analysis-with-langchain-3c0d97dec356 www.reddit.com/r/LangChain/comments/1d2qqqy/building_an_agent_for_data_visualization_plotly/
Awesome tutorial! 👍
Thanks!
fare 31.275 at 54:40 does not seem right
It is correct. You can find the answer to this question in row #15 of the titanic_small dataset: Pclass: 3 - Name: Mr. Anders Johan Andersson - Sex: male - Age: 39 - Siblings/Spouses Aboard: 1 - Parents/Children Aboard: 5 - Fare: 31.275
is there a video talking about how to combine RAG, SQL agent, and Knowledge Graph?
I don't have a video about combining them. But to combine them, there first needs to be a logical use case with a specific roadmap and mapping, since combining them will not solve a general problem. But in case you have a use case in mind, you can definitely do it.
Can you do a series on llama-index? They have a lot of tools and it’s so different from building using langchain
You are right, llama-index is great and has a lot of tools. I will check it again soon. My last interaction with it was for a video in which I compared llama-index and Langchain for RAG. That was a while ago, and I know they have evolved the framework a lot.
Thank you. There aren't enough llama-index tutorials and I think you'll explain them better than anyone! Learning so much from your videos.
@@sujit5013 I appreciate the kind words. I am working on two videos now. But I will keep llama-index in mind and check it out later for sure. Thanks again for the suggestion!
Amazing Content
Thanks!
how do i ask follow up questions to a sql database ? Can the chatbot maintain the history and context while using a sql database agent ?
The code that I published on GitHub does not have memory. But yes, it can have memory. You can use LangChain memory classes and integrate them into the chatbot. To start, check the following link: python.langchain.com/v0.1/docs/modules/memory/agent_with_memory_in_db/
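As a framework-agnostic illustration of what those memory classes do, here is a minimal rolling-buffer sketch whose prefix would be prepended to the agent's prompt. The class and method names are illustrative, not LangChain's API:

```python
class ChatMemory:
    """Keep the last N conversation turns and render them as a prompt prefix."""

    def __init__(self, max_turns=5):
        self.turns = []
        self.max_turns = max_turns

    def add(self, user, assistant):
        """Record one (user message, assistant reply) turn, dropping the oldest."""
        self.turns.append((user, assistant))
        self.turns = self.turns[-self.max_turns:]

    def as_prompt_prefix(self):
        """Serialize the buffered turns for inclusion in the agent's prompt."""
        lines = []
        for u, a in self.turns:
            lines.append(f"User: {u}")
            lines.append(f"Assistant: {a}")
        return "\n".join(lines)


memory = ChatMemory(max_turns=2)
memory.add("How many films are in the DB?", "There are 1000 films.")
memory.add("And how many actors?", "There are 200 actors.")
# Prepending this prefix lets a follow-up like "which of them appeared in
# the most films?" resolve against the earlier turns.
print(memory.as_prompt_prefix())
```

Persistent variants (as in the linked LangChain guide) store the same history in a database instead of an in-process list, so it survives restarts.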
Based on the user prompt, how can we define the autonomous flow that determines when to use the SQL LLM agent vs. RAG for similarity search?
It depends on the use case. If there is logic that a human could use to understand and decide, then an LLM agent sitting on top of both approaches, with that logic passed to it, can make the decision for your users. If the logic is only a matter of whether the SQL agent fails to retrieve the answer, then you can add that logic in your code as well. But if there is no specific logic behind the selection, then it can't be integrated into the chatbot.
@@airoundtable I have seen some routing concepts recently. May be worth checking
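The failure-based fallback mentioned above can be sketched as a tiny router; `sql_agent` and `rag_chain` are hypothetical callables standing in for the real components:

```python
def answer(question, sql_agent, rag_chain):
    """Try the SQL agent first; fall back to RAG if it fails or returns nothing.

    sql_agent and rag_chain are hypothetical callables: str -> str.
    """
    try:
        result = sql_agent(question)
        if result:
            return {"route": "sql", "answer": result}
    except Exception:
        pass  # the agent could not build or run a valid query
    return {"route": "rag", "answer": rag_chain(question)}
```

A routing LLM (as in the "routing concepts" mentioned in the reply) would replace the fixed try/except with a model call that classifies the question before dispatching it.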
Great demo. I would like to know more about how they built the training module. I assume this is a hybrid system with a retriever connected to a VectorDB or Elasticsearch to find training chunks with previously successful queries, and then a text-to-SQL module to actually generate the query.
I cannot say with certainty since I haven't seen the code behind AYD, and I am not sure what you meant by the "training chunks". But my guess leans toward the same approach that LangChain has been working on. LLMs are good at generating queries for these databases, and by passing the schema of the database to the LLM, it can understand how to retrieve the answer to the user's question. The next level would be to also add some necessary description of the database to the system prompt to help the LLM navigate more easily. And finally, with the help of few-shot learning, you can do the final adjustment of the LLM on your database. If the database is huge, in a custom solution I would break it down into smaller databases and use a divide-and-conquer strategy to achieve good performance.
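A minimal sketch of the schema-plus-description prompting described above; the function name and prompt layout are illustrative assumptions, not AYD's or LangChain's actual prompt:

```python
def build_sql_prompt(schema, question, examples=None, db_notes=""):
    """Assemble a text-to-SQL prompt: schema, optional DB description,
    optional few-shot examples, then the user's question."""
    parts = [
        "You are an assistant that writes SQL for the database below.",
        "Schema:",
        schema,
    ]
    if db_notes:
        parts += ["Notes about the database:", db_notes]
    for ex in (examples or []):
        parts += ["Example:", ex]
    parts += ["Question:", question, "SQL:"]
    return "\n".join(parts)


# Hypothetical Sakila-style table used purely for illustration.
prompt = build_sql_prompt(
    schema="CREATE TABLE actor (actor_id INT, first_name TEXT, last_name TEXT);",
    question="How many actors are there?",
    examples=["Q: list actor names -> SELECT first_name, last_name FROM actor;"],
    db_notes="actor holds one row per film actor.",
)
```

The resulting string is what gets sent to the LLM; the model's completion after "SQL:" is the candidate query that the agent then executes against the database.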
Video full of knowledge and well explained. I am looking forward to seeing your channel grow!
Thanks! I appreciate the kind words. I am glad that the video was helpful
can you be my teacher? 🙏
Hi Darkmatter9583. I would be happy to help. You can go through the tutorials and ask your questions. I work on multiple projects at the moment but I will respond to questions whenever I can. In case you would like a head start in the field, send me a description of your background and what you want to accomplish. I will try to guide you in the right direction. You can send me a message on Linkedin.
Could you be my mentor? Your knowledge is awesome; I would like to learn that way.
Hello Darkmatter9583. Thanks! I answered your other message
Very interesting video, it would be great if you can make a video about Lamini too!
Thanks! Does it have any free features? As far as I remember, Lamini is a commercial product. I am not planning to go through commercial products unless they make a lot of noise or I get a sponsorship :)) But I will keep it in mind for a potential project down the line. Thanks for the suggestion!
Any thoughts on how to use the SQL agent to work with multiple tables, say n = 10-20? Can the SQL agent pick the right table(s) and columns dynamically based on the query?
The same agent that I designed and used in this video is dealing with multiple tables. It has access to around 11 tables and can easily get the answers. If you check the link below, you can see the diagram of the database that I am using in this video: docs.yugabyte.com/preview/sample-data/chinook/
Thank you for this - brilliantly explained. With any of the examples you've shown, what happens if you ask a general question such as "Who is Joe Biden?". Does the LLM Agent or Knowledge Graph understand it's irrelevant to the information in the database and therefore doesn't provide an answer from its general training?
Thanks. I am glad you liked the video. Yes, the LLM is aware that it should not use its own knowledge and only provides information that exists in the database.
Is the code no longer available?
It is available: github.com/Farzad-R/LLM-Zero-to-Hundred/tree/master/RAG-GPT
Thanks for your presentation, keep going!
Thanks!
I see you are creating a vector index at 48:56. But are you using it anywhere? I see that you are using the embeddings created further down the program. So I wonder why you create the vector index?
I am adding the vector embeddings of my data to that vector index at 49:50. Then I use that vector index for RAG in this chatbot
Hi, I'm having trouble using CSV as my source document for RAG in LlamaIndex. The results are not accurate.
Hi. If you are planning to use CSV for RAG, please check out my video called "Chat with SQL and Tabular Databases using LLM Agents". The methods that I implemented in this video for LlamaIndex are only for PDF and text documents.
New project using LLMs, please!
I am working on it. Coming soon!
nice work
Thanks!
If I wanted to query a very large database, what is the best method for the LLM to choose which table to query from? Currently using the Q&A pipeline, but running into problems with the LLM not making the correct SQL query and/or choosing the correct table even with a table retriever…
Update the agent strategy using this approach and check the results: python.langchain.com/v0.1/docs/use_cases/sql/large_db/
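The core idea behind that large-database strategy is a table pre-selection step before the SQL agent sees any schema: ask the model to shortlist the relevant tables first, then hand only their schemas to the agent. A hedged sketch, where `llm` is a hypothetical callable taking a prompt string and returning text:

```python
def pick_relevant_tables(question, table_descriptions, llm):
    """Shortlist tables for a question before running the SQL agent.

    table_descriptions: dict of table name -> one-line description.
    llm: hypothetical callable(prompt: str) -> str, expected to return a
         comma-separated list of table names.
    """
    listing = "\n".join(
        f"- {name}: {desc}" for name, desc in table_descriptions.items()
    )
    prompt = (
        f"Question: {question}\n"
        f"Available tables:\n{listing}\n"
        "Return a comma-separated list of the table names needed to answer it."
    )
    reply = llm(prompt)
    # Keep only names that actually exist, preserving the catalog's order.
    requested = {n.strip() for n in reply.split(",")}
    return [name for name in table_descriptions if name in requested]
```

Feeding only the shortlisted tables' schemas into the agent keeps the prompt small and reduces the chance of the model querying the wrong table.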
Bro, can you tell me which tool you used to build these flow diagrams? BTW your videos are amazing.
Thanks. I use draw.io
@@airoundtable Thanks. Just as we can ask numerical or statistical questions of this chatbot, can we also ask "how" and "why" questions, like why a number is high or low, i.e. an explanation of an answer, since LLMs are good at generating text?
@@muhammadtalmeez3276 Not with this agent. This agent is designed to just retrieve information from databases. What you are looking for is another feature that needs to be added to the system for data analysis. That is a different problem and very challenging to pull off, depending on the size and complexity of the database.
Can you share the values of openai.api_type = os.getenv("OPENAI_API_TYPE"), openai.api_base = os.getenv("OPENAI_API_BASE"), and openai.api_version = os.getenv("OPENAI_API_VERSION") in your env file, other than the key? I can't figure out the right values for those to run the code. Thank you!
What framework are you using? OpenAI from Azure or OpenAI directly?
Thank you for this. It's filled in a lot of conceptual information for me and put me on the right path. Looking forward to watching the other videos in this series!
Happy to hear it! Thanks