All-In-One Chatbot: RAG, Generate/analyze image, Web Access, Summarize web/doc, and more...

  • Published on Jul 27, 2024
  • HUMAIN V1.0 is a multi-modal, multi-task chatbot project powered by four generative AI models and built on top of RAG-GPT and WebRAGQuery. Features:
    - Can act similarly to ChatGPT
    - Has three RAG capabilities: RAG with pre-processed documents, with uploaded documents, and with websites
    - Can generate images
    - Can summarize documents and websites
    - Connects a GPT model to the DuckDuckGo search engine (the model calls the search functions automatically based on the user's query)
    - Can understand text, voice, and images
    - Has built-in memory for the GPT models
    (The project is fully developed in Python)
    🚀 GitHub Repository: github.com/Farzad-R/LLM-Zero-...
    00:00:00 Intro
    00:01:19 Quick demo
    00:06:36 Project requirements
    00:06:36 Repository walk-through
    00:12:12 Project schema
    00:12:51 RAG-GPT and WebRAGQuery schema walk-through
    00:16:30 Project structure
    00:18:45 Process documents for RAG in advance
    00:20:10 User interface design
    00:23:45 RAG-GPT code explanation
    00:33:00 Document summarization code explanation
    00:36:39 RAG-GPT full demo
    00:40:19 WebRAGQuery code explanation
    00:58:40 Serving open-source Generative AI models (LLAVA, Diffusion, Whisper) code explanation
    01:04:20 Multimodal full Demo
    01:07:51 Final keynotes
    Models:
    - GPT-3.5: OpenAI (Microsoft Azure)
    - text-embedding-ada-002: OpenAI (Microsoft Azure)
    - llava-hf/llava-v1.6-mistral-7b-hf: Hugging Face
    - stabilityai/stable-diffusion-xl-base-1.0: Hugging Face
    - openai/whisper-base.en: Hugging Face
    HUMAIN is fully adapted from the RAG-GPT and WebRAGQuery projects. To watch those two videos, please check the links below:
    RAG-GPT: • RAG-GPT: Chat with any...
    WebRAGQuery: • ChatGPT v2.0: Chat wit...
    The thumbnail's character is AI-generated. Credit: Angelo Scarcella from Pixabay.
    #openai #chatbot #multimodal #huggingface #diffusion #AI #generativeai #GPT #chatgpt #langchain #rag #LLM #largelanguagemodel #python #gradio
  • Science & Technology

Comments • 37

  • @AbdulSofyan-m6z
    @AbdulSofyan-m6z 20 days ago

    This video is so inspiring and wonderful! I really appreciate the effort you put into creating such an amazing tutorial. I was particularly impressed with the All-In-One Chatbot capabilities. I have also watched your video about Chat with SQL and Tabular Databases using LLM Agents. I really admire your work! I believe it would be even more fascinating if you could make a tutorial on how to build a chatbot that can display images generated from a code interpreter. For example, it would be awesome to see how to create graphs or charts from a prompt plus CSV files, SQL, or tabular databases, similar to the features available in OpenAI's GPT-4. Being able to visualize data directly in a chatbot would be incredibly helpful. Thanks again for the fantastic content!

    • @airoundtable
      @airoundtable 19 days ago

      Thanks for the kind words. I am very happy to hear the content was useful for you. I might make another video to improve the SQL agent and add more features to it. But at the moment I am working on two other videos so that one will happen somewhere down the road. In the meantime, there are some articles that you can check to get an idea of how to make agents for data visualization. Here are some examples. I hope they help
      dev.to/ngonidzashe/chat-with-your-csv-visualize-your-data-with-langchain-and-streamlit-ej7
      medium.com/@nageshmashette32/automate-data-analysis-with-langchain-3c0d97dec356
      www.reddit.com/r/LangChain/comments/1d2qqqy/building_an_agent_for_data_visualization_plotly/

  • @KhanhLe-pu5wx
    @KhanhLe-pu5wx 3 months ago +1

    OMG, I was looking for a video like this for so long

  • @vinaygautam6234
    @vinaygautam6234 3 months ago

    Great work, thanks for sharing!

  • @Trevor0713
    @Trevor0713 3 months ago

    This is AWESOME! This project is really inspiring to me.

    • @airoundtable
      @airoundtable 3 months ago

      Thanks! I am glad to hear it!

  • @aussie-elders
    @aussie-elders 12 days ago

    Really well done, probably the most complete RAG I have seen to date... I have a lot to learn from you here, so thank you. One question: do you have anything explaining how to use local LLMs with a tool such as this?

    • @airoundtable
      @airoundtable 11 days ago

      Thanks. I am glad you liked the video. Yes, I have a video explaining how to design a RAG chatbot using open-source LLMs, which might interest you. Here is the link:
      th-cam.com/video/6dyz2M_UWLw/w-d-xo.htmlsi=XvWfkFAEL4Jt4CsX

  • @seththunder2077
    @seththunder2077 3 months ago +1

    Lets goooooo

  • @0ZeroTheHero
    @0ZeroTheHero 3 months ago

    Great job!

    • @airoundtable
      @airoundtable 3 months ago

      Thanks! Love the name: ZeroTheHero! :)) Cheers!

  • @robertgoldbornatyout
    @robertgoldbornatyout 3 months ago

    Looks very good, well done. New sub!

  • @MuhammadZamanAli-xc1vy
    @MuhammadZamanAli-xc1vy 3 months ago

    Great

  • @emulated24
    @emulated24 3 months ago

    Great work, I'm following your projects. It would be awesome to enable your projects for locally hosted LLMs. 😉

    • @airoundtable
      @airoundtable 3 months ago +1

      Thanks! Great suggestion. That is indeed something I would really like to take into account. This project has two parts that call the GPT model; apart from those two, the rest is fully open source. For those two parts:
      1. One is the RAG-GPT project, for which I uploaded a project called Open-source-RAG with Google Gemma. So you can easily switch that part to the open-source version and the LLM of your choice by following that video.
      2. For the WebRAGQuery part, though, since I am using function calling, it would be harder to modify it and turn it into an open-source branch. Function calling is not a simple task; you would need access to a powerful LLM with a good context length, and it would also require a lot of prompt engineering to fully adjust the model. So I wouldn't suggest going with open-source models for that part, at least for now. GPT models are very good at that task.
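
      For illustration, a minimal sketch of wiring a GPT model to DuckDuckGo through function calling. This is not the project's exact code; the tool name, schema, and model are placeholders, and it assumes the newer openai (>=1.0) and duckduckgo_search packages:
      import json
      from openai import OpenAI
      from duckduckgo_search import DDGS

      client = OpenAI()

      # Describe a search tool so the model can decide on its own when to call it
      tools = [{
          "type": "function",
          "function": {
              "name": "web_search",
              "description": "Search the web with DuckDuckGo and return the top results",
              "parameters": {
                  "type": "object",
                  "properties": {"query": {"type": "string"}},
                  "required": ["query"],
              },
          },
      }]

      messages = [{"role": "user", "content": "What is the latest LangChain release?"}]
      response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages, tools=tools)
      tool_calls = response.choices[0].message.tool_calls

      if tool_calls:
          # The model requested a search: run it and pass the results back for the final answer
          query = json.loads(tool_calls[0].function.arguments)["query"]
          results = DDGS().text(query, max_results=3)
          messages.append(response.choices[0].message)
          messages.append({"role": "tool", "tool_call_id": tool_calls[0].id, "content": json.dumps(results)})
          final = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
          print(final.choices[0].message.content)
      else:
          print(response.choices[0].message.content)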

    • @emulated24
      @emulated24 3 months ago +1

      @@airoundtable The first suggestion I understand; I will look into it. Thanks.
      The elephant in the room here is privacy. I'm sure you understand NDAs with corporations; one can't simply upload internal documents and run queries against a public AI. Hence the need to run it locally in a fenced environment.
      Even for a hobby enthusiast. 🙂
      In such a case, privacy > quality/ease of use.
      But rest assured, your content is as close as I have found so far. Just a tiny bit missing...

    • @airoundtable
      @airoundtable 3 months ago

      @@emulated24 True. That's an important issue. Hopefully that would be an easy switch then

  • @dswithanand
    @dswithanand 3 months ago +2

    Hi, can you please explain how to vectorize a MySQL database and store the vectors in a vector database?

    • @airoundtable
      @airoundtable 3 months ago +1

      Hi, I am preparing a project for it. I will release it soon

    • @dswithanand
      @dswithanand 3 months ago

      @@airoundtable great.. Will be waiting
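
      In the meantime, a minimal sketch of one common approach, assuming SQLAlchemy, pandas, ChromaDB, and the text-embedding-ada-002 model; the connection string, table, and column names are placeholders:
      import pandas as pd
      from sqlalchemy import create_engine
      import chromadb
      from openai import OpenAI

      # Read rows from MySQL (placeholder connection string and table)
      engine = create_engine("mysql+pymysql://user:password@localhost/mydb")
      df = pd.read_sql("SELECT id, name, description FROM products", engine)

      client = OpenAI()
      chroma = chromadb.PersistentClient(path="vectordb")
      collection = chroma.get_or_create_collection("products")

      # Turn each row into text, embed it, and store it keyed by the row id
      texts = [f"{row['name']}: {row['description']}" for _, row in df.iterrows()]
      embeddings = [
          client.embeddings.create(model="text-embedding-ada-002", input=t).data[0].embedding
          for t in texts
      ]
      collection.add(ids=[str(i) for i in df["id"]], documents=texts, embeddings=embeddings)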

  • @priyatosh-ig5eo
    @priyatosh-ig5eo 3 months ago

    Hi, it's awesome!!! I have a question: can we generate embeddings of a MySQL database? Can you suggest a way or, if possible, make a video? Thanks

    • @airoundtable
      @airoundtable 3 months ago

      Hi, thanks! The closest approach that I've seen for doing Q&A with SQL is this one:
      python.langchain.com/docs/use_cases/sql/quickstart/
      There is also a YouTube video from another YouTuber that I saw recently. I haven't watched it yet, but you can have a look:
      th-cam.com/video/9ccl1_Wu24Q/w-d-xo.htmlsi=FrE6AwVObBrgPMha
      I've had it in mind but still haven't been able to dig deeper and make a video about it. Thanks for the suggestion!
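
      For reference, a minimal sketch of the pattern that quickstart describes (the connection string and question are placeholders):
      from langchain_community.utilities import SQLDatabase
      from langchain_openai import ChatOpenAI
      from langchain.chains import create_sql_query_chain

      # Point LangChain at the database and let the LLM write the SQL query
      db = SQLDatabase.from_uri("mysql+pymysql://user:password@localhost/mydb")
      llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
      chain = create_sql_query_chain(llm, db)

      sql = chain.invoke({"question": "How many customers placed an order last month?"})
      print(sql)          # the generated SQL query
      print(db.run(sql))  # run it against the database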

  • @kunalsatpute8379
    @kunalsatpute8379 2 months ago +1

    Can you also create a calculator to estimate the cost of OpenAI services like GPT-3.5, GPT-4, Whisper, Stable Diffusion, etc., for n users?

    • @airoundtable
      @airoundtable 2 months ago

      Depending on how you are using the OpenAI models, you already have access to your consumption cost. For instance, if you are using Azure OpenAI, there is a cost analysis tab in your OpenAI service that shows all the details of your consumption. I am just not sure whether it tells how many users are making the API calls. Anyhow, you already have access to at least 90% of what you are looking for. Just one note: Stable Diffusion is not an OpenAI model. In this chatbot, except for GPT-3.5, all other models are open source and free to use.

    • @airoundtable
      @airoundtable 2 months ago

      Whether you use Azure OpenAI or OpenAI directly, you already have access to your models' consumption details. For instance, in Azure there is a tab called "cost analysis" in your OpenAI service that shows a lot of information about your usage. Just one note: Stable Diffusion is not an OpenAI model. In this chatbot, except for GPT-3.5, all other models are open source and free to use.
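
      If you still want a rough per-user estimate, a minimal sketch; the per-token rates below are placeholders, so check the current OpenAI/Azure pricing page:
      # Rough cost estimate for GPT-3.5 chat usage (rates are illustrative placeholders)
      PRICE_PER_1K_INPUT = 0.0005   # USD per 1K prompt tokens
      PRICE_PER_1K_OUTPUT = 0.0015  # USD per 1K completion tokens

      def estimated_cost(num_users, requests_per_user, input_tokens, output_tokens):
          per_request = (input_tokens / 1000) * PRICE_PER_1K_INPUT + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
          return num_users * requests_per_user * per_request

      # e.g. 100 users, 50 requests each, ~1500 prompt and ~500 completion tokens per request
      print(f"${estimated_cost(100, 50, 1500, 500):.2f}")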

  • @frederic7511
    @frederic7511 a month ago

    Hi. Great work! Can I make it work on Windows without WSL, using bitsandbytes-windows? My Windows GPU desktop is hosted and, unfortunately, virtualization is turned off.

    • @airoundtable
      @airoundtable a month ago

      Hi, thanks. You can run it on Windows if you run the chatbot without bitsandbytes; unfortunately, that library is not compatible with Windows. If you load the full models (without bitsandbytes), you will need a lot of GPU memory to run the Stable Diffusion, LLaVA, and Whisper models simultaneously. If you are not sure how to compute the required GPU memory, check this website:
      www.substratus.ai/blog/calculating-gpu-memory-for-llm
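
      As a rough rule of thumb (the same kind of estimate that site walks through), a minimal sketch:
      def gpu_memory_gb(params_billion, bits=16, overhead=1.2):
          # Approximate VRAM to load a model: parameters * bytes per parameter * ~20% overhead
          return params_billion * (bits / 8) * overhead

      print(gpu_memory_gb(7, bits=16))  # a 7B model in fp16: roughly 17 GB
      print(gpu_memory_gb(7, bits=4))   # the same model 4-bit quantized: roughly 4 GB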

    • @frederic7511
      @frederic7511 a month ago +1

      @@airoundtable Thanks for answering. Actually, there is a bitsandbytes library for Windows, bitsandbytes-windows. I checked your code and spotted where you implement bitsandbytes, and I already modified your code to force 8-bit quantization (instead of 4-bit, which is not supported). But I noticed you use triton and uvloop in your requirements, which are definitely not supported on Windows systems. Are they mandatory for the project?

    • @airoundtable
      @airoundtable a month ago +1

      @@frederic7511 Last time I checked, bitsandbytes-windows was still not functional. I hope it works for you and solves the issue. No, you can remove those libraries to run the project on Windows.

    • @frederic7511
      @frederic7511 a month ago +1

      @@airoundtable OK, it appears I made it work with bitsandbytes-windows. Now I'll modify the code to try it with Groq and Ollama (chat and embeddings). I'll let you know if it works ;-) Thanks!

    • @airoundtable
      @airoundtable a month ago +1

      @@frederic7511 Great job! Interesting to hear that bitsandbytes-windows worked. Absolutely, I am curious to know the final result.
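
      For anyone trying the same thing, a minimal sketch of forcing 8-bit quantization when loading the LLaVA checkpoint with transformers and bitsandbytes (illustrative, not the project's exact loading code):
      import torch
      from transformers import BitsAndBytesConfig, LlavaNextForConditionalGeneration

      # Load the model in 8-bit instead of 4-bit
      quant_config = BitsAndBytesConfig(load_in_8bit=True)
      model = LlavaNextForConditionalGeneration.from_pretrained(
          "llava-hf/llava-v1.6-mistral-7b-hf",
          quantization_config=quant_config,
          torch_dtype=torch.float16,
          device_map="auto",
      )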

  • @mrmhc5458
    @mrmhc5458 3 months ago

    Wao 🥺🥺🥺🥺❤

  • @KhanhLe-pu5wx
    @KhanhLe-pu5wx 3 months ago +1

    Hi, can you show me where you set the API keys, which API keys we need, or an example key? Like Hugging Face?

    • @airoundtable
      @airoundtable 3 months ago

      Hi, in your project create a file and name it ".env". Open it and add your OpenAI credentials. I am using Azure OpenAI, so I have the following credentials in my .env file:
      OPENAI_API_TYPE=...
      OPENAI_API_VERSION=...
      OPENAI_API_KEY=...
      OPENAI_API_BASE=...
      They will be loaded automatically by the project.
      In case you use OpenAI directly instead of Azure, you won't have all of these. Just add the key and also switch the OpenAI functions from the Azure style to the OpenAI style. We don't need any credentials from Hugging Face for this project.
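
      For reference, a minimal sketch of how such a .env file is typically read in Python with python-dotenv (variable names match the ones above and the legacy openai package this project uses):
      import os
      from dotenv import load_dotenv
      import openai

      load_dotenv()  # reads the .env file from the working directory

      openai.api_type = os.getenv("OPENAI_API_TYPE")
      openai.api_version = os.getenv("OPENAI_API_VERSION")
      openai.api_key = os.getenv("OPENAI_API_KEY")
      openai.api_base = os.getenv("OPENAI_API_BASE")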

    • @user-tn3xl1vv1b
      @user-tn3xl1vv1b a month ago

      @@airoundtable How do I switch the OpenAI functions from the Azure style to the OpenAI style?

    • @airoundtable
      @airoundtable a month ago

      @@user-tn3xl1vv1b In the project code, wherever I am using:
      response = openai.ChatCompletion.create(
      engine=gpt_model,
      messages=[
      {"role": "system", "content": llm_system_role},
      {"role": "user", "content": prompt}
      ],
      temperature=temperature,
      )
      replace it with the OpenAI-style chat function (this uses the newer openai package, where the client is imported with "from openai import OpenAI"):
      client = OpenAI()
      chat_completion = client.chat.completions.create(
      model="gpt-3.5-turbo",
      messages=[
      {"role": "system", "content": llm_system_role},
      {"role": "user", "content": prompt}
      ],
      temperature=temperature,
      )
      You can find that function in:
      For summarize:
      HUMAIN-advanced-multimodal-chatbot/src/utils/raggpt/summarizer.py
      For RAG:
      HUMAIN-advanced-multimodal-chatbot/src/utils/raggpt/perform_rag.py
      For AI assistant:
      HUMAIN-advanced-multimodal-chatbot/src/utils/ai_assistant/interact_with_gpt.py
      Remember to install the proper libraries and provide your OpenAI API key to the project.