Chat with Multiple PDFs | LangChain App Tutorial in Python (Free LLMs and Embeddings)

  • Published Jan 6, 2025

Comments • 1.1K

  • @alejandro_ao
    @alejandro_ao  2 หลายเดือนก่อน +1

    🔥Join the AI Engineer Bootcamp:
    - Learn with step-by-step lessons and exercises
    - Join a community of like-minded and amazing people from all over the world
    - I'll be there to personally answer all your questions 🤓
    - The spots are limited since I'll be directly interacting with you
    You can join the waitlist now 👉 course.alejandro-ao.com/
    Cheers!

  • @ScottHufford
    @ScottHufford ปีที่แล้ว +45

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 The video tutorial aims to guide the building of a chatbot that can chat with multiple PDFs.
    00:38 ❓ The chatbot answers questions related to the content of the uploaded PDF documents.
    01:33 🔧 The video tutorial also covers the setting up of the environment, including the installation of necessary dependencies like Python 3.9.
    02:14 🔑 After setting up the environment and installing dependencies, the video progresses to explain the installation of other needed components to execute the task.
    03:38 👩‍💻 The video demonstrates the design of a graphical user interface (GUI) using Streamlit imported as 'St'.
    05:44 🎨 The sidebar of the GUI contains a file-upload feature for the chatbot to interact with PDF documents.
    07:11 🗳️ A 'Process' button is added to the sidebar as an action trigger for the uploaded PDF documents.
    08:57 🗂️ The tutorial explains how to create and store API keys for OpenAI and Hugging Face in an .env file.
    12:26 📄 The video further explains how the chatbot operates: it divides the PDF's text into smaller chunks, converts them into vector representations (embeddings), and stores them in a vector database.
    14:17 🧲 Using these embeddings, similar text can be identified: when a user asks a question, the app converts the question into an embedding and finds similar embeddings in the vector store (see the code sketch after this list).
    15:28 📚 The identified texts are passed to a language model as context to generate the answer for the user's question.
    19:54 🧩 The video guides the viewers to create functions within the application to extract the raw text from the PDF files.
    23:37 📋 The video further shows how to encode the raw extracted text into the desired format.
    25:03 ✂️ The tutorial provides guidance on creating a function to split the raw text into chunks to feed the model.
    25:28 📜 The presenter explains how to create a function that divides the text into smaller chunks using the LangChain library, which provides a class called 'CharacterTextSplitter'.
    29:58 🌐 The presenter introduces OpenAI's embedding models for creating vector representations of the text chunks for storage in the Vector store.
    31:37 🏷️ The Instructor model from Hugging Face is introduced as a free alternative to OpenAI's and is found to rank higher on the official Hugging Face leaderboard.
    33:59 💽 The speaker explains how to store the generated embeddings locally rather than in the cloud using FAISS with LangChain, a vector database for the numeric representations of the text chunks.
    36:06 ⏱️ Demonstrates how long it could take to embed a few pages of text locally with the Instructor model compared to the OpenAI model.
    40:07 🔄 The host introduces conversation chains in LangChain, which allow the chatbot to maintain memory and enable follow-up questions linked to previous context.
    44:17 🧠 The presenter details how to use conversation retrieval chains for creating chatbot memory and how it aids in generating new parts of a conversation based on history.
    48:05 🔄 The speaker covers how to make variables persistent during a session using Streamlit's session state, useful for using certain objects outside their initialization scope.
    50:23 🎨 The presenter proposes a method of generating a chatbot UI by inserting custom HTML into the Streamlit application, offering fine-tuned customization.
    51:05 📝 The presenter introduces a code pre-prepared to manage CSS styles of two classes - chat messages and bots. Styling is discussed with reference to images and HTML templating for distinct user and bot styles.
    53:07 🔂 The presenter shows how to replace variables within HTML templates, using Python's replace function. By replacing the message variable, personalized messages can be displayed using the pre-arranged template.
    57:42 🗣️ The speaker demonstrates how to handle user input to generate a bot's response using the conversation object. The response is stored in the chat history and makes use of previous user input to generate context-aware responses.
    01:00:14 🔄 A loop is introduced to iterate through the chat history. Messages are replaced in both the user and bot templates resulting in a more dynamic conversation history displayed in the chat.
    01:03:14 💬 The host highlights how the chatbot is able to recall context based on the user's previous queries. The AI remembers the context from previous messages and appropriately answers new queries based on that.
    01:03:27 🔄 The speaker introduces how to switch between different language models, using Hugging Face models as an example. These models from Hugging Face can be used interchangeably with OpenAI's with minor adjustments in the code.
    01:06:00 🔁 The presenter demonstrates how the system works using different models. The response from the Hugging Face model is fetched in the same manner as the previous OpenAI model.
    Made with HARPA AI
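    A minimal sketch of the pipeline those takeaways describe, using the classic LangChain API from around the time of this video (the helper name build_conversation and the chunk sizes are illustrative, not taken from the video):

    from dotenv import load_dotenv
    from PyPDF2 import PdfReader
    from langchain.text_splitter import CharacterTextSplitter
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS
    from langchain.chat_models import ChatOpenAI
    from langchain.memory import ConversationBufferMemory
    from langchain.chains import ConversationalRetrievalChain

    load_dotenv()  # reads OPENAI_API_KEY (and other keys) from the .env file

    def build_conversation(pdf_paths):
        # 1. extract raw text from every PDF
        raw_text = ""
        for path in pdf_paths:
            for page in PdfReader(path).pages:
                raw_text += page.extract_text() or ""
        # 2. split the text into overlapping chunks
        splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200)
        chunks = splitter.split_text(raw_text)
        # 3. embed the chunks and store them in a FAISS vector store
        vectorstore = FAISS.from_texts(chunks, embedding=OpenAIEmbeddings())
        # 4. wire the retriever, the LLM and the memory into a conversational chain
        memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
        return ConversationalRetrievalChain.from_llm(
            llm=ChatOpenAI(), retriever=vectorstore.as_retriever(), memory=memory)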

  • @Pramesh37
    @Pramesh37 10 หลายเดือนก่อน +16

    Mate, you're a legend. I was searching for tutorials on Langchain framework, HuggingFace, LLM and Embeddings to understand the concept. But this one practical implementation gave me the entire package. Great pace, clear explanation of concepts, overall amazing tutorial. You are a gifted teacher and I hope you continue to teach such rare topics. Earned yourself a subscriber, looking forward to more such videos.

    • @xspydazx
      @xspydazx 8 หลายเดือนก่อน

      in reality we should not be using any form of cloud AI systems unless they are FREE!! That's point 1...
      But we should also be focusing on Hugging Face models!
      All tasks can be performed with any model!
      Even the embeddings can be extracted from the model! So there is no need for external embedding providers; the embeddings used should ALWAYS come from the model itself. The RAG context can be added to the tokenized prompt and injected as content, so pre-tokenized datasets are useful, reducing the search time and RAG speed for local systems. (We cannot be held to ransom using the internet as a backbone for everything and making these people richer each day!)
      The services provided by a vector store are easily created in Python without third-party libraries, but any library which is completely open source and local is perfect!
      In fact we should be looking at our AI researchers to fill our RAG based on our expectations, and after examining and filtering it should be possible to extract it up to the LLM! (fine-tuned in, as talking to the LLM DOES NOT TEACH IT!)

    • @HelloIamLauraa
      @HelloIamLauraa หลายเดือนก่อน

      hii:). which HF model are u using?

  • @pathmonkofficial
    @pathmonkofficial ปีที่แล้ว +53

    The use of Huggingface language models takes this to another level, enhancing performance and functionality. The tutorial's step-by-step approach to setting up LangChain and building the chatbot application is truly valuable.

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +19

      you are truly valuable

    • @kaushikas4764
      @kaushikas4764 11 หลายเดือนก่อน

      What huggingface model is he using here?

    • @maximus3159
      @maximus3159 9 หลายเดือนก่อน +8

      This comment sounds suspiciously AI generated

    • @mrudulasawant4677
      @mrudulasawant4677 5 หลายเดือนก่อน

      @@alejandro_ao can we use python 3.10?

    • @chilldom.
      @chilldom. 4 หลายเดือนก่อน

      @@alejandro_ao i cannot thank you enough for this. Love from Ethiopia❤❤

  • @langmod
    @langmod ปีที่แล้ว +31

    Perfectly executed tutorial. Definitely worth a coffee. If you are taking suggestions, I'd be interested in a tutorial (or just exploring potential solutions) on comparing content between two documents; or more specifically answering questions about changes/updates between document versions and revisions. There are many situations where changes are made to a document (e.g. new edition of a book; documentation for python 2 vs 3; codebase turnover; etc.), and while 'diff' can show you exactly what changed in excruciating detail, it would be nice to have an LLM copilot that can answer semantic questions about doc changes. For example a bioinformatics professor might want to know how they should update their course curriculum as they transition from edition 3 to edition 4 of a textbook (e.g. Ch4 content has been moved to Ch5 to make room for a new Ch4 on advances in gene editing; Ch7 has major revisions on protein folding models).

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +5

      hey there! sorry for the late reply, this is a great idea! i started recording videos again a couple weeks ago and they are going up soon. this is totally something that could be very useful to a lot of people. i will look into that! and thanks for the coffee, you are amazing!!

  • @AdegbengaAgoroCrenet
    @AdegbengaAgoroCrenet ปีที่แล้ว +20

    I rarely comment on YT videos and I must say your sequencing and delivery of this content is really good. It's informative, clear, concise and straight to the point. No fluff or hype, just good quality content with exceptional delivery. I couldn't help but subscribe to your channel and smash the like button. I have seen a lot of videos about this and they don't deliver the kind of value you have.

    • @alejandro_ao
      @alejandro_ao  11 หลายเดือนก่อน

      thank you man, it means a lot!

  • @iftrejom
    @iftrejom ปีที่แล้ว +19

    Thank you, man! I had so much fun replicating this project, I feel I learnt a lot with it. I am an AI student and this is the kind of content that makes candidates appealing to employers. I will try to build some projects of my own with all the great stuff I just learnt.

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +3

      that's awesome mate! keep building side projects and don't forget to look back to see your progress 💪

    • @deekshithkumar2153
      @deekshithkumar2153 ปีที่แล้ว +1

      Can you please answer this: why am I not getting any output as shown in the video, other than this?
      load INSTRUCTOR_Transformer
      max_seq_length 512
      load INSTRUCTOR_Transformer
      max_seq_length 512
      Is it a problem with my system specifications or anything else?

    • @alangeorge1090
      @alangeorge1090 ปีที่แล้ว

      I'm also currently facing the same issue, still unresolved :( @@deekshithkumar2153

    • @mohammedalqaisi7114
      @mohammedalqaisi7114 ปีที่แล้ว +1

      @@deekshithkumar2153 I'm having the same problem, have you found a solution? Maybe the data is not loaded into FAISS correctly, idk?

    • @aishu2623
      @aishu2623 ปีที่แล้ว +1

      Sir, a small doubt about this project: can we upload any PDF and ask questions, or do we need to upload the same PDF that the person in the video uploaded?

  • @speerunscompared
    @speerunscompared ปีที่แล้ว +19

    This tutorial is excellent. It's nice that you also explained some of the smaller details, like the environment variable setup, and how this works with git.

  • @erniea5843
    @erniea5843 ปีที่แล้ว +19

    Well done! That overview diagram is very helpful and I appreciate that you referred back to it often. Too often tutorial videos neglect the system overview aspects, but you made it easy to see how it all fits together.

  • @VladimirBalko
    @VladimirBalko ปีที่แล้ว +19

    🎯 Key Takeaways for quick navigation:
    00:00 📝 This video tutorial demonstrates building a chatbot application that allows users to interact with multiple PDFs simultaneously.
    04:20 🛠️ The tutorial uses Streamlit to create the graphical user interface for the application, enabling users to upload PDFs and ask questions.
    10:20 🔐 API keys from OpenAI and Hugging Face Hub are used to connect to their APIs for language models and embeddings.
    16:39 📚 The application processes PDFs by converting them into chunks of text, creating embeddings, and storing them in a vector store.
    24:07 🔢 The large text from PDFs is split into smaller chunks to be fed into the language model for answering user questions.
    25:28 🧩 The tutorial demonstrates how to divide text into chunks using the "character text splitter" class from the "LangChain" library.
    29:31 📚 Two ways to create vector representations (embeddings) of text chunks: OpenAI's paid embedding models and the free "Hugging Face Instructor" embeddings.
    32:35 🏭 Demonstrates how to create a vector store (database of embeddings) using OpenAI's embeddings or Hugging Face's Instructor embeddings. The Instructor option is free but can be slower without a GPU.
    35:51 🕑 Processing time comparison: OpenAI's embeddings processed 20 pages in about 4 seconds, whereas Instructor embeddings on CPU took around 2 minutes for the same task.
    41:00 💬 Utilizing "conversation chain" in LangChain to build a chatbot with context and memory for a more interactive experience. Demonstrates how to create and use the conversation object.
    51:05 💻 The video demonstrates how to create templates for styling chat messages (CSS) in a Python app for displaying chatbot conversations.
    52:15 📜 CSS is imported and added to the HTML template for styling the chat messages in the Python app.
    54:10 🔄 The Python function `replace` is used to personalize the chat messages and display user-specific messages in the bot template.
    56:41 📝 User inputs are handled to generate responses using a language model (OpenAI or Hugging Face) and displayed with a chat-like structure.
    01:04:07 🏭 The tutorial shows how to switch from using OpenAI to Hugging Face language models in the Python app for chatbot interactions.
    Made with HARPA AI

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +3

      cool

    • @texasfossilguy
      @texasfossilguy ปีที่แล้ว

      wow

    • @Sahil-ev5pm
      @Sahil-ev5pm ปีที่แล้ว

      @@alejandro_ao Good project, but how do we host this to showcase it in our resume? Please guide us on the same.

  • @alejandro_ao
    @alejandro_ao  ปีที่แล้ว +10

    Hey there! Let me know what you want to see next 👇

    • @EntertainmentDoseByAkash
      @EntertainmentDoseByAkash ปีที่แล้ว

      I'm also doing the same. However, what do you charge approx. per project?

    • @pyw
      @pyw ปีที่แล้ว +2

      amazing, can the app respond with answers from the original pdf context?

    • @EntertainmentDoseByAkash
      @EntertainmentDoseByAkash ปีที่แล้ว

      Yes, anything can be answered except images. But accuracy and speed are low.

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +1

      ​@@pyw hey there, yes that's the idea. the app responds only with the context in your PDF files. regarding images, it would depend on the images in your doc, but in some cases we could make the app read that too :)

    • @sushantraikar1
      @sushantraikar1 ปีที่แล้ว

      I have dropped you an email with the request. Please have a look and let me know

  • @shivamroy1775
    @shivamroy1775 ปีที่แล้ว +48

    Great quality content. I absolutely love that you took the time to explain everything in such great detail and walk us through the coding process, unlike a few other videos on YouTube that compromise explainability and knowledge for pace. Please keep up the good work. Also, the explanation of the system diagram of the application was by far the best explanation I have ever seen.

    • @WildFire49
      @WildFire49 ปีที่แล้ว

      is your project working? when i process my pdfs they are not getting converted into chunks, what should i do?

    • @nealdriscoll22237
      @nealdriscoll22237 ปีที่แล้ว

      anyone know how to use Azure instead of OpenAI?

    • @MachineLearningZuu
      @MachineLearningZuu ปีที่แล้ว

      Yes, I am using it. What is the issue? @@nealdriscoll22237

    • @mrudulasawant4677
      @mrudulasawant4677 5 หลายเดือนก่อน

      can we use python 3.10?

  • @GrahamAndersonis
    @GrahamAndersonis ปีที่แล้ว +5

    Great video! Question: when you have a mixed PDF (text and tables), do you need to preprocess the tabular data in some way… like format/convert the inline table to a CSV string, or does PyPDF do enough preprocessing so the table rows can be ingested?

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +4

      hey there! pypdf works pretty well with pdfs that are only text and ideally compiled directly from a text editor. if you have more complicated files, with tabular data (or scanned documents from a photo), i recommend you perform OCR on them to be sure that you get all the data from them.
      when the file contains tabular data or is hard to process, i usually use pdf2image to convert the file to images and then use pytesseract.image_to_string to do OCR on it. i hope this helps!
      sorry for the late reply, i'm out on summer vacation! and thanks for the tip 💪
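      A rough sketch of that OCR fallback (the helper name ocr_pdf is illustrative; it assumes poppler and Tesseract are installed on the system):

      from pdf2image import convert_from_path
      import pytesseract

      def ocr_pdf(path):
          text = ""
          for image in convert_from_path(path):           # render each PDF page as an image
              text += pytesseract.image_to_string(image)  # run OCR on the rendered page
          return text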

    • @GrahamAndersonis
      @GrahamAndersonis ปีที่แล้ว

      @@alejandro_ao myself, I’ve been pre-converting pdfs to MS Word (direct word import) and then exporting table objects to pandas dataframes. Text objects are treated normally. Every object has an index for inline ordering.
      I haven't tried it, but you might be able to use the Adobe Extract API.
      Question: have you tried the pdf-to-Word pre-conversion approach? This can be automated, btw. Iterating with python-docx is easy.
      If so, does that behave better than converting to image? Thanks for a great channel!

  • @sandorkonya
    @sandorkonya ปีที่แล้ว +23

    Nice project! Since langchain's pdf reader saves the page as metadata, if you ask something, the results (the pages of the pdf) could be shown in an embedded canvas next to the chat. This way one could see the relevant pages of the corresponding PDFs, not just the straight answer.

    • @maxbodley6452
      @maxbodley6452 ปีที่แล้ว +14

      Yeah that sounds like a great idea. Do you know how you would go about doing that?

    • @kaiserchief500
      @kaiserchief500 ปีที่แล้ว

      @@maxbodley6452 have you got some information of how that works?

    • @xt3708
      @xt3708 ปีที่แล้ว

      bump

    • @oleum5589
      @oleum5589 ปีที่แล้ว

      how would you do this

    • @sandorkonya
      @sandorkonya ปีที่แล้ว

      @@oleum5589 langchain.document_loaders.pdf.PyPDFLoader --> Loader also stores page numbers in metadata.
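      A quick illustration of that page metadata (the file name is hypothetical):

      from langchain.document_loaders import PyPDFLoader

      pages = PyPDFLoader("example.pdf").load()  # one Document per page
      print(pages[0].page_content[:200])         # text of the first page
      print(pages[0].metadata)                   # e.g. {'source': 'example.pdf', 'page': 0}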

  • @alejandro_ao
    @alejandro_ao  11 หลายเดือนก่อน +8

    💬 Join the Discord Help Server: link.alejandro-ao.com/981ypA
    ❤ Buy me a coffee (thanks): link.alejandro-ao.com/YR8Fkw
    ✉ Join the mail list: link.alejandro-ao.com/o6TJUl

    • @qwadwojohn2628
      @qwadwojohn2628 8 หลายเดือนก่อน

      Hi Alejandro, any help on how I can set up the remote GitHub repository?

  • @geumyongjung8502
    @geumyongjung8502 ปีที่แล้ว +2

    Thanks for the video. This is the clearest explanation of langchain I've found. I do have a quick question. Can I use HuggingFace to do embeddings and use OpenAI to answer the user questions based on the vectors generated by HuggingFace? I think my question is basically asking whether the vector data structure is universal, i.e. can be made and used by both OpenAI and HuggingFace models.

    • @MirthaJosue
      @MirthaJosue ปีที่แล้ว

      I had exactly the same question

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +1

      hey there! that's an excellent question. the embedding models and the LLMs are two different things and they work separately. you first embed the text to perform a semantic search on it. then, once you have the text, you can send it to any LLM (huggingface, openai or local) to get a response.
      this means that you can totally use hugging face models for embeddings and openai for the LLM, and vice versa. what you cannot do is embed some text with huggingface and then embed more text with openai's embeddings and add it to the same vectorstore.
      this is because each embedding model generates vectors with different dimensions.
      just remember to stick to the same embeddings model from the beginning and you'll be fine!
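      A short sketch of that combination (free Instructor embeddings for retrieval, OpenAI for generation); the example chunks stand in for the splitter's output:

      from langchain.embeddings import HuggingFaceInstructEmbeddings
      from langchain.vectorstores import FAISS
      from langchain.chat_models import ChatOpenAI
      from langchain.memory import ConversationBufferMemory
      from langchain.chains import ConversationalRetrievalChain

      chunks = ["example chunk one", "example chunk two"]  # in the app these come from the text splitter
      embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")  # retrieval side
      vectorstore = FAISS.from_texts(chunks, embedding=embeddings)
      chain = ConversationalRetrievalChain.from_llm(
          llm=ChatOpenAI(),                      # generation side
          retriever=vectorstore.as_retriever(),
          memory=ConversationBufferMemory(memory_key="chat_history", return_messages=True))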

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว

      also, sorry for the delay in my response. i'm on summer vacation!

  • @weiimyi
    @weiimyi ปีที่แล้ว +5

    Nice video! I like how you mention all the little details people would miss. The video delivery is clear throughout. Keep up the good work!

  • @wapoipei
    @wapoipei 7 หลายเดือนก่อน

    I've been searching for this topic with working samples and you gave us a full working project. You have a gift for teaching, keep it up mate. Thank you Alejandro!

  • @svenst
    @svenst ปีที่แล้ว +38

    Hey, thanks for this tutorial. Small hint: it’s recommended to use pypdf instead of PyPDF2, since this branch was merged back into pypdf. ;-)

  • @laurencewhite7554
    @laurencewhite7554 ปีที่แล้ว

    Thanks for your super informative videos!

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว

      hey there! thank you so much for the superthanks! i'm glad you found it useful :)

  • @junyang1710
    @junyang1710 ปีที่แล้ว +4

    you are such a good teacher, everything is explained so clearly. Thank you!

  • @nameunknown007
    @nameunknown007 ปีที่แล้ว

    Thanks a lot buddy, it is my first time using all these components and the AI understanding and responding to some random PDF I uploaded gives so much joy hahaha thanks again!

    • @alejandro_ao
      @alejandro_ao  11 หลายเดือนก่อน

      keep it up, you're doing great! and thanks for the tip!

  • @adriangheorghe8814
    @adriangheorghe8814 ปีที่แล้ว +1

    I have been dreaming of something like this for months, great work. I can't wait for the video on persistent vector stores, a real game changer.

    • @alejandro_ao
      @alejandro_ao  11 หลายเดือนก่อน +1

      in next week’s video i use a persistent vector store :)

    • @akarunx
      @akarunx 11 หลายเดือนก่อน

      @@alejandro_ao Any updates on persistent vector stores? Eagerly waiting for it.

  • @RickeyBowers
    @RickeyBowers ปีที่แล้ว +20

    Your pacing and coverage of the material is excellent! A progressive external database seems like a natural next step. It could support multiple applications, caching at the file level. I can imagine querying a project (a selection of files). I suppose it could get more meta - making decisions based on response content.
    Really, looking forward to wherever you take us!

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +1

      absolutely, there are so many ways that these applications can be scaled up for your own projects! keep it up :)

    • @mrudulasawant4677
      @mrudulasawant4677 5 หลายเดือนก่อน

      @@alejandro_ao can we use python 3.10?

    • @ryanvk8318
      @ryanvk8318 2 หลายเดือนก่อน

      how to deploy it? Help!

  • @MrBekimpilo
    @MrBekimpilo ปีที่แล้ว +1

    This is one of the best tutorials ever, caters to a wide audience. The explanations and everything were on point.

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +1

      thanks mate, i appreciate it

    • @MrBekimpilo
      @MrBekimpilo ปีที่แล้ว

      @@alejandro_ao you're welcome. I will reach out sometime via email.

  • @fishbyte
    @fishbyte ปีที่แล้ว +8

    Hi Alejandro, thank you for making the series of Langchain tutorials. I have learned a lot! I wonder if you could show us how to ask a question over multiple uploaded files with different formats (e.g., PDFs + csv files).

    • @francoislepron2301
      @francoislepron2301 ปีที่แล้ว

      This would be really helpful. Do you think such a tool set is able to recognize the fields in an invoice, such as the provider, the date, the invoice reference, the amounts and quantities for each article, and the total price, so that afterwards we can query the tool for all invoices received from a specific provider, and so on?

  • @sahiljamadar7324
    @sahiljamadar7324 9 หลายเดือนก่อน

    I was interested in getting a taste of LLMs and this video delivered. I completed this project and it works fine, and it taught me a lot about the vector store and the LLM itself, which is very much appreciated. THANKS A LOT MAN!!!

  • @MZak-js7oy
    @MZak-js7oy ปีที่แล้ว +5

    Thank you so much for the detailed explanation. One curious question, as I'm planning to use the Instructor model locally:
    how do I store the embeddings DB locally instead of reprocessing it every time the app is initialized?

  • @gbengaomoyeni4
    @gbengaomoyeni4 ปีที่แล้ว +2

    Wow! This guy is simply brilliant! Continue the good work bruh. You just got a subscriber!

  • @BrandonFoltz
    @BrandonFoltz ปีที่แล้ว +5

    I cannot believe I got this running (because I am a coding idiot). EXCELLENT work.
    Do you know if there is a simple way to get the chat to display in reverse? I.e. the latest query/response is at the top so you don't have to scroll down each time?
    Keep up the great content. You are on your way.

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +8

      thank you man! i'm glad you got this to work 💪 to display the chat in reverse, you just need to reverse the array containing the messages before displaying it. you can add these 2 lines and then loop through this new array:
      reversed_messages = list(st.session_state.messages)
      reversed_messages.reverse()
      you need to copy the messages into a new variable before calling `reverse()` so you don't mess up the message history you have.
      ps. your videos are gold btw

    • @BrandonFoltz
      @BrandonFoltz ปีที่แล้ว +2

      @@alejandro_ao I will give that a try!
      Very kind of you to say my friend. Lots of us out here just trying to do good work and help others learn.
      Our viewers are the gold; we just provide the light so they can shine.

    • @riyajatar6859
      @riyajatar6859 ปีที่แล้ว

      import numpy as np
      import streamlit as st
      # user_template and bot_template are the HTML templates defined in the tutorial

      def handle_userinput(user_question):
          response = st.session_state.conversation({'question': user_question})
          st.session_state.chat_history = response['chat_history']
          chat_list = st.session_state.chat_history

          # even indices hold user messages, odd indices hold bot responses
          USER_INPUT = np.arange(0, len(chat_list), 2).tolist()
          BOT_RESPONSE = np.arange(1, len(chat_list), 2).tolist()
          USER_INPUT.reverse()
          BOT_RESPONSE.reverse()
          for i, j in zip(USER_INPUT, BOT_RESPONSE):
              st.write(user_template.replace(
                  "{{MSG}}", chat_list[i].content), unsafe_allow_html=True)
              st.write(bot_template.replace(
                  "{{MSG}}", chat_list[j].content), unsafe_allow_html=True)

    • @MirthaJosue
      @MirthaJosue ปีที่แล้ว +2

      ha, ha, ha... I felt the same way until I watched this video

  • @prerithsagar5694
    @prerithsagar5694 5 หลายเดือนก่อน

    Man, you deserve more followers. The quality that you provide is unmatched. Please do videos on branch chaining multiple LLMs in LangChain.

    • @rouge-tl8ks
      @rouge-tl8ks 5 หลายเดือนก่อน

      Hi, how were you able to integrate the OpenAI portion, as it is not free now? Did you purchase it?

  • @tictaco31530
    @tictaco31530 ปีที่แล้ว +3

    Very nice and thanks very much for sharing!! With little experience I got this to work and I see a lot of potential.
    It should be possible to save and load a FAISS index file, but I'm not able to get this to work. That way, instead of uploading a lot of PDFs each time, you could access an already generated - and saved - vector store. An option to append PDFs later on would also be nice. And... does the vector store have info on what comes from which pdf? And some metadata about the pdfs? Goal: to see the creation date or modified date, to see when that info was created (and may be outdated now ;-) or to determine which info is newer and which is older.
    And a plus one on dr. Kónya's question. It would be nice to see the references that the answer was based on.

  • @pickelbarrelofficial1256
    @pickelbarrelofficial1256 ปีที่แล้ว +1

    You are so good at explaining this, you've got a real talent there.

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว

      the student has 50% of the merit ;)

  • @guanjwcn
    @guanjwcn ปีที่แล้ว +11

    Thanks for the insightful videos as always, Alejandro! Could you also do a tutorial on a persistent vectorstore? For the same set of docs, if the app is refreshed, the embeddings of the docs would need to be redone, which might not be cost-effective if OpenAI embeddings are used. Not sure whether a persistent vectorstore like Pinecone would allow embeddings to be saved to local disk on first use so the app can just read from there subsequently.

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +27

      hey there, thanks :) sure. indeed, in this example, the vectorstore is in memory, which means that it will be deleted when you refresh the app. pinecone, as far as i know, works only on the cloud. but for local storage i'd probably go for either qdrant or chroma. i'll make a video about that soon!
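      For what it's worth, the FAISS wrapper used in this tutorial can also persist to disk; a minimal sketch (the folder name is arbitrary, and the example chunks stand in for the split text):

      from langchain.embeddings import OpenAIEmbeddings
      from langchain.vectorstores import FAISS

      embeddings = OpenAIEmbeddings()
      chunks = ["example chunk one", "example chunk two"]  # in the app: output of the text splitter
      FAISS.from_texts(chunks, embedding=embeddings).save_local("faiss_index")  # first run: embed and save

      # later runs: load instead of re-embedding (newer versions may also need allow_dangerous_deserialization=True)
      vectorstore = FAISS.load_local("faiss_index", embeddings)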

    • @lordmelbury7174
      @lordmelbury7174 ปีที่แล้ว +6

      @@alejandro_ao A Langchain + Qdrant vid would be really useful! 👍👍

    • @Sergio-rq2mm
      @Sergio-rq2mm ปีที่แล้ว

      @@alejandro_ao Could you not write the vectorstore variable to file and then source it later?

    • @mairex9978
      @mairex9978 ปีที่แล้ว +1

      chroma could be a solution, you can try it out

    • @tictaco31530
      @tictaco31530 ปีที่แล้ว

      +1

  • @arielwadyese7091
    @arielwadyese7091 2 หลายเดือนก่อน

    Thanks for making such high quality, descriptive content, wish you an amazing rest of the year.

    • @alejandro_ao
      @alejandro_ao  2 หลายเดือนก่อน

      thank you! an amazing rest of the year to you as well :)

  • @scottregan
    @scottregan ปีที่แล้ว +7

    Hey mate, thanks so much. This is my first ever coding project and I am thrilled to have it working.
    However, like many others, I am hitting the token limit -- I know this is super obvious to anyone with tacit knowledge, but you've made a beginner's guide, so bear with us. I assumed langchain would take care of this and only "call" the LLM with the relevant chunks? Otherwise, what is the point of this whole project? This is my error: "This model's maximum context length is 4097 tokens. However, your messages resulted in 20340 tokens. Please reduce the length of the messages."

    • @charlesd774
      @charlesd774 ปีที่แล้ว +1

      you can't send the entire conversation each time, you have to cut it off at some point. another option is to generate some kind of summary of each message so you can send in summaries instead. This is from a thread on the OpenAI forums.
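      One way to apply that cutoff with the chain from the video is a windowed memory; a sketch (the window size k=3 is an arbitrary choice):

      from langchain.memory import ConversationBufferWindowMemory

      # drop-in replacement for ConversationBufferMemory: only the last k exchanges
      # are sent back to the model, which helps keep the prompt under the token limit
      memory = ConversationBufferWindowMemory(
          memory_key="chat_history", return_messages=True, k=3)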

  • @dswithanand
    @dswithanand 11 หลายเดือนก่อน

    explained in a very simple way, and anyone from beginner to advanced can easily digest the content of the video. successfully completed the project. thanks bro

    • @alejandro_ao
      @alejandro_ao  11 หลายเดือนก่อน

      very glad to hear this! keep it up!

  • @thiagocorreaNT
    @thiagocorreaNT ปีที่แล้ว +5

    Congrats, great content!
    How can I show the PDF link that the response refers to?

  • @crystal14w
    @crystal14w ปีที่แล้ว +2

    This was great! I was able to build it with no problem 😄 the only issue I had was the human photo being outdated so I tried to upload a new photo but it didn’t update.
    Major warning ⚠️ to those who test their apps a lot. Don't waste your free API credits, because OpenAI will ask you for your card number and take away $5 😢 I didn't know that was a thing until now. I built another project with the OpenAI API, so just keep tabs, everyone 🙏
    This was a great video! Thanks so much 👏

    • @alejandro_ao
      @alejandro_ao  11 หลายเดือนก่อน +2

      hey there, that's a good point! oh that's strange. anyways, you can now use the latest streamlit chat module, which allows you to create a chat-like UI with a few lines instead of building it all in HTML and CSS like we did here :)
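      A tiny sketch of those chat elements (available in recent Streamlit versions):

      import streamlit as st

      if question := st.chat_input("Ask a question about your documents"):
          with st.chat_message("user"):
              st.write(question)
          with st.chat_message("assistant"):
              st.write("...the chain's answer would go here...")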

  • @topanimespro
    @topanimespro ปีที่แล้ว +6

    Hello, I wanted to express my gratitude for this tutorial. I'm curious to know if the concepts discussed here can also be applied to PDFs that are not primarily written in English (applicability to other languages such as Arabic or French)?

  • @YashwanthPindi
    @YashwanthPindi 29 วันที่ผ่านมา

    Amazing Tutorial!! Understood all the concepts so well!

    • @alejandro_ao
      @alejandro_ao  22 วันที่ผ่านมา

      you're the best

  • @qwerto-ye5pe
    @qwerto-ye5pe ปีที่แล้ว +2

    Hello and thank you for this project, I just wanted to ask if there's a better way to split the text. For example, wouldn't it be better to break the text after a "." or a ","?

    • @rulesmen
      @rulesmen ปีที่แล้ว

      Breaking the text after a \n means you are splitting by paragraphs instead of sentences.
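      For reference, the splitter's separator is configurable, and the recursive splitter falls back through several separators; a sketch (chunk sizes are typical values, and raw_text stands in for the extracted PDF text):

      from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter

      raw_text = "First paragraph.\nSecond paragraph. Another sentence."  # stand-in for the extracted text
      by_paragraph = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200)
      by_structure = RecursiveCharacterTextSplitter(
          separators=["\n\n", "\n", ". ", " "],  # fall back from paragraphs to sentences to words
          chunk_size=1000, chunk_overlap=200)
      chunks = by_structure.split_text(raw_text)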

  • @theophilus4723
    @theophilus4723 ปีที่แล้ว +2

    Thank you so much Alejandro! The content was great. The explanation was clear and concise. Looking forward for more contents like this. Great job!

  • @GuruShankar-h1s
    @GuruShankar-h1s ปีที่แล้ว +3

    Hello Sir, Thank you for this amazing tutorial.
    I have implemented it using HuggingFaceInstructEmbeddings for the embeddings and HuggingFaceHub for the conversation chain.
    I am getting the below error:
    ValueError: Error raised by inference API: Input validation error: `inputs` must have less than 1024 tokens. Given: 1080
    Please guide on how we can resolve this issue.
    Thanks :)

  • @donconkey1
    @donconkey1 ปีที่แล้ว +2

    Excellent video!! You are a great teacher and a master of the material you present. Thanks, your videos really help and save me a lot of time.

  • @armandopena3272
    @armandopena3272 11 หลายเดือนก่อน

    Well done! Congratulations. So far, this has been the clearest tutorial on the topic.

    • @alejandro_ao
      @alejandro_ao  10 หลายเดือนก่อน

      thank you! i'm glad to hear that :)

  • @DadCooks4Us
    @DadCooks4Us 8 หลายเดือนก่อน +10

    Some of the content is deprecated. Following along with the content as I am trying to learn becomes a bit difficult. Are you planning on updating this?

    • @RajkumarRavi21
      @RajkumarRavi21 5 หลายเดือนก่อน +2

      The video was released a year ago, and LangChain ships frequent updates, so it is good to refer to the latest documentation.

    • @johnfakes1298
      @johnfakes1298 3 หลายเดือนก่อน +1

      @@RajkumarRavi21 even their documentation is deprecated in some places lol, I was looking at it last night

    • @khizarstudy2095
      @khizarstudy2095 หลายเดือนก่อน

      I was looking at it this morning @@johnfakes1298

  • @learnthetech7152
    @learnthetech7152 ปีที่แล้ว +1

    Hi Alejandro, this is a superb tutorial and thanks so very much for this. Like me, I am sure many have been inspired by it. And you know what, I saw it is an hour-long video, but at no point did I feel it was too long; it's super engaging.

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว

      you are amazing, thank you for being around! i have more videos coming :)

  • @GraceLiying
    @GraceLiying ปีที่แล้ว +6

    Hi Alejandro. Thank you so much for making this video. This is extremely helpful to me. I followed your tutorial and made my own pdf chatbot. I also made a cool test if you are interested: th-cam.com/video/EynIc0Shgrw/w-d-xo.html. I utilized a fictitious document to prevent the LLM from accessing its existing knowledge, and it did well. I noticed some problems with the current code: once the conversation gets longer, the session_state may lose chat_history. But overall this is a very fun project to work with. Keep up your excellent work!

  • @seanjames1626
    @seanjames1626 ปีที่แล้ว +2

    I have definitely subscribed! Great work. Thank you!

  • @deveshkumar84
    @deveshkumar84 4 หลายเดือนก่อน

    This helped me a lot to understand and build my first project related to LLMs. It is an amazing tutorial which gives you a clear explanation regarding the methods and processes being used, which is required for making any modifications to the project.
    I am facing some issues with the installation of InstructorEmbedding, which shows why people prefer to use API calls instead of running on their own hardware. (You don't have to worry about maintenance and everything becomes easier to implement with API calls.)

    • @alejandro_ao
      @alejandro_ao  4 หลายเดือนก่อน

      Great to hear! Indeed, using an LLM API allows you to outsource all this tedious setup and also all the updates for new LLMs :)

  • @ronicksamuel2912
    @ronicksamuel2912 ปีที่แล้ว +1

    that was a great, detailed and direct tutorial, you are a good teacher. 💪💪

    • @alejandro_ao
      @alejandro_ao  11 หลายเดือนก่อน

      Thank you!! I appreciate it

  • @jugjiwanseewooruttun7198
    @jugjiwanseewooruttun7198 ปีที่แล้ว

    Thank you Alejandro, it is explained very well and succinctly. Your clarity in explaining the steps made it easy. You are valuable.

  • @ronan4681
    @ronan4681 ปีที่แล้ว +1

    Thank you Sir, one of the clearest instructional videos I have watched. I look forward to following your videos.

  • @jamesallison9725
    @jamesallison9725 ปีที่แล้ว +1

    Terrific tutorial, you are a born teacher :)

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว

      you are just amazing, thanks 🤓

  • @ResearchTutorials-hx4xm
    @ResearchTutorials-hx4xm ปีที่แล้ว

    Thanks, I also had the token limit issue, could you please advise? I have a Plus account with OpenAI, would I need an enterprise account?

    • @alejandro_ao
      @alejandro_ao  11 หลายเดือนก่อน

      hey there, sorry i had been off youtube for a while. in case you haven't solved this yet, all you need to do is raise your token limit in your open ai dashboard. just be sure to keep your budget under control because sometimes API consumption can go off the rails!

    • @ResearchTutorials-hx4xm
      @ResearchTutorials-hx4xm 11 หลายเดือนก่อน

      @@alejandro_ao thanks! I made it work but it only works with gpt-3.5, right? the kind of responses that I get are very superficial and there seems to be a limit to the number of pdfs you can upload. do you know of upgraded tools that would do the same? Thanks!!

  • @nikolas.adhiarta
    @nikolas.adhiarta 4 หลายเดือนก่อน

    thanks, I am lucky to have found this content, which is very helpful for my work. Greetings from Indonesia

  • @rainbowtrout8331
    @rainbowtrout8331 ปีที่แล้ว

    The way you explain each step is so helpful! Thank you

  • @karannesh7700
    @karannesh7700 ปีที่แล้ว

    This video is pure gold! Thanks @Alejandro great work! helped me a lot !

  • @tonyww
    @tonyww ปีที่แล้ว

    Thank you so much for your high-quality technical walk through of the project. I found it very fascinating.

  • @Sam-kou
    @Sam-kou 4 หลายเดือนก่อน

    Thanks!

  • @FunLau-u9e
    @FunLau-u9e 2 หลายเดือนก่อน

    Thank you so much for this video! 🎉 Your explanations were super clear and easy to follow. I really appreciate the time and effort you put into breaking down each step - it made all the difference! 🙌

    • @FunLau-u9e
      @FunLau-u9e 2 หลายเดือนก่อน

      for those who hit a dependency error:
      TypeError: INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token'
      > downgrade to sentence-transformers==2.2.2
      ImportError: Dependencies for InstructorEmbedding not found
      > downgrade to huggingface-hub==0.25.2

    • @alejandro_ao
      @alejandro_ao  2 หลายเดือนก่อน

      it is great to hear this! let me know if you have any questions!

  • @giraffa-analytics
    @giraffa-analytics 4 หลายเดือนก่อน

    I love your style and learn a lot from the videos! Thank you!

  • @techandprogramming4688
    @techandprogramming4688 ปีที่แล้ว +1

    Great content! Thanks for sharing all the knowledge so beautifully and smartly, without getting things complicated.
    Also, I would like to ask: please do more and more COMPLEX projects for us - LLM as a product or a complete software product - and also some things on LLMOps.

  • @minhphuongle8017
    @minhphuongle8017 5 หลายเดือนก่อน

    Very good, clear and easy-to-understand tutorial, thank you so much.

  • @laurentlemaire
    @laurentlemaire ปีที่แล้ว

    Excellent video! Thanks for describing it so clearly and with the helpful git repo.

  • @ShikharDadhich
    @ShikharDadhich ปีที่แล้ว

    Awesome video! I am able to follow and run exactly what you did, thanks a lot man!

  • @ssgoh4968
    @ssgoh4968 ปีที่แล้ว

    Best tutorial ever. Very organised and easy to follow and understand.

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +1

      probably cause you’re the best learner ever 😎

  • @samsquamsh78
    @samsquamsh78 ปีที่แล้ว

    fantastic video and great pace and explanations of each step and function. I subscribed!

  • @sfisothecreative99
    @sfisothecreative99 ปีที่แล้ว

    I just had to subscribe. Great quality content!

  • @bhuvanbharath03
    @bhuvanbharath03 ปีที่แล้ว

    I am new to LLMs and NLP. I can't thank you enough for explaining this with all the details. This is the first video of yours that I watched. You absolutely deserve a SUBSCRIBE and SHARE. Keep making long and detailed videos like this.
    with much love from India.

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +1

      thank you my friend, this means a lot!

  • @sammriddhgupta5614
    @sammriddhgupta5614 11 หลายเดือนก่อน

    Awesome video!! Concise explanations, and it works with openai, thank you!

  • @jeffg56
    @jeffg56 ปีที่แล้ว

    Dude amazing job on this! Keep em coming!

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว

      thanks a ton! i will as soon as i come back from summer vacation!

  • @maria-wh3km
    @maria-wh3km ปีที่แล้ว

    You are awesome, well presented and the code is so clean and perfect. Big thank you!

  • @ninocrudele
    @ninocrudele ปีที่แล้ว

    Amazing content, very well explained, I immediately subscribed to you channel, please keep going !

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว

      awesome, thank you! i totally will :)

  • @harshmunshi6362
    @harshmunshi6362 8 หลายเดือนก่อน

    Really good tutorial! Had to adapt and make some changes for my use case, but good intro!

  • @top_1_percent
    @top_1_percent 11 หลายเดือนก่อน +1

    Thank you son! You have made this video so step-by-step that a complete beginner like me, even in Python, was able to follow and understand everything. This is helping me a lot in my current assignment. Although with the new version of Python (Feb 2023) faiss-cpu does not work, and Instructor XL is no longer the leader, this video cleared up so many of my doubts and concepts that I can dig further and close those gaps with other libraries. God bless you and keep your purpose of sharing knowledge alive. Not everyone can do this in such an efficient and easy way. Cheers!

    • @JuniorValdivieso-q5q
      @JuniorValdivieso-q5q 9 หลายเดือนก่อน

      Which library did you put in place of FAISS?

  • @Sulls58
    @Sulls58 ปีที่แล้ว

    You are an amazing teacher. well done!

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว

      i appreciate it a lot, thanks 😊

  • @antarikshverma8999
    @antarikshverma8999 ปีที่แล้ว

    Thank you for the clean and lucid explanation.

  • @ermax7
    @ermax7 ปีที่แล้ว

    You are simply the best. Thanks for sharing valuable knowledge with us, bruh.
    ✌️

  • @veranium24
    @veranium24 7 หลายเดือนก่อน

    Great video dude. Really well explained

    • @alejandro_ao
      @alejandro_ao  7 หลายเดือนก่อน

      Appreciate it!

  • @Tsardoz
    @Tsardoz 10 หลายเดือนก่อน

    Great tutorial, but I found a huge difference between LLMs. In my case I had to introduce "llm = ChatOpenAI(model_name="gpt-4-0125-preview")" before I started getting decent results. This model was also able to draw on its own knowledge of the external world rather than rely solely on the pdfs I gave it. I'd love to see a follow-up on how these models and embeddings can be saved for later use to avoid the processing costs each time.

  • @Tejas07777
    @Tejas07777 ปีที่แล้ว

    best video so far on LLMs 🔥🔥🔥🔥

  • @wolfrowell9435
    @wolfrowell9435 ปีที่แล้ว

    Outstanding tutorial! Congrats 🚀🚀

  • @swithmerchan92
    @swithmerchan92 ปีที่แล้ว

    you are a master sensei .... masters of masters THANKS

  • @Moochers
    @Moochers 7 หลายเดือนก่อน +1

    When embedding with the free vs the paid method: let's say you want to embed with the free method, but it's a lot of pages so it will take long, and you only need to do this once. For example, I'm uploading the PDF locally to use with the AI. Do I have to embed it every time, or can I save those embeddings and just reuse them so that it's faster?

  • @berendjdejong
    @berendjdejong ปีที่แล้ว

    Great content, enjoyed watching it, explained very clearly

  • @marciorodriguesmota7927
    @marciorodriguesmota7927 ปีที่แล้ว +4

    Does anyone know how to solve this error or had it too? Retrying langchain.embeddings.openai.embed_with_retry.._embed_with_retry in 4.0 seconds as it raised RateLimitError: You exceeded your current quota, please check your plan and billing details.

    • @Beelpatd
      @Beelpatd ปีที่แล้ว +1

      same

    • @KollektivTraumland
      @KollektivTraumland ปีที่แล้ว

      Same

    • @Veerarajankarunanithi
      @Veerarajankarunanithi ปีที่แล้ว +1

      It is because of OpenAI's limitations. You need to purchase credits to use it further.

    • @JunaidAzizChannel
      @JunaidAzizChannel ปีที่แล้ว

      You need to purchase a pay-as-you-go plan in your OpenAI account settings. Once done, you will need to generate a new API key to use.

  • @beysachpromax
    @beysachpromax ปีที่แล้ว

    you are awesome man. keep it up, I like how you explain in detail.

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว +1

      thanks man, you are awesome

  • @kirthiramaniyer4866
    @kirthiramaniyer4866 ปีที่แล้ว

    Very thorough in explaining - good tutorial! Thanks

    • @alejandro_ao
      @alejandro_ao  11 หลายเดือนก่อน

      Glad it was helpful!

  • @paule7656
    @paule7656 ปีที่แล้ว

    Thank you sooo much!! That's a great piece of educational content!

  • @KeithWatson-f2q
    @KeithWatson-f2q ปีที่แล้ว +1

    I had to add these two lines to the requirements.txt file in order for it to work:
    altair

  • @dipitjaywant8044
    @dipitjaywant8044 ปีที่แล้ว

    It is a great video. It gives a thorough understanding of the topic. I got the entire thing working. My question is: while pushing the whole project to GitHub, how do I hide the OpenAI API key and at the same time make it available to Streamlit Cloud for sharing the app as a project link?
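    One common approach (a sketch, not from the video): keep the key out of git and read it from Streamlit secrets when deployed.

    import os
    import streamlit as st
    from dotenv import load_dotenv

    load_dotenv()  # local development: the key lives in .env, and ".env" is listed in .gitignore
    openai_api_key = os.getenv("OPENAI_API_KEY")
    if not openai_api_key:
        # on Streamlit Cloud, paste OPENAI_API_KEY into the app's Secrets settings instead of committing it
        openai_api_key = st.secrets["OPENAI_API_KEY"]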

  • @GEORGEBELG
    @GEORGEBELG ปีที่แล้ว +1

    Excellent explanation and coding. Thank you

  • @changfengzhang1555
    @changfengzhang1555 ปีที่แล้ว

    Thanks!

    • @alejandro_ao
      @alejandro_ao  ปีที่แล้ว

      thank you for the tip! you are awesome

  • @woojay
    @woojay ปีที่แล้ว

    Thank you so much. This was super helpful for my own project that I was building.

  • @federiconobili6038
    @federiconobili6038 ปีที่แล้ว +1

    Extremely high quality tutorial! Congratulations! It was extremely helpful. A further step forward would be to store the pdfs' embeddings in a database so that every time you close your application you don't have to upload your pdfs again. Any suggestions? Thanks. I'm a new subscriber to your channel.

  • @kyrsid
    @kyrsid 9 หลายเดือนก่อน

    nice video. you say "there you go" repeatedly. good work.

  • @aldotanca9430
    @aldotanca9430 ปีที่แล้ว

    Thanks, I particularly appreciated the detailed explanation of the process. Very clear.
    I am planning an application which will use a large corpus of text, and it is likely to be unfunded, so I am finding it hard to decide which approaches to follow, given that new stuff seems to come up every week.
    But I think I will give this approach a go, as a proof of concept at least, and move from there.

  • @suashischakraborty3650
    @suashischakraborty3650 ปีที่แล้ว +1

    At 57:18, response = st.session_state.conversation({'question': user_question}) - I am getting the error InvalidRequestError.__init__() missing 1 required positional argument: 'param'. Can anyone tell me the reason or provide a solution to this?

    • @miguelesparza1081
      @miguelesparza1081 ปีที่แล้ว

      Please help me with this as well.... 🙏

    • @suashischakraborty3650
      @suashischakraborty3650 ปีที่แล้ว

      @@miguelesparza1081 make sure that you're passing all the necessary parameters to the llm, e.g. if you're using Azure OpenAI, then pass the engine or deployment name, model name, etc.