Building A Chatbot with LangChain and Upstash Redis in Next.js
th-cam.com/video/gpXXIvfSCto/w-d-xo.html
Happy you did a node version 👍
Thanks Mario!
Wow so clear! Really appreciate the multi-level numbered comments. Really helps a novice like me to see what you are doing, follow along, and code. A good practice anyhoo!
Thanks!
0:42 The way you held that Fffff is absolutely legendary 🤣🤣 Thanks a bunch for the video 🙏
Haha - Putting an introverted dev in front of a mic 🎙️ has proved to have some side effects 😅
Thanks for watching, cheers!
Underrated video. Not enough content yet on this. You jumped straight to the point. Props for using those comments for high level understanding.
Thank you so much Satadru, what concepts would you like to see covered in Langchain?
@@DevelopersDigest I'm still new at this, so I can't comment yet on what parts you haven't covered. In your channel you covered most of the basics already and the quality of the videos is great. I'm learning a great deal from it. I like your videos because you start with actual running examples, well commented, instead of a lot of unnecessary explanations. Learning as you practice is much easier, like in your videos. You encouraged us to "do it" instead of just watching.
I would like to know the performance implications of running LangChain in a production Node.js environment, but not sure if that's in your scope. Mostly, people on the internet show how to run LLMs locally, but I'm interested in using this on production Node.js servers, inside an Express app for example.
Love the idea of having conversations with your favorite author by going back and forth with chat via the book text. Great share 👍
Thank you Mario 🙂
Finally a nodejs version without pinecone. Thanks.
Thanks for watching Leke! 🙂
Just dropping by to say thanks. It took me forever to find a tutorial that works. I just wish I'd found you a week ago, but thanks again, keep them coming.
Thanks CoinHeadz! I love to hear that 🙏
After a week-long search, this has been my saviour. Great job.
Thanks Nashons! A lot more Langchain content planned for this week, stay tuned !🙂
The only video that made me learn. Pinecone and all the rest have been broken these days, making me unable to learn. THANKS a lot. Dunno why everyone is doing all the guides for Python :( You are neat, thanks man
Thanks for watching Onar!
thanks a lot dude this helped me complete my internship assignment
That is great! Thanks for watching 🙂
Thank you, it works :) May I ask a question: I want to tell my model to behave in a certain way, the way I want it to. I simply don't know where I can pass these instructions to it. If you know the answer I will be grateful to see it. Thanks.
You can always pass the results within a query for an LLM. A basic example you could try is first passing the results from your vector store to the LLM, then adding instructions before and/or after. Alternatively, you could add a system message to the OpenAI payload, which is weighted more heavily when passed. There are also a handful of LangChain ways you could accomplish this, but I found working with the model API directly a good place to learn!
Thank you for your answer. For now I simply changed the prompt in node_modules -> dist -> chains and so on. It works, because there is a template with {question} from the user and {context} from the retriever. It's kind of a system message.
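A custom prompt can also be passed in without editing files in node_modules. A minimal sketch, assuming the same LangChain JS setup as the video - the instruction text, file path, and question are just examples:

import { OpenAI } from 'langchain/llms/openai';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { HNSWLib } from 'langchain/vectorstores/hnswlib';
import { PromptTemplate } from 'langchain/prompts';
import { RetrievalQAChain, loadQAStuffChain } from 'langchain/chains';

// Reload the vector store saved earlier (path is an example)
const vectorStore = await HNSWLib.load('./vector_store', new OpenAIEmbeddings());

// Custom template - {context} is filled by the retriever, {question} by the user
const prompt = PromptTemplate.fromTemplate(
  'Answer in a friendly, concise tone using only the context below. If the answer is not in the context, say "I don\'t know".\n\nContext: {context}\n\nQuestion: {question}'
);

const model = new OpenAI({ temperature: 0 });
// Wrap the custom prompt in a "stuff" documents chain, then hand that to the retrieval chain
const chain = new RetrievalQAChain({
  combineDocumentsChain: loadQAStuffChain(model, { prompt }),
  retriever: vectorStore.asRetriever(),
});

const res = await chain.call({ query: 'What is the book about?' });
console.log(res.text);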
Second time watching it. Getting closer to understanding this whole thing 😅
💪
Great tutorial! Can you please explain in detail why you didn't use Pinecone? And could a production app do without it as well? Thanks
For simplicity. The rationale is that this was a quick way to show someone how to get started with LangChain without having to dive into vector databases. I consider this my introduction to LangChain video. I have a video going in depth on how to do something very similar to this with Pinecone, which I recommend as a next step after this if you are interested.
Depending on your requirements and the scale you require, you could use something like this in production; it would really depend on the use case.
@@DevelopersDigest Thanks for your answer. Right... I'd assume it's pretty much the same as hosting your own MongoDB database instead of using something like Atlas or ScaleGrid. For a static database like the one in this video (and I'd assume that's the case for pretty much every other PDF or doc), I still haven't figured out a case in which a cloud DB like Pinecone is a must. For dynamic documents, as in the MongoDB example, they're useful because of their snapshots, security, version control and so on.
Hi, amazing! I have some questions. Does using the OpenAI model and API key have a cost? Is it possible to create the vector index, upload it for example into an S3 bucket, and then use that URL? Thanks!!
Very very Useful. THANK YOUU❤
Thank you for watching!
Great Video! Helped me test things locally.
Can OpenAI read and use the embeddings that we generate? If I use a PDF that I bought to generate embeddings, am I violating any laws?
Great question! Digging into OpenAI's privacy policy and your regional laws would be a good first step. The copyright question surrounding LLMs as a whole is an interesting area of discussion right now!
Thank you. I enjoyed watching this. :-)
Thank you for watching!
Is it possible to do RAG with MongoDB?
Loved the tutorial, it really helped me get up to speed with something I am building. I want to know what we can do if we have website FAQs that need to be embedded as a chatbot for any website?
Thanks Salman, check out Databerry - I have a video on my channel. It's an open source project that can accomplish what you are looking for; alternatively, you could look into organizing your data in a vector database on Pinecone. I have a handful of LangChain and Pinecone videos coming out if you are interested. Stay tuned!
@@DevelopersDigest Thanks would be waiting.
OH NOOOO! I'm getting the dreaded "Package subpath './dist/text_splitter' is not defined by "exports"" as in all LangChain attempts! Have been all over GitHub and the docs and still not solved. Please someone help...
Great video! If it's possible, can you link to the repository? I would love to fork it and start playing with my own books :)
Absolutely. I will post a repo to the description of the video tonight. I’ll also respond to this comment with the repo link so you will get an alert when it’s up!
Repo link: github.com/developersdigest/Get_Started_with_LangChain_in_Nodejs 😀
Well done, will implement this in a project i have
Thanks Mike! Glad to hear it
What a great video! After trying it out myself I was quite impressed by how easy it seemed to integrate my own data as an embedding. However, I was wondering: can you combine your embeddings and the trained ChatGPT model? As an example, I've provided some sauces with their ingredients as an embedding and then tasked the AI to provide me some recipes based on these, but it said it didn't know any. However, if I do the same on GPT-4 or even GPT-3.5, it does provide me with a good answer. Is there any way to combine your data with the trained model?
If querying your vector store isn't working as well as you would have expected, you could look at fine-tuning a model. OpenAI has some good documentation on how to start fine-tuning. I can also make a video on fine-tuning if that helps! platform.openai.com/docs/guides/fine-tuning
Also thank you for the kind words, thank you for watching! 🙂
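If you do end up trying fine-tuning, the guide linked above (as of the time of this video) expects training data as JSONL prompt/completion pairs. A rough, hypothetical sketch of writing such a file in Node - the sauce examples are made up, and the exact format may have changed in newer versions of the fine-tuning API:

import * as fs from 'fs';

// Hypothetical prompt/completion pairs following the legacy fine-tuning format
const examples = [
  { prompt: 'Ingredients: basil, pine nuts, parmesan, olive oil ->', completion: ' pesto' },
  { prompt: 'Ingredients: tomato, garlic, chili, olive oil ->', completion: ' arrabbiata' },
];

// One JSON object per line, as required by the JSONL upload format
fs.writeFileSync('training.jsonl', examples.map((e) => JSON.stringify(e)).join('\n'));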
Great video, may I know the hardware requirements to run LangChain? Thanks
Depends on what exactly you are looking to do with it, but for embedding and using hosted inference endpoints you would be able to use most computers. It becomes a different question if you want your computer to start doing more of the inference locally, though!
Love the video, it was super helpful and helped me get a server running so I can make requests to it from my website I'm creating.
Am I able to also use the base ChatGPT knowledge on top of the custom data? It's very good at answering any question in the data but just says "I don't know" for any other question.
Yes, check out my video here. It should have a similar implementation to what you are describing. m.th-cam.com/video/EFM-xutgAvY/w-d-xo.html
@@DevelopersDigest awesome, thanks. I'll check it out. Your page is a gold mine. You really should have more subscribers and views
@@reillywynn4005 thank you that means a lot!
How do I return a customized answer like "I don't know" when the context does not match?
Is there a way to substitute the OpenAIEmbeddings with HuggingFaceEmbeddings?
Yes, you can swap in many different embedding types!
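For example, a rough sketch swapping in Hugging Face Inference API embeddings - this assumes the @huggingface/inference peer dependency is installed and an API token is in the .env, and the model name and file path are just examples:

import * as fs from 'fs';
import { HNSWLib } from 'langchain/vectorstores/hnswlib';
import { HuggingFaceInferenceEmbeddings } from 'langchain/embeddings/hf';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';

// Same flow as the video - only the embeddings class changes
const embeddings = new HuggingFaceInferenceEmbeddings({
  apiKey: process.env.HUGGINGFACEHUB_API_KEY,
  model: 'sentence-transformers/all-MiniLM-L6-v2',
});

const text = fs.readFileSync('./book.txt', 'utf8');
const docs = await new RecursiveCharacterTextSplitter({ chunkSize: 1000 }).createDocuments([text]);
const vectorStore = await HNSWLib.fromDocuments(docs, embeddings);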
Sir, the quality of the tutorial is really good. Could you post the code somewhere after the video upload?
Absolutely. I will post a repo to the description of the video tonight. I’ll also respond to this comment with the repo link so you will get an alert when it’s up!
@@DevelopersDigest sir that's really nice of you. It's something really helpful that you're doing 😊
@@hmtbt4122 Repo link: github.com/developersdigest/Get_Started_with_LangChain_in_Nodejs
@@DevelopersDigest thanks sir
Could you please make another tutorial to expose this as an API so that we can deploy it and let each user have their own session with a chat? ❤
Great idea. I have an authenticated application with chat, langchain, and more that will be coming soon! Stay tuned!
Thank you for this amazing tutorial 🙌. I just have a quick question, is there a way to reference multiple text files? Or should all text be in one file? Thanks
You can have multiple text files that convert into multiple vector stores with this approach; it does become a bit more involved to retrieve the data from multiple vector stores, however. Alternatively, if you want one vector store for all text files, you could read all the txt files before embedding and concatenate them into one vector store. Hopefully this helps!
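For the single-store route, a minimal sketch - folder name, chunk size, and save path are just examples, same LangChain JS setup as the video:

import * as fs from 'fs';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import { HNSWLib } from 'langchain/vectorstores/hnswlib';

// Read every .txt file in ./docs and concatenate the text before splitting
const files = fs.readdirSync('./docs').filter((f) => f.endsWith('.txt'));
const text = files.map((f) => fs.readFileSync(`./docs/${f}`, 'utf8')).join('\n\n');

const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await splitter.createDocuments([text]);

// One vector store covering all files
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());
await vectorStore.save('./vector_store');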
Hi, is there a way to do this but make the AI remember the previous text as well?
Great question, and absolutely - there are a handful of different approaches to accomplish this (one rough sketch is just below this thread). I will be creating more LangChain and OpenAI content in the coming weeks and I will make sure to cover this in an upcoming video! 🙂
@@DevelopersDigest Alright thank you very much!
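Following up on the approaches mentioned above, here is one rough sketch using ConversationalRetrievalQAChain - it assumes the same HNSWLib store as the video (path is an example) and keeps the chat history in your own code:

import { OpenAI } from 'langchain/llms/openai';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { HNSWLib } from 'langchain/vectorstores/hnswlib';
import { ConversationalRetrievalQAChain } from 'langchain/chains';

const vectorStore = await HNSWLib.load('./vector_store', new OpenAIEmbeddings());
const chain = ConversationalRetrievalQAChain.fromLLM(new OpenAI({ temperature: 0 }), vectorStore.asRetriever());

// Pass the accumulated history back in so follow-up questions have context
let chatHistory = '';
const first = await chain.call({ question: 'Who wrote the book?', chat_history: chatHistory });
chatHistory += `Q: Who wrote the book?\nA: ${first.text}\n`;
const followUp = await chain.call({ question: 'What else did they write?', chat_history: chatHistory });
console.log(followUp.text);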
Very useful, thanks for getting me into this with a simple video 😄
I love to hear that! Thank you 🙏
I cannot run it; seemingly the issue happens at "await vectorStore.save(VECTOR_STORE_PATH)" when fetching the key from .env, and the authorization bearer shows a key that is not the key in the .env file. Please help me find the solution. Thanks.
Did you add your API key to the .env? Cheers
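For reference, a minimal setup - the key value is a placeholder:

// .env in the project root:
// OPENAI_API_KEY=sk-your-key-here

import * as dotenv from 'dotenv';
dotenv.config(); // load the key before any OpenAI or LangChain class is constructed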
Sir, what if 1 million users are using the API? Many files would then be created for caching. Is storing one million files scalable? Is this how it works in the real world, or is there another solution?
I have a video on Pinecone and LangChain that is a scalable version of something very similar to this, which you might like checking out. This is intended to be an introduction :)
@@DevelopersDigest thank you
Sir, you are Batman. It all works on long Ukrainian text
Love to hear it!
Sir, this is the updated code, as the method shown is deprecated now. Thanks
import { OpenAI } from 'langchain/llms/openai';
import { RetrievalQAChain } from 'langchain/chains';
import { HNSWLib } from 'langchain/vectorstores/hnswlib';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import * as fs from 'fs';
import * as dotenv from 'dotenv';
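For anyone piecing it together, a rough end-to-end sketch using those import paths - file names are examples, and the exact API can shift between langchain versions:

dotenv.config();

const text = fs.readFileSync('./book.txt', 'utf8');
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await splitter.createDocuments([text]);

// Build the local HNSWLib vector store, or reload it if it was already saved
const VECTOR_STORE_PATH = './book.index';
const embeddings = new OpenAIEmbeddings();
const vectorStore = fs.existsSync(VECTOR_STORE_PATH)
  ? await HNSWLib.load(VECTOR_STORE_PATH, embeddings)
  : await HNSWLib.fromDocuments(docs, embeddings);
await vectorStore.save(VECTOR_STORE_PATH);

// Ask a question against the store
const model = new OpenAI({});
const chain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever());
const res = await chain.call({ query: 'What is this book about?' });
console.log(res.text);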
I tried to use loadLLM with my local gpt4all-lora-quantized-ggml.bin model. I failed to do so, as it says BaseLanguageModel (returned from loadLLM) does not have the methods call() or generate(). Send help
I feel like that LangChain bird could be a great bird to deliver a set of debugging instructions 😆 I will be looking into the gpt4all model later this week; if I find anything I will circle back!
@@DevelopersDigest Quick update: I managed to use loadLLM for langchain.js. It was my fault for trying to convert the model's file path into a file URI, whereas loadLLM takes a model path only. However, it seems loadFromFile has a memory limit of 2GB and it can't load my 3GB gpt4all model. The error code is 'ERR_FS_FILE_TOO_LARGE'. It seems to be a limitation of the 'fs' Node module itself.
It won't run; I'm getting an error with some Headers object:
/Get_Started_with_LangChain_in_Nodejs/node_modules/langchain/dist/util/axios-fetch-adapter.js:234
const headers = new Headers(config.headers);
ReferenceError: Headers is not defined
The first thing I would check would be to see which version of nodejs you are running and update it to at least version 18 if need be 🙂
I don't know why, but on running npm i hnswlib-node there are a lot of errors. It gives the error "npm ERR! code 1
npm ERR! path C:\Users\heman\Desktop\project\node_modules\hnswlib-node
npm ERR! command failed
npm ERR! command C:\WINDOWS\system32\cmd.exe /d /s /c node-gyp rebuild
npm ERR! gyp info it worked if it ends with ok
npm ERR! gyp info using node-gyp@9.3.1
npm ERR! gyp info using node@18.16.0 | win32 | x64
npm ERR! gyp info find Python using Python version 3.11.3 found at "C:\Python311\python.exe"
npm ERR! gyp ERR! find VS
npm ERR! gyp ERR! find VS msvs_version not set from command line or npm config" - this is only 40% of the error log. Can you suggest why it's not working?
I would first make sure you have Python 3 installed :)
@@DevelopersDigest Yes sir, I reinstalled it.