What kinds of projects do you plan to make with Vector Search?
Currently making a discord chatbot with long term memory
Currently making Product Recommendation Project for My Organisation for which I'm working [Ecommerce Platform]
THIS COURSE IS AMAZING!!!!!!!!!!!!!!!
Right now I am going to try to create a RAG project using the Google MakerSuite LLM, which is free.
If I am able to create it, am I allowed to share the GitHub repo's link?
I want to create a marketplace to match job posts with applicants. I would like both the job creators and the job seekers to be able to submit their requirements via a chatbot (e.g. ChatGPT) as well as a structured form. So ideally I'd like the LLM to push the postings into the DB, and also call an API function to pull the potential matches from the postings to the applicant requirements.
Do you think this solution could work?
Woah, you're teaching? This is the first time I've ever seen one from you.
I understand MongoDB sponsored this, but I'd really have appreciated WHY someone should choose MongoDB vs. other options. Same with the embedding model: WHY use the Hugging Face model vs. OpenAI Ada? There are so many different options for the vector store and model, so a tutorial that deep-dives into this decision is super important.
It was touched on:
- MongoDB allows you to store the vectors alongside the original data (i.e. in the same document). This means you can filter out documents that you don't want to use in your vector search before you run a vector query.
- Hugging Face is free when starting out; OpenAI's API costs money.
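That pre-filtering point can be sketched as an aggregation pipeline. The index, field, and filter names below ("vector_index", "plot_embedding_hf", "year") are illustrative assumptions, and note that fields used in a "filter" clause generally have to be declared in the index definition:

```python
# Sketch of an Atlas $vectorSearch stage that filters documents
# BEFORE the vector comparison runs. Collection/field names here
# ("plot_embedding_hf", "year") are hypothetical examples.
def build_vector_search_pipeline(query_vector, min_year=2000, limit=4):
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",        # name of the search index
                "path": "plot_embedding_hf",    # field holding the embedding
                "queryVector": query_vector,
                "numCandidates": 100,           # candidates considered per query
                "limit": limit,                 # results returned
                # Only documents matching this filter take part in the search:
                "filter": {"year": {"$gte": min_year}},
            }
        },
        {"$project": {"title": 1, "plot": 1, "_id": 0}},
    ]
```

You would pass the resulting list to `collection.aggregate(...)` on an Atlas cluster; the same pipeline shape applies whatever embedding model produced the query vector.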
The thing with OpenAI, Claude, and so on is that you are at the mercy of the suppliers. The most obvious concern is that if, for any reason, OpenAI, Claude, and the like had downtime, or their servers were not responsive, your business would absolutely be affected.
Take OpenAI as an example: the OpenAI library gets updated super frequently, and they provide an API instead of a model. So you are absolutely at the mercy of OpenAI when they decide to change endpoints, decommission old models, etc. You are also at the mercy of their pricing. There's nothing wrong with just using OpenAI's API; you just have to position your business well. If you're just an integrator, then all's good, but if you're an AI consultancy firm, then it makes sense for you to have your own model that is tuned for a specific task, e.g. Mistral's mixture of experts. It is also cheaper if you make a leaner model and host it yourself.
Why is MongoDB chosen? Because they are the sponsor, obviously. It doesn't really matter for now what DB you are using, because it's just a tutorial. However, if you're really going into production, then it is perfectly OK to have specific DBs for specific tasks.
Lastly, it's all about the use case; no one has infinite money to burn. There's only a small or big budget to use. If your wallet is deep, then use OpenAI for everything. If your wallet is shallow, then you should provision resources correctly.
OpenAI is paid
Well this is freecodecamp. The place to get started.
SingleStore would have been a better choice imo
Hi, thanks for the video!
What about follow-up questions in RAG?
Example:
Q: Suggest some movies with Johnny Depp
A: ...
Q: What year was it filmed?
A: ...
How can I subscribe to this question, so I can know the answer if someone replies to it one day?
There's a lot missing. I get this is basic, but the metadata is crucial, and 90% of people will be using cosine similarity, especially in RAG systems. Great video by the way. It's awesome that you take time out to help others.
Greetings to you, brother 🙏! You've done a great job; it was really good! 😄
It would really help everyone if you followed best practices for using your tokens/logins safely. The old "practice what you preach." Many of your viewers might not really know how to do that. They NEED to do it. I appreciate that it makes your video less expository and is a burden in terms of prep.
The files for project two in the GitHub repository do not match this video. Could you kindly verify the files, please? Thanks
Loved this whole training, especially the hands-on examples. It really helps people like me who are new to Python and traditionally come from non-open-source toolsets.
One pet peeve, though: please list all the example and code files in your GitHub repository. For example 2, I am really stuck following your example, since I don't have the exact sample_files. Also, the GitHub link you mentioned only has JavaScript code and not the Python files (unless I missed it :) ). Would really appreciate it if you can upload all these files for our reference!
Thanks!
This is brilliant. Thanks so much from a grateful student at the School Of Code
Where is the code for project two available? In the GitHub repository it is different. Thanks
You, @beau, are a much better teacher; I wish you created most of the tutorials! But then I don't want you to burn out! Take care of yourself, sir.
You can generate vector embeddings by calling REST APIs exposed by vendors like Hugging Face, OpenAI, etc. One thing to note: these vendors employ rate limiting at their end, basically throttling the number of requests you can make to their APIs per second. You need to buy a subscription accordingly, depending on your requirements.
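A minimal sketch of such a REST call with retry-on-throttle. It uses only the standard library to avoid an extra dependency (the video's code uses the requests package, which reads the same way), and it assumes the Hugging Face Inference API's feature-extraction pipeline; the exact URL and response shape depend on the model and may change:

```python
import json
import time
import urllib.error
import urllib.request

# Assumed endpoint: the feature-extraction pipeline for the model
# the tutorial uses. Verify against the current HF Inference API docs.
EMBEDDING_URL = ("https://api-inference.huggingface.co/"
                 "pipeline/feature-extraction/"
                 "sentence-transformers/all-MiniLM-L6-v2")

def generate_embedding(text, token, max_retries=3):
    """POST to the hosted model; back off and retry when rate-limited (429)."""
    payload = json.dumps({"inputs": text}).encode()
    for attempt in range(max_retries):
        req = urllib.request.Request(
            EMBEDDING_URL,
            data=payload,
            headers={"Authorization": f"Bearer {token}",
                     "Content-Type": "application/json"},
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())       # list of floats
        except urllib.error.HTTPError as err:
            if err.code == 429:                      # throttled: wait, retry
                time.sleep(2 ** attempt)
                continue
            raise
    raise RuntimeError("Still rate-limited after retries")
```

The exponential backoff (1s, 2s, 4s, ...) is what keeps you under the free tier's per-second limit without buying a subscription straight away.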
Thank you for the wonderful explanation.
Where is the sample_data used in project 2? Doesn't seem to be in the repository that is linked
Have you got the sample_data?
Thanks for the video tutorial. Helped me to understand the core ideas used in this technology!
That was awesome. I learnt a lot 🎉
Thank you very much.
This was really helpful thank you!
There is no project code for projects 2 and 3 in the GitHub repo. Can you please check?
Great video! I really enjoyed your introduction to Rag. Your explanation was clear and informative. I noticed you broke the text into segments instead of using the whole text. Could you explain the reasoning behind this approach? Thank you in advance!
Thats why he's the goat
We are using this. Thank you!
Can you please upload these 3 files to the Git repo? aerodynamics.txt, chat_conversation.txt, and log_example.txt.
The default index configuration is
{
  "mappings": {
    "dynamic": true
  }
}
but where did you get this?
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "plot_embedding_hf": {
        "dimensions": 384,
        "similarity": "dotProduct",
        "type": "knnVector"
      }
    }
  }
}
Please commit the latest code to Git; the .txt files are missing.
Guys, please make a video with open-source LLM APIs, like PaLM or Hugging Face. Please.
Agreed. Nice video, but calling OpenAI APIs is not practical for most folks trying to learn anything.
Fantastic source of information! Learnt a lot 🤓
Hi. Could you be so kind as to add the three TXT files mentioned in project #2? They are mandatory for completing the example... thanks.
Have you got the txt files? Please send :)
@nitansshujain811 Have you got the txt files now? :)
best video of the year ❤
Great content!
Where are the text files talked about in project 2? I couldn't find them in the repo.
Hi, I have the following error "ValueError: Request failed with status code 400: {"error":["Input should be a valid dictionary or instance of SentenceSimilarityInputsCheck: received `freeCodeCamp is awesome` in `parameters`"]}"
Would you be able to point me to some tutorials that achieves the same thing as Project 2, but without using langchain? The query_data function from that tutorial is pretty mysterious, and I'd love to learn what's happening behind the scenes.
I cannot for the life of me find the .py and .txt files for projects two and three.
Have you found any solution?
Hi, I loved this session. I want to have my own embedding server. Can you please make a video on this? I want it based on an open-source LLM model. Please guide. 🙏🙏🙏🙏
Is the accuracy of the documents retrieved influenced by the user's query? For instance, you mentioned using "imaginary characters from outer space at war" as a user query at 25:14. Would employing a more detailed query, such as "Please, I need to find all the imaginary characters from outer space at war in the collected data, could you do that for me, please?" result in better or worse outcomes?
yeah that's why we have "prompt engineering"
dude.. you are a bomb!!
I think BPE models should have their tokens masked for problematic/common characters, while SentencePiece needs high-quality repetitive datasets to leverage them correctly. Don't fix what ain't broken, right?
I could not find the same endpoint for the embedding model used in the video for the first project. Could you tell me where to get it for this specific model?
Wow great Video thank you!
How does this compare to just using the ChatGPT API for semantic search within our data?
Which is a self-hosted open-source alternative to MongoDB cloud?
Self-hosted MongoDB 🙂
How do we make the conversation with the ChatOpenAI model context-aware (not limited to the freeCodeCamp documentation in this case, but also aware of the questions asked)? For example, if I asked a 1st question, "How to create a PR", then a 2nd question like "Who reviews it?",
how will it know in the 2nd question that I am talking about PRs?
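One common way to handle this is simply to resend the whole conversation on every request, so the model can resolve references like "it". A minimal sketch, where the `llm` callable is a stand-in for whatever chat-completion client you use (LangChain's conversational retrieval chains do a fancier version that also rewrites the follow-up into a standalone question before retrieval):

```python
# Context-aware chat by replaying the full history on each call.
# `llm` is any callable taking a list of {"role", "content"} dicts
# and returning the assistant's reply as a string.
class ChatSession:
    def __init__(self, llm, system_prompt="Answer using the retrieved docs."):
        self.llm = llm
        self.history = [{"role": "system", "content": system_prompt}]

    def ask(self, question):
        self.history.append({"role": "user", "content": question})
        answer = self.llm(self.history)   # model sees ALL previous turns
        self.history.append({"role": "assistant", "content": answer})
        return answer
```

So when the second question "Who reviews it?" is sent, the first question "How to create a PR" travels with it, which is how the model knows what "it" refers to. The trade-off is that the history eats into the context window, so long sessions usually need truncation or summarization.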
Where are the correct sources for projects 2 and 3? It is hard to actually implement these projects without the .txt files. I am surprised that the GitHub repo hasn't been updated yet.
In project 1, how did you get the embedding_url?
Awesome 🎉
Is there any way to do this with PDFs, or to convert PDFs to something that can be used like the chatbot example?
How did you choose the dimension while creating the vector search index?
Can we load multiple vector databases into the same model?
@beau - The GitHub repo doesn't match the contents of the video, for project two at least.
AMAZING!!!!!!!!!!!!!!!!!!!
When I log the vectorSearch API call, why does it always return [] even though the data in MongoDB is correct?
Is there a way to use a vector DB or vector search with a Laravel back-end project? Please help.
Can we create a new search index using code instead of using the MongoDB UI? Using the UI is not practical when making a real-world project. It's fine for a fun project.
Just self-host your own MongoDB. You would have to change the URL to your DB in your code to something like "localhost:27017". You would do everything in code then.
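One caveat worth flagging: Atlas Search and the $vectorSearch stage are, as far as I know, Atlas-only features, so a plain self-hosted mongod won't run the vector queries from the video. For the original question about skipping the UI, though, recent PyMongo versions (4.5+) can create the search index from code. This sketch assumes the tutorial's collection, field, and index names:

```python
# The index definition mirrors what you'd paste into the Atlas UI.
index_definition = {
    "mappings": {
        "dynamic": True,
        "fields": {
            "plot_embedding_hf": {         # field holding the embeddings
                "dimensions": 384,         # must match the embedding model
                "similarity": "dotProduct",
                "type": "knnVector",
            }
        },
    }
}

def create_vector_index(uri):
    # Imported lazily so the definition above can be inspected without
    # the driver installed. create_search_index only succeeds against
    # an Atlas cluster, not a self-hosted server.
    from pymongo import MongoClient
    from pymongo.operations import SearchIndexModel

    collection = MongoClient(uri)["sample_mflix"]["movies"]
    collection.create_search_index(
        SearchIndexModel(definition=index_definition,
                         name="PlotSemanticSearch")  # assumed index name
    )
```

With that, the whole setup (trigger aside) lives in version-controlled code instead of clicks in the UI.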
How does this compare to Qdrant and Weaviate?
What are the prerequisites for this tutorial?
May I ask, where did you get the Hugging Face embedding_url?
Can it be done privately? May one query local PDFs? At 30:00, why Euclidean? I thought it was 4 images vs. test (cosine similarity).
Yes
Is the embedding_url still valid? When I run the code at 15:09, it just returns "None". I tried pasting the URL in a browser and it returns a 404.
Is there a way to use any model other than OpenAI for doing these operations? Something like open-source models?
The author did not provide a lot of details, e.g. how he got the response structure and the embedding URL.
Is there some kind of a limit on how much data I can provide? If I have documents with 1,000,000 words in total, will the RAG be able to retrieve the most relevant documents? And if most of the documents are relevant, will the LLM be able to take all of those as an input?
Sorry, I just noticed I've asked quite a few questions 😂
Why is the generate_embedding function throwing an error?
🎯 Key Takeaways for quick navigation:
00:00 *🕵️ Vector search allows searching based on meaning, transforming data into high-dimensional vectors.*
01:10 *🚀 Vector search enhances large language models, offering knowledge beyond keywords, useful in various contexts like natural language processing and recommendations.*
02:03 *💡 Benefits of vector search include semantic understanding, scalability for large datasets, and flexibility across different data types.*
03:11 *🔗 Storing vectors with data in MongoDB simplifies architecture, avoiding data sync issues and ensuring consistency.*
04:06 *📈 MongoDB Atlas supports vector storage and search, scaling for demanding workloads with efficiency.*
05:02 *🔄 Setting up MongoDB Atlas trigger and OpenAI API integration for embedding vectors in documents upon insertion.*
06:38 *🔑 Safely storing API keys in MongoDB Atlas using secrets for secure integration with external services.*
08:56 *📄 Functions triggered on document insertion/update generate embeddings using OpenAI API and update MongoDB documents.*
10:33 *🧩 Indexing data with vector embeddings in MongoDB Atlas enables efficient querying for similar content.*
11:15 *📡 Using Node.js to query MongoDB Atlas with vector embeddings, transforming queries into embeddings for similarity search.*
Made with HARPA AI
May I ask why you did not use spaCy to create vectors, but LLM models instead?
For searching only, is the embedding method efficient? Can any expert enlighten me?
It's a shame the files aren't there for the final two. I followed along with the second one, but the third might be a push. Anyone find the files elsewhere?
At 22:29, how do you get the index JSON on the right side? Thanks
I'm recently getting into Data Science/ML. Do you guys recommend any resources to learn more about vectors for programming?
Where did you get the HF model's embedding URL from?
Hi, please help me: how do I create a custom model from many PDFs in the Persian language? Thank you.
Thanks, that was really helpful!
I want to create a marketplace to match job posts with applicants. I would like both the job creators and the job seekers to be able to submit their requirements via a chatbot (e.g. ChatGPT) as well as a structured form. So ideally I'd like the LLM to push the postings into the DB, and also call an API function to pull the potential matches from the postings to the applicant requirements.
Do you think this solution could work with the vector search / RAG approach you've shown here?
Hello,
I am getting the following error; can you please help me by sharing your thoughts?
OperationFailure: Unrecognized pipeline stage name: $vectorSearch, full error: {'ok': 0.0, 'errmsg': 'Unrecognized pipeline stage name: $vectorSearch', 'code': 40324, 'codeName': 'UnrecognizedCommand'}
Thanks in advance!
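For what it's worth, that error usually means the server you're connected to doesn't recognize the $vectorSearch stage at all: typically a local/self-hosted MongoDB, or an Atlas cluster older than the versions that introduced the stage (6.0.11 / 7.0.2, if I recall correctly). Older Atlas setups used the $search stage with the knnBeta operator instead. Both forms are sketched here with placeholder index/field names; treat the names and version cutoffs as assumptions to verify against the Atlas docs:

```python
query_vector = [0.0] * 384  # placeholder embedding for illustration

# Newer Atlas clusters accept the dedicated $vectorSearch stage:
vector_search_pipeline = [{
    "$vectorSearch": {
        "index": "PlotSemanticSearch",   # assumed index name
        "path": "plot_embedding_hf",     # assumed embedding field
        "queryVector": query_vector,
        "numCandidates": 100,
        "limit": 4,
    }
}]

# Older clusters used $search with the knnBeta operator:
knn_beta_pipeline = [{
    "$search": {
        "index": "PlotSemanticSearch",
        "knnBeta": {
            "vector": query_vector,
            "path": "plot_embedding_hf",
            "k": 4,
        },
    }
}]
```

If you're on a free M0 Atlas cluster, checking the cluster's MongoDB version in the Atlas UI is the quickest way to tell which form applies.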
Thank you for the course! I have a question: how can I search across data in multiple languages? Would I have to create embeddings for every language? (It's the same data, i.e. "house" in English and "casa" in Spanish, which have the same meaning, but I want to be able to search in any language.)
I tried your 1st project; it throws an error if I pass {"inputs": text}. The docs say we need to pass it like this:
"inputs": {
  "source_sentence": "",
  "sentences": ["That is a happy person"]
}
but then I'm only able to generate 1-dimensional data, e.g. [0.111111145].
Did you manage to resolve this?
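A guess at what's happening here: the model's default Inference API task is sentence-similarity, which expects the source_sentence/sentences payload and returns similarity scores (hence the single number), while the tutorial relies on the feature-extraction pipeline, which accepts a bare string and returns the full embedding. The URLs and payload shapes below are how I understand the two endpoints; treat them as assumptions to verify against the HF docs:

```python
MODEL = "sentence-transformers/all-MiniLM-L6-v2"

# Default task endpoint: sentence-similarity. Expects source/sentences
# and returns one similarity score per sentence, not an embedding.
SIMILARITY_URL = f"https://api-inference.huggingface.co/models/{MODEL}"
similarity_payload = {
    "inputs": {
        "source_sentence": "freeCodeCamp is awesome",
        "sentences": ["That is a happy person"],
    }
}

# Feature-extraction endpoint: returns the raw 384-dim embedding vector,
# which is what you store in MongoDB for vector search.
EMBEDDING_URL = (
    f"https://api-inference.huggingface.co/pipeline/feature-extraction/{MODEL}"
)
embedding_payload = {"inputs": "freeCodeCamp is awesome"}
```

So the fix is likely not to change the payload but to change the URL to the feature-extraction form, and keep sending {"inputs": text}.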
Great, thanks!
How do you protect a company's information with technology?
To bypass the Hugging Face rate limit, could I just download the model and do the embedding on my laptop?
Was this a good workaround? I'm facing the same issue, even though I have Pro.
I got it working locally, but the embeddings were slightly different after the 6th level of precision in the floating-point number.
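Running the same model locally with the sentence-transformers package (pip install sentence-transformers) is a common way around the hosted rate limit, and tiny differences in the low decimal places are expected from different hardware/numeric paths; they shouldn't matter for similarity search. A sketch, assuming the same model the video uses:

```python
# Local embedding with the sentence-transformers package.
# The model is downloaded once on first use, then everything is offline.
EXPECTED_DIM = 384  # all-MiniLM-L6-v2 produces 384-dimensional vectors

def generate_embedding_locally(texts):
    # Heavy import kept inside the function so the module loads fast.
    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
    return model.encode(texts).tolist()   # list of 384-float lists

if __name__ == "__main__":
    vectors = generate_embedding_locally(["freeCodeCamp is awesome"])
    assert len(vectors[0]) == EXPECTED_DIM
```

Since the vectors come from the same model weights, they drop into the same MongoDB index and `$vectorSearch` queries as the API-generated ones.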
How do you get the embedding_url?
Is there any video on this channel for math? For AI you need linear algebra and all.
We have quite a few math courses. Here is a linear algebra course: th-cam.com/video/JnTa9XtvmfI/w-d-xo.html
Can I put this course on my CV?
The GitHub files are completely different from the tutorial, at least for the second project.
Let's go
Why do you have to ask for "imaginary characters" from space? It's a movie search. Aren't most characters in movies "imaginary"?
Why couldn't you just ask for "aliens"?
😍
Hi, thanks for the video, very good content. I have a question: how can I specify a "prompt", or how can I specify limits on the answers? For example, I ask the question: "From your knowledge base, what topics could you answer questions about?" My database only has information about my company, but the program adds general topics (movies, books, music, etc.). Is the only way to limit the answers to explicitly specify the topics in the .md files, or must I write the "prompt" in the file? Thanks for your help.
You're speed-running through the code, and while your project takes MongoDB Atlas Search as the vector store, you don't even briefly explain how integrations with other vector stores might happen. Please explain in more detail next time.
A diagram at the start would have been more helpful.
Hi 👋 I'm new here
4:36
First
Thats why he's the goat
In the provided link for the repos on GitHub, project two is missing!