Thank you +freeCodeCamp for the collaboration, and thank you all for your support. If you'd like to connect with me, you can visit my channel @vincibits.
Thanks for the great video :) I was wondering why you would consider the last part (visualization of the embeddings) a search improvement. The relevant documents were retrieved using the joint query, so it's expected that their embeddings would be more similar to it than to the original query. Could you maybe elaborate more on this?
I knew you from your paid courses. I appreciate your contribution to the community! This is commendable, keep it up 🙂
@@RoX38gi Thank you!
Greatest course of all times on RAG. PLUS all python neatly in github! PLUS ASMR! Thank you!
came for RAG design, stayed for unintentional ASMR
lol! I get that a lot.. maybe I need to change careers!
@@vincibits don't even think about it
@@RijoSLal LOL!
This was a very informative and straightforward course. Thank you for everything!
I'd like more of this. This field is getting insanely hot right now. Companies want knowledge graphs. I could watch hours of just someone setting up a knowledge-graph project for an organization. What would be especially cool is if knowledge graphs could give insight into a company's mail traffic, Slack/Discord chat logs, or maybe meeting transcripts. I understand the privacy concerns here, so if there were a sanitized dataset that could be worked with, that would be especially amazing.
Knowledge graphs are cool, but there are a lot of downsides still:
- vendor lock-in (Microsoft)
- very pricey to set up (although you could potentially use non-LLM models like BERT to do entity extraction)
- labor intensive; for automatic set-up the cost will 10x
- difficult to make updates to the graph
- much longer retrieval time
This is from conversing with some of the top experts in the field. While it is great to see the retrieval accuracy, it's still too early to tell if the juice is worth the squeeze.
What a video. One of the best I've seen on this topic. Congrats.
So perfectly explained. Thank you for the video.
The problem at 1:15:26 is that the retrieved embeddings come from the augmented query, not from the true answer obtained from the original Microsoft document. This makes the comparison not entirely fair.
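One quick way to see the point being made here: measure how close each retrieved document sits to the original query versus the augmented one. A rough numpy sketch, assuming you already have the embeddings from the notebook (original_query_emb, augmented_query_emb, and retrieved_doc_embs are made-up names, not from the video):

import numpy as np

def cosine(a, b):
    # Plain cosine similarity between two embedding vectors.
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# original_query_emb / augmented_query_emb: single vectors; retrieved_doc_embs: list of vectors
for doc_emb in retrieved_doc_embs:
    print("to original:", round(cosine(original_query_emb, doc_emb), 3),
          "| to augmented:", round(cosine(augmented_query_emb, doc_emb), 3))

If the augmented-query similarities are systematically higher, that supports the point that the plot mostly reflects how retrieval was done rather than answer quality.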
How can I get the news_articles folder? The GitHub link points to an old repo. I don't think it's the right one?
I am shocked by such a simple explanation of this topic. Before watching this I thought it would be very, very difficult to understand, but now everything is very clear.
Very useful tutorial
Simply superb. Thank you
Wonderful video, thank you very much! Could someone post any resources related to RAG with documents other than plain text, particularly JSON, XML, PDF, CSV, etc.?
Awesome
& straightforward
very instructive tutorial
Thank you for sharing this video. It's really very informative and exciting to see how LLMs and RAG can work together. Do you have another video that follows up on how you can refine the advanced RAG technique to get more relevant info and reduce noise? Also, isn't advanced RAG with multiple queries how chatbots work?
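In case it helps, the multiple-queries idea (often called query expansion) roughly looks like this. A minimal sketch, assuming the openai package is installed, OPENAI_API_KEY is set, and a Chroma collection named collection already holds the embedded articles (these names are illustrative, not taken from the video):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def expand_query(original_query, n=3):
    # Ask the LLM for a few alternative phrasings of the user's question.
    prompt = (f"Generate {n} alternative search queries for: {original_query}. "
              "Return one query per line, with no numbering.")
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return [q.strip() for q in response.choices[0].message.content.splitlines() if q.strip()]

original_query = "What did the company announce about AI?"
queries = [original_query] + expand_query(original_query)

# Retrieve with every query, then deduplicate before handing the context to the LLM.
results = collection.query(query_texts=queries, n_results=5)
unique_docs = {doc for docs in results["documents"] for doc in docs}

The extra queries widen the net, which helps recall but also pulls in more noise, which is why a reranking or filtering step usually comes next.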
Wow!!! It is awesome.
This is a beauty
Just AMAZING, can you make a simple instructional video for RAG using Ollama?
I think this video covers it. You just have to find the equivalent libraries and functions for llama. Google is your friend.
Please more content like this
From Nepal 🇳🇵
Hey wow Nepali are also interested in tech 😊
Thank you
13th video, asking for some projects using IDX
embedding_functions.OpenAIEmbeddingFunction()
There is no OpenAIEmbeddingFunction class in the embedding_functions module.
Can somebody help me out? Is anyone else facing the same issue?
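In case anyone else hits this: in the chromadb versions I've seen, the class lives under chromadb.utils.embedding_functions, so the import path matters. A minimal sketch, assuming a reasonably recent chromadb install and an OpenAI key (the model name is just an example):

import os
from chromadb.utils import embedding_functions

# Assumes OPENAI_API_KEY is exported in your shell.
openai_ef = embedding_functions.OpenAIEmbeddingFunction(
    api_key=os.environ["OPENAI_API_KEY"],
    model_name="text-embedding-3-small",
)

print(openai_ef(["hello world"])[0][:5])  # first few dimensions of one embedding

If the attribute still isn't found, it's probably a version mismatch; try pip install --upgrade chromadb.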
Good quality video, it would be even better with LESS zoom on the code. Just use normal zoom, it was VERY hard to follow.
First of all, thanks for this great video!!! I really hope I can get QWiki content into ChromaDB with the help of this tutorial. I have one question: isn't GPT-3.5 Turbo deprecated?
Awesome
But the only problem is the OpenAI API key; that hinders a lot.
Isn't it paid?
Bro, switching to another LLM does not cost more than a minute of coding if you actually research it. Read the LangChain docs and try some other LLM, Gemini for example.
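As a concrete illustration of that, swapping Gemini in through LangChain could look roughly like this. A sketch, assuming the langchain-google-genai package is installed and GOOGLE_API_KEY is set (the model name is just an example):

from langchain_google_genai import ChatGoogleGenerativeAI

# Assumes GOOGLE_API_KEY is set in the environment.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

context = "...retrieved documents go here..."
question = "What did the article say about the product launch?"
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)

The retrieval side (Chroma) stays exactly the same; only the generation call changes.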
How do I import the news_articles folder?
No way i just started yesterday getting into this stuff 😂
😅 me too
same here, saved it to my playlist as soon as it was released, been searching about this a few hours before it was uploaded, LMAO, now I'm back at it
Please show us how to build agents and chatbots using free, open-source Llama models. Everyone is sick of OpenAI at this point.
awesome
I'm building along using PyCharm and I don't get it. The first hurdle is running it; it simply won't run. That OpenAIEmbeddingFunction gives a lot of problems, and Python as well... it won't run.
What about RAPTOR, CRAG, or Self-RAG?
Please make a video on Salesforce technology.
My question is: if an OpenAI API key is still required, why not just use ChatGPT directly?
Exactly, I'm looking for Llama 3 with RAG but every tutorial comes with OpenAI 😢
Same here 🎉 I am sick of this OpenAI stuff 😢
This tells me you guys are all noobs at Python.
@@siddhubhai2508 What's the relation with Python here? We're looking for an open-source LLM, especially Llama 3.
@@GoingOno use a different Python package that does embedding functions for local LLMs.
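For example, Chroma ships a sentence-transformers based embedding function that runs fully locally. A minimal sketch, assuming chromadb and sentence-transformers are installed (the model name is a common default, not something from the video):

from chromadb.utils import embedding_functions

# Downloads a small open model on first use; no API key needed.
local_ef = embedding_functions.SentenceTransformerEmbeddingFunction(
    model_name="all-MiniLM-L6-v2"
)

embeddings = local_ef(["Microsoft announced new AI features."])
print(len(embeddings[0]))  # 384-dimensional vectors for this model

You can then pass local_ef as the embedding_function when creating the Chroma collection and drop the OpenAI embedding call entirely.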
Before starting, they should mention you need OpenAI credits to send API requests 😓💔
❤
Please, is the API key free?
1
Is there a way to do this with a llamafile of an open-source model such as Llama 3.1 8B or 70B? If llamafile is still too new, then maybe using Ollama (with Open WebUI)?
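Not sure about llamafile, but with the ollama Python package both OpenAI calls in the tutorial can be replaced by local ones. A rough sketch, assuming Ollama is running locally and the llama3.1 and nomic-embed-text models have been pulled (model names are just examples):

import ollama

# Local embedding for a document or query (replaces the OpenAI embedding call).
emb = ollama.embeddings(model="nomic-embed-text", prompt="Microsoft announced new AI features.")
vector = emb["embedding"]

# Local chat completion (replaces the gpt-3.5-turbo call).
reply = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": "Summarize the retrieved articles in two sentences."}],
)
print(reply["message"]["content"])

Open WebUI isn't required for this; it's just a frontend on top of the same Ollama server.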