Thanks for the explanation, it was super clear.
We're just planning to move from vector search to hybrid, and your explanation of BM25 helps a lot in understanding which edge cases it can solve. Much appreciated!
Guess we'll see a surge in BM25 usage thanks to Anthropic's contextual retrieval paper.
Great video. This is why, when building a search engine, I like to use BM25 for sparse search first and vector-based search later, once most of the corpus has been filtered out. This allows me to stay precise and efficient.
One additional thing: people often assume that you need a vector DB for vector search, but you can do without one entirely. Just store the vectors in a normal DB.
I mean, at the end of the day, embeddings are just data, period.
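For anyone curious what that "plain DB" setup looks like, here is a minimal sketch, assuming SQLite for storage and NumPy for brute-force cosine similarity. The table name and helper functions are illustrative, not from the video:

```python
import sqlite3
import numpy as np

# A plain relational table holding embeddings as raw bytes -- no vector DB needed.
conn = sqlite3.connect("docs.db")
conn.execute("CREATE TABLE IF NOT EXISTS embeddings (doc_id TEXT PRIMARY KEY, vec BLOB)")

def store(doc_id: str, vec: np.ndarray) -> None:
    conn.execute(
        "INSERT OR REPLACE INTO embeddings VALUES (?, ?)",
        (doc_id, vec.astype(np.float32).tobytes()),
    )
    conn.commit()

def search(query_vec: np.ndarray, top_k: int = 5) -> list[tuple[str, float]]:
    # Brute-force cosine similarity over every stored vector.
    # Fine for small/medium corpora; a dedicated vector DB mainly adds
    # approximate-nearest-neighbor indexing, which matters at scale.
    rows = conn.execute("SELECT doc_id, vec FROM embeddings").fetchall()
    ids = [r[0] for r in rows]
    mat = np.stack([np.frombuffer(r[1], dtype=np.float32) for r in rows])
    q = query_vec.astype(np.float32)
    sims = (mat @ q) / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q) + 1e-9)
    order = np.argsort(-sims)[:top_k]
    return [(ids[i], float(sims[i])) for i in order]
```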
It should be the other way around. Most prompts may not have exact matches. Use vector search first, then BM25 and rerank the results.
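Whichever stage runs first, one common way to merge the BM25 list and the vector list is reciprocal rank fusion, which avoids committing to either ordering. A minimal sketch, assuming both retrievers return ranked lists of document IDs; k=60 is the conventional default:

```python
def reciprocal_rank_fusion(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked result lists into one, rewarding documents
    that rank highly in any list. k dampens the influence of top ranks."""
    scores: dict[str, float] = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# e.g. fuse BM25 and vector results without deciding which "goes first":
# fused = reciprocal_rank_fusion([bm25_ids, vector_ids])
```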
Nice discussion, thanks! I wish there was more structure to the video so the “why” of the title was served as the main dish, i.e. define the terms up front, explain how each works, then have the “why” discussion and end with a teaser for the hybrid-approach discussion. Instead there are some gaps and jumps around, which leaves it feeling incomplete, or maybe not quite capturing the essence. I have a feeling this is partly a result of editing many clips together, so don’t take this feedback too seriously. Cheers
Nice video. I’ve been on the opposite side of the coin, but I like hearing the balanced argument to keep me educated
Thank you for delving into this important topic!
Thanks, guys. YouTube recommended this video to me; a very pleasant snippet of explanation.
Trying to work through your website to understand what the service is.
BM25 doesn't do anything to address the issues you bring up at the beginning of the video. TF-IDF is dumber than vector search in every respect; it's just much cheaper to run. Not saying it doesn't have value as part of the toolkit, but I'm not sure why you spend the first half setting up all these problems with vector search as if BM25 addresses any of them.
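For readers weighing this claim, here is roughly what BM25 computes. A minimal, self-contained Okapi BM25 sketch with the common default parameters k1=1.5 and b=0.75; the example documents are made up to show the exact-term matching that embeddings can miss:

```python
import math
from collections import Counter

def bm25_scores(query: list[str], docs: list[list[str]],
                k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each tokenized document against the query with Okapi BM25."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    # Document frequency per term, used for the IDF component.
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
            score += idf * norm
        scores.append(score)
    return scores

# An exact identifier like "TS-999" only matches the doc that contains it:
docs = [["error", "code", "TS-999", "handling"], ["generic", "error", "handling"]]
print(bm25_scores(["TS-999"], docs))  # first doc scores > 0, second scores 0
```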
Is English not your first language?
Great overview, thank you!
Excellent presentation/explanation. Very useful. Thank you!
Wonderful explanation. Thank you.
Great video! This is one of the most misunderstood concepts. Will def share this next time it comes up!
This is so easy to understand, thank you!
Great explanation. Thank you so much!
This is an amazing explanation. I'm an instant follower.
Why don’t we include semantic dimensions in vectors?
Oh man you are amazing!!
Love the channel, I subscribed. Please do a video on working with such graphs using a vector database.
That's insightful, thank you so much, boss.
Thanks, this was great!
I believe vector search is still better for RAG applications; BM25 is better for more literal matches. Also, what does this have to do with LLMs doing math?
Thanks for the video.
The mathematics behind ChatGPT is amazing.
Oh no, this is going to write texts like I do!!!
OK, drama aside, I do believe this will improve things a lot.
I still see some caveats that would be left to luck, but huge amounts of data might overcome that.
I do believe we already have enough with GPT and a few previous ideas; still, improving the language model itself is always a plus.
Great illustration of how word2vec is not the definitive solution.
BM25: frequency-weighted, as opposed to sponsored-definition-tag vector search. Yeah, Google Search does that too, you know. If you ever did SEO optimization for your website, or some kind of SMM, you know that it works.
This is something Anthropic has shared with their contextual retrieval.
super
wait...I think I am in love...
Poland mentioned
Thanks for the insight to "pair" the numerical representation (vector) with BM25... Can the same be achieved with just a knowledge graph? I'm experimenting with sci/phi triplex... What do you think: do you have any preliminary ideas, or have you already tested it and found using "entities_and_triples" not as effective, or not effective at all? Six months ago you did a video on knowledge graphs; I haven't watched it yet, I'll check it out...
Golden nugget
TURN YOUR VOLUME UP
i love this bot...
But vector search is enough to scam dummies and create a market bubble.
Try tokenizing gendered languages 😂
Excuse me, but your volume is just too low. Just saying.
Seems fine to me
No it’s not, your device is the issue
@sladeTek It's just this video and a few others that play at very low volume. When I try other videos on YouTube, they generally sound acceptably loud. Dunno why.
@sladeTek Try watching the video on YouTube with this title:
"The Best RAG Technique Yet? Anthropic’s Contextual Retrieval Explained!"
It is significantly louder. Just my 2 cents.
Her audio is fine. Turn up your volume.
Contributed 3blue1brown