I would love more tutorials with this stack. Great job!
Seconded
Thirded!
There are a lot of hybrid front-enders out there who need to implement this stack like yesterday and don't have time to learn Python just to learn LangChain (and now LangGraph, for which I can't find any thorough tutorial like this one in this stack).
If I can build an organic, working competency with LangChain in the TS stack, then I could hack in the Python later if it truly is a differentiating advantage for these kinds of apps. I just feel like I can't devote the time and energy to building competency in LangChain and Python at the same time with things moving so fast.
This is very helpful. I really appreciate it. I still have a lot to learn, but this surely boosted the learning journey.
This is very impressive. Thank you for the video. Epic
Thanks. I did learn a lot. Very helpful.
Great video. RAG taught by Paul Dano himself. Psych!
LangChain has become so complex in versions 0.1 and 0.2.
Hey thanks for sharing.
Wouldn't it be more efficient to use a vector database?
But why do we need to save the Q&A to Supabase? There seems to be no usage of it in the code.
good stuff!
Why embed if you can give the entire paper to GPT-4?
We embed so we can do RAG later. It makes gathering context faster and more refined, so we don't confuse the LLM. Not to mention the cost savings of passing only a few retrieved chunks instead of the entire paper for each question.
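For anyone wondering what that looks like in this TS stack, here's a minimal sketch of the embed-then-retrieve flow using LangChain's Supabase vector store. The table name, RPC name, env var names, and `paperText` placeholder are assumptions based on the standard LangChain Supabase setup, not necessarily the video's exact code:

```ts
// Minimal sketch of embed-once, retrieve-per-question RAG.
// Assumes a Supabase "documents" table and "match_documents" RPC
// created per the LangChain Supabase vector store docs, plus
// OPENAI_API_KEY, SUPABASE_URL, and SUPABASE_PRIVATE_KEY in the env.
import { createClient } from "@supabase/supabase-js";
import { OpenAIEmbeddings } from "@langchain/openai";
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const client = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_PRIVATE_KEY!
);

// Placeholder: in practice this would be the loaded paper text.
const paperText = "…full text of the paper goes here…";

// 1. Split the paper into chunks and embed them (done once, up front).
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 100,
});
const docs = await splitter.createDocuments([paperText]);

const store = await SupabaseVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings(),
  { client, tableName: "documents", queryName: "match_documents" }
);

// 2. At question time, retrieve only the few most relevant chunks
//    instead of sending the entire paper to the model each time.
const relevantChunks = await store.similaritySearch(
  "What dataset did the authors use?",
  4 // top-k chunks to pass as context
);
console.log(relevantChunks.map((d) => d.pageContent));
```

Those few retrieved chunks are what get stuffed into the prompt, which is where the token savings over resending the whole paper come from.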
Can you do a Python version, please?
So true!
huge if true
Please
booooooo 👎 :D :D :D