This is a wonderful walkthrough. It's concise but thorough and helps explain why we are doing everything along the way. I'm impressed by how easy this is to do! I'll definitely be using this in my commercial application.
Thanks a lot for making this.
Great video! Thanks!
Love this! ❤
This is great thank you Harrison!
Thank you for this vid, it's sooo cool => I am now more productive in my searches!
Awesome 👏 thank you for sharing
This sample is fantastic. I have a question about errors within chain pipes and how to make them more robust. As I use this, about 1 in 50 times, the "Write 3 google search queries..." chain returns the queries but the JSON array has invalid syntax: it will omit commas or use newlines instead, etc. This makes the JSON loader throw an exception. Is it possible to catch this kind of error within a chain, and possibly retry the sub-chain itself? Or do I just try to catch the exception on the very outer chain.invoke() call with something like Tenacity? The entire outer invoke is big/expensive, so I'd like to keep the error retries within that one part (in this case 'search_question_chain'). Cheers!
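A minimal sketch of keeping the retry inside the flaky step, in plain Python. The `make_flaky_generator` below is a hypothetical stand-in for the "Write 3 google search queries..." LLM call (here it fails deterministically on the first call only); the point is that only the cheap sub-step re-runs, not the whole expensive outer invoke:

```python
import json

def make_flaky_generator():
    """Hypothetical stand-in for the LLM call that occasionally
    emits malformed JSON (here: invalid output on the first call only)."""
    calls = {"n": 0}
    def generate():
        calls["n"] += 1
        if calls["n"] == 1:
            return '["q1" "q2", "q3"]'  # missing comma -> invalid JSON
        return '["q1", "q2", "q3"]'
    return generate

def parse_with_retry(generate, max_attempts=3):
    """Re-run only the generation step when json.loads fails,
    so the expensive outer chain is not re-invoked."""
    last_err = None
    for _ in range(max_attempts):
        raw = generate()
        try:
            return json.loads(raw)
        except json.JSONDecodeError as err:
            last_err = err  # malformed output: retry just this sub-step
    raise last_err

queries = parse_with_retry(make_flaky_generator())
print(queries)  # ['q1', 'q2', 'q3'] after one retry
```

If you'd rather stay inside LCEL, LangChain exposes a built-in for exactly this pattern: something like `search_question_chain.with_retry(stop_after_attempt=3)` wraps just that runnable, and `OutputFixingParser` can instead ask the model to repair its own malformed JSON. Check the current LangChain docs for exact signatures, since these APIs have shifted between versions.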
great tutorial
Awesome! Now with the same code I'm trying to do something else...
Thanks for the video! I am wondering, what is the key difference between the research assistant built in this tutorial and the one developed in the research-GPT repo?
Why can't it be on Ollama and local LLMs like Llama 3, Mistral, etc., so we can work locally, safely, and securely, with no need for OpenAI API keys? It feels like vendor lock-in with OpenAI.
High quality
Hi dear LangChain family, the video was excellent. But I have a question: how can we apply this Python code to academic research? There are lots of APIs for it, but can we do it with LangChain?
Cool.
Great video. What is the recording software you are using?
loom!
Can I build this same application using TypeScript and JavaScript, or do I have to use Python?
🎉🎉🎉🎉🎉🎉
Hi. Thank you for the content. How can I send you a message about LangSmith access, as you mentioned?
If you shoot Harrison (hwchase17) a message on Twitter, he will likely respond.
@@LangChain I would have to be subscribed on Twitter in order to send him messages. Any other way of requesting access?
@@Eeeff Feel free to reach out to him on LinkedIn :)
Why not use Ollama or something local like Mistral or anything else? 8:47
What is special about the OpenAI API?
It's generally the most performant one, but we absolutely should do one with local LLMs!
The difference is the infrastructure you have. If there is an issue of securing information, you can certainly do it on-prem.
The tutorial is hard to follow to get it running. I think the issue is that you didn't go over the setup of LangServe or provide a link in the description to set that up, so figuring that out was a task in itself. Then after that I had to skip the whole LangSmith thing because I don't have access to that.
Good points - in the future we will cover that more deeply!
I'd advise you to go through their blog, where you will see a lot of step-by-step instructions.
Thanks for the content! Can you help me get off the LangSmith waitlist, please?
If you shoot Harrison (hwchase17) a message on Twitter, he will likely respond.
Hi Harrison, could you help me get off the LangSmith waitlist, please?
If you shoot Harrison (hwchase17) a message on Twitter, he will likely respond.