I can't believe how undervalued this guy is. He's putting out high-quality videos with a huge range of applications and barely getting any views on them :/
Nice, thank you! The views aren't bad, but I'm always trying to add more value, which leads to more views.
Great content! Nice to see you launched thimble too! Hoping to launch my support tool soon too!
Thanks Clint! It's a side project testing the waters right now. We'll see how it goes.
I'd love to check it out when you're launched
You're probably making a killing consulting right now. Even though I'm well versed in LangChain, I still watch your content... well, simply because it's 🔥. Stay blessed.
Thanks Zandrr - I haven't opened up to consulting as much as I probably could. A bit, though.
Great video! Thanks for high-quality content.
Love your LangChain tutorials. They're so on point and concise.
Great video Greg! Very clear explanation. Will share it with one of my consulting clients, they will find it super useful for extra context & what's possible.
Nice glad to hear it - what field are they in?
@@DataIndependent Sales coaching
Super cool! I wait for your new videos every single day.
Your videos are so clear, concise, and informative. Thank you!
Fantastic, thank you.
Great, sir. Keep up the good work.
Hey, thanks for the video. Do you know if there is any public dataset with call center transcripts or sales transcripts? I would like to test this with some data. Thanks!
Other than the transcript used in this video
github.com/gkamradt/langchain-tutorials/blob/main/data/Transcripts/acme_co_v2.txt
I haven't seen a repo yet; that would be fun to have.
But doesn't OpenAI now have all your text? So there is no privacy. What are your thoughts on sending OpenAI text that contains this kind of data?
Our data is shared across a lot of different services. We used to worry about sending our data to google/amazon but now we don't think twice about it.
Each company/person should make their own decisions about where they want to send data to. I'm personally OK with it right now.
GREAT. Exactly what I needed, OMG.
Nice - what project are you working on?
This is quite helpful. Can you share some reference code/links on how to extract a transcript from any video (not just YouTube links) and then run all of these analyses on it?
What's the best way to include time-stamped references to questions? Would you have to use embeddings? How do apps implement summaries with timestamps?
Timestamps are a bit tougher, but you would want a method that piggybacks on "QA w/ sources" and then retrieves the timestamp along with the quotes.
However, a piece of a summary may be stitched together from multiple parts of the call, so it's harder to assign a specific timestamp to it.
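To sketch the idea: keep a timestamp on every transcript chunk so any retrieved quote carries a time reference. This is a toy stand-in, not LangChain's actual QA-with-sources chain; a real app would embed the chunks and query a vector store, while here a simple keyword overlap stands in for retrieval.

```python
import re

# Each chunk keeps its timestamp as metadata alongside the text.
transcript = [
    {"time": "00:01:15", "text": "Our main pain point is onboarding."},
    {"time": "00:04:02", "text": "Pricing depends on seat count."},
    {"time": "00:07:48", "text": "We could start a pilot next month."},
]

def words(s):
    # Lowercased word set, punctuation stripped.
    return set(re.findall(r"[a-z]+", s.lower()))

def retrieve_with_timestamps(query, chunks):
    """Return (timestamp, quote) pairs for chunks that share a word
    with the query, so the quote and its time travel together."""
    query_words = words(query)
    return [
        (chunk["time"], chunk["text"])
        for chunk in chunks
        if query_words & words(chunk["text"])
    ]

hits = retrieve_with_timestamps("what did they say about pricing?", transcript)
for time, quote in hits:
    print(f"[{time}] {quote}")
```

Swapping the keyword match for embedding similarity keeps the same shape: the timestamp rides along as chunk metadata either way.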
Love this! If you have the time and it seems like a good idea, I'd love for you to expand on this and tune a model that acts as a salesperson.
Say we have a lot of B2B calls. How would you go about training a model that acts as a sales trainer, or an assistant that helps reps sell better?
Great video, super useful! Thanks a lot 🙏🏻
Great video, thanks! But how did you get the transcript into that format, say from an MP4?
MP4s are tough because they don't carry speaker information. Luckily, Gong (and other call recorders) have this information.
If it's just an MP4, you may have to do a preprocessing step where the LLM guesses who's speaking, but this isn't ideal. I haven't found a tool yet that splits speakers cleanly.
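A rough sketch of that preprocessing step (the prompt wording here is a hypothetical placeholder, and the actual model call is left out since any chat-completion client would do):

```python
# Build a prompt asking an LLM to guess speaker turns in a raw
# transcript that has no diarization. The prompt text is illustrative,
# not from any specific library.

def build_diarization_prompt(raw_transcript: str) -> str:
    return (
        "The transcript below has no speaker labels. "
        "Guess where the speaker changes and prefix each turn with "
        "'Speaker 1:' or 'Speaker 2:'. Keep the wording unchanged.\n\n"
        f"Transcript:\n{raw_transcript}"
    )

raw = (
    "Thanks for joining today. Happy to be here. "
    "Can you walk me through your current process?"
)
prompt = build_diarization_prompt(raw)
# `prompt` would then be sent to whatever chat model you use.
```

Since the model is only guessing, it's worth spot-checking a few calls by hand before trusting the labels downstream.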
Great work, Greg. I've been following your work since before AutoGPT. Have you considered using Open Assistant instead of GPT-3.5? Or any other open-source LLM with a vector DB and LangChain?
I think that's a yes.
How do you use the chat model with the large token limit?
You can apply for GPT-4 access on the OpenAI website.
Looks nice!
Hey, Greg! Great video! Why do we have to use Map Prompt and Combine Prompt if they are the same thing?
```
chain = load_summarize_chain(
    llm,
    chain_type="map_reduce",
    map_prompt=chat_prompt_map,
    combine_prompt=chat_prompt_combine,
    verbose=True,
)
```
What is the difference between them?
I talk more about this in my "work around the OpenAI token limit" video.
You're right, in this video they're pretty close. However, I put my output format instructions in the combine prompt because that's the step that consolidates the other summaries for me.
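To make the distinction concrete, here's a toy map-reduce flow in plain Python (no LangChain; `fake_llm` just echoes text back): the map prompt runs once per chunk, while the combine prompt runs once over the joined chunk summaries, which is why format instructions belong in the combine step.

```python
# Toy map-reduce summarization. The map step runs per chunk; the
# combine step runs once over all partial summaries. A real chain
# would call an LLM; fake_llm just returns the prompt body.

MAP_PROMPT = "Summarize this chunk of the call:\n{text}"
COMBINE_PROMPT = "Combine these chunk summaries into bullet points:\n{text}"

def fake_llm(prompt: str) -> str:
    # Stand-in for a model call: return everything after the
    # instruction line.
    return prompt.split("\n", 1)[1]

def map_reduce(chunks):
    # Map: one prompt per chunk.
    partials = [fake_llm(MAP_PROMPT.format(text=c)) for c in chunks]
    # Combine: one prompt over all partial summaries.
    return fake_llm(COMBINE_PROMPT.format(text="\n".join(partials)))

result = map_reduce(["chunk one text", "chunk two text"])
```

Because the combine prompt sees only the partial summaries, anything you put there (bullet format, length limits) shapes just the final output, not each per-chunk pass.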
How did you get the name and role part to work?
That part comes straight from Gong. It's the luxury of having a tool that gives you the transcript.
If you didn't have it you'd have to manually map it out
With this, I can identify the following entities:
Amounts
Expenses
Incomes
Concepts
I've been attempting to build something like this for reading through documentary interview transcripts. I ran into an issue with LangChain timing out while it was forming the combined chain output, specifically when kicking out bullet points.
Some of the transcripts are long, and forming the combined prompt naturally took a while. Is there a way to accommodate a timeout and then retry only the final output prompt?
Here's the error:
**Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60)**
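One generic way to retry just the failing step (a standard-library sketch with exponential backoff, not LangChain's built-in retrying; `flaky_combine` is a stand-in for the final combine call). I believe `ChatOpenAI` also accepts a `request_timeout` argument you can raise for long prompts.

```python
import time

def with_retries(fn, attempts=3, base_delay=4.0, timeout_errors=(TimeoutError,)):
    """Call fn(), retrying on timeout-style errors with exponential
    backoff. Only the failing step is retried, so earlier map-step
    results are kept rather than recomputed."""
    for attempt in range(attempts):
        try:
            return fn()
        except timeout_errors:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky_combine():
    # Stand-in for the final combine-prompt call; times out once,
    # then succeeds.
    calls["n"] += 1
    if calls["n"] == 1:
        raise TimeoutError("read timeout")
    return "- bullet one\n- bullet two"

result = with_retries(flaky_combine, base_delay=0.01)
```

Wrapping only the combine call this way means a 60-second read timeout on the last step doesn't force rerunning every per-chunk summary.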
Hey Greg! Great stuff. I’ve been binge watching your content.
What do you think about the ability to parse earnings calls? Understanding that a majority of the financial data mentioned is available in financial filings, it may be helpful to capture more of the “unstructured” conversation.
Nice! I think this model would work well for that.
Have you tried it out yet?
Where are the Insights?
Hi Greg, thanks for another great video!
Have you tried utilising Slack bots to chat with documents by any chance?
You forgot to remove hardcoded "Greg" from other places of the code.
Keep your videos free of music; I like being laser-focused on what you're saying.
Thanks for the tip - I've made the change and am not going back.
Hi there, I recently started getting into LangChain, and your video is immensely helpful. But I hit an error on my Windows machine when I installed LangChain and imported it: "pexpect has no method called spawn." I've put up a solution video for it on my channel; I hope it helps everyone. Thanks a ton for your video!
Langchain Pexpect Error Solution video: th-cam.com/video/hCJyITK1iis/w-d-xo.html