Absolutely brill James. Such exciting times! Really like how you break down the information when you demo stuff.
On a bit of a tangent, I've been thinking recently that the labels people gave us when we were young (positive or negative) are a bit like the initial prompt we give to a language model, in that these labels anchor us to a certain perspective and colour our belief systems. Maybe what makes us human is that we can learn to set our own initial prompts ^^
that's interesting, we all have system prompts that set us up for future interaction, cool to hear parallels between how we work and how ML models work
One question I had: if we want the model to remember the conversation thread, do we need to feed the previous output back into the messages list with the next question? Have you checked this, James? I'm hoping that's not the case, as it may limit the prompt token size in a conversation.
that would be the simplest approach, but there are other methods, langchain covers a few in their docs here langchain.readthedocs.io/en/latest/modules/memory/key_concepts.html
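The "simplest approach" mentioned in the reply can be sketched like this: keep a running list of messages and resend the whole thing with each request. The `build_messages` and `record_reply` helpers are hypothetical names for illustration; the message dicts follow the chat completions API format.

```python
# Minimal sketch of conversational memory via the messages list.
# Every turn, the full history (plus a system message) is sent again.

system_prompt = {"role": "system", "content": "You are a helpful assistant."}

def build_messages(history, user_input):
    """Append the new question to the running history and return the
    full list to send to the API."""
    history.append({"role": "user", "content": user_input})
    return [system_prompt] + history

def record_reply(history, reply_text):
    # Feed the model's answer back into the history so the next
    # request includes it as context.
    history.append({"role": "assistant", "content": reply_text})

history = []
messages = build_messages(history, "What is an LLM?")
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo", messages=messages)
# record_reply(history, response["choices"][0]["message"]["content"])
```

As the commenter suspects, this does grow the prompt with every turn, which is why the linked LangChain docs cover alternatives like summarised or windowed memory.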
@@jamesbriggs Thanks for the info.
Channels of your quality are rare! 💯
thanks I appreciate it 🙏
Excellent video, I have read the release on OpenAI website, but it's way more useful to watch it in action like you demoed it, you are doing a great service for the AI community- Thank you.
glad to hear, can definitely do more like this
Thanks for the demo!
Thanks for this, fun to follow along. You have a new Patreon supporter 😊
Awesome, thanks a ton! :)
Something still confusing to me is whether the token limit is per message or for the whole message history. If it's the latter, do we need to pick and choose which messages to send to the API in a long conversation? Or is that done automatically by the API?
actually it seems like max_tokens in this case is for the generated output only - usually it refers to the full context window (input + output tokens)
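To illustrate the point in the reply: the API won't trim history for you, so a common pattern is to drop the oldest non-system messages yourself until the conversation fits your budget. This sketch uses a crude words-as-tokens estimate for illustration only; in practice you'd count real tokens (e.g. with the tiktoken library).

```python
# Sketch: trim oldest messages to fit a token budget.
# estimate_tokens is a rough stand-in for a real tokenizer.

def estimate_tokens(message):
    return len(message["content"].split())

def trim_history(messages, budget):
    """Drop oldest non-system messages until the estimated total fits."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(map(estimate_tokens, system + rest)) > budget:
        rest.pop(0)  # discard the oldest exchange first
    return system + rest
```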
Thank you. Recommendation: Increase the audio volume of your recording.
thanks, I'm currently recording in a new location and still figuring out the dynamics - this is very helpful input
Love the shirt brother!!
thanks man!
how do you get that navy theme on colab?
I use this chrome.google.com/webstore/detail/colab-themes/hledcfghfgmmjpnfkklcifpcdogjlgig
the navy theme is night owl
Thanks a ton for this quick video about the new OpenAI API 👍 Really exciting times, so many new developments in LLMs, but it's also confusing which model to use or how to combine them with other tools like LangChain, Pinecone, Cohere... and Meta's LLaMA model. Maybe OpenAI feels the heat of the free LLaMA, hence the substantial price decrease... 😇 Eagerly waiting for the next video about this new API, for example fine-tuning with your own documents...
yeah there's sooo much to cover, looking forward to sharing more
Adding to this comment, I am wondering if this is ideal for a use case where you have your own documents. Should we just use GPT for search, or does this API have an advantage in such a use case?
Adding to the comment, interesting take on LLaMA causing OpenAI to bring competitive pricing
Hi James, brilliant video. Advice: upgrade your AV setup, this will add more "weight" to your videos 😀
thanks, I'm currently traveling so trying to find the balance between too little and too much equipment - the feedback is massively appreciated
One thing I found interesting while migrating is that "system" has weaker instruction enforcement than "user" for some reason, quite the opposite of what I imagined. If you put the typical "if you cannot answer based on the context..." clause in system vs user, they produce totally different results!
that's interesting, I would've expected system to be stronger too - will try this out
Asking what tomorrow's date is only works if you put the current date in as user, not system, strange
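The two placements being compared in this thread look like this in the messages list. Note the observation that "user" placement is enforced more strongly is an empirical finding from the comments above, not documented behaviour, so it's worth testing both yourself.

```python
# Same instruction, two placements: system message vs prepended to
# the user message.

instruction = (
    'If you cannot answer based on the context below, say "I don\'t know".'
)

# option A: instruction in the system message
messages_system = [
    {"role": "system", "content": instruction},
    {"role": "user", "content": "What is today's date?"},
]

# option B: instruction prepended to the user message
messages_user = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": instruction + "\nWhat is today's date?"},
]
```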
You really deserve a lot more viewers than you have
I have no idea what's happening here, when I run this code I get nothing from the output
Thank you.
If I had to pick who I want to get my AI information from, it is definitely a guy who looks like a footballer, in a Hawaiian shirt. Great vids, thx!
Thank you James. Great content as usual!
glad you enjoyed it
Does it work with node or npm?
yeah you can do direct HTTP requests
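Since the chat endpoint is plain HTTPS + JSON, any language (Node included) can call it directly. A sketch of the request shape, using Python here just to stay consistent with the rest of the examples; on the Node side you'd use `fetch` or axios with the same URL, headers, and body. `build_request` is a hypothetical helper, and `YOUR_API_KEY` is a placeholder.

```python
# Sketch of the raw HTTP request behind the chat completions endpoint.
import json

def build_request(api_key, user_input):
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": user_input}],
    }
    return url, headers, json.dumps(body)

# import requests
# url, headers, body = build_request("YOUR_API_KEY", "hello")
# r = requests.post(url, headers=headers, data=body)
```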
Quick question: how do I embed a larger corpus, do I break it into chunks? Say I have 5 long articles and I'd like to create embeddings to compare their similarity - what do you suggest I do?
it's best to break into chunks yes, you can follow something like what I did here th-cam.com/video/ocxq84ocYi0/w-d-xo.html
Very helpful. Thanks!
glad it was helpful!
Comparing your UI-based ChatGPT vs your API test outcomes, it looks like the generated output is shorter. Perhaps the API model is meant to facilitate short-form content suitable for conversational applications...
I think one approach that may work is to encourage the model to provide longer answers in the initial "system" message, but I haven't tested yet
@@jamesbriggs Yes. That may work.
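The untested suggestion from the reply, sketched out: nudge the model toward longer answers in the system message and allow it room with a higher `max_tokens`. Whether this actually lengthens outputs is unverified here; the exact wording is just an example.

```python
# Sketch: encourage longer answers via the system message.
messages = [
    {"role": "system",
     "content": "You are a helpful assistant. Always answer in detail, "
                "with examples, in at least three paragraphs."},
    {"role": "user", "content": "Explain what embeddings are."},
]
params = {"model": "gpt-3.5-turbo", "messages": messages, "max_tokens": 1024}
# response = openai.ChatCompletion.create(**params)
```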
Great content as usual!
I have just enrolled in your Udemy course, looking forward to completing it as soon as possible.
you have the coolest shirts man! they rock!
time to get more of these shirts!
can you show how it works with langchain?
yeah working on some conversational AI videos w/ langchain
Anyone know why I get this error?
AttributeError: module 'openai' has no attribute 'openai_response'
can you do `pip install -U openai` and try again?
@@jamesbriggs Tried it, used pip show - I have the latest openai and Python 3.9 (Spyder)
I think it's possibly something to do with how my environment is set up. I've since migrated everything to VS Code and it works fine. I haven't read anywhere that it's incompatible with Spyder, so the environment and overall setup is the only thing I can think of that's causing the issue
awesome
nice man
The response of GPT-3 davinci feels more natural and unrestricted, I think; in a Turing-test environment it improves a little over the "I am an LLM" blah blah
🤯🔥
👏👏👏