Good stuff. I too have been pretty amazed with how quickly my little test Python scripts have been receiving responses from OpenAI on the gpt-3.5-turbo model. And not having the 'simulated typing' slowdown mechanism is a real boon to productivity too ;)
New sub and looking forward to seeing what else you come up with.
Another good video, thx
Thank you.
Thanks!
Thank you for performing the comparison. I am curious about the impact of the 'system' and 'temperature' parameters on the sentence structure generated by ChatGPT. I assume that slight adjustments to these parameters would bring the output of both models closer together. It would also be interesting to see how the API for both models handles some simple encoding.
System is, in my experience, the most important thing in chat completion: the system prompt is the bot's 'brief', which defines its character, purpose, and available actions. Temperature is one of the pre-existing settings that controls how random or deterministic the responses are.
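For anyone curious, here's a minimal sketch of where those two settings live in a chat-completions request body (parameter names from OpenAI's chat completions API; the actual prompt text here is made up):

```python
import json

# The system message is the bot's "brief"; temperature (0.0-2.0) controls
# sampling randomness: near 0 is almost deterministic, higher is looser.
payload = {
    "model": "gpt-3.5-turbo",
    "temperature": 0.2,  # low = stay on-script; raise it for brainstorming
    "messages": [
        {"role": "system", "content": "You are a terse SQL tutor."},
        {"role": "user", "content": "Explain LEFT JOIN in one sentence."},
    ],
}
print(json.dumps(payload, indent=2))
```

This body gets POSTed to the `/v1/chat/completions` endpoint; the system message stays fixed for the whole conversation while user/assistant messages accumulate after it.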
Being able to adjust temperature has been helpful for me. I adjust it based on what response I need: if I want it to stay in line I turn the temp down, if I need it to suggest alternatives I turn the temp up. So I'm changing it back and forth in the middle of the conversation.
@@KunjaBihariKrishna it’s interesting that humans also have a temperature setting; it’s nicely correlated to stress/relaxation levels. It’s not possible to be creative during high stress; instead we get short, precise bursts of information.
Hello, does the ChatGPT API have a token limit, like Davinci's 4000?
Thank you
Same as Davinci: 4096
Looks like 2048 on the playground
@@blisphul8084 Yeah, that's interesting: they are only allowing 2048 on the playground, but the API documentation definitely shows 4096. That's curious.
Great vid 👍!
I'm still looking for an example, however, where the GPT-3.5 Turbo API is used to access the entire internet, or just a single website for text ingestion?
Technically, the corpus of data that the ChatGPT model uses by default is a web crawl of about 60% of the internet (everything not behind a wall), plus books, Wikipedia, and other sources. So asking ChatGPT anything will access that corpus. But the ChatGPT API can be used to query a single website, documents, proprietary data, etc. as well.
Perhaps you are looking for the "search" feature that ChatGPT is using in the Bing implementation?
@@iSolutionsAI Do you happen to know a tutorial that discusses how to code such a query?
can you show how to migrate the code from text-davinci-003 to gpt-3.5-turbo pls?
I assumed it would be as easy as changing the model within our code but that doesn't seem to be the case. Would certainly appreciate more info on this!
Yeah, I was also hoping it would be as easy as the Davinci-002 to 003 update. But the API call changed to now include roles and a breakdown of messages. What I discovered is that in order to emulate the Davinci-type call, you make a chat completion API call with one message set to role=user, with the prompt in the content, and no other messages.
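For anyone migrating, here's the shape of the two request bodies side by side (model and field names from OpenAI's docs; the prompt itself is made up):

```python
import json

prompt = "Summarize the plot of Hamlet in one sentence."

# Old completions-style request body (text-davinci-003):
davinci_payload = {
    "model": "text-davinci-003",
    "prompt": prompt,
    "max_tokens": 64,
}

# Equivalent chat-completions request body: the bare prompt becomes a
# single user-role message, with no system message and no history.
chat_payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": prompt}],
    "max_tokens": 64,
}

print(json.dumps(chat_payload, indent=2))
```

Note the endpoints differ too: the old body goes to `/v1/completions`, the new one to `/v1/chat/completions`, and the answer comes back under `choices[0].message.content` instead of `choices[0].text`.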
@@iSolutionsAI Yeah, I tried exactly that but it still doesn't work. My application is in Kotlin; guess I'm doing something wrong.