This was probably one of the BEST explanations of the OpenAI Assistants API.
I never really comment on videos but just wanted to say you do a great job at explaining things and keeping it stupid simple. On top of that the extra time put into the editing to make it visually easier to comprehend. And the source file included in the description. 🙌
Hey Nathan! Thank you so much for saying that! I really appreciate it because I've been putting a lot of work into these videos.
I have a lot more content coming out for you guys!
I can tell! This is the type of content I was thinking of making if nobody else did, because it's always code-heavy videos without real-life examples or anything actually built in the whole video.
Sometimes it's like "what could've been done better" if views don't blow up. But trust, this video was perfect. I think I followed a little while ago based on one of your other videos, probably because you explain things well. And now this one popped up on my feed when I opened the app.
Keep at it 🙌
Dude... Of all the AI videos that I have watched till now, this one is probably one of the best. I know it is focused on Assistants, and you killed it. So straightforward and well explained...
Thank you so much Dhrubajyoti! I really appreciate that!
@@bhancock_ai Now you've got a responsibility to create the video on function calling and other tools... Waiting, waiting...
Phenomenal guide. Absolutely one of the best explanations I've found.
Really cool video. Kudos for building a UI to visualise the process. People underestimate the amount of work that goes into videos like this.
Keep it up bud.
Thanks Leon! And so true! I have a small team helping me out to make it more manageable.
I have some more videos coming out for you guys soon!
Congratulations, what a creative way to demo the capabilities
I love how fast the narration is. I often bump up playback speed in settings. No need this time :) great tutorial, thanks.
You didn't cover the most important topic: "function calling"
AMAZING WAY OF EXPLAINING IT! THANK YOU VERY MUCH!
Very neat, I would love to see you use functions with the assistants as well tho! 👍
Great video. Nicely explained. Thanks
Thanks for this awesome explanation
Amazing, you did an amazing job. Best video. Keep shining!
Subbed. I needed this. You did great at explaining it in simple terms, and the visual UI to go along was the icing on the cake! Can you cover the differences with the v2 Assistants API if you haven't done that yet?
It's on the to-do list! I think I will be able to get to this in early September!
@@bhancock_ai Dope! Can't wait
Thanks a lot indeed for the video! I am eagerly waiting for the new videos you kindly promised.
Thanks! I've had to take a quick break but I will start churning out more videos this week! Anything in particular that you'd like to see more of?
Best OpenAI API explanation video.
Excellent explanation
This is an awesome tutorial!! Thanks so much.
This is super clear, thank you so much!
I also found out that it's necessary to have credits to use GPT-4, because when I clicked the 'create' button, it did nothing, and it only started working after I added credits to my account.
Will passing threads drain my openAI creds faster since it would be using more tokens as the conversation grows? Or does it truncate up to a certain depth?
I'm trying to do this for my Express.js server. When I create the createRun function, it says thread_id not found. Is it supposed to be stored somewhere? I am making API calls to each function and passing it via an API client like Insomnia, but I cannot find the thread_id anywhere.
Great video - one thing I was hoping to find out, though, was more info around actions. The custom GPT actions seem a lot more intuitive, with supplying a path and auth token. The assistant actions seem a bit more confusing, with less documentation and discussion around them. Info or a video about those would be amazing. I think a lot of people are making custom GPTs and then find out you can't call those through the API.
Great explanation. Thanks a bunch. I've tried to get the source code by filling in the form with my first name and e-mail address but have had no luck in getting access. I am in Portugal; may that be an issue?
Hey Pedro! I just checked out ConvertKit and it says the email was delivered to you at 1:06 PM EST. Maybe it’s in your spam or something? If not, shoot me a DM on X @bhancock_ai and I can shoot you the links manually!
This is super great! Would you also make one with streaming options?
thanks! this was very helpful
Love you Brandon 💕 Waiting for your next videos. I am interested in learning AI, so I think you are the best option for me.
Thanks Mr umer! I have a lot more videos coming out soon!
I don't understand... I cannot seem to list or create an Assistant File with your project. The upload works, because I can see it in the "Storage" section of the OpenAI platform. For whatever reason I keep getting: Error creating assistant file: TypeError: Cannot read properties of undefined (reading 'id')
There has been an update to the Assistants API lately. Check if your OpenAI setup is on v2.
Brandon, you are so helpful and clear. Thank you. Quick question: using the Tesla example you shared, can the assistant respond utilising both the file and the OpenAI LLM as sources, mixing content for the response?
Great question! So all you have to do is adjust the instructions you provide to the assistant and instruct it to provide a hybrid response: 50% from the file and 50% from general knowledge within the LLM. Please let me know if you have any other questions! Happy to help!
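For instance, a minimal sketch of creating such an assistant (the instruction wording, the assistant name, and the 50/50 split are illustrative assumptions, not API features; `client` would be an `openai.OpenAI()` instance, and `retrieval` is the v1 beta tool name, renamed `file_search` in v2):

```python
def make_hybrid_assistant(client, model="gpt-4-turbo"):
    """Create an assistant whose instructions ask for a blended answer:
    roughly half from the uploaded file, half from the model's general
    knowledge. The 50/50 wording is a prompt, not an enforced setting."""
    instructions = (
        "Answer using a mix of sources: draw about 50% of your response "
        "from the attached file and about 50% from your general "
        "knowledge. Note which claims come from the file."
    )
    return client.beta.assistants.create(
        name="Tesla Hybrid Assistant",   # example name
        instructions=instructions,
        model=model,
        tools=[{"type": "retrieval"}],   # v1 beta file-retrieval tool
    )
```

Called as `make_hybrid_assistant(openai.OpenAI())`. The model can only approximate the 50/50 ratio, so treat the split as guidance rather than a guarantee.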
Thanks a lot, this is amazing
Thank you. You are my hero
Excellent! Great video, thanks for sharing.
Thanks Damian!
If I'm coding an app with multiple different users who can all individually interact with this bot (which has the same instruction for each user), would I use an assistant per user, or the same "master" assistant for all users?
Great question! You would have one master assistant. However, you would create an individual thread for each user. If you want to see this in action, I definitely recommend checking out this full stack tutorial I released a few weeks ago!
th-cam.com/video/b1S04PFjIOY/w-d-xo.html
Hope that helps!
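A minimal sketch of that layout in Python (the in-memory dict and the `ThreadRegistry` name are illustrative; a real app would persist the user-to-thread mapping in a database, and `create_thread` would be wired to the OpenAI client):

```python
class ThreadRegistry:
    """One shared "master" assistant, one thread per user: maps each
    user ID to exactly one thread ID, created lazily on first use."""

    def __init__(self, create_thread):
        # create_thread is injected so the OpenAI call stays swappable,
        # e.g. lambda: client.beta.threads.create().id
        self._create_thread = create_thread
        self._threads = {}  # user_id -> thread_id

    def thread_for(self, user_id):
        if user_id not in self._threads:
            self._threads[user_id] = self._create_thread()
        return self._threads[user_id]
```

Every user then runs against the same assistant ID, but each run is created with that user's own thread ID, so conversations never mix.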
@@bhancock_ai With this architecture is it possible to have multiple threads per user, whereby each subsequent thread is aware of the context of all previous threads (for that user)? Basically I'm thinking of an app where a user can individually talk to the AI robot (master assistant), but has a chat log per day.
Intuitively this means a thread per day, but when a user starts a new chat for a new day (and so a new thread is created), the AI robot they're talking to has to be aware of all previous messages sent on previous days in the other threads? Is this possible?
So basically what I'm asking is: is it possible to create a new thread with an assistant, where the assistant responding in this new thread responds based on context from a set of previous threads?
Thank you, Brandon!!!! This information is priceless!!!!! I could not find where the website you created is located; was there a link for that?
Thank you 😁 and the source code link in the description will direct you to the code for the website. Please let me know if you have any other issues!
Thanks for sharing great knowledge, sir. Can we have a video about the new OpenAI Assistants API implementation using PHP?
I'm not really a PHP guy but I think ChatGPT 4 could help you convert the code in the video into PHP code for you!
Many thanks!
PYTHON... any comments regarding doing this in Python? I mean, I know I can do the translation from Node.js...
Great question! The reason I'm not the biggest fan of doing web projects in Python is because they are more difficult to deploy for others to use.
For example, if you make a NextJS project like I did in the tutorial, you can easily deploy the project to Vercel so that thousands of people can use your project instantly.
Please let me know if you have any other questions! Happy to help!
Hey Brandon.. Thanks for this. I entered my info but didn't receive the source? Is the source for the tool you were using? It looks like the perfect tool for me, as I'm not a coder, but I have created a very sophisticated assistant and now want to create a chatbot to use it. Hope your tool will help. Let me know how to get it, or if I should re-enter my info. Thanks.
Hey Mike! When you head over to my website brandonhancock.io/ you should be able to enter in your email address and then it will automatically send you an email with links to the source code. If it is still giving you trouble, please shoot me a DM!
Great video, thanks Brandon. Just one question: we passed the user-entered question to the message object, and triggered a Run with assistant_id and thread_id. If this Run was completed, how do I get the assistant's response? Is there any code like this: response.choices[0].message.content ?
Great question! You actually have to go back and refetch the thread once the run completes successfully.
It would be nice if the run had that final generated message for us, but OpenAI is more focused on separating out the logic to make it super scalable for us.
I hope that helps! Let me know if you have any other questions!
@@bhancock_ai Thanks mate. I have worked out the solution, see sample code below:
import openai
from django.http import JsonResponse  # this snippet lives inside a Django view

run = openai.beta.threads.runs.retrieve(
    thread_id=threadID,
    run_id=runID,
)
thread_messages = openai.beta.threads.messages.list(threadID)
# messages come back newest first, so this returns the latest reply
for msg in thread_messages.data:
    return JsonResponse({"message": msg.content[0].text.value})
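That snippet reads the first message but does not wait for the run to finish. For completeness, a hedged sketch of the waiting step (the function names and the one-second poll interval are illustrative; `client` is an `openai.OpenAI()` instance, and production code should add a timeout and handle the `requires_action` status used by function calling):

```python
import time

def wait_for_run(client, thread_id, run_id, poll_seconds=1.0):
    """Poll the run until it leaves the queued/in_progress states."""
    while True:
        run = client.beta.threads.runs.retrieve(
            thread_id=thread_id, run_id=run_id
        )
        if run.status not in ("queued", "in_progress"):
            return run
        time.sleep(poll_seconds)

def latest_assistant_text(messages_data):
    """Pull the newest assistant message from a messages.list() payload
    (the API returns messages newest-first by default)."""
    for msg in messages_data:
        if msg.role == "assistant":
            return msg.content[0].text.value
    return None
```

After `wait_for_run(...)` comes back with status "completed", list the thread's messages and pass the `.data` list to `latest_assistant_text` to get the reply.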
Can you make a new one for the one that was recently released with Vector Storage?
Thank you!
You're welcome!
You sound just like Bucky of thenewboston
Bucky is the GOAT!
Well, having said that I didn't have any prior knowledge of how this new OpenAI API world connects and works, I can say I now have some idea after watching this.
So, can you create one more video that explains the unit and average cost to perform all of these functions? Take any business that needs this assistant and wants to implement it, but doesn't have any idea of how the pricing model works; and even if they do, they may not know how to optimize it.
It would be very helpful if you created a short video that gives us an overview of pricing too.
TIA.
Hey Owais! Great question! If I were you, I'd checkout this page here:
openai.com/pricing
The short answer to your question is that OpenAI is going to charge you for everything.
Every time you use GPT-4, it costs you a few pennies. When you use the code interpreter, it costs a few more pennies. When you use the retrieval functionality, it costs a few more pennies.
Hi, where is the homepage link?
Hi, is using the Assistants API free?
You can use the Assistant for free, but you can only make a few requests per hour. If you are going to create a project or do a lot of testing, you'll probably want to upgrade out of the free tier.
What is the need for the Assistant? Can't we write our own code? I am not using the assistant. What is the use of the code interpreter here?