Great video, thanks. Do you take on private clients to assist with implementing the above and other projects?
Thank you! And, right now I’m not doing any client work but if you need help with something specific, feel free to shoot me a DM on Twitter @bhancock_ai and I’d be happy to help!
🎯 Key Takeaways for quick navigation:
00:00 🎥 *The video covers five examples of OpenAI's new Assistant API technology, demonstrating practical applications for developers.*
00:29 📝 *Example 1: A YouTube description generator using the Assistant API, showcased by Wes. It creates video descriptions from provided transcripts quickly and efficiently.*
02:37 🔄 *Example 2: Versin introduces "Agent Swarms," where multiple assistants control each other. Demonstrated with agents fetching stock data and generating Python Plotly charts.*
03:45 💼 *Example 3: OpenAI's example of an assistant aiding personal finance tasks, working with a code interpreter to analyze expense data and generate charts based on user queries.*
05:09 🔄 *Example 4: Mvin demonstrates using the Assistant API with retrieval to create a second brain, simplifying data indexing and retrieval for programmers.*
05:51 🤖 *Example 5: A Telegram bot utilizing the Assistant API to answer user-submitted questions, with the creator providing the code on GitHub for reference.*
06:41 📚 *The video concludes, encouraging viewers to explore other AI-related content on the channel.*
Made with HARPA AI
Thank you for putting this summary together!
Can I use an assistant to do OCR (optical character recognition) to extract data from a screenshot?
You definitely can! You need to use one of the vision models to do that, but it's 100% possible!
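A minimal sketch of what that could look like with the OpenAI Python SDK, assuming a vision-capable model such as `gpt-4o` (the model name, prompt, and image URL are illustrative, and an `OPENAI_API_KEY` is needed to actually run the call):

```python
import os


def build_ocr_messages(image_url: str) -> list:
    """Build a chat payload asking a vision model to transcribe text in an image."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Extract all text visible in this screenshot."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]


if __name__ == "__main__" and os.getenv("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model works here
        messages=build_ocr_messages("https://example.com/screenshot.png"),
    )
    print(response.choices[0].message.content)
```

The message payload keeps the text instruction and the image side by side in one user turn, which is the shape the vision endpoints expect.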
Is there a way so I can use the assistant I made inside the code? I know I can retrieve the assistant but I also want to use it so I can implement it into my own GUI. Great video anyways!
Of course! In your own GUI, all you need to do is reference the ID of the assistant, and you can use it anywhere!
I hope that helps! If you still have questions, feel free to shoot me some screenshots on X (@bhancock_ai) with your questions and I'll be able to help you further!
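A rough sketch of reusing a saved assistant by its ID from your own code (the ID value and message are placeholders; assistant IDs from the API use an `asst_` prefix, and the guarded section needs an `OPENAI_API_KEY` to run):

```python
import os


def looks_like_assistant_id(value: str) -> bool:
    """Assistant IDs returned by the API use an 'asst_' prefix."""
    return value.startswith("asst_") and len(value) > len("asst_")


if __name__ == "__main__" and os.getenv("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    assistant_id = "asst_REPLACE_ME"  # paste the ID of the assistant you built
    assert looks_like_assistant_id(assistant_id)

    # Fetch the existing assistant, then talk to it on a fresh thread.
    assistant = client.beta.assistants.retrieve(assistant_id)
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content="Hello from my GUI!"
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant.id
    )
    print(run.status)
```

A GUI would typically keep the assistant ID in config and create one thread per conversation window.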
@@bhancock_ai It worked! :D Thank you!
great!
These don't seem "insane" per se. I have seen almost all of the vids, and most use Jupyter notebooks. They don't show how to wait for an async response and put it in a real app. Though I do like seeing them, because they put serious effort into learning the API, and THAT is what they show on YT. So I feel good lol. Like no one is talking about the tools that much, and only one other person has done it on YT that I have seen, and they still use a Jupyter notebook lol
I actually just uploaded a full tutorial on the Assistant API, so if you want to see how a real-world example works, I would definitely check out that tutorial!
Here’s the link: th-cam.com/video/-HrILeVJwMY/w-d-xo.htmlsi=Lw-J4of5m2HQ3cXy
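On the async point raised above: Assistant API runs are asynchronous, so an app has to poll the run until it reaches a terminal status. Here is a minimal, API-agnostic polling sketch; the `fetch_status` callback is a stand-in for a real call like `client.beta.threads.runs.retrieve(...).status`, and the terminal-status set is an assumption based on the documented run lifecycle:

```python
import time
from typing import Callable

# Run statuses we treat as "finished" (assumed from the run lifecycle).
TERMINAL_STATUSES = {"completed", "failed", "cancelled", "expired"}


def wait_for_run(fetch_status: Callable[[], str],
                 interval: float = 1.0,
                 timeout: float = 120.0) -> str:
    """Poll fetch_status() until the run reaches a terminal state or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval)  # back off between polls
    raise TimeoutError("run did not finish before the timeout")
```

In a real app you would call this off the UI thread, e.g. `wait_for_run(lambda: client.beta.threads.runs.retrieve(thread_id=t, run_id=r).status)`, then fetch the thread's messages once it returns `"completed"`.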