I would say function calling & structured outputs are the best things to come out of LLMs.
This was so valuable, I have a feeling this work was fundamentally required for o1, and we are now all benefiting 😊 This innovation potentially has the ability to upgrade every UX into an interoperable, lightweight and fast API, bringing context to all the data moving around, with agents able to exchange a cache of how to structurally interact with what they see. The fact that the index is a tree means that selecting the appropriate index cache can also be an inference step, opening up the ability to scale down the effort to compile novel versions, since most UX is derivative. I'm very impressed, and love the philosophy surrounding this work effort. The ending comments about our collective ability to see and make the future were lovely ❤
19:20 - not too far from agentic workflow that’s fully automated, this is promising.
25:28 - I just realised she's incredibly pleasant to listen to; her delivery is professional yet friendly.
It would be a huge improvement if we had Sam Altman replaced with her for all the important presentations.
36:07 - 🧐 cumbersome but I’ll wait until after I actually test it.
39:34 - the agentic improvement is seriously impressive.
40:19 - AGI is certainly shaping up to be positive if this is how we are going to get there.
40:39 - thank you 🙏🏻
Thanks for the erectile timeline. Get out of the basement once in a while.
Atty and Michelle, your presentation was tight and your delivery was spot on! From the perspective of a dev who needs convincing that this is worth using, Michelle's explanation at 25:00 is really key. FWIW, consider front-loading that in future explanations.
People missed this: OpenAI has not only showcased groundbreaking development in AI but also its talent from all walks of life. Ethnicity, gender, age; all balanced. I love how smart you are. Keep this burning 💥
13:20
16:13 - Can use function calls to control the client UI
39:29 - Agentic flows can work 100% of the time
Please open-source that fictitious Convex app, as I'd like to see how you made the generative UI work without pre-building components with that schema, or point me to documentation that describes the concept.
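While waiting for the source, here is a hypothetical TypeScript sketch of how schema-driven generative UI could work: the model returns a component tree via structured outputs, and the client maps component names onto a small registry of generic primitives. The UINode type, component names, and registry are assumptions for illustration, not the actual schema from the demo app.

```ts
import React from "react";

// Hypothetical shape of a component tree the model could return via
// structured outputs; not the schema used in the demo.
type UINode = {
  component: "card" | "text" | "button";
  props?: Record<string, string>;
  text?: string;
  children?: UINode[];
};

// Map schema component names onto a small set of generic primitives,
// so nothing has to be pre-built for any specific response.
const registry: Record<UINode["component"], React.ElementType> = {
  card: "div",
  text: "span",
  button: "button",
};

// Recursively render whatever tree the model produced.
function renderNode(node: UINode, key?: React.Key): React.ReactElement {
  const children =
    node.children?.map((child, i) => renderNode(child, i)) ?? node.text;
  return React.createElement(registry[node.component], { ...node.props, key }, children);
}

// Example tree a structured-outputs call might hand back:
const tree: UINode = {
  component: "card",
  children: [{ component: "text", text: "Order #42 has shipped" }],
};
export const view = renderNode(tree);
```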
Guys, we developers are the first to see the future, what an honor OpenAI has given us. Let's do it
thank you for this great work, as a developer I'm super excited to try them all :D
amazing presentation
They literally said "you will help us to reach AGI, thank you for building us."
Geniuses of presentation... you have to try really hard not to fall asleep.
Pretty much all current models work most of the time; sometimes they have some JSON formatting errors, but that's it. There are a lot of tools to extract JSON. Also, I normally use them to output multiple code blocks at the same time. The number of times I have triggered a format error with my JSON schema validator (I use ajv) can be counted on my fingers 😅. Most of the errors I see are from not adding quotation marks to the keys.
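For reference, a minimal sketch of the kind of ajv check described above; the schema and sample payload are made up for illustration.

```ts
import Ajv from "ajv";

// Hypothetical schema, purely for illustration.
const weatherSchema = {
  type: "object",
  properties: {
    city: { type: "string" },
    temperature_c: { type: "number" },
  },
  required: ["city", "temperature_c"],
  additionalProperties: false,
};

const ajv = new Ajv();
const validate = ajv.compile(weatherSchema);

// Pretend this string came back from the model.
const modelOutput = '{"city": "Paris", "temperature_c": 18.5}';
const parsed = JSON.parse(modelOutput);

if (validate(parsed)) {
  console.log("Schema-conforming output:", parsed);
} else {
  // Missing quotes around keys would surface earlier as a JSON.parse throw;
  // wrong types or missing fields show up here in validate.errors.
  console.error("Validation failed:", validate.errors);
}
```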
Great presentation! Good to know the insights, thank you.
Is this a new vid or recap on Dev day in Oct?
It's from Oct.
Why would we ever need the strict property to be false?
you wouldn't - but we needed to keep the option there for backwards compatibility with functions that existed before structured outputs was launched.
@nikunj-openai I had the same question, and I think @nikunj-openai gave a reasonable answer.
@nikunj-openai to not overfit it down the wrong thinking path with your prompts and assumptions. Or so it can be flexible and creative etc.
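For anyone following this thread, a minimal sketch of where the flag lives on a Chat Completions tool definition; the function name and parameters are hypothetical.

```ts
// Tool definition with structured outputs opted in via strict: true.
// Leaving strict off (or false) keeps the older, best-effort behaviour that
// pre-existing function definitions rely on.
const tools = [
  {
    type: "function",
    function: {
      name: "get_order_status",
      description: "Look up the status of an order by id",
      strict: true,
      parameters: {
        type: "object",
        properties: {
          order_id: { type: "string" },
        },
        required: ["order_id"],
        additionalProperties: false,
      },
    },
  },
];
```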
How is function calling implemented? Token mask?
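As I understand OpenAI's public write-up on structured outputs, yes, it is constrained decoding: a grammar derived from the schema masks out tokens that would break the format at each sampling step. A toy illustration of the masking idea (the token ids and logits below are made up):

```ts
type TokenId = number;

// Tokens the grammar does not allow at this step get their logit pushed to
// -Infinity, so they can never be sampled.
function maskLogits(logits: number[], allowed: Set<TokenId>): number[] {
  return logits.map((logit, id) => (allowed.has(id) ? logit : -Infinity));
}

function argmax(values: number[]): number {
  return values.reduce((best, v, i) => (v > values[best] ? i : best), 0);
}

// Example: suppose the grammar only permits token 3 ("{") or token 7 (whitespace)
// at this decoding step.
const allowedNow = new Set<TokenId>([3, 7]);
const rawLogits = [1.2, 4.5, 0.3, 2.1, 3.9, 0.1, 2.8, 1.0];
const nextToken = argmax(maskLogits(rawLogits, allowedNow)); // 3 (2.1 beats 1.0)
```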
2024 was all about chatbots and 2025 is all about AI Agents
I don't see why someone would use function calling over response format. Function calling seems like a subset of response format. If I get a response in the format I wanted, I can then use it to call a function or for any other use case.
Choosing from a set of well-defined, modular functions is often easier for the model than handling highly variable, context-dependent outputs, even with a fixed schema imposed.
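For what it's worth, the contrast in the Chat Completions API looks roughly like this (a sketch with a placeholder model name, prompt, and schema; not code from the video). With tools, the model decides whether to call the function and emits arguments for it; with response_format, the assistant message itself is constrained to the schema, but nothing gets dispatched for you.

```ts
import OpenAI from "openai";

const client = new OpenAI();

// Shared placeholder schema for both examples.
const citySchema = {
  type: "object",
  properties: { city: { type: "string" } },
  required: ["city"],
  additionalProperties: false,
};

async function main() {
  // Function calling: the model may choose the get_weather tool and return
  // arguments matching its schema in message.tool_calls.
  const withTools = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "What's the weather in Paris?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "get_weather",
          description: "Look up current weather for a city",
          strict: true,
          parameters: citySchema,
        },
      },
    ],
  });
  console.log(withTools.choices[0].message.tool_calls?.[0]?.function.arguments);

  // Response format: the assistant message content itself must match the schema.
  const withFormat = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Extract the city from: 'weather in Paris'" }],
    response_format: {
      type: "json_schema",
      json_schema: { name: "city_extraction", strict: true, schema: citySchema },
    },
  });
  console.log(withFormat.choices[0].message.content);
}

main();
```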
Am I getting something wrong? I thought these were already in place; what's new with this?
You could almost do the same talk today that you did yesterday and have it already be obsolete...
Crazy’s
What a boring presentation, get to the point.
Can you not use curry English?
first again😅😅😅😅😮
Nobody cares
Omg so cool