🎯 Key points for quick navigation:
00:05 *🎄 The session highlights OpenAI’s API success, with 2 million developers globally.*
00:29 *🎁 OpenAI o1 launches with function calling, structured outputs, developer messages, and vision inputs.*
02:47 *🛠️ Demo showcased tax form error detection using new API features like vision and structured outputs.*
08:13 *🧪 Evaluations show o1 surpasses GPT-4o in function calling, structured outputs, and coding tasks.*
09:33 *🚀 o1 is faster and more efficient, using 60% fewer thinking tokens than o1-preview.*
10:15 *🔊 Real-Time API now supports WebRTC, improving latency, audio quality, and reducing complexity.*
15:29 *💰 Audio token costs drop by 60%, with GPT-4o mini support announced.*
16:13 *🎨 Preference fine-tuning improves model alignment with user preferences, boosting performance (rough data-format sketch after this list).*
20:30 *💻 New SDKs for Go and Java simplify developer integration.*
21:52 *🤝 AMA session launched for live developer Q&A.*
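For the 16:13 preference fine-tuning point: as I understand it, you train on pairs of a preferred and a non-preferred response to the same prompt (DPO-style), so the model drifts toward the answers you like. A rough sketch of one JSONL training record; the field names are my assumption from the announcement, so check the fine-tuning docs before building on them:

```python
import json

# One hypothetical preference record: steer toward "preferred_output",
# away from "non_preferred_output". Field names are assumed, not confirmed.
record = {
    "input": {
        "messages": [
            {"role": "user", "content": "Summarize this tax form in two sentences."}
        ]
    },
    "preferred_output": [
        {"role": "assistant", "content": "Line 11 misapplies the $1,000 deduction; the rest checks out."}
    ],
    "non_preferred_output": [
        {"role": "assistant", "content": "The form looks fine."}
    ],
}

with open("preferences.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```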
thanks
using chatgpt to summarise the chatgpt announcement
Damn that’s pretty lame. Hopefully they’re saving the best for last.
Highlights ...
Introduction of New Features: OpenAI announces several new features and models to enhance developer experience on the API.
Function Calling: The new function calling capability allows models to interact with backend APIs seamlessly.
Structured Outputs: Developers can now define response formats using JSON schemas for more structured data output (rough sketch after this list).
Vision Inputs: The addition of vision inputs enables models to process images, enhancing applications in fields like manufacturing and science.
Performance Improvements: The new models show significant improvements in performance, speed, and cost efficiency compared to previous versions.
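To make the structured-outputs item above concrete: you hand the API a JSON schema and the response is constrained to match it. A rough sketch with the Python SDK; the schema and model name are placeholders (o1 access was tier-gated at launch), so treat it as an illustration rather than copy-paste:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical schema for a tax-form check: flag suspect lines on the form.
schema = {
    "name": "form_check",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "errors": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "line": {"type": "integer"},
                        "issue": {"type": "string"},
                    },
                    "required": ["line", "issue"],
                    "additionalProperties": False,
                },
            }
        },
        "required": ["errors"],
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="o1",  # assumes your account has o1 API access
    messages=[{"role": "user", "content": "Check this 1040 excerpt for errors: ..."}],
    response_format={"type": "json_schema", "json_schema": schema},
)
print(response.choices[0].message.content)  # JSON string matching the schema
```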
i literally can't understand what he's saying
Are you deaf?
Me neither, he's French
He is French, I believe, not hard to understand. GPT understands our typos so we have to adjust our listening skills as well! :D
skill issue
I'm tired of people being super excited, I long for the days that people were just regular excited
4:05 - So are we just going to completely ignore the massive hallucination in the first point here?
apparently. if i were him demoing this system, i would take 30 seconds to recognize that these things happen during complex analysis tasks and that developers need to be aware of them when deploying wrappers around the model in production. but i guess ignoring it works too. :/
Point 2 had a hallucination as well, it thought that line 10 said $0 instead of $1,000.
Yes 😌
yes we are
Am I missing something? Which hallucination? 1) seems to make sense to me?
I think you mean 2) claiming line 10 is $0, which it isn't.
Holiday wish list:
- Canvas management within mobile app
- Project management within mobile app
- o1 same file upload management as 4o
- Projects having more file options with o1 model
- Better naming conventions established for models 😂
- Actions: integrations with email, calendars, and files
- Agents!
God please put canvas in the mobile app
And pay the GPT subscription using WLD directly from the World App
I would add calendar integration on that list
Note that you can link your google/Microsoft accounts with your OpenAI account, so it can read data from those accounts. Not exactly integration, but it's something, I guess.
Projects have search
Clone chats
Continue threads in new chats
I restarted my cellphone like twice with the last video because I thought my speakers were broken or smth
"like" twice
@@endoflevelboss ??? whats the problem
Sound on livestream was awful, and comments were turned off!
Looks like one channel was out of phase with the other; it got flipped 180°
@@profahren8476 who like said like there was like er a problem?
I do not understand the Tier 5 requirement to use the o1 API. How are developers, during the testing phase, expected to properly test their product if they cannot access the model they ultimately aim to use, given that they cannot meet the usage threshold required to reach Tier 5? $1,000 in a month is an almost unachievable amount of testing!
PLEASE, PLEASE, I love that they are enabling o1 in the API, but they need to fix the hallucinations in the vision model. Sometimes I use the ChatGPT version to transcribe documents while preserving the content and layout, and it gets a lot of information completely wrong. For example, in bank statements, it simply changes the data, making it completely unreliable for my use case.
You’re comfortable using GPT for bank statements?
@ That’s part of my job; it saves me tons of hours a month. (When adding the images in parts, the model accuracy improves, but it takes time to do it “chunk by chunk.”)
Best of N approach might work. -> Compare 5 -> Pick best most consistent version. Only drawback is obviously the costs. But I'm not sure if caching would fix it. Also ReRead.
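A quick sketch of that best-of-N idea, in case anyone wants to try it: sample N transcriptions and keep the one that agrees most with the others. The model name and prompt are placeholders, and as noted it multiplies cost by N:

```python
from difflib import SequenceMatcher
from openai import OpenAI

client = OpenAI()
N = 5

def transcribe_once(image_url: str) -> str:
    """One vision transcription attempt (model and prompt are placeholders)."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Transcribe this statement exactly, preserving layout."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return resp.choices[0].message.content

def most_consistent(candidates: list[str]) -> str:
    """Keep the candidate with the highest average similarity to the others."""
    def avg_sim(i: int) -> float:
        others = [c for j, c in enumerate(candidates) if j != i]
        return sum(SequenceMatcher(None, candidates[i], o).ratio() for o in others) / len(others)
    return candidates[max(range(len(candidates)), key=avg_sim)]

candidates = [transcribe_once("https://example.com/statement.png") for _ in range(N)]
print(most_consistent(candidates))
```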
I think they sell data to advertisers, I've noticed my advertising changes based on what I use GPT for sometimes. Be careful.
Nobody even knows what API is shut up.
I'm literally listening to this while I'm driving to work for a full day and I can't wait to get home to play with it
Texting and Driving, heh, naughty you.
@b326yr voice to text. Think again.
@@tomgreen8246 ha, clever response.
Me too, been feeling myself lately
Here before comments get disabled!
They get disabled?
@@moose6781 No, he thinks that because during the premiere they deactivate chat and comments. But they are activated once the premiere is over, and the waiting intro of the stream got cut out. I was confused at first too.
Looks like you've hallucinated censorship.
Imagine supporting OpenAi since 2014 and their mission.... And then Sam decides that he actually does want a mega yacht. Totally fine and I would love him to have one - if he hadn't misled about the mission and purpose of OpenAi.
Love seeing the innovation but unfortunately I am more excited to see who will be the giant to stand on the shoulders of the once important OpenAi
Thank you for fixing the audio and reuploading
Oh wow, exactly the life-changing information I’ve been desperately waiting for.
better audio good AI
still not perfect, you can hear it peak every now and then
Exciting updates! The introduction of function calling and structured outputs in OpenAI o1 looks promising for developers. The reduced token costs and enhanced API performance will definitely make it easier for us to create more advanced applications. Can’t wait to experiment with the vision inputs and the preference fine-tuning feature! Looking forward to seeing how these improvements impact real-world projects!
They reuploaded with better audio, good work!
Here are the highlights:
🚀 What’s New for Developers?
Full o1 Release for API:
- Faster, more accurate, and cheaper than the previous o1 preview.
New Features Added for o1:
- Function Calling: Call APIs directly within your apps (rough sketch after this list).
- Structured Outputs: Define response formats, like JSON schema.
- Developer Messages: Improved system messages for steering model behavior.
- Reasoning Effort: Control how long the model "thinks" before responding.
- Vision Inputs: Process images via the API.
💡 Real-Time API Enhancements:
Build your own ChatGPT Voice Mode into your applications.
WebRTC Support: Enables real-time voice applications
- Cool Use Case: Connect a microcontroller (like Wi-Fi-enabled devices) to create tools like a custom Alexa or even a talking teddy bear!
📉 Cost Improvements:
- GPT-4o Audio Tokens: 60% cheaper.
- 4o Mini: 10x cheaper for audio tokens.
🛠 Other Announcements:
- Preference Fine-Tuning: Align models with user preferences for more tailored outputs.
- New SDKs: Support added for Go and Java.
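For the function-calling and reasoning-effort items above, roughly what the call looks like with the Python SDK. The tool is invented for illustration, and the reasoning_effort values ("low"/"medium"/"high") are my reading of the announcement, so verify against the docs:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical tool the model can decide to call; only the shape matters here.
tools = [{
    "type": "function",
    "function": {
        "name": "get_tax_bracket",
        "description": "Look up the marginal tax bracket for a taxable income.",
        "parameters": {
            "type": "object",
            "properties": {"taxable_income": {"type": "number"}},
            "required": ["taxable_income"],
        },
    },
}]

response = client.chat.completions.create(
    model="o1",                   # assumes o1 API access on your account
    reasoning_effort="low",       # assumed values: "low" | "medium" | "high"
    messages=[
        {"role": "developer", "content": "You are a careful tax assistant."},
        {"role": "user", "content": "What bracket is $9,250 of taxable income in?"},
    ],
    tools=tools,
)

# If the model chose to call the tool, the arguments arrive as a JSON string.
msg = response.choices[0].message
if msg.tool_calls:
    print(msg.tool_calls[0].function.name, msg.tool_calls[0].function.arguments)
else:
    print(msg.content)
```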
Any guesses on what that microcontroller was?
Most likely Seeed Studio XIAO ESP32C3
edit: corrected model name
Isn't line 11 actually off by $2,000? They were supposed to subtract 1,000 but added 1,000, so the model should say it's off by $2,000, not $1,000. Did they gloss over that error, or am I missing something here?
Asked chatgpt to translate. It crashed.
Thanks for making this feel more useful and accessible. Feels like Google is finally starting to get some things right.
I have the API, do I need a certain tier for this?
Oooohhh, 60% cheaper GPT-4o audio? Yes, please! 😁 I've wanted to play with it for a while, but my first test lasted 10 minutes (of test time, not audio time) and fully drained my credits. I'll have to do some math to figure out if it's now affordable enough for my use cases!
With the change from "system" to "developer", do I need to change my 4o message roles as well, or does that still understand "system" as a role as always?
No, developer messages are only for o1; gpt-4o and the other models (4 turbo, 4, 4o mini, even the 3.5 series) stay the same.
@@elprox1290 Cool, thanks!
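A tiny sketch of the difference, assuming the reply above is right: only o1 takes the new role, everything else keeps using "system".

```python
# o1 takes the new "developer" role for steering instructions...
o1_messages = [
    {"role": "developer", "content": "Answer in one sentence."},
    {"role": "user", "content": "What changed in today's announcement?"},
]

# ...while gpt-4o (and older models) keep the usual "system" role.
gpt4o_messages = [
    {"role": "system", "content": "Answer in one sentence."},
    {"role": "user", "content": "What changed in today's announcement?"},
]
```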
All nice and good but you guys really need to set proper audio lol
So proud to see the French represented in tech! Well done to you and your team, Olivier!
Thanks guys! Great material.
does it accept PDF though? I MOSTLY work in PDFs and have to pass all PDFs to Claude first, then ask GPT to parse the results into JSON. Would be nice if it just accepted PDF
13:01 Where is the code?
lol. Audio this time. Nice touch
Except they talk at 100mph
@@XAMPOL you can slow that down, but earlier they put out a version of this audio that literally had 0 sound lol
It was really a pain to integrate and use the real time api, so thank you for this update!
Folks, this Dev Day hits harder than my morning coffee! New features, vision for forms, and even WebRTC! I can already picture a microcontroller talking taxes… epic! Can’t wait to unleash productive chaos with these updates!
First I couldn't hear it.
Now I can't understand it.
I don't understand anything the French guy is saying at all
Wait, what? Am I getting this correctly? Yesterday the only thing you launched was o1 in the API, and today you also launch o1 in the API? Are there, like, two versions of o1 that I am not aware of?
You can’t ship anything based on this demo because you can’t store API keys on the front end at all. What’s the solution to this?
So far I just proxy every call through a backend. But it just seems so wasteful.
Why store keys on the frontend? The proxy seems the best option to me. Storing it on the frontend seems like you're just asking to have your key exposed.
If I was going to build a frontend I would use JWTs from login and then make validated calls to an API gateway for introspection and call routing from there with an always private API key.
@@TransmentalMe exactly. And that’s what I do, but in this demo they showed using the device’s audio device directly with the API in a way that you can’t possibly ship without leaking API keys. Unless there is some trendy secret management system that I don’t know about.
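For what it's worth, the pattern they described for WebRTC is exactly that: your backend mints a short-lived session token and only that token ever reaches the browser, never your real key. A minimal Flask sketch; the endpoint path, request body, and client_secret field are my reading of the announcement, so confirm against the Realtime API docs:

```python
import os
import requests
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/session", methods=["POST"])
def create_ephemeral_session():
    """Mint a short-lived Realtime session token; the real API key stays server-side."""
    resp = requests.post(
        "https://api.openai.com/v1/realtime/sessions",  # endpoint as described at launch (verify)
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-4o-realtime-preview", "voice": "verse"},
        timeout=10,
    )
    resp.raise_for_status()
    # The response should include a short-lived client_secret the browser can
    # use for the WebRTC handshake before it expires.
    return jsonify(resp.json()["client_secret"])

if __name__ == "__main__":
    app.run(port=8000)
```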
Underwhelmed. Let's hope they saved the exciting stuff for the last three days.
I mean, yeah, most likely.
Not everything is for you. This is great for devs.
Well, this day for me was the most exciting, apart from the o1 launch and Sora
So if I got this right short version is we got API access to 401 model. They drastically simplified real-time voice to voice integration so you can make anything with a speaker an ai voice. They reduced token cost by 60% for 01 and 90% ish 401 preview. And they offered another API preferential fine-tuning where you can say make stuff more like a not like b even though both technically answer the question, where you provide the A and the b as training data.
I feel like there is some other stuff like maybe what is rolling out and what isn't but, I mean it seems like it'll be easier to do the voice to voice stuff I've never mess with API but I don't know maybe I will
tf is 401? It's o1
@@huk2617i guess he means 4o
Stop pretending like you know what you’re talking about just to complain. You don’t even know what the name of the model is
@@epicboy330 Wrote it using voice to text. If it's wrong, then write your own summary. Wrote it because I didn't see a summary posted yet and wanted one myself. Geez dude. Chill.
@@huk2617 Wrote using voice to text. Didn't care enough to to edit. Still don't.
Holy smokes thank you for the webrtc. Saved me so much work.
Color prompt text purple and responses in black or white (depending on contrast settings). It’s so annoying when scrolling up to previous prompts and not quickly seeing where the prompt is, especially when responses are long and the prompts get buried.
They said they released search for o1, but it doesn't let me use it, and I pay for Pro. I checked both my desktop and mobile versions. Anyone else have this issue?
Tried the code but it didn't work.
Did anyone else get it to work?
Just basic improvements ->
- Add a pause button on voice chat (To avoid consuming daily limit)
- Let us write while the AI answers in voice mode
- Allow voice chat with the Projects documents feature
Maybe hard one ->
- Despite Apple and its high-security features, I am still wondering why there is no screen-sharing option in ChatGPT.
- Please add an extension to use GPT on other websites/apps
Please Also add:
- Why are the text and the whole interface black and white? Allow GPT to write text in different colors, with highlights for different topics, and add a collapsing feature for the topics ChatGPT gives.
- For every conversation page between the user and GPT, please add a section somewhere on the page that quickly shows what the conversation is about; it is hard to remember old conversations at first glance.
I have gone well past my daily limit by not talking for a while but keeping the chat going. I think it already only counts when you or the model are talking
I wish there were a middle tier. $200 is a bit steep.
to me the actual future will be when you can use voice and video and big models without worrying about limits. I really hope someone builds hardware that consumes much less energy. Otherwise it just becomes stress on choosing what to pay and what to say on each session, balancing on the line between getting what you need to work better, and ignoring what is just a toy that looks cool. It is the difference between a happy life where everything is easier, and a world where you are constantly trying to compete with others to get the most profit out of this tool, not enjoying it at all. I really hope the world doesn't become like the latter.
Great stuff, I just did my German tax return :) You guys are excellent :)🤩
SearchGPT with o1 and file upload, and maybe a filter selection mode, even if it is only for the $200 plan
She's a smart strawberry...
I can’t understand anything from the right side guy
We have the same handle?
AGENTS, Demo AGI and 4.5 ?
This is cool especially the realtime API.
Is there an API to translate him in real time?
Are the strawberry shirts some kind of hint for tomorrow?
Integrate everything into o1 pro
Projects with context across chats please. Not just a project custom GPT with chat organization, but context across chats.
Can't wait to use this ❤
is anyone else hoping they will announce GPT‘s will be able to interact using advanced voice mode and/or o1?
How to get api
finally , great drop today 👌
Wow audio is available! must be teasing gpt4o native audio generation
What do you mean by audio is available? What audio and where is it available
Hmm, my previous comment got deleted. Not sure why. Just sharing that I don't seem to have access to o1 on tier 4, my models["list"] only shows o1-preview and o1-mini. Also, they respond to requests that 4o is happy to honour with a 400, so I guess not API-compatible. Maybe this is rolling out globally over time?
I have confirmed that some friends with tier 5 have it, so I guess it's a tiers issue.
11:49 there is no way this is your setup bro like how do you live with yourself knowing thats your theme
You are talking about cheaper prices. Cheaper prices are always right. Thanks.
Please change the searchgpt button back to blue in the app like web
Batch API?
Oh hell yeah SDK for Java baby let's go
How is this new?
Thanks for the demo, exciting stuff. Only request would be to speak openly about hallucinations. As other people mentioned, the first output at 4:05 seems to be incorrect. Speaking about it openly would be preferred for me instead of ignoring it.
Biggest shocker in this demo is that the taxable income of an OpenAI engineer is $9250
If only OpenAI was truly Open.
amazing ❤
FINALLY they make realtime cheaper!!! Although after testing gpt-4o-mini-realtime-preview, it can't do accents, unlike its larger counterpart gpt-4o-realtime-preview
Where gpt5?
Damn, now that's what im talking about.
why was one guy talking in mandarin
I feel that this was a huge deal and not understanding technical basics makes me miss the point 😅
Chatpt is the best AI!
Release search API!
I liked the earlier audio
Dude Google is crushing you guys
You’re either sick in the head or you haven’t actually used Gemini
@@epicboy330 Try Gemini 2.0 Flash Experimental in AI Studio my man.. It's actually insane. Otherwise, I would've doubled down on your statement with you cause I thought the same too before using that. It's taking on full on videos minute lengths, audio, images and files at the same time and providing ridiculous value in terms of multimodal and contextual reasoning comparing to when I use o1 Pro.
You’re either sick in the head or you haven’t actually used Veo 2 and Imagen 3
@@epicboy330 he's obviously speaking about recent developments, like gemini 2.0.
On the 9th day of Christmas, what OpenAI took away from me was model o1, because I hit my limit and it told me I would need to buy the $200 Pro plan if I wanted to use it. Good going, guys; been a long-time subscriber. A guy with the reindeer horns and a guy with a strawberry shirt, LOL. I guess every circus needs some clowns
Strawberries hints almost everyday. Maybe we'll finally have Q* on day 12?
I thought o1 was supposed to be strawberry
Bring back teddy ruxpin
Speaking Alien language Detected 😂
I love the project strawberry
Dunning Kruger effect unravelled by bad audio.
How on Earth do you guys justify that you cannot use faces with the standard version of Sora, but you can with the Pro plan????? Are Pro users more responsible users????
Audio's fine for me.
How about... SOME COAL??
Price drops are cool.
The new o1 is just a downgrade of o1-preview... Really sad, honestly, but I guess understandable
shoutout to Day[9]
Spotted a Vimium user!
Make collaboration on projects with paid and non paid account.
Day 9 feels like a complete waste of time. Why are we getting long-winded videos for features that every tech company is already implementing? WebRTC support? Cool, but not groundbreaking; it's industry standard at this point. Stop hyping basics as if they're revolutionary.
Time is money, and these videos are dragging on without offering real value. If OpenAI can’t bring meaningful innovation to the table, it’s better to skip the fluff entirely. Focus on real advancements that actually matter to developers and users. We deserve better.
What the heck he's saying?
Yeah!
I thought chat gpt would remain at the top of the game but it looks like Google is destroying them. Sora actually sucks big time. Should’ve released it to the public sooner so they could have gathered more feedback to improve, but instead they kept it behind closed doors. Maybe they’ll learn their lesson next time
i just want to edit messages with photos
Strawberry soon 🔜
Using AI to do taxes is such a GROSSLY inefficient use of time and resources