For the full show, and others like it, check out Big Technology Podcast in your app of choice. Thanks! Spotify: spoti.fi/32aZGZx
Apple: apple.co/3AebxCK
Etc. pod.link/1522960417/
o1 is an intermediary model focused on STEM and reasoning. When OpenAI introduced vision, they first added it as a separate capability in GPT-4V; later they introduced a new model, GPT-4o, with multimodality built in natively. The same will most likely happen with the o1 series. It is a necessary preparation for GPT-5, which will incorporate o1's reasoning skills natively. OpenAI is still the leader in pushing the extremely complex, disruptive technology of generative AI. They have my utmost respect.
thank you and your guest for the informative discussion
If you're doing silly stuff like counting the r's in "strawberry" or asking how to break into a car, then yes, 100% it isn't impressive. But used in real workflows it does work better.
The general vibe I have been getting from the AI communities is "good, not great yet." As someone not from a technology profession, my mind is still being blown by everything, even before o1. I'm playing catch-up with some things, but I love it. Very exciting things to come.
Thank you, both! Excellent work Alex and Parmy; much appreciated.
Thank you, Lee!
Everyone's scared to be the AI optimist. "It won't impact EBITDA and still hallucinates..." We're quickly gonna move past this era, and the doubters will be left scratching their heads, IMO. Good on Alex for "buying the hype" when he'd previously been critical. Fluid thinking like this will be increasingly important.
I tried both ChatGPT-4o and strawberry (o1) on a research-level statistical problem that required both derivation and reasoning. Both did the derivation correctly (strawberry was faster), but neither could get the reasoning part completely correct. Basically, both gave me the same answer.
Wow, strawberry was faster even with the thinking time?
According to the team that built it, o1 is not the full model. It's a preview of what's coming.
My analogy is the calculator. There is the normal mode for most people and the scientific mode for detailed work. ChatGPT 4o for my general/creative queries and OpenAI o1 for trying to understand Einstein's General Theory of Relativity. Either way in my case it's all about learning.
This is an excellent analogy! Best I’ve heard so far :)
Which company is saying they are happy with what has already been put out there? Is this lady even aware of what she is saying?
What a waste of my time. Does she even know anything about AI to be on this show? Disappointing.
geez chill
NO!
So this model is worse at certain tasks than previous models? I think you guys are wasting time rather than saving time on research.
Yes, OpenAI’s release paper said it was preferred by people using it for math, data, and science while GPT-4o was preferred for writing tasks. I wrote a bit more about it here: www.bigtechnology.com/p/is-openais-new-o1-model-the-big-step?r=14e7&
So AI is not a single monolithic thing. If you look at a typical organization, you'll see that there are people who are there because they specialize in the numbers, and other people who are there because they specialize in putting stuff into words. What we have with the latest development is a marked increase in ability with numbers and problem solving. You can put this alongside the LLMs that have been there all along and are good at words, and now they don't have to just b*******: you can ask them to describe the number stuff that has been worked out by the reasoning model, and so you have a team of AI capabilities.
@Alex.kantrowitz I think consumer products and capabilities, from OpenAI's (or any frontier AI lab's) perspective, are just a frustrating distraction.
There is nothing more important (to these companies) than building a human-level AI programmer, because an army of human-level programmers is by far the biggest force multiplier.
Everything else is a waste of time.
Imagine spending a year or two working on voice, Sora, and a better creative writer while Google DeepMind puts all of its efforts into building a capable (possibly superhuman) programmer?
Or vice versa. Does Google really care about their AI enterprise offerings if OpenAI builds an AI that makes most traditional enterprise software (and the workers using it) obsolete?
I think we may be naive here, and not seeing what’s actually at stake.
It’s like building slingshots while your adversary builds a nuke. A nuke that can also build better slingshots.
So there is nothing more important than building an AI that can build better AI. Whoever achieves that solves all of their other challenges. And no AI company can afford to let another build it first. All of these consumer-facing products and capabilities are just crumbs to keep us interested and invested, IMHO.
This lady knows nothing about AI and only cares about paparazzi. What's wrong with her?