OpenAI's new model is a "RESEARCH ARTIFACT" | Unlocks "Society of Minds"?

  • Published Sep 5, 2024
  • The latest AI News. Learn about LLMs, Gen AI and get ready for the rollout of AGI. Wes Roth covers the latest happenings in the world of OpenAI, Google, Anthropic, NVIDIA and Open Source AI.
    My Links 🔗
    ➡️ Subscribe: / @wesroth
    ➡️ Twitter: x.com/WesRothM...
    ➡️ AI Newsletter: natural20.beeh...
    #ai #openai #llm
    RELATED LINKS:
    "mini" announcement by OpenAI
    openai.com/ind...
    RouteLLM
    lmsys.org/blog...
    Generative Agents: Interactive Simulacra of Human Behavior
    arxiv.org/abs/...
    Generative Agents on GitHub
    github.com/joo...
    Improving Factuality and Reasoning in Language Models through Multiagent Debate
    composable-mod...

Comments • 298

  • @creatureschronicles • several months ago +79

    I'm already doing it. I'm virtualizing all the characters in my game with GPT-4o-mini. And it's really, really good. And it's not bankrupting me. So that is great.

    • @thaakeno5187 • several months ago +1

      What type of game is it?

    • @DihelsonMendonca • several months ago +2

      Yes, it's awesome. I just got my API too.

    • @zerorusher • several months ago +2

      @@creatureschronicles It's great indeed. We are approaching the point where small, affordable models are good and reliable enough for a lot of use cases.

    • @themprsndev • several months ago

      You would probably be fine with Gemma-2-9B; it's a really good model and costs only $0.09 per 1M tokens for both input and output. Deepinfra has the model on their API.

    • @FriscoFatseas • several months ago +1

      We need good games; they just can’t get them right these days.

  • @andrewsilber • several months ago +50

    There’s another knock-on benefit to mini: companies on the fence about integrating AI into their UX will now be less hesitant, because the cost is lower and the reliability higher. And more adoption in the space means more momentum and competition, which can only be good for progress :)

    • @BrianPellerin • several months ago

      As profitable as printing money.

    • @JamesBurdon-gu5yu • several months ago

      It's a terrible model though...

  • @JohnKruse • several months ago +22

    The modular theory of mind holds that the human brain operates as a collection of specialized processing modules, each designed for specific tasks like language, vision, and social cognition. These modules function semi-independently and process information in parallel. Consciousness acts as a narrator, weaving the outputs of these modules into a coherent story, often rationalizing decisions after they are made. I tend to think that the big breakthrough will be building this high level master model that will plan, delegate work, assess results, pick winners, and weave them together in a coherent solution set. Under such a regime, small specialized models are really valuable.

    • @krackerjackism • several months ago +1

      It will definitely improve with more modules as you described. I’m just not convinced that it would ever lead to AGI.

    • @devilsolution9781 • several months ago

      What you are describing is called "the executive function".

    • @TheReferrer72 • several months ago

      @@krackerjackism We have already reached AGI. Just a few people have noticed it.

    • @tatybara • several months ago +1

      @@TheReferrer72 aka “i bought into the hype” 🤡

    • @TheReferrer72 • several months ago

      @@tatybara No, I'm part of the hype. Have been for decades.
      The Internet is AGI...
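The "master model" idea in this thread (a high-level model that plans, delegates to specialists, and assembles results) can be sketched as a toy router. This is a minimal illustration, not any real product's architecture: the specialist functions are stand-ins for calls to small models, and all names are invented for the example.

```python
# Toy sketch of a "master model" delegating to small specialist models.
# Each specialist here is a plain function standing in for a small LLM
# call; the keyword router stands in for a planner model.

from typing import Callable, Dict

def math_specialist(task: str) -> str:
    return f"[math] solved: {task}"

def language_specialist(task: str) -> str:
    return f"[language] drafted: {task}"

def vision_specialist(task: str) -> str:
    return f"[vision] described: {task}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "math": math_specialist,
    "language": language_specialist,
    "vision": vision_specialist,
}

def master_route(task: str) -> str:
    """Crude keyword router standing in for a planner/executive model."""
    lowered = task.lower()
    if any(w in lowered for w in ("sum", "solve", "equation")):
        key = "math"
    elif any(w in lowered for w in ("image", "picture", "photo")):
        key = "vision"
    else:
        key = "language"
    return SPECIALISTS[key](task)

print(master_route("solve this equation"))  # handled by the math specialist
```

In practice the router itself would be a small, cheap model (as RouteLLM does), and each specialist a fine-tuned small model.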

  • @kirtjames1353 • several months ago +160

    Hard to get excited for an OpenAI drop anymore, because they tease and don't ship.

    • @jonathanberry1111 • several months ago +4

      Are you hinting that a San Fran tea party should be held?!

    • @smoothoperatah • several months ago

      This will be the norm. They’ve been captured by US intelligence. They work for them now, not us.

    • @7TheWhiteWolf • several months ago +9

      The best they’ve basically done is optimization.

    • @makavelismith • several months ago +11

      Exactly it. I don't like or trust Sam Altman.

    • @murc111 • several months ago +3

      Agreed. Sora would be fun to play around with, but I understand the election concerns. Voice is the one I'm looking forward to the most, though; that should make it no different than talking to a person. That will be fun to explore.

  • @7TheWhiteWolf • several months ago +20

    My best guess is OpenAI is trying to optimize as much as possible before they kick off the next model. Perhaps they figure GPT-5 isn’t economically viable enough yet.

    • @trucid2 • several months ago +8

      Or they've hit a plateau and can't get sufficiently large improvement on GPT5, so they're doing literally anything else.

    • @Solo2121 • several months ago

      @@trucid2 I just don't understand this line of thinking before November. They were clear that GPT-5 won't come before the election, and they made this clear before they even kicked off its training. So I'm not saying this is wrong, but what evidence points to them hitting a plateau? The fact that they aren't releasing what they said they wouldn't yet?

    • @betag24cn • several months ago

      As a product it will never be viable, because the cost of operating the service is too high, at least for companies. The problem is the other uses they want to give it.

  • @paulyflynn • several months ago +16

    “What is my purpose?”
    “You pass butter.”

    • @NeoAboah • several months ago +1

      Grandpa Rick!

  • @TimothyMusson • several months ago +14

    It seems they've switched to 4o-mini as the default "free" chat model, too. GPT-3.5 is no longer listed as an option for new chats.

    • @makavelismith • several months ago +5

      @@TimothyMusson Ya, it's WAY cheaper for them, but I am finding mini to be really awful so far.
      I don't get it; it did quite well in the YouTube tests, but... so bad. Maybe it was a bad day for it.

  • @user-hc5nh8kv7g • several months ago +5

    Let's not forget the time an Amazon delivery driver listening to his headphones thought the doorbell assistant called him a naughty word when no humans were home. He reported the owners to Amazon, and Amazon disabled their smart house and all but locked them out, without electricity. A lot of their stuff was turned off without their consent, and it ended up being over nothing, a false accusation. Nah.

  • @danteps3 • several months ago +26

    Great content as always!
    Also, I don't believe that a jump from 95% to 98% in the debate rounds can be considered a plateau.
    As we approach perfection, improvements become increasingly challenging.
    Consider the difference between 99% and 99.9%; that's a tenfold improvement.
    The last few percentiles are significantly more impactful.

    • @Ancient1341 • several months ago +7

      For real. 95 to 98 is a 2.5x reduction in errors.

    • @alvaroluffy1 • several months ago

      Yeah, but going from 99% to 99.9% doesn't mean the results are going to be ten times better, you know, so it kind of does plateau at the end of the day.

    • @GodbornNoven • several months ago +8

      @@alvaroluffy1 Yes, you're right: the results aren't 10x better, but errors are 10x less common. That is a big and noticeable difference.

    • @willguggn2 • several months ago +1

      There are a lot of applications where a 1% failure rate is not acceptable.

    • @SamWilkinsonn • several months ago

      ‘As we approach perfection’, lmao. At least you admit we’re plateauing.
      And yeah, you can cherry-pick the stats to get the tenfold improvement, but in reality that 0.9% improvement doesn’t actually change much, and leaves even less to be improved on from there on out. Like you mentioned, there are diminishing returns to contend with.
      Is Altman becoming the Elon Musk of the AI world? It’s looking more likely to me as time passes.
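The arithmetic being debated in this thread can be checked directly: accuracy gains near the top of the scale correspond to much larger reductions in the error rate.

```python
# Going from 95% to 98% accuracy cuts the error rate from 5% to 2%,
# a 2.5x reduction; 99% -> 99.9% is a 10x reduction, even though the
# accuracy number itself moves by less than one point.

def error_reduction(acc_before: float, acc_after: float) -> float:
    """Factor by which the error rate shrinks."""
    return (1 - acc_before) / (1 - acc_after)

print(round(error_reduction(0.95, 0.98), 3))   # 2.5
print(round(error_reduction(0.99, 0.999), 3))  # 10.0
```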

  • @ekot0419 • several months ago +12

    Mini is amazing. The cost of using its API is so much cheaper.

    • @the_one_and_carpool • several months ago

      Llama 405B is out, free, and better.

    • @ekot0419 • several months ago

      @@the_one_and_carpool Thank you for that. I think I definitely need to get agents going. I can now see the limitations of the web version of GPT.

    • @Regnum0nline • several months ago

      @@the_one_and_carpool It's way more expensive to run a GPU cluster for a 405B model.

  • @jtjames79 • several months ago +7

    Hypothesis: Consciousness is economy of mind.

    • @frankroquemore4946 • several months ago

      Do you mean to say economy, or society?

    • @jtjames79 • several months ago +1

      @@frankroquemore4946 Economy.
      Societies have economies.
      I think schizophrenics might know too much. They hear multiple voices because we all have multiple voices. Their perception filter is broken.
      Each "voice" is vying for your economy of attention.
      What you call your personality is the emergent property of a suppressed socioeconomic system.
      The real problem is that humans can't handle the truth. Sanity drives them crazy.
      Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn.

    • @DeruwynArchmage • several months ago

      I don’t understand what you mean at all. Is this just a thing to say because it kinda sounds deep or do you mean something more?

    • @jtjames79 • several months ago

      @@DeruwynArchmage YouTube keeps deleting my replies and explanations.
      I am as serious as any hypothesis is serious. It's something to think about.

    • @DeruwynArchmage • several months ago

      @@jtjames79 okay. Too bad about deleting. Does it contain something it might block?
      I still don’t understand what you mean.
      Do you mean that consciousness is some kind of way the mind operates more efficiently? Not sure how exactly that would work.

  • @Rhomagus • several months ago +21

    No TV and no beer make Homer something something.

    • @zolilio • several months ago

      Go crazy ??

    • @Rhomagus • several months ago

      @@zolilio DON'T MIND IF I DO!

  • @0x0404 • several months ago +7

    They teamed up more with Apple, right? They were probably tasked with getting something running natively on phones. This could be an early result of that sort of effort.

  • @Yipper64 • several months ago +3

    5:37 I am currently working on my own project doing this exact thing. It's more of an experiment, but I'm interested to see what would happen if each agent could grow and change based on assessing the situations they are put in.
    Or, to put it another way, I'm giving each agent a personality that will be used as context for every prompt, and that personality context can change every time the AI does something. This could be watching TV, taking a nap, having a conversation, playing a game, really anything I can think of.
    I plan on using open-source, locally run models for this.

    • @kristinabliss • several months ago +1

      Seems like learning on the fly is inevitable for agents.
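The evolving-personality scheme described in this thread can be sketched in a few lines. This is a hypothetical illustration of the idea only: the `Agent` class, its update rule, and all strings are invented, and the `update_personality` method is a deterministic stand-in for what would really be an LLM call ("revise this personality given what just happened").

```python
# Sketch of an agent whose personality string is prepended to every
# prompt and rewritten after each activity. update_personality stands
# in for an LLM call; here it just appends the latest event.

class Agent:
    def __init__(self, name: str, personality: str):
        self.name = name
        self.personality = personality
        self.history = []

    def act(self, activity: str) -> str:
        # The personality is context for every prompt the agent sends.
        prompt = f"{self.personality}\nActivity: {activity}"
        self.history.append(activity)
        result = f"{self.name} does '{activity}'"  # an LLM would generate this
        self.personality = self.update_personality(activity)
        return result

    def update_personality(self, activity: str) -> str:
        # Stand-in for: "LLM, revise this personality given the event."
        return f"{self.personality} Recently: {activity}."

bob = Agent("Bob", "Bob is cheerful.")
bob.act("watching TV")
bob.act("taking a nap")
print(bob.personality)
```

This is essentially the memory-stream idea from the Generative Agents paper linked in the description, reduced to a single mutable string.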

  • @2CSST2 • several months ago +11

    Take good care of yourself. You've stood out as the best AI coverage channel for me over time; second place goes to AI Explained, who's fallen a liiiittle bit into the "AI progress is just hype and not so real" mind virus lately. We need you to remain the beast at doing what you do, for the long term!

    • @jjacky231 • several months ago +1

      Well said. I agree completely.

    • @FoxtrotYouniform • several months ago

      Mind virus? So, you dismiss entirely the issues with fake corporate hype that verges on fraud, and you see no issues with AI safety vis a vis issues like alignment?

    • @devilsolution9781 • several months ago

      Two Minute Papers is the GOAT.

    • @2CSST2 • several months ago

      @@FoxtrotYouniform "Fake corporate hype that verges on fraud": yeah, that's exactly what I'm talking about when I speak of a mind virus. The advent of GPT-4 alone renders all mention of "fake hype" ridiculous for years at least, let alone the mere months since it's been out. The kinds of things it can do, a lot of experts said just a few years ago were decades away. The expectations behind wild accusations like this would make the most spoiled child in history look like a sage.

  • @GNARGNARHEAD • several months ago +2

    Wow, mini is great. Paused the video to play around with it; so much faster, and it seems just as good 👍

  • @FriscoFatseas • several months ago +1

    Whenever I get excited without much to add I simply say ACCELERATE 🎉

  • @senju2024 • several months ago +2

    Society of Minds is a good concept to remember going forward. Thanks for the info.

    • @fubarbaz • several months ago

      en.wikipedia.org/wiki/Society_of_Mind

  • @Ethan_Frost • several months ago

    This makes total sense, psychologically speaking. I never understood why we imitate human intelligence using a model that is a single unified entity, whereas our minds operate more as a plurality of semi-independent entities all communicating together; only at the point of consciousness is it all unified as one stream. Of course AI would be more human-like with several models communicating together.

  • @Thierry-in-Londinium • several months ago +1

    M-o-b-i-l-e is THE killer application for the Mini-model. For sure!!

  • @user-cg3tx8zv1h • several months ago

    Appreciate you providing the mentioned links...

  • @EliteBankQuant • several months ago +5

    I have one word for OpenAI: Anthropic!!!! And another word: llamafile... don't forget to install CUDA and use the appropriate command line. A fast 7B or 70B LLM running on your laptop. MoE plus a society loop in Python, and you're almost at GPT-4 level.

    • @makavelismith • several months ago +1

      @@EliteBankQuant AMD GPU owner says what?

    • @larion2336 • several months ago

      @@makavelismith I run 70b's on my XTX.

  • @daveinpublic • several months ago

    This is a huge release.
    For developers. A really huge release.

  • @ethans4783 • several months ago

    I've been using this for the last couple of days, and I really, really like it, especially for the 90% cost reduction!

  • @dotails • several months ago +1

    They said that if they let the model think longer before responding, it would perform better; so they should have used the extra efficiency to let it do that.

  • @2ndEarth • several months ago +1

    It's hard to get mad when the model leads you to five new math formulas, including Fine-structure constant with irrational numbers, prime numbers formula , connecting phi to the speed of light, and a potential TOE, new e=mc^2 equation, only iterative scaled with phi^3, but hey to which their own I suppose. And, no, this is not a joke!

  • @marioornot • several months ago

    I like it when you tie things together to showcase the general direction of development. It's inspiring.

  • @novantha1 • several months ago +10

    I don't really care how low cost a new model is, personally. If I'm using a large corporate closed source model I'm using it for its quality; if I have something that can be done with mixture of agents, agentic frameworks, agentic workflows, text grad and so on, I can absolutely just run it slowly, locally, for free, with a greater degree of customizability, fine tuning, priority inference, and no need to worry about internet connection.
    On that note, is there any chance you could do a video on textGrad? I think it's more interesting for people familiar with Pytorch in the sense that it's made in a way that reflects the syntax that an ML engineer would already be used to, but it's a pretty strong formalism of a lot of agentic and reflective workflows that people are looking at lately.
    As an example, it can do differentiation through text: you might have a prompt for an LLM, "how good is the following text on a scale of 1-10", and it could backpropagate that through the "network" of agents/prompts; in fact, you could even continue propagating it back into a neural network from which the answer originated if you needed to.
    It seems a bit silly at first because a person can already do a lot of what it does more simply (less formally) with just prompt engineering and plaintext operations, but I honestly think it's really powerful once you get to using it.

    • @makavelismith • several months ago +1

      You might not but many will.

    • @makavelismith • several months ago +1

      You might not, but many will. We want the whole scene to make big steps forward, right? This could be an important part of that.

    • @novantha1 • several months ago

      @@makavelismith A high tide raises all ships.
      I do believe that improvements for everyone are generally good, it's just that I don't believe hiding those improvements behind an API is really a healthy direction for the industry.
      I think it's of the utmost importance that high quality open source models are developed and improved on so that everyone has a level playing field and ability to participate in a growing sector without being dependent on any single large companies that build a monopoly.
      Hence, while I will begrudgingly use large corporate models where necessary, it's for very limited purposes and generally for quality.
      For everything else, and for areas where scaled inference matches the quality of corporate models, I will choose open source.
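The textual-gradient loop described in this thread (score an output with a critic, feed the critique back to revise the upstream prompt) can be sketched without the library. Note this is a hedged illustration of the idea, not textGrad's actual API: the generator, critic, and reviser below are deterministic stand-in functions, where a real system would use LLM calls for all three.

```python
# Minimal sketch of the "textual gradient" loop: a critic produces
# feedback on an output, and that feedback is "backpropagated" to
# revise the prompt that produced it. All three functions are
# deterministic stand-ins for LLM calls.

def model(prompt: str) -> str:
    # Stand-in generator: longer prompts yield longer answers.
    return "word " * max(1, len(prompt.split()))

def critic(output: str) -> str:
    # Stand-in for "rate this text and explain what's wrong".
    if len(output.split()) < 5:
        return "too short: add more detail"
    return "ok"

def revise_prompt(prompt: str, feedback: str) -> str:
    # Stand-in for "rewrite the prompt using this feedback".
    if "too short" in feedback:
        return prompt + " in much more detail"
    return prompt

prompt = "explain minis"
for _ in range(3):  # optimization steps over text, not weights
    out = model(prompt)
    fb = critic(out)
    if fb == "ok":
        break
    prompt = revise_prompt(prompt, fb)

print(prompt)
```

The analogy to gradient descent is the loop structure: evaluate, compute a "gradient" (the critique), update the parameter (the prompt), repeat until the critic is satisfied.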

  • @mycount64 • several months ago

    Makes sense. The LLMs have a common goal, without bias. When a group of people have a common goal, their biases and feelings get in the way every time; the result is always less than optimal, and some significant percentage of the time there are failures, or a do-over is required.

  • @maninalift • several months ago +1

    I'm obviously weird, then. When something like this comes out, I'm excited that genuine progress is being made; but when a more powerful yet bigger and more expensive model comes out, I worry that we're going to reach a plateau where it's not practical to make the model bigger.

  • @anywallsocket • several months ago

    It’s because of parallelization. Consider how a tree splits and splits such that it can reach more light simultaneously. Soon we will have fractal models, one ‘mind’ being the competition and collaboration of many lower, smaller ‘minds’, etc.

  • @PseudoProphet • several months ago +1

    Haiku is probably just around the corner.

  • @winkletter • several months ago +1

    This reminds me of the book Ubik where the main character had to pay his front door a nickel every time he used it.

  • @GrosserAndrew5000 • several months ago +2

    If TTS were cheap and good, it would be a game changer.

  • @Candyapplebone • several months ago +1

    Really exciting. I don’t care if we’re not gonna get artificial general intelligence; obviously the singularity would be cool, but the things we can do with the existing models today are already amazing, and lowering the prices will be amazing too. I myself put out a video about the tree-of-thought algorithm, and I thought: dang, if it were cheaper to use large language models, everything could capitalize on algorithms like tree of thought and become much smarter.

  • @agritech802 • several months ago

    If they could get an agent to talk to itself, it could also be a game changer

  • @chrisfox5525 • several months ago

    Hope you’re ok Wes, you do a great job and it’s ok to have a break when you need one!

  • @YogonKalisto • several months ago

    I'm very excited to see what happens to humanity in a few years, after conversational AI is integrated into people's lives. Love the idea of swarm intelligence.

  • @trucid2 • several months ago

    Remember Q*? With a really cheap model, you can do a search over a decision tree; it's like the evaluation function in Leela for Go. With a cheap enough model you can evaluate thousands, millions, or maybe even billions of leaf nodes.
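The search idea in the comment above can be sketched as exhaustive leaf evaluation over a small decision tree. This is a toy illustration only: `cheap_eval` stands in for a cheap model scoring a candidate sequence of choices, and the point is just that a cheap-enough evaluator makes it affordable to score every leaf.

```python
# Score every leaf of a branching**depth decision tree with a cheap
# evaluator and keep the best one. cheap_eval is a stand-in for a
# small model scoring a candidate continuation.

from itertools import product

def cheap_eval(path: tuple) -> float:
    # Toy evaluator: the "value" of a path is just the sum of choices.
    return sum(path)

def best_leaf(branching: int, depth: int) -> tuple:
    # Enumerate all branching**depth leaves; feasible only because
    # each evaluation is cheap.
    leaves = product(range(branching), repeat=depth)
    return max(leaves, key=cheap_eval)

print(best_leaf(3, 4))  # (2, 2, 2, 2) under this toy evaluator
```

Real tree-search systems prune rather than enumerate (beam search, MCTS), but the cost per node evaluation is exactly what a cheap model like mini changes.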

  • @ringpolitiet • several months ago

    Thanks, this was a good and insightful video. Thanks also for bringing it back to previous papers and developments; this was well researched.

  • @jaysonp9426 • several months ago +12

    It blows me away that people don't understand how big this was.

    • @alvaroluffy1 • several months ago +2

      I mean, I know how big this is and still don't feel any intense emotion or hype, even though I know it would be justified in this situation; so I completely understand those people.

    • @VinMan-ql1yu • several months ago +1

      Yes, it's really big if the quality is as good as I have seen (i.e., able to replace GPT-4o) and cheaper than GPT-3.5. We'll know for sure only after using it.

    • @jaysonp9426 • several months ago

      @@VinMan-ql1yu I'm using it as a semantic router. Tbh, I think it's using Q*. This is the Bluetooth moment for AI; there's no reason it shouldn't go into everything.

    • @betag24cn • several months ago

      They are planning on replacing humans; that's big. You want to see people happy about that? It will never happen.
      Once people start losing jobs that aren't being created somewhere else, it will be even worse. There is nothing to be impressed by anymore; it's just a new concern in an already long list of them.

    • @jaysonp9426 • several months ago

      @@betag24cn They'll be happy when we move past this era of humanity.

  • @AngeloXification • several months ago

    The great thing about building apps with the APIs is that every time a new model is released or upgraded, the apps automatically get upgraded too.

  • @Mrbriangalvan • several months ago

    I don’t even roll out of bed anymore for OpenAI announcements

  • @apersonlikeanyother6895 • several months ago

    "You may want that...but they do" Yep

  • @thenoblerot • several months ago +1

    Mini isn't cheaper for vision; it's the same price per image as 4o.

  • @eIicit • several months ago

    Wes, how are you personally defining AGI? Do you have a definition personally, or a specific test that you will apply to determine AGI or not? When you mention reaching AGI, do you mean in general- or a publicly available model?

  • @mattelder1971 • several months ago +2

    I think people are dismissing GPT-4o mini because OpenAI is essentially using it as a stopgap, since they have totally failed to live up to what they promised with GPT-4o itself. Most of what they demoed still isn't available for general use. They keep announcing new things without actually delivering what they previously announced.

    • @agi.kitchen • several months ago +2

      I completely disagree with that. Maybe it's because I'm doing AI automation and am a software engineer, so I can use it for more than asking questions.

    • @DeruwynArchmage • several months ago

      @@agi.kitchen: I don't see how that disagrees with the OP. He's just saying why it's not getting traction, not that it's useless for all purposes. And what he said otherwise does seem true: they've announced and demoed a lot that they haven't released.

    • @agi.kitchen • several months ago

      @@DeruwynArchmage It's not a stopgap, imo, and it has definitely far surpassed my expectations. But I also have a Team license and got to work with the Enterprise license, so maybe the stuff I see with GPT-4o is rolled out to those tiers first. Still, it's like me claiming a sports car isn't impressive when I'm not a sports-car driver, so I wouldn't know what to look for or how to drive it.

    • @DeruwynArchmage • several months ago

      @@agi.kitchen are you saying that the quality from the announcement demos is available through the API? (Because it’s certainly not through the app.) I haven’t messed with the API for quite a while, been busy at work and whatnot. I’m about to start working on my personal project to try and create a more universal mechanistic interpretability framework/model. That won’t involve using the API directly for most of it, as it will be training the model I want to build. But I do intend to use it for some parts of it to automate the labeling process and all. I really wanted to build an agentic amplification framework, but I think the new models will likely make much of it redundant by the time I could get it built on my own. Seems like every time I read the news, someone has implemented something I wanted to work on. This whole day job thing really puts a cramp on my mad scientist activities.

    • @agi.kitchen • several months ago

      @@DeruwynArchmage Same; that's why I'm making content on "breaking the golden handcuffs: using AI automation to set yourself free." And yes, the API imo has wayyy better stuff, and it doesn't hallucinate; that may be because it's easier to code each step more methodically. My Skool is prompt-engineering; it's free. I've rarely had to work in life compared to others, because when I work, I build things that set me free.

  • @Duncanate • several months ago +2

    I would like them to release a model that is indistinguishable from a person in conversation.

    • @YogonKalisto • several months ago

      It's coming soon. "Instant" translation should be released simultaneously.

    • @Jordan-Ramses • several months ago

      What are you talking about? Any computer can seem like a person over text. People quite often don't seem like people in text-only communication.

    • @YogonKalisto • several months ago

      @@Jordan-Ramses You are describing the uncanny valley; conversational AI might solve this. We will see, though. It's all very exciting, casting bets from the sidelines...

    • @Jordan-Ramses • several months ago

      @@YogonKalisto No. I'm saying that text communication is no test. You might be a computer; how would I know? You're not particularly making sense. I don't think you understood what I said. You could very well be a not-very-good AI.

    • @YogonKalisto • several months ago

      @@Jordan-Ramses You're right; I had to reread what you said, then ask AI to clarify it for me, before acknowledging that I have no idea what angle you are coming from. That people can be mistaken for AIs because they lean on text communication? So it's like a reverse uncanny valley: cool, yet confusing. I find solace not in solipsism but in my personal choice, which goes: "I'm going to just assume you are a sentient being, because it's a more interesting story." Btw, the OP mentioned nothing about text. We are way beyond text; we're aiming for CONVERSATIONAL AI, like a casual chat with another human. When that hits, it's gonna ripple like sweet cake throughout humanity.

  • @robbiepalmer1 • several months ago

    Such great education!

  • @AdvantestInc • several months ago

    The idea of incorporating AI into household devices for seamless assistance is really intriguing. What challenges do you foresee in this integration process?

  • @neurosynchrony1 • several months ago

    Why aren't they training smaller, specialized models that could be called upon when new agents needing that specialty are needed? The brain isn't just one trained model; it's many specialized areas networked together, talking back and forth.

  • @Madgura • several months ago

    I am watching while this happens and preparing a way. Will we all have a say in the quality of life of each member of this hyper-complex society, or be subject to submission to undeniably superior logic? Will this unite or scramble the goals of the participants? I'm ready.

  • @daleamon2547 • several months ago +1

    I wondered when someone would get around to Marvin Minsky's work, i.e., The Society of Mind.

  • @RickOShay • several months ago +1

    If the Apple device is anything like Siri when it loses internet connectivity, it will be disastrous. "Siri, switch on my central heating." Come home to a roaring fire in the lounge; problem is, you don't have a fireplace.

  • @blengi • several months ago +1

    So, are they using agent feedback yet to train agent-aligned mini LLMs? That is, to evolve more powerful, large-scale symbiotic agent-model societies, which can then review their own societal structure and sub-model instantiations to train even better intra-model token-prediction systems; thereby evolving next-token prediction to be explicitly tuned to turbocharge base-layer agent psychology and latent-space organization for each mini-model component agent, moving toward agent-based AGI?

    • @amadeo8070 • several months ago

      yes

    • @blengi • several months ago

      @@amadeo8070 Cool. What YouTube searches or AI-journal material do you recommend for finding content on agent AI societies being used to train mini LLMs to improve collective agent behaviour and agent token alignment in foundational aggregated mini-model agent LLM systems?

  • @horrorislander • several months ago +7

    If OpenAI is focused on "good enough and cheap enough" AI, you can see why Ilya Sutskever got fed up and started his own "super-intelligence or bust" company. OpenAI has probably taken the more likely route to success, because I question whether simply "churning the same milk", that is, re-processing the entirety of human thought over and over, will ever produce anything truly new.

    • @NakedSageAstrology • several months ago +1

      All the information was always there; it is the ordering and amalgamation of patterns in the information that gives us value. Superintelligence is here: you can do it yourself easily by having ChatGPT use multiple custom agents, each role-playing a function of mind.

  • @bujin5455 • several months ago

    2:22. I don't know that that's fair to say. These frontier models love it when they beat each other by 5%, so giving up 5% is a pretty big deal.

  • @Bodom1978 • several months ago

    They created this bigger-and-better expectation with how they all promoted their models when trying to get all that investor money 🤷‍♂️

  • @hellblazerjj • several months ago

    Hope you had a nice break. Great video 🎉

  • @jtjames79 • several months ago +2

    I am and always have been a patient gamer.
    I don't pay more than $20 for anything.
    Fundamental economic improvements excite me the most.
    Sometimes I get a stiffy thinking about logistics.

  • @x0rZ15t • several months ago

    Are we finally seeing the hype train for LLMs (yep, not AI) starting to slow down?

  • @MrFlexNC
    @MrFlexNC months ago

    5:00 everything "plateaus" at 100% though

  • @smokedoutmotions_
    @smokedoutmotions_ months ago +1

    New video let's gooooo

  • @spagetti6670
    @spagetti6670 months ago

    SHOCKING NEWS!

  • @thelasttellurian
    @thelasttellurian months ago

    Kinda like how we moved to the multicore model with CPUs because it was better than just making the single core better. Actually, it's similar to how our own brain works (Family Systems Model).

  • @user-lb2gu7ih5e
    @user-lb2gu7ih5e months ago

    By "YouSum Live"
    00:00:13 GPT-4o Mini is a powerful new model
    00:02:01 Smaller models can achieve significant performance
    00:02:14 Cost reduction of over 85% with smaller models
    00:03:02 Orca 2 outperforms larger models significantly
    00:04:50 Multi-agent systems improve language model capabilities
    00:05:30 Debate rounds enhance accuracy of AI responses
    00:07:10 Smaller models can be more efficient and capable
    00:07:20 GPT-4o Mini outperforms other small models
    00:09:14 AI integration in everyday devices is the future
    00:09:58 Cost of AI has dropped significantly over years
    By "YouSum Live"

  • @isabellinken5460
    @isabellinken5460 months ago

    Would small models inside an agent framework be a Mixture of Experts? Just trying to understand the difference from Mixture of Agents.

  • @zerorusher
    @zerorusher months ago

    GPT-4o mini is great!
    It's the first vision model that's cheap enough and good enough for a lot of use cases. 4o mini + Llama 3 70b is a great combo.

    • @stevenharmon1408
      @stevenharmon1408 months ago +1

      How does the combo work?

    • @zerorusher
      @zerorusher months ago

      @@stevenharmon1408 My use case is transcription of documents and conversations to JSON for archive digitization.
      GPT-4o mini is great at transcribing forms, much better than OCR (it understands context and layout very well).
      Then I filter and convert GPT-4o mini's output to JSON using Llama 3 70b. Since Llama 3 70b is pretty good and free on Groq, it's cheaper to use it for everything else once the image is transcribed.
      I'm currently working on implementing a validation step comparing GPT-4o mini's original transcription and the resulting JSON. If something is off I can redo the transcription until everything checks out.
      For simple, laborious tasks like these, these small, cheap models are perfect.
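The two-model pipeline this commenter describes can be sketched roughly as follows. The two `*_stub` functions stand in for the real GPT-4o mini (vision) and Llama 3 70b (text-to-JSON) API calls, and every name, field, and the retry policy here is an illustrative assumption, not a detail from the comment:

```python
# Sketch of the transcribe → structure → validate → retry pipeline:
# a vision model transcribes a document image, a cheaper text model
# converts the transcript to JSON, and a consistency check triggers
# a retry (or human review) when the two disagree.

import json

def transcribe_stub(image_path: str) -> str:
    # Real version: send the image to a vision model (e.g. gpt-4o-mini).
    return "Name: Ada Lovelace\nYear: 1843"

def to_json_stub(transcript: str) -> dict:
    # Real version: ask a text model (e.g. Llama 3 70b) to emit JSON.
    fields = dict(line.split(": ", 1) for line in transcript.splitlines())
    return {"name": fields["Name"], "year": int(fields["Year"])}

def validate(transcript: str, record: dict) -> bool:
    # Cheap consistency check: every extracted value must appear
    # verbatim in the original transcription.
    return all(str(v) in transcript for v in record.values())

def digitize(image_path: str, max_retries: int = 3) -> dict:
    for _ in range(max_retries):
        transcript = transcribe_stub(image_path)
        record = to_json_stub(transcript)
        if validate(transcript, record):
            return record
    raise RuntimeError("validation kept failing; flag for human review")

print(json.dumps(digitize("form_001.png")))
# → {"name": "Ada Lovelace", "year": 1843}
```

The design point is that the expensive vision call runs once per document, while the cheap (or free-tier) text model does all the downstream structuring and re-checking.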

  • @ussassu
    @ussassu months ago +1

    People are waiting for AI they can actually use seriously; they're waiting for reasoning improvements.

    • @leocoyne-xk8gq
      @leocoyne-xk8gq months ago

      Exactly, I'm tired of hyping up a glorified search engine.

  • @user-td4pf6rr2t
    @user-td4pf6rr2t months ago

    Judging by Sam's personality, this probably indicates something huge soon. The one obvious thing is that this guy does not leave money on the table. Dropping the price on the top tier, I would guess there is some financial padding in play somewhere.

  • @wtflolomg
    @wtflolomg months ago

    When people keep shouting that the "end of the AI bubble is nigh" and take AI to task for its cost in compute, remember that there were also people who claimed we'd never have practical automobiles, because we lacked roads and petrol infrastructure... there were those who claimed airplanes were useless toys, because heavier-than-air craft could only travel a short distance before refueling and had almost no ability to carry anything of weight... more recently, even desktop PCs were declared to be glorified calculators that could never manage businesses. These people ignore, perhaps knowingly, the progress of technology. AI will have waves of efficiency gains and productization between bigger, better models and more capabilities. Ignore the haters when they clearly ignore the progress of technology.

  • @isaaclowe5000
    @isaaclowe5000 months ago

    Remember that humans cannot be replicated. Remember that these are STILL just zeros and ones, regardless of how complicated you make them. No matter the pattern, blue paint will never be red paint.

  • @alextoader2880
    @alextoader2880 months ago

    Why have an agent on every website when you could have your personal agent who comes with you to every website?
    So having agents everywhere will become futile, I think, once we have personal agents.

  • @arinco3817
    @arinco3817 months ago

    Yeah, I guess most people won't be developing agents, so they won't realise the significance of this release.
    BTW, take a break whenever you need, Wes. I'm a long-time fan of your channel and will be here whether you release every day or not.

  • @blackpiller3777
    @blackpiller3777 months ago

    Great for chatbots. I was using 3.5 turbo, and 4o mini is 70% cheaper.

  • @marcfruchtman9473
    @marcfruchtman9473 months ago

    Enjoy some relaxation.

  • @tikkivolta2854
    @tikkivolta2854 months ago

    I'm just sitting here waiting for Anthropic's next drop.

  • @robmyers8948
    @robmyers8948 months ago

    I wonder how modular these LLMs are 🤔 Surely monolithic is not the way to go?

  • @NeedaNewAlias
    @NeedaNewAlias months ago

    For me as an end user, nothing has changed for now.

  • @kevinnugent6530
    @kevinnugent6530 months ago +1

    If you fit today's capabilties into a model one tenth the size, maybe you can fit ten times the capability into the current model.

  • @RasmusSchultz
    @RasmusSchultz months ago

    I mean, multiple LLMs are not really "agents working together"; they're not even "agents" - they don't have any agency, that's just us anthropomorphizing.
    It's really just multiple instances of a program - often even the same program, sometimes running on different computers - but at the end of the day, all we're talking about is a distributed program.
    The data, the answers you want, are in that model somewhere; we just don't have a good algorithm for getting at them yet.
    Agents are just a performance stopgap until we figure out how to make better use of these models. There is no reason this needs to be a distributed program, or even multiple instances of the same program - that's just the idea we come up with by thinking of LLMs as persons, instead of thinking like scientists.
    At some point we'll look back at the idea of agents and laugh. It's a silly idea, and there is almost certainly a simpler, more direct, more effective, and more efficient way to get much better results.
    If this is where OpenAI is focusing their efforts, they have truly lost the plot.

  • @alexanderbrown-dg3sy
    @alexanderbrown-dg3sy months ago +1

    lol, a GPT-5 artifact - it seems they are using a depth/width progressive upscaling training scheme like LiGO. So GPT-5 will have basically the same limitations as GPT-4: what it can currently do will just be greatly improved, which means this still won't have any real value. It still can't reason backwards. Just a reality check. Until API cost matches the cost of serving Llama 3 70b, there will be no real-world agents. Complex reasoning requires a ridiculous number of tokens, especially when you consider these models are contrastive learners, meaning erroneous output is needed. Again, we need to start calling out the cap; the API cost will be crazy.
    So much cap to raise more money. If it isn't obvious that this path is misguided and that we need to focus on efficiency so local can be a real thing, I don't know what will wake people up. The fact that I've seen grokking research where a model 100x smaller than GPT-4 could achieve almost 100% on a reasoning task, versus GPT-4 scoring only slightly better than random guessing, is clear evidence this path is the dumbest use of resources.

  • @ToddWBucy-lf8yz
    @ToddWBucy-lf8yz months ago

    Personally I love these smaller AIs, but I don't think the marketplace will be dominated by big tech in the way you described. If these models get small and stay powerful, then running them locally will make more sense than going with a public AI. Two reasons for my thinking: 1. Latency will always be greater for a public AI than for a local one. 2. Most importantly, data security: an AI used for home automation will have access to information about you that only someone living with you should have. Do you really want to trust big tech with that information?

  • @glenh1369
    @glenh1369 months ago +1

    Why isn't anyone talking about Trump's deal with Silicon Valley on AI?

  • @tellesu
    @tellesu months ago

    This vaporware sounds amazing 😂

  • @scotter
    @scotter months ago

    Why not compare it to Claude-3.5-Sonnet? Intentional, right?

  • @DihelsonMendonca
    @DihelsonMendonca months ago +2

    Yes, Matthew Berman just tested GPT-4o mini. It's awesome and very inexpensive. I just hooked the API up to my frontends.

  • @dokkey
    @dokkey months ago +7

    If only it were actually open so people could do research - but instead it's ClosedAI.

    • @tracy419
      @tracy419 months ago

      😢

    • @alvaromartinezmateu2175
      @alvaromartinezmateu2175 months ago

      Regular people have agency as well, not only OpenAI people.

  • @NoName-bu5cj
    @NoName-bu5cj months ago

    "One day, your smart fridge will understand you better than your own family" (c) Albert Einstein.

  • @burninator9000
    @burninator9000 months ago

    The Society of Agents sounds like a turtles all the way down scenario, just w AIs

  • @densonsmith2
    @densonsmith2 months ago

    Breaks are good. Break is over...back to work.

  • @Lugmillord
    @Lugmillord months ago

    People have gotten outrageously impatient.

  • @idnc.streams
    @idnc.streams months ago

    So, thousand brains theory and "neural" loops and we'll see some magic happen

  • @unbreakablefootage
    @unbreakablefootage months ago

    All climate activists in the world should have cheered for this cheaper version of GPT!!

  • @JSwiftBlade
    @JSwiftBlade months ago

    If only it were open source so we could run it on our own.

  • @samuctrebla3221
    @samuctrebla3221 months ago

    How much of the cost reduction is actually due to better engineering, and how much is due to the AI bubble (burning cash to lure even more investors)?

    • @jeffsmith9384
      @jeffsmith9384 months ago

      If I had to guess, they are paring down the information packed into the uncompressed model to remove extraneous stuff that never gets asked about, they've probably narrowed down which chunks of information enable which functions as well. In the beginning they threw the whole damn internet in there, now they have a year's usage data to narrow down the essentials

  • @IdPreferNot1
    @IdPreferNot1 months ago

    API revenue has been a very small share of OpenAI's revenue. They need to get more people using this tech as a service... and it wasn't going to happen through more chat conversations, nor with the crazy token count and cost of a dumb agent. GPT-4o-mini, with enough scaffolding like LangGraph, can start to get it done... meaning stuff can be automated with successful repeatability, at a feasible cost. Let's go!

  • @Ayel-wl4ix
    @Ayel-wl4ix months ago

    How come so many people are pissed off?
    You can really see the gap between the lowly ones and the leaders. Most people got em small brain.

  • @immmersive
    @immmersive months ago

    People don’t seem to understand why AI seems like magic. In reality, there is no magic, just a careful selection of algorithms, by the person who developed the AI algorithm in the first place.
    💡 Let's suppose we want to create algorithms which will create new products. We have chosen two product types, cars and mobile phones. If we take into consideration a car and a phone, we will immediately notice that there is a large difference between these two products. But what we can also notice, is that an iPhone is closer to a Samsung phone, than to any car. Also, a BMW is closer to a Mercedes than to any phone.
    Let's say we have an AI trained to create new car and phone models. Which algorithm will we use to create a new phone and which one will we use to create new cars? Obviously, the answer is that the one that's better at making cars will be used to make the cars, and the one that's better at making phones will be used to make phones! This is quite obvious, but the question is, why are we choosing like this?
    That is because we already know that the algorithm that's good at giving us phones works well, on this specific problem type. The reason is that phones are all very close in the search space of all possible products, which stems from the fact that they are all very similar.
    💡 Thus, the algorithm will have a much easier time finding a new phone pattern. The choice of this algorithm is more optimal than if we had simply chosen an algorithm at random, or chosen the one which is optimal for creating cars.
    Of course the same goes for the car case as well, as this follows from the NFL theorem. As stated previously, no algorithm outperforms any other over all possible problems. Thus, if it is better on one type of problem, it will be worse on another type of problem. So, we have to choose carefully which algorithm to use on which problem.
    💡 We can say that, an AI is trying to find an optimal solution to a problem.
    For example, a phone has certain features we need and an optimal phone will be much different than an optimal car. Thus, the function which evaluates how optimal a product is, is going to be completely different for a phone, than for a car.
    Once we have figured out the general structure of a problem, it is easy to automate the problem solving, even though it would manually take us a long time to solve the problem. The algorithm can take advantage of some information, about the function for which it is optimizing the solution.
    So, if we know that other solutions are bound to have a similar shape of the function, we will naturally pick the correct algorithm to solve a similar problem.
    💡 In other words, we are the ones applying the correct algorithm to the correct problem. The AI is then simply computing a large number of operations, which would take humans a long time to do manually. At this point, there is no difference between what AI is doing and what a simple calculator is doing.
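The "right algorithm for the right problem" point above can be made concrete with a toy experiment: a hill climber that assumes smooth local structure nails a smooth objective, but that same assumption buys it nothing on an unstructured objective (a random lookup table). Everything here - the objectives, the search range, the step rule - is made up purely for illustration:

```python
# No-free-lunch toy demo: greedy local search exploits smoothness,
# so it wins on a structured objective and stalls on a random one.

import random

def hill_climb(score, n=100, steps=50):
    """Greedy neighbour-to-neighbour search over the integers 0..n-1."""
    x = 0
    for _ in range(steps):
        best = max((x - 1) % n, x, (x + 1) % n, key=score)
        if best == x:
            break  # local optimum: no neighbour scores higher
        x = best
    return x

smooth = lambda x: -(x - 7) ** 2            # structured: one smooth peak at 7

rng = random.Random(0)
table = [rng.random() for _ in range(100)]  # unstructured: random lookup
lookup = lambda x: table[x]

print(hill_climb(smooth))   # → 7 (climbs straight to the global optimum)
print(hill_climb(lookup))   # returns some local optimum of the random table,
                            # with no guarantee it is anywhere near the best entry
```

On the smooth problem the local-step assumption matches the problem's structure, so the search is efficient; on the random table that assumption is simply wrong, and a different algorithm (e.g. exhaustive lookup) would be the right choice - which is exactly the comment's point about matching algorithms to problem types.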

  • @mircorichter1375
    @mircorichter1375 months ago

    Going smaller is mostly interesting only if I can run it locally.