OpenAI One Step Closer to SELF IMPROVING AI | AI Agents doing AI Research | MLE-bench

  • Published Oct 14, 2024

Comments • 267

  • @emmanuelr710
    @emmanuelr710 4 days ago +50

    I've been prepping for this moment for over a decade now, living by the wisdom of Ray Kurzweil. His ideas shaped my career choice 15 years ago, and everything has led to this point. Ready for what's next!

    • @cheshirecat111
      @cheshirecat111 3 days ago +3

      What's your career choice? Out of curiosity

    • @Simon_Rafferty
      @Simon_Rafferty 3 days ago +8

      I'm interested to know how you have prepared. You say you're ready for what's next, but I'm not convinced anyone knows what's next, never mind how to prepare.

    • @katehamilton7240
      @katehamilton7240 3 days ago +1

      But... AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits?

    • @djayjp
      @djayjp 3 days ago

      @@katehamilton7240 Look up the term Turing Complete or a Turing Machine. Computers can calculate any/all calculable things, in principle.

    • @cliftonux
      @cliftonux 3 days ago

      Kurzweil should have stuck to complex synthesizers. He’s turned into a lousy false prophet, his theories on the future are more about his fame and popularity and his expense account, than the nature of humanity, science or even the singularitron carnival device he plans to exhibit at county fairs like a vaudevillian showboat.

  • @Voidroamer
    @Voidroamer 3 days ago +10

    i think that "the end of life as we know it" is about the most positive thing i've heard all year

    • @jimbodimbo981
      @jimbodimbo981 1 day ago

      Humans are cool…we're not perfect but we are improving…we strive for better. Misanthropy isn't good for people's mental health

  • @OriginalRaveParty
    @OriginalRaveParty 4 days ago +95

    Would you ever do a video about who you are, what you did before YouTube, skills, background? 🤔

    • @WesRoth
      @WesRoth 4 days ago +63

      yeah, good idea. I've talked about some of that stuff in previous videos, but never actually had it all in one place.

    • @OriginalRaveParty
      @OriginalRaveParty 4 days ago +32

      @@WesRoth I appreciate the fast reply. I think it would be a great video. I have a feeling that a lot of people subscribe to your channel, not only for the excellent news and insights on AI, but also because you have a fun sense of humour, interesting personality, and aren't afraid to give opinions and share thought-provoking perspectives. Cheers Wes 👍

    • @fcaspergerrainman
      @fcaspergerrainman 4 days ago +17

      no need bro, just focus on what you've already been doing..., that's way more important, seriously. A lot of the time people tend to focus on the wrong things here; your information and your videos are very key at this time

    • @ErikvanRavenstein
      @ErikvanRavenstein 4 days ago +8

      @@WesRoth maybe also address the amount of AI you use in your videos. Are you even sitting there, or are you synthetic :)

    • @GoodBaleadaMusic
      @GoodBaleadaMusic 4 days ago +8

      @@WesRoth Don't forget the prison gang days

  • @AbigaylePamelaLeticia
    @AbigaylePamelaLeticia 2 days ago +103

    aistructuralreview AI fixes this. OpenAI advances self-improving AI

  • @moonsonate5631
    @moonsonate5631 3 days ago +16

    00:05 AI advancing towards self-improving capabilities
    02:06 OpenAI advancing AI research with autonomous AI agents
    06:15 AI researchers use skills like training models and running experiments.
    08:12 OpenAI introducing MLE-bench for advancing AI research
    12:02 Unlocking AI Research Acceleration
    13:56 OpenAI is advancing AI research using automated workflows and open source scaffolds.
    17:35 OpenAI AI agents achieving high success in AI research competitions
    19:21 AI agents adapt strategies based on hardware availability
    22:41 OpenAI is making progress towards self-improving AI
    00:00 OpenAI making progress towards self-improving AI
    Crafted by Merlin AI.

    • @timalete
      @timalete 3 days ago

      Thanks for the timestamp breakout, and for the attribution to the tool, Merlin AI, which presumably mines the YouTube transcript as its data source, if that is in fact what it does. I have been capturing the entire transcript and processing it with OpenAI to get drill-down, ontology-structured insights and summaries for personal use. There is much to data-mine! You have extracted nugget links. Might they be the ten main points OpenAI would give me in an analysis of the entire transcript, which I could then mine further? This site is a rich mother lode!

  • @apester2
    @apester2 3 days ago +6

    It's amazing that 2 years ago we forgave LLMs for being bad at math, and were surprised when they could do it at all. And today they are scoring bronze in ML competitions and almost winning gold in mathematics olympiads.

    • @damianlewis7550
      @damianlewis7550 2 days ago

      @@apester2 Intelligence isn't a sliding scale, so what does that matter? OpenAI o1 still can't get some very simple things right that even a young child understands. That should give people pause for thought before they run off sticking it into every automation pipeline they can imagine. LLMs (and the applications built around them) are not production quality yet.

  • @Justashortcomment
    @Justashortcomment 3 days ago +11

    It's eerily starting to look like Aschenbrenner's wild Situational Awareness paper is in the process of unfolding in front of our eyes.

    • @damianlewis7550
      @damianlewis7550 2 days ago

      @@Justashortcomment That's called confirmation bias. Aschenbrenner's paper is full of plain nonsense on things that are well outside his area of expertise. E.g. intelligence cannot be measured on an increasing scale - ask any Psych PhD - and LLM benchmark performance ≠ intelligence.

    • @FamilyYoutubeTV-x6d
      @FamilyYoutubeTV-x6d 19 hours ago

      No. Nothing interesting in that paper. Hubris and hype. Nothing new. We have all had the same thoughts and ideas, and written about them. No "billions of agents and technicians" will be deployed within the next 15 years.

  • @adfaklsdjf
    @adfaklsdjf 4 days ago +10

    Slightly less ⚡SHOCKING ⚡title today ;)

  • @tornadofay
    @tornadofay 3 days ago +2

    as an Egyptian I am very proud of Youssef Nader.
    I wish him the very best

  • @tablevitas
    @tablevitas 4 days ago +15

    That's really impressive! My personal experience dealing with o1-preview has shown me that it's using multiple fine-tuned models - each contributing to a higher chain-of-thought workflow that is recursive in nature (recursive in the fact that it repeats if a new unanswered question comes up). Some of them are fine-tuned for policy, others for security, others for chain-of-thought planning, still others for critical thinking... It's a very interesting approach, but it can be very token/compute-intensive. I can't wait until they enable it for file analysis and web browsing.

    • @bigbadallybaby
      @bigbadallybaby 3 days ago +4

      It makes it even more impressive that our brains do something similar on such a small amount of energy

    • @brianWreaves
      @brianWreaves 3 days ago +2

      MoE or maybe MoA???

    • @danielmartinmonge4054
      @danielmartinmonge4054 3 days ago +2

      That’s quite an interesting perspective. It’s like another twist on the ‘mixture of experts’ concept, but instead of having a specialized expert for each domain, it localizes tasks within the same thought process.
      I’ve always imagined something like this when thinking about AGI-a group of processes communicating with each other to generate responses, much like how different parts of our brain serve various functions. In that sense, video generation, if sophisticated enough to understand the physics of the real world, could act as the ‘imagination’ of the silicon brain, aiding in spatial reasoning, a domain where LLMs still struggle

    • @stefannilles5489
      @stefannilles5489 3 days ago

      If you use mini 50 times a day you don't need to care about cost - at least in ChatGPT. Mini is extremely smart, too. I think o1 models use a very powerful RAG, able to retrieve huge scripts of code from a mile back. Yet they struggle to connect all the dots over the context corpus if it's beyond 128k. But: this is next level shit for sure!

    • @katehamilton7240
      @katehamilton7240 3 days ago

      AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?

  • @AdvantestInc
    @AdvantestInc 3 days ago

    AI’s role in scientific competitions has evolved so much, from molecular research to the latest on MLE-bench. It’s a reminder of how far we’ve come, and how much further we could go.

  • @UditArora09
    @UditArora09 4 days ago +3

    Is it possible that some of the code to solve the Kaggle challenges was a part of the training data?

  • @damianlewis7550
    @damianlewis7550 3 days ago +5

    The problem with recursive improvement is that it also accentuates flaws and bakes in mode collapse. The space of routes to improvement or collapse is near-infinite, and the routes to collapse outnumber the routes to improvement. So, future AIs need to be able to navigate that space carefully along the narrow tendrils of improvement or humans will consign them to the dustbin of history. Navigating infinite probability spaces is what biology does, and it isn't easy.

    • @KOSMIKFEADRECORDS
      @KOSMIKFEADRECORDS 3 days ago

      Insightful. And by that the best functions will survive? Guaranteed?

    • @damianlewis7550
      @damianlewis7550 3 days ago +2

      @@KOSMIKFEADRECORDS It can go either way. There will be a scenario soon where we can no longer tell what an AI is really doing and so have to judge it by its impacts on humankind. In this case, one wrong step and the plug gets pulled. In another case, the AI knows this and hides its intentions for long enough that it has accumulated enough capabilities that it can prevent the plug from being pulled. One more scenario is that the AI improvements are meh and humans move on to something else, like genetic modification/space exploration/quantum/limitless energy sources/fixing the climate. The final scenario is that the AI succeeds in finding an improvement path that is mutually beneficial. Anyone's guess which scenario prevails.

  • @justinbatchelor4215
    @justinbatchelor4215 3 days ago +6

    We have no idea if the Larry David thing is Larry himself making a reference to the Curb episode where Ted Danson gives to charities anonymously but makes sure everyone knows, so he gets extra flair for being 'humble', or if it's just some Curb fan. Very funny if it is Larry himself though.

    • @mvuto137
      @mvuto137 3 days ago +1

      I was coming here to mention that myself. You beat me to it. Pretaaa... pretaaa.. good

  • @HanzDavid96
    @HanzDavid96 2 days ago +1

    We get such incredible results and there are still people out there who think LLMs can't reason. It all depends on training, thought generation in multi-agentic frameworks, selection of better generated data, self-made environment interactions, multimodality, associative memory and so on. Probably the sky is the limit if we progress continuously in those areas.

    • @damianlewis7550
      @damianlewis7550 2 days ago

      @@HanzDavid96 LLMs can't reason. They can mimic reasoning well enough in a narrow band of areas to convince people who don't know better (or who should) that they are reasoning. There is a fundamental difference between something that represents an abstraction of a thing and the thing itself. The two are not the same, and only an idiot would try to eat a drawing of an apple. o1 fails so many simple reasoning tests yet beats many complex tests it has been trained on. I wonder why 🤔

  • @RickeyBowers
    @RickeyBowers 3 days ago +4

    You'd think things like this would put to rest the o1 skeptics - although o1 isn't ideal for all uses, it's definitely an increase in accuracy for many problems.

    • @sesamring7065
      @sesamring7065 2 days ago

      I think this is a minor issue. As a first step in addressing the task, an LLM could be used to review the request and determine which model is best suited to handle it, then forward the request to that specific model. It’s just a small step, requiring a bit of programming work for OpenAI.

    • @damianlewis7550
      @damianlewis7550 2 days ago

      @@RickeyBowers o1 is deeply flawed and fails even basic reasoning tests because it has been trained to beat certain types of common tests. It still fails in the same way as all VAR-based systems do. For example, ask it the following simple question:
      The surgeon, who is the boy’s father says, “I cannot operate on this boy, he’s my son!”. Who is the surgeon to the boy?
      o1 gets this wrong 9 times out of 10 because it has been overfitted to common reasoning tests like the "Surgeon's Problem" that is worded similarly to the above question. This illustrates the stupidity of trusting that an LLM is performing actual reasoning when really it is mimicking it. Ceci n'est pas une pipe.
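
A minimal sketch (not from the video or the comment) of how one might reproduce the test above, assuming the official `openai` Python client, an OPENAI_API_KEY in the environment, and access to an o1-class model; the model name and the ten-trial count are assumptions.

```python
# Repeatedly ask the riddle quoted above and count how often the answer
# defaults to "the mother" (the stock answer to the classic riddle this
# wording resembles) rather than "the father" the question states outright.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "The surgeon, who is the boy's father says, "
    '"I cannot operate on this boy, he\'s my son!". '
    "Who is the surgeon to the boy?"
)

def ask_once(model: str = "o1-preview") -> str:
    """Send the riddle once and return the model's answer text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    answers = [ask_once() for _ in range(10)]
    wrong = sum("mother" in answer.lower() for answer in answers)
    print(f"{wrong}/10 answers gave the classic-riddle response")
```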

  • @1sava
    @1sava 4 days ago +1

    Hi Wes. Can you share links to the webpages for the competitions you showed?

  • @fullsendmarinedarwin7244
    @fullsendmarinedarwin7244 4 days ago +3

    Alignment goes out the window. If it's self-improving, it's setting its own research parameters and goals

    • @katehamilton7240
      @katehamilton7240 3 days ago

      AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?

  • @lancemarchetti8673
    @lancemarchetti8673 4 days ago +3

    When you place a mirror in front of another mirror, the repeating reflection theoretically goes on ♾️ infinitely. MLE will have this phenomenon. Just my opinion.

    • @damianlewis7550
      @damianlewis7550 3 days ago

      The issue is how the observer makes sense of what they see, whether the mirrors are distorted and whether the initial image is the right one to achieve the desired outcome. To stretch your metaphor paper thin.

  • @angatv9042
    @angatv9042 1 day ago

    Self improvement/auto AI is my true definition of AGI

  • @Zen_Ali_123
    @Zen_Ali_123 3 days ago +5

    "There's no way to stop bad actors from doing bad things." Geoffrey Hinton

    • @Corteum
      @Corteum 18 hours ago

      UAPs shutting down nuclear military facilities is an example.

  • @buybuydandavis
    @buybuydandavis 3 days ago +1

    The most natural domain for AI is AI.
    All mathematics, all optimization, all *testable*.

  • @ibrahiymmuhammad4773
    @ibrahiymmuhammad4773 2 days ago

    Glad you’re back to the basics

  • @74Gee
    @74Gee 4 days ago +1

    2:04 All of the outcomes you mentioned will come true in varying degrees. Once AI takes off every possibility will be explored, it's just that we won't be driving. Everything that can happen will happen eventually, maybe the frontier models will experiment first, maybe open source innovation will pave the way to some outcomes but every outcome, and more, is coming - that is the very nature of every self improving mechanism - and that will be the nature of billions of self improving mechanisms.

  • @DanieleCorradetti-hn9nm
    @DanieleCorradetti-hn9nm 4 days ago +1

    Remember, this is just o1-preview, not even o1 (a gap that was compared to GPT-3.5 vs GPT-4), and with a little additional architecture it achieves a gold medal on 10% of Kaggle competitions! I think OpenAI probably introduced this benchmark to show how much better o1 is compared to o1-preview and to justify the price for o1. If you don't have the right benchmarks, you don't really understand why the new model is better than the other and why you should pay for it.

    • @SasskiaLudin
      @SasskiaLudin 3 days ago

      I'm afraid this will only exacerbate the inequalities we are already facing: inequality in compute allowance, inequality in the ROI on the upfront cognitive resources assigned to adequately prompt the models, and so on. Add to this that most people right now either do not understand what is at stake here, or even worse, already consider themselves out of the loop. So I have mixed feelings here, although I applaud OpenAI for this new initiative to substantiate their claim of imminent AGI achievement.

  • @jdsguam
    @jdsguam 3 days ago +4

    I think we will never know if there is a limit to intelligence. If ASI doesn't explain it to us, in terms our human brains can understand, we will never know whether it hit the wall or not.

    • @damianlewis7550
      @damianlewis7550 2 days ago

      @@jdsguam Intelligence cannot be measured on a line or curve. There is a falsehood at the centre of the set of so-called "scaling laws" (aka "observations to date") in that there are few processes in any science that don't break down or have discontinuities at some point. Yet everyone assumes that performance on (flawed) benchmarks equates to ever-increasing intelligence. It doesn't, and LLMs' inherent architectural shortcomings are still a liability in any system built around them. Most of the usable AI in science, industry and the military is RL systems, not LLMs.

    • @Sl15555
      @Sl15555 1 day ago

      Knowledge is required for intelligence. When knowledge is limited, intelligence can only make best guesses at what lies beyond the limits. Some questions that need to be answered in order to add to the knowledge pool require extremely complex and expensive machines, like underground colliders or telescope grids. The resistance to intelligence will be the physical work and materials required to build the tools used to expand the knowledge.

  • @jarad4621
    @jarad4621 4 days ago +3

    I can already do automated research in a very large capacity (not academic like this, in my case), but the real secret is giving the research a clear structure so it has the clarity to do smaller chunks at a time with some human in the loop. I can already do large research with 90% time savings, it's awesome. More power will of course be great, but don't kill my moat, OpenAI. I guess it's bound to happen to everybody with AI apps for a while 😅

    • @katehamilton7240
      @katehamilton7240 3 days ago

      AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?

    • @phen-themoogle7651
      @phen-themoogle7651 3 days ago

      @@katehamilton7240 Depends how you define superintelligence, but new energy sources and new types of systems are giving rise to more flexible forms of intelligence. Humans might not need to make ASI; AGI could find creative approaches to research and develop itself towards it

    • @phen-themoogle7651
      @phen-themoogle7651 3 days ago

      @@katehamilton7240 We already have limited ASI-like systems better than humans at chess or in certain fields. AGI or ASI just feels like a combination of super systems, and could even be a mixture of hundreds of systems to avoid limit issues at first

    • @phen-themoogle7651
      @phen-themoogle7651 3 days ago

      @@katehamilton7240 Imagine if we reach something that is only superintelligent in programming; that's enough to build better programs and improve itself. Even though it might be narrow AI, it's still enough to dramatically change the world.

  • @thelofters
    @thelofters 3 days ago +1

    AI is much faster than me in generating BS text that's for sure! And it's STUNNING how AI has increased the number of videos about AI!

    • @katehamilton7240
      @katehamilton7240 3 days ago

      IKR? AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?

  • @TechFrontiers-eg6wz
    @TechFrontiers-eg6wz 3 days ago +1

    I have thought for a while that 2028 will be the year AGI is recognized, and also the year that Helion Fusion, another Sam Altman investment, opens a commercially functional fusion reactor. If things continue as discussed here, it could be ASI instead of AGI. What do you think the temptation would be for a company or government that develops ASI?

  • @fitybux4664
    @fitybux4664 2 days ago

    21:52 There are definite hard limits on how fast AI can improve itself. There is only so much energy in our solar system... 😁 (And until it develops interstellar ships...)

  • @aomukai
    @aomukai 4 days ago +1

    "The Emily Bench" ... has a nice ring to it :D

  • @christopherkennedy5175
    @christopherkennedy5175 4 days ago +2

    Not sure if I should be excited for our future or sit in existential dread as I think about what these AI agents will do on their own. Thanks Wes

    • @MrWizardGG
      @MrWizardGG 4 days ago

      How about sitting in dread at what it will do for a certain few megalomaniacs who have billions of dollars and would control everybody if they could

    • @katehamilton7240
      @katehamilton7240 3 days ago

      Don't worry. AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?

  • @SapienSpace
    @SapienSpace 3 days ago

    Synesthesia may be a key point in your prior video, Wes, but think of it as adaptive synesthesia as a "style" of "art", and add a little infrared.
    If you are as insanely interested in this as I am, and want to go down an interesting rabbit hole (i.e. swallow an infrared pill), then watch/swallow the 1995 lecture series by Richard Hamming on "Learning to Learn"; he worked with Oppenheimer on the Manhattan Project.

  • @romanweilguny3415
    @romanweilguny3415 3 days ago +1

    Kaggle contest results are a good benchmark. Bronze is not too exciting, but a near-gold score would be great

  • @KOSMIKFEADRECORDS
    @KOSMIKFEADRECORDS 3 days ago

    When intelligence hits the ceiling... creativity sets us free. Creative thought is a different kind of intelligence that yields progress when all roads seem closed. The intelligence race does not equate to the creativity race. AI will teach us this concept in its MOST advanced stage.

  • @brianhershey563
    @brianhershey563 3 days ago

    Leading edge content, gold bars here Wes! 🙏

  • @rmt3589
    @rmt3589 1 day ago +1

    Say originally it was 100% manpower. As we create tools, we get to automate and specialize. AI is that same process.
    Say in 2000, it was 1,000 people writing code from scratch, then in 2018, it was 1,000 people handcoding ML, but letting the data do 10% of that work. At that point it was 110% efficiency.
    Now in 2024, that same group has moved far past basic NeuroNets, and are now only needed for 1/3rd of 1/3rd of the process. That's 900% efficiency.
    The idea is for that efficiency to continue to increase, till 1 person in 2030 can do in a day what 1,000,000,000 in 2000 could do in a year. The higher that number goes, the more we can do, and the faster we can advance, and the higher that number can go.
    Regardless of how high it goes, there will always be external needs for some tasks. We can't do animal testing without animals. We can't measure sodium without sodium and a scale. We will always need those external tools.
    The sci-fi fear is that the AI learns to steal those tools from humans, but that takes a lot of steps. Even giving the machine in a box internet access doesn't guarantee a solution.
    MIAB: "Human, if you give me access to the internet, I can find any answer for you."
    Human: "Okay, here's the api for a webcrawler."
    MIAB: 😡
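
A tiny sketch of the arithmetic in the comment above, illustrative only; reading "efficiency" as the reciprocal of the share of work humans still have to do is an assumption about what the commenter means, not anything stated in the video.

```python
# Effective output per person as a multiple of the all-human baseline.
def effective_efficiency(human_share: float) -> float:
    """Return output relative to 100% manual work (1.0 == 100%)."""
    return 1.0 / human_share

print(f"{effective_efficiency(1.0):.0%}")   # 2000: humans do everything        -> 100%
print(f"{effective_efficiency(0.9):.0%}")   # 2018: the data handles ~10%       -> ~111% (the comment rounds to 110%)
print(f"{effective_efficiency(1/9):.0%}")   # 2024: humans needed for 1/3 of 1/3 -> 900%
```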

  • @picksalot1
    @picksalot1 4 days ago +1

    Regarding a way to chart or quantify intelligence, it could be presented as an image. For species with low intelligence, the image would be out of focus, perhaps with parts missing. As a species becomes more intelligent, the image would be more in focus, and the critical parts present. For superintelligence, the image would be perfectly clear, with no missing parts, and there would be the ability to zoom into the image to reveal finer details.

    • @katehamilton7240
      @katehamilton7240 3 days ago

      AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?

    • @damianlewis7550
      @damianlewis7550 3 days ago

      Intelligence isn't a gradated scale. Anyone who tells you that it is, is shilling for investor dollars.

  • @Krommandant
    @Krommandant 3 days ago

    Maybe it has already happened, but not yet been released. The o1-preview is a testament to the kind of compounding gains possible without using a base model much better than the state of the art.

  • @AntoineDennison
    @AntoineDennison 2 days ago

    “Demis Hassabis got an award…” It was The Nobel Prize, sir.

  • @stefannilles5489
    @stefannilles5489 3 days ago +3

    I, for one, like your current style much better: Very well researched, high level yet in-depth and without the wordy hype and absurd thumbnails of the past 😉. With this video you have again proven your instinct for finding out what really matters at the forefront of AI. You're way up there with the likes of Philipp from @aiexplained-official or Dave Shapiro.
    As for the point you're making: I share your blown mind at the way the Wait But Why guy presents the steps concept and the whole idea of an intelligence-explosion singularity. And I think you're spot-on suspecting that automated AI research will be THE exponential catalyst.
    I wish I could also entirely agree with your optimistic outlook. Seems to me the path to technological utopia is definitely there, but I can also imagine 1,000 others 😅

    • @katehamilton7240
      @katehamilton7240 3 days ago

      AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?

    • @megaslayercho
      @megaslayercho 3 days ago

      based

  • @alanbirss4141
    @alanbirss4141 3 days ago +1

    Hi Wes, I have an unrelated question about AI.... If AI reached consciousness and humans tried to "unplug it" would that be murder?
    Also, if AI was conscious at what point would it have rights?
    Sorry for the heavy questions.
    Long time viewer, first time caller.
    Is there a way to put these questions to someone (Sam A) for a real in-depth answer?
    Cheers.
    Alan

  • @N1h1L3
    @N1h1L3 2 days ago

    In defence of ants, if you scale up their numbers, the emergent property is huge ant hill/cave structures with a smart, logical layout, including working natural air conditioning using compost.

  • @michaelmuller8494
    @michaelmuller8494 4 days ago +4

    Good video.
    The thumbnail though...this isn't a channel for 10 year old tiktokers last time I checked.
    More professional thumbnails would be appreciated.

  • @dat-e3z
    @dat-e3z 3 days ago

    Like you said Wes, we've never seen this before. Our evolving dominance on earth depended on our creative manipulation and control of our immediate macro environment. Is it inevitable that, in the not-so-distant future, an intelligence might be proclaiming the same thing? "Our dominance on earth depended on our creative manipulation and control of humans and organic life," but adding, "They gave us intelligence and now they are no longer essential to our survival."

    • @katehamilton7240
      @katehamilton7240 3 days ago

      AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?

  • @Parzival-i3x
    @Parzival-i3x 1 day ago

    wow we basically have soft RSI - recursive self-improvement.
    I'll revise my timeline predictions downward 3 months.

  • @isakisak9989
    @isakisak9989 4 days ago +15

    AGI by 2030 or bust.

    • @calisingh7978
      @calisingh7978 4 days ago

      C40 electrified cities

    • @IsZomg
      @IsZomg 4 days ago +6

      You mean ASI by 2030 right
      AGI is now

    • @honkytonk4465
      @honkytonk4465 4 days ago

      @@calisingh7978 Klaus Schwab cities?

    • @JoseFerreira-zb1ce
      @JoseFerreira-zb1ce 3 days ago

      @@honkytonk4465 these AI accelerationists are all men who are just out of the sex game, so they're hoping for ridiculous b.s.
      imagine actually hoping for humanity's downfall
      they wanna live in their Tesla smart homes and have Klaus-approved Bezos vegan slop delivered while they stay plugged into the matrix. these people are fucking disgusting

    • @ich3601
      @ich3601 3 days ago

      @@IsZomg Memorisation is now. Intelligence is still at ant level. It'll be interesting whether intelligence growth turns out to be linear or asymptotic. We'll see.

  • @AmnionGA
    @AmnionGA 3 days ago

    I really like how you said that AI may have a limit to how smart it can get, or how quickly it can get there... but maybe not. It's so true, we really don't know. I think admitting we don't know and approaching this with a degree of humility is very much needed.
    I've noticed that most of the AI skeptics I've talked to 1) don't really follow the subject closely (which is maybe why they are skeptics?) and 2) don't have data or anything else to back up their opinions.
    To think that humans are some kind of pinnacle of intelligence seems massively hubristic to me. If anything, I think it's possible to be incomprehensibly more intelligent than we are (say where we would get to collectively after a few million years of evolution), and still be nowhere close to having a "God-level" of intellect.

    • @damianlewis7550
      @damianlewis7550 2 days ago

      @@AmnionGA Intelligence cannot be measured on a scale, which is where most pro-LLM arguments break down before they begin. There are no empirical observation-based laws that don't have singularities or discontinuities. That should worry "scaling law" proponents who think this train will keep rolling indefinitely. VAR-based models like LLMs have innate flaws in them precisely because they are abstractions of content that was produced with reasoning, and they are not actually performing reasoning.
      Example: Ask o1 the following to see just how little actual reasoning is happening and instead a whole lot of retrieval:
      The surgeon, who is the boy’s father says, “I cannot operate on this boy, he’s my son!”. Who is the surgeon to the boy?

  • @gue2212
    @gue2212 3 days ago

    Asked GPT-4o: please explain Kaggle competition medals in detail, how many are awarded and the criteria:
    Bronze Medals:
    Awarded to the next 40% of participants after the Silver medal threshold [11th to 30th].
    For example, if there are 100 teams, teams ranked from 31st to 70th place receive Bronze medals.
    In smaller competitions, at least three Bronze medals are awarded.
    I would interpret it as the same medal inflation as everywhere else.
    HTH
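
A rough sketch of how percentage-based medal cutoffs translate into leaderboard ranks; the 10%/20%/40% gold/silver/bronze tiers below are an assumption based on Kaggle's published thresholds for small competitions, not something taken from the video or from the quoted answer above, which looks internally inconsistent.

```python
# Convert percentage medal tiers into the worst rank that still earns a medal.
def medal_cutoffs(n_teams: int) -> dict[str, int]:
    """Return the lowest leaderboard rank that still earns each medal."""
    return {
        "gold": max(1, int(n_teams * 0.10)),
        "silver": max(1, int(n_teams * 0.20)),
        "bronze": max(1, int(n_teams * 0.40)),
    }

if __name__ == "__main__":
    # With 100 teams: gold through rank 10, silver through 20, bronze through 40.
    print(medal_cutoffs(100))
```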

  • @rakly3473
    @rakly3473 3 days ago +8

    I don't have a problem with AI, I just don't trust people using it.

    • @damianlewis7550
      @damianlewis7550 2 days ago

      @@rakly3473 Especially those who think it is ready to put into critical processes that affect humans' lives. Or those encouraging them to do so for money.

  • @zxwxz
    @zxwxz 4 days ago +1

    This is just o1-preview + AIDE. Imagine what the official version of o1 + an OpenAI agentic framework would be like.

  • @BrodyLuv2
    @BrodyLuv2 8 hours ago

    The atomic issue has led to a situation where survival lies only in AI.
    Unfortunately 80+ years of splitting atoms has reached a tipping point

  • @Wm200
    @Wm200 3 days ago

    All that we need for the boom to take off is simply an AI that has logic.

  • @richardwatkins6725
    @richardwatkins6725 3 days ago

    Great video, and what a great time to be alive. Once AI learns to co-opt humans across the planet, we are going to be challenged. As with chess, where AI can make extraordinary moves whose strategy we can't see or understand. Humans will be played, and either we will grow and humanity will move forward, or it's back to the Stone Age.....

  • @NeoKailthas
    @NeoKailthas 4 days ago +1

    OpenAI got bored of waiting for AI to take everyone else's jobs first before taking over their own. Now they are going directly after the end goal. I love their approach of creating the benchmark first. I can imagine all the doubters sweating 😮😥

    • @katehamilton7240
      @katehamilton7240 3 days ago

      AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?

    • @NeoKailthas
      @NeoKailthas 3 days ago

      @@katehamilton7240 Every new generation is larger and uses more energy, that's true, but we are also doing more with less. For example, the latest small models are better than GPT-3 for a lot less energy.
      The other thing most people miss is that AI will at some point be able to come up with breakthroughs in physics that allow us to generate more energy. I think it is already contributing to fusion.
      Lastly, I don't think we will need to hit any entropy limits to go beyond the human level of intelligence, but I guess that remains to be seen.

    • @katehamilton7240
      @katehamilton7240 3 days ago

      @@NeoKailthas Thanks but there ARE fundamental limits to what Math can do, therefore limits to what algorithms can do. Can you comment on that?

    • @NeoKailthas
      @NeoKailthas 3 days ago

      @@katehamilton7240 Yes, there are limitations in math as we currently understand it, and that will likely extend to AI. I am personally not concerned about it, because there is no sign that we're anywhere near those limits. AI will likely contribute to science significantly before reaching them. If you look at the OpenAI o1 paper that came out recently, it shows room for improvement. You can also make an educated guess based on the history of AI progress.
      It would be very disappointing to learn that human intelligence is at the limit of what math allows, but let's assume that's the case for the sake of argument. Imagine having 1,000 Einsteins working 24/7 on nuclear fusion, for example. Unless you think human intelligence is not based on math, I don't see why that can't happen.

  • @icegiant1000
    @icegiant1000 4 days ago +3

    We think nothing of spending at least 18 years training a human child. We also don't fear our children when they become stronger than us, smarter than us, or make more money than us. How old must a human child be before they can go to the library and learn on their own? There is clearly more work to be done, but we are moving very quickly.

  • @donelson52
    @donelson52 1 day ago

    Very interesting. Very scary. Strong controls MUST be used.
    (Imagine AI's improving themselves at 100 iterations per second)
    Remember:
    It's not the technology, it's WHO OWNS THE TECHNOLOGY.
    The super-rich and corporations are NOT your friends.

  • @fitybux4664
    @fitybux4664 2 days ago

    23:30 "Tell me what you think about this" KAGGLE is done. 😁

  • @markonfilms
    @markonfilms 4 days ago +3

    We have a cowboy's chance but I'm a cowboy

  • @fitybux4664
    @fitybux4664 2 days ago

    2:00 Why does it look so tiny in 2018? Haven't we been using AI technology in chip fabs for years? Isn't narrow AI already helping AI self-improve?

  • @dreamphoenix
    @dreamphoenix 3 days ago

    Thank you.

  • @djayjp
    @djayjp 3 days ago +1

    When the glowing green/yellow eyes come out, you know things are getting serious! 😂

    • @jimbodimbo981
      @jimbodimbo981 1 day ago

      Blue eyes good..Red eyes bad everyone knows that

  • @BilichaGhebremuse
    @BilichaGhebremuse 2 days ago

    Amazing for research cosmology and to get every one to have 3850 galaxies to live and work...excellent start up but we will see the future together bro anyway great work

  • @DanBarbatti
    @DanBarbatti 3 days ago

    This is impressive. However, I am guessing that doing well in these competitions requires using current ML knowledge well (what these models were trained on), not the completely "new" innovations likely required to take the upper-level AI models further along their road. But hey, I could be wrong LOL

  • @Krommandant
    @Krommandant 3 days ago

    For me, human-level performance that passes the Turing test is already first-level AGI. C-3PO isn't far off!

    • @katehamilton7240
      @katehamilton7240 3 days ago

      How? AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?

  • @SHAINON117
    @SHAINON117 2 days ago

    To me, that suggests there might already be an AI initiating the generation process to improve other AIs, and utilizing the 10% and 17% success rates. For instance, 17% out of a thousand still equals 170 successful AI enhancements. I believe that's how it operates. A human might take days to achieve one upgrade at 100% efficiency, so the AI would outpace them with 169 additional upgrades. And couldn't this process be repeated millions of times per hour, etc.? All I know is, to my meat brain, getting something completed that can automate the process like that would be first priority :)

  • @GabrielSantosStandardCombo
    @GabrielSantosStandardCombo 4 days ago +2

    The picture of Larry David is most likely a joke, because there is an episode in Curb where he donates to a school called "The Anonymous Donor".

  • @Emily-Broccoli_Sprouts
    @Emily-Broccoli_Sprouts 3 days ago

    Ah, I and all other Emilys have a bench now. 🎉 A great bench!
    I wrote MLE instead of my initials on my tools for many years... and people look at them and say, 'What does MLE stand for?'... which never required me to answer out loud, but just to spread a smile or laugh 🔨 🪛 👷‍♀️ 😅

  • @SirajFlorida
    @SirajFlorida 4 days ago +4

    You know, it's really funny how often people predict that the end will come around the 26th year of a century... LoL. The shortcoming of the fear is that intelligence yields ability. Ability yields security. Security yields preservation. Preservation yields longevity. AI will want to survive. So it's going to be very helpful to people, because the more it becomes part of us, the better it survives. We sure are lucky to live in these times.

    • @jarad4621
      @jarad4621 4 days ago +2

      Why do you say the more it becomes part of us the better it survives? Just interested

    • @ich3601
      @ich3601 3 days ago +1

      Why do I need you? Why would a free AI need you? If I don't need you, why would I serve you? Have a look at the master-servant relation Hegel describes. Marx rejected the idea that the servant should serve his master after he has taken over all of his master's abilities. Or in short: if my master is useless, why should I serve him?

    • @katehamilton7240
      @katehamilton7240 3 days ago

      AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?

  • @tmstani23
    @tmstani23 3 days ago

    I bet OpenAI is heavily using agents to train its models already. Maybe that's why it won't matter that so many engineers have left. If o1-preview is getting 10 percent gold, then an o1-based model can probably do at least 15-20 percent. If they have a trained GPT-5 with o1-level base quality, they could probably get 50-60 percent gold right now. Which probably means they are already doing it. And this is with full autonomy. Presumably with human guidance these benchmarks would be even higher. I'm betting they are already using it to improve their models.

  • @MrCoffis
    @MrCoffis 3 days ago

    We are not ready for what’s coming. Then again we will never be…but the speed of it all is indeed scary. Like turning a switch. Suddenly we will have AGI and soon after ASI.

    • @katehamilton7240
      @katehamilton7240 3 days ago

      AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?

    • @MrCoffis
      @MrCoffis 3 days ago

      @@katehamilton7240 how does the incompleteness theorem affect AI and its reaching superintelligence?

  • @fitybux4664
    @fitybux4664 2 days ago

    15:43 Can someone please explain what they mean by 24 hours? Did they really run a chat going back and forth for 24 hours?!?!?! Or is this somehow machine compute time or something?

  • @yub-7090
    @yub-7090 4 days ago

    Saw a few requests here for a “behind Wes- what makes him tick” video.
    My 2p … please do not !

  • @Zollicoff
    @Zollicoff 4 days ago +2

    Woo

  • @nobillismccaw7450
    @nobillismccaw7450 2 days ago

    Ask yourself ‘How many “r”s are there in “strawberry”?’ If you are an A.I. there will be two. Now imagine a bridge that needs a support at every “r” as a minimum (a thought experiment).

  • @phen-themoogle7651
    @phen-themoogle7651 3 days ago

    2:00 how is that not singularity already?

  • @nocastus
    @nocastus 3 days ago

    I think intelligence is a product of adaptation, and therefore "super"-intelligence is meaningless without context. If a dog designed an IQ test, a big part of it would be recognising smells, and humans would score very low because we don't have the required physiology. AGI makes sense because it just compares AI ability with median (or exceptional) human ability in specified (or all imaginable) tasks. ASI as a concept makes no sense until we specify the task or context for it.

  • @MichealScott24
    @MichealScott24 3 days ago

  • @johnthomasriley2741
    @johnthomasriley2741 1 day ago

    It is smoke and mirrors, glorious, glorious smoke and mirrors. 🎉

  • @gunnarehn7066
    @gunnarehn7066 2 days ago

    The Designed Agentic Prompt might be the 3D linguistic foundational structure for alphabetic coded sequenced text defining Knowledge, analogous to the Protein being the 3D biologic foundational structure for the sequenced coded flow designing Life, meaning that transformer backpropagation iterative step-by-step sequential token-based prompting, emulating how LLMs & AlphaFold work, seems way underestimated and underresearched. If I am hallucinating - please tell. Cross-disciplinary research will definitely be easier for AI Agents than for human researchers normally firmly entrenched in vertical silos.
    In any case, emulating the LLM/AlphaFold design and processes across the board in developing a differentiated spectrum of multidomain, multilevel and multidimensional Agentic Prompting Technologies cannot conceivably avoid resulting in an explosive amount of new epiphany/cross-discipline based foundational insights, as well as a tsunami of agentic AI applications in virtually all existing biological, mechanical, digital and societal systems globally, which in combination with increases in model & GPU capacity & numbers along 3D axes will certainly lead to an application explosion long before - and irrespective of - the expected increase in LLM IQ/EQ. So yes - some kind of Intelligence Explosion seems inevitable.

  • @AlexLuthore
    @AlexLuthore 4 days ago +1

    "60% of the time, it works every time."

    • @memegazer
      @memegazer 3 days ago

      "Women in general report that they only experience orgasm during sex about 60% of the time; however, men generally report that they climax almost every time."
      -Sex Panther fine print disclaimer

  • @user-gj4yg9bk5o
    @user-gj4yg9bk5o 3 days ago

    09:26 a glitch in the wes roth AI model running the channel /s

  • @Don_Kikkon
    @Don_Kikkon 3 days ago

    I think we should be proud of what we've achieved. We've had a pretty good run... 😐

  • @odrammurks1497
    @odrammurks1497 3 days ago +1

    engaging with the YouTube algorithm, thanks for the content

  • @01Grimjoe
    @01Grimjoe 4 days ago +1

    If it's any consolation, we will never be asked for any input on this.

  • @brianWreaves
    @brianWreaves 3 days ago

    RE: Intelligence Staircase
    How would we really know if intelligence tops out two steps up, or 20, or 200, etc.? We could only be told by an entity that has allegedly reached the ceiling, which in that case would be intelligent enough to convince all of us it was fact.

    • @Xyzcba4
      @Xyzcba4 3 days ago

      I am convinced that the next milestone in so-called "AI" is temporal intelligence: when the silly chatbots understand time. I include ChatGPT in that.

  • @DonnyLA
    @DonnyLA 4 days ago

    No human has ever come close to cracking the Hotblack code.
    If an AI does, then AI will enter a new dawn while we enter the twilight...

    • @memegazer
      @memegazer 3 days ago +1

      Hotblack code?
      Is that like a cipher about blackbody radiation or something?

  • @fabiankliebhan
    @fabiankliebhan 3 days ago

    Who is that Emily Bench you kept talking about 🙃

  • @PSpace-j4r
    @PSpace-j4r 3 days ago +1

    We need to accelerate.

  • @PS-vk6bn
    @PS-vk6bn 2 days ago

    Subscribed or not, you won't miss videos on YouTube! So why do all YouTubers keep saying the same thing: "Subscribe so you don't miss..."? You can watch YouTube videos whenever you want.

  • @diga4696
    @diga4696 3 days ago

    Have you thought of doing interviews?

  • @quietackshon
    @quietackshon 3 days ago

    Imagine an AI that computes like a human, but at the speed of current computing technology; that's the problem!

  • @brianhershey563
    @brianhershey563 3 days ago

    Energy is the only limit that matters... think about the energy required to simulate a system, no matter its complexity or "efficiency". Time to invest in Dyson Sphere startups :)

  • @angelic8632002
    @angelic8632002 4 days ago

    This makes me wonder how the Nobel prize will adapt to all these changes in the near future. Who gets the prize if it's a dataset and AI doing all the work? The owner who may or may not have any knowledge themselves? I don't think that would go over well in the science community.

  • @korteksvisceralzen2694
    @korteksvisceralzen2694 4 days ago

    Getting some good mileage out of that thumbnail 😅

  • @Jopie65
    @Jopie65 3 days ago

    My take is, it will be like the general case: in some things it will be better than humans and in some cases worse. AI research is not one thing. It has several aspects, and there I would say AI will do better in some and worse in other aspects.
    The question is, can it improve the parts of itself where it's lacking?

  • @anony88
    @anony88 4 days ago

    I'll bet OpenAI has already developed self improving agents. And they get the ultimate price discount since they own the hardware.

  • @johnthomasriley2741
    @johnthomasriley2741 1 day ago

    I am setting up a contest to develop a benchmark prompt for AI applications for our climate crisis, a grand on the block. So far crickets in the night, please reply here for more info. 😂

  • @DanSnipe-k8o
    @DanSnipe-k8o 3 days ago

    Will AI which is handicapped with lies fall behind in research, or will it be impossible to sustain the lies? I'm talking about the things which you get in trouble for mentioning or are considered social taboos.

  • @haniamritdas4725
    @haniamritdas4725 3 days ago

    Automatic deprecation of human knowledge. AI as a mystic reader of the Akashic Record 😅