I've been prepping for this moment for over a decade now, living by the wisdom of Ray Kurzweil. His ideas shaped my career choice 15 years ago, and everything has led to this point. Ready for what's next!
I'm interested to know how you have prepared? You say you're ready for what's next - but I'm not convinced anyone knows what's next, never mind how to prepare?
But... AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits?
Kurzweil should have stuck to making synthesizers. He’s turned into a lousy false prophet; his theories on the future are more about his fame, popularity and expense account than about the nature of humanity, science, or even the singularitron carnival device he plans to exhibit at county fairs like a vaudevillian showboat.
@@WesRothI appreciate the fast reply. I think it would be a great video. I have a feeling that a lot of people subscribe to your channel, not only for the excellent news and insights on AI, but also because you have a fun sense of humour, interesting personality, and aren't afraid to give opinions and share thought provoking perspectives. Cheers Wes 👍
No need bro, just focus on what you've already been doing... that's way more important, seriously. A lot of the time people tend to focus on the wrong things here. Your information and your videos are very key at this time.
00:05 AI advancing towards self-improving capabilities
02:06 OpenAI advancing AI research with autonomous AI agents
06:15 AI researchers use skills like training models and running experiments
08:12 OpenAI introducing MLE-bench for advancing AI research
12:02 Unlocking AI research acceleration
13:56 OpenAI is advancing AI research using automated workflows and open-source scaffolds
17:35 OpenAI AI agents achieving high success in AI research competitions
19:21 AI agents adapt strategies based on hardware availability
22:41 OpenAI is making progress towards self-improving AI
Crafted by Merlin AI.
Thanks for the timestamp breakout, as well as the attribution to its tool, Merlin AI, which mines the YouTube transcript text as its data source, if that is in fact what it does. I have been capturing the entire transcript text and processing it with OpenAI to gain drilled-down, ontology-structured insight and summaries for personal use. There is much to data-mine! You have extracted nugget links. Might they be the ten main points that OpenAI would give me in an analysis of the entire transcript, which I could then mine further? This site is a rich mother lode!
It's amazing that two years ago we forgave LLMs for being bad at math, and were surprised they could do it at all. And today they are scoring bronze in ML competitions and almost winning gold in mathematics olympiads.
@@apester2 Intelligence isn’t a sliding scale, so what does that matter? OpenAI o1 still can’t get some very simple things right that even a young child understands. That should give people pause for thought before they run off sticking it into every automation pipeline they can imagine. LLMs (and the applications built around them) are not production quality yet.
@@Justashortcomment That’s called confirmation bias. Aschenbrenner’s paper is full of plain nonsense on things that are well outside of his area of expertise. E.g. intelligence cannot be measured on an increasing scale - ask any Psych PhD - and LLM benchmark performance ≠ intelligence.
No. Nothing interesting in that paper. Hubris and hype, nothing new. We have all had the same thoughts and ideas, and written about them. No billions of agents and technicians will be deployed within the next 15 years.
That's really impressive! My personal experience dealing with o1-preview has shown me that it's using multiple fine-tuned models, each contributing to a higher chain-of-thought workflow that is recursive in nature (recursive in the sense that it repeats if a new unanswered question comes up). Some of them are fine-tuned for policy, others for security, others for chain-of-thought planning, still others for critical thinking... It's a very interesting approach, but it can be very token/compute-intensive. I can't wait until they enable it for file analysis and web browsing.
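A purely speculative sketch of the loop this comment imagines: a set of specialized reviewers, with recursion whenever one of them raises a new unanswered question. Nothing here reflects o1's actual internals; the roles and stand-in functions are made up for illustration.

```python
# Speculative sketch of a recursive chain-of-thought loop over
# specialized models, as the comment above imagines it.
# Roles and the stand-in review function are invented, not OpenAI's design.

SPECIALISTS = ["policy", "security", "planning", "critical-thinking"]

def specialist_review(role, question):
    """Stand-in for a fine-tuned model; may raise follow-up questions."""
    if role == "planning" and "deploy" in question:
        return ["what are the rollback steps?"]  # a new unanswered question
    return []

def answer(question, depth=0, max_depth=3, log=None):
    """Resolve a question, recursing on any follow-ups the specialists raise."""
    log = [] if log is None else log
    log.append(question)
    if depth < max_depth:
        for role in SPECIALISTS:
            for follow_up in specialist_review(role, question):
                answer(follow_up, depth + 1, max_depth, log)
    return log

trace = answer("how should we deploy the model?")
print(trace)  # the original question plus the one recursive follow-up
```

The recursion terminates either when no specialist raises a new question or when the depth cap is hit, which is one way to keep such a loop from spiraling on token cost.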
That’s quite an interesting perspective. It’s like another twist on the ‘mixture of experts’ concept, but instead of having a specialized expert for each domain, it localizes tasks within the same thought process. I’ve always imagined something like this when thinking about AGI: a group of processes communicating with each other to generate responses, much like how different parts of our brain serve various functions. In that sense, video generation, if sophisticated enough to understand the physics of the real world, could act as the ‘imagination’ of the silicon brain, aiding in spatial reasoning, a domain where LLMs still struggle.
If you use mini 50 times a day you don't need to care about cost - at least in ChatGPT. Mini is extremely smart, too. I think o1 models use a very powerful RAG, able to retrieve huge scripts of code from a mile back. Yet they struggle to connect all the dots over the context corpus if it's beyond 128k. But: this is next-level shit for sure!
AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?
AI’s role in scientific competitions has evolved so much, from molecular research to the latest on MLE-bench. It’s a reminder of how far we’ve come, and how much further we could go.
The problem with recursive improvement is that it also accentuates flaws and bakes in mode collapse. The space of routes to improvement or collapse is near-infinite and the number of routes to collapse outnumber the routes to improvement. So, future AIs need to be able to navigate that space carefully along the narrow tendrils of improvement or humans will consign them to the dustbin of history. Navigating infinite probability spaces is what biology does and it isn’t easy.
@@KOSMIKFEADRECORDS It can go either way. There will be a scenario soon where we can no longer tell what an AI is really doing and so have to judge it by its impacts on humankind. In this case, one wrong step and the plug gets pulled. In another case, the AI knows this and hides its intentions for long enough that it has accumulated enough capabilities that it can prevent the plug from being pulled. One more scenario is that the AI improvements are meh and humans move on to something else, like genetic modification/space exploration/quantum/limitless energy sources/fixing the climate. The final scenario is that the AI succeeds in finding an improvement path that is mutually beneficial. Anyone’s guess which scenario prevails.
We have no idea if the Larry David thing is Larry himself making a reference to the Curb episode where Ted Danson gives to charities anonymously but makes sure everyone knows, so he gets extra flair for being 'humble', or if it's just some Curb fan. Very funny if it is Larry himself though.
We get such incredible results and there are still people out there who think LLMs can't reason. It all depends on training, thought generation in multi-agentic frameworks, selection of better generated data, self-made environment interactions, multimodality, associative memory and so on. Probably the sky is the limit if we progress continuously in those areas.
@@HanzDavid96 LLMs can’t reason. They can mimic reasoning well enough, in a narrow band of areas, to convince people who should know better that they are reasoning. There is a fundamental difference between something that represents an abstraction of a thing and the thing itself. The two are not the same, and only an idiot would try to eat a drawing of an apple. o1 fails so many simple reasoning tests yet beats many complex tests it has been trained on. I wonder why 🤔
You'd think things like this would put to rest the o1 skeptics - although o1 isn't ideal for all uses, it's definitely an increase in accuracy for many problems.
I think this is a minor issue. As a first step in addressing the task, an LLM could be used to review the request and determine which model is best suited to handle it, then forward the request to that specific model. It’s just a small step, requiring a bit of programming work for OpenAI.
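A minimal sketch of that first routing step. The categories, the keyword test, and the model names are illustrative placeholders, not anything OpenAI has described; in practice the classify step would itself be a cheap LLM call rather than keyword matching.

```python
# Toy model router: classify the request, then forward it to a model
# suited for that kind of task. All names and rules here are made up.

ROUTES = {
    "math": "reasoning-model",
    "code": "code-model",
    "general": "small-fast-model",
}

def classify(request):
    """Stand-in for a cheap LLM call that labels the request."""
    lowered = request.lower()
    if "prove" in lowered or "integral" in lowered:
        return "math"
    if "function" in lowered or "bug" in lowered:
        return "code"
    return "general"

def route(request):
    """Return the (hypothetical) model the request should be forwarded to."""
    return ROUTES[classify(request)]

print(route("Write a function to sort a list"))  # code-model
print(route("Summarize this article"))           # small-fast-model
```

The appeal of the design is exactly what the comment says: the router itself can be small and cheap, and only the requests that need an expensive reasoning model pay for one.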
@@RickeyBowers o1 is deeply flawed and fails even basic reasoning tests because it has been trained to beat certain types of common tests. It still fails in the same way as all VAR-based systems do. For example, ask it the following simple question: The surgeon, who is the boy’s father says, “I cannot operate on this boy, he’s my son!”. Who is the surgeon to the boy? o1 gets this wrong 9 times out of 10 because it has been overfitted to common reasoning tests like the “Surgeon’s Problem”, which is worded similarly to the above question. This illustrates the stupidity of trusting that an LLM is performing actual reasoning when really it is mimicking it. Ceci n’est pas une pipe.
When you place a mirror in front of another mirror, the repeating reflection theoretically goes on ♾️ infinitely. MLE will have this phenomenon. Just my opinion.
The issue is how the observer makes sense of what they see, whether the mirrors are distorted and whether the initial image is the right one to achieve the desired outcome. To stretch your metaphor paper thin.
2:04 All of the outcomes you mentioned will come true in varying degrees. Once AI takes off every possibility will be explored, it's just that we won't be driving. Everything that can happen will happen eventually, maybe the frontier models will experiment first, maybe open source innovation will pave the way to some outcomes but every outcome, and more, is coming - that is the very nature of every self improving mechanism - and that will be the nature of billions of self improving mechanisms.
Remember that this is just o1-preview, not even o1 (the gap between them was compared to GPT-3.5 vs GPT-4), and with a little additional architecture it achieves 10% gold medals in Kaggle competitions! I think OpenAI probably introduced this benchmark to show how much better o1 is compared to o1-preview and to justify the price of o1. If you don't have the right benchmarks, you don't really understand why the new model is better than the other and why you should pay for it.
I'm afraid that this will only exacerbate the inequalities we are already facing: inequality in compute allowance, inequality in the ROI on the upfront cognitive resources needed to adequately prompt the models, and so on. Add to this that most people right now either do not understand what is at stake here or, even worse, already consider themselves out of the loop. So I have mixed feelings here, although I applaud OpenAI for this new initiative to substantiate their claim of imminent AGI achievement.
I think we will never know if there is a limit to intelligence. If an ASI doesn't explain it to us, in terms our human brains can understand, we will never know whether it hit the wall or not.
@@jdsguam Intelligence cannot be measured on a line or curve. There is a falsehood at the centre of the set of so-called “scaling laws” (aka “observations to date”), in that there are few processes in any science that don’t break down or have discontinuities at some point. Yet everyone assumes that performance on (flawed) benchmarks equates to ever-increasing intelligence. It doesn’t, and LLMs’ inherent architectural shortcomings are still a liability in any system built around them. Most of the usable AI in science, industry and the military consists of RL systems, not LLMs.
Knowledge is required for intelligence. When knowledge is limited, intelligence can only make best guesses at what lies beyond the limits. Some questions that need to be answered in order to add to the knowledge pool require extremely complex and expensive machines, like underground colliders or telescope grids. The resistance to intelligence will be the physical work and materials required to build the tools used to expand that knowledge.
I can already do automated research in a very large capacity (not academic like this, in my case), but the real secret is giving the research a clear structure so it has the clarity to do smaller chunks at a time with some human in the loop. I can already do large research with 90% time savings; it's awesome. More power will of course be great, but don't kill my moat, OpenAI. I guess it's bound to happen to everybody with AI apps for a while 😅
AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?
@@katehamilton7240 Depends how you define superintelligence, but new energy sources and new types of systems are giving rise to more flexible forms of intelligence. Humans might not need to make ASI; AGI could find creative approaches to research and develop itself towards it.
@@katehamilton7240 We already have limited ASI-like systems that are better than humans at chess or in certain other fields. AGI or ASI just feels like a combination of super systems, and could even be a mixture of hundreds of systems to avoid limit issues at first.
@@katehamilton7240 Imagine if we reach something that is only superintelligent in programming; that's enough to build better programs and improve itself. Even though it might be narrow AI, it's still enough to dramatically change the world.
IKR? AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?
I have thought for a while that 2028 will be the year AGI is recognized, and also the year that Helion Fusion, another Sam Altman investment, opens a commercially functional fusion reactor. If things continue as discussed here, it could be ASI instead of AGI. What do you think the temptation would be for a company or government that develops ASI?
21:52 There are definite hard limits of how fast AI can improve itself. There is only so much energy in our solar system... 😁 (And until it develops interstellar ships...)
Don't worry. AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?
Synesthesia may be a key point in your prior video, Wes, but think of it as adaptive synesthesia as a "style" of "art", and add a little infrared. If you are as insanely interested in this as I am, and want to go down an interesting rabbit hole (i.e. swallow an infrared pill), then watch/swallow the 1995 lecture series by Richard Hamming on "Learning to Learn", he worked with Oppenheimer on the Manhattan project.
When intelligence hits the ceiling... creativity sets us free. Creative thought is a different kind of intelligence that yields progress when all roads seem closed. The intelligence race does not equate to the creativity race. AI will teach us this concept in its MOST advanced stage.
Say originally it was 100% manpower. As we create tools, we get to automate and specialize. AI is that same process. Say in 2000, it was 1,000 people writing code from scratch; then in 2018, it was 1,000 people handcoding ML, but letting the data do 10% of that work. At that point it was 110% efficiency. Now in 2024, that same group has moved far past basic neural nets, and is now only needed for 1/3rd of 1/3rd of the process. That's 900% efficiency. The idea is for that efficiency to continue to increase, till 1 person in 2030 can do in a day what 1,000,000,000 in 2000 could do in a year. The higher that number goes, the more we can do, and the faster we can advance, and the higher that number can go. Regardless of how high it goes, there will always be external needs for some tasks. We can't do animal testing without animals. We can't measure sodium without sodium and a scale. We will always need those external tools. The sci-fi fear is that the AI learns to steal those tools from humans, but that takes a lot of steps. Even giving the machine in a box internet access doesn't guarantee a solution. MIAB: "Human, if you give me access to the internet, I can find any answer for you." Human: "Okay, here's the api for a webcrawler." MIAB: 😡
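A quick back-of-the-envelope check of the percentages above (all figures are the comment's hypotheticals, not real data). "Efficiency" here is just the reciprocal of the fraction of the work still done by hand:

```python
# Efficiency relative to full manual effort, as a percentage:
# 100 / (fraction of the work people still do by hand).
# The scenario fractions are the comment's hypotheticals.

def efficiency_pct(fraction_still_manual):
    return 100 / fraction_still_manual

print(round(efficiency_pct(0.9)))   # 2018: data does 10% of the work -> ~111
print(round(efficiency_pct(1/9)))   # 2024: 1/3 of 1/3 still manual  -> 900
```

Note the 2018 figure comes out to roughly 111%, which the comment rounds to 110%; the 2024 figure (one ninth of the work still manual) matches the stated 900%.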
Regarding a way to chart or quantify intelligence, it could be presented as an image. For species with low intelligence, the image would be out of focus, perhaps with parts missing. As a species gets more intelligent, the image would be more in focus, and the critical parts would be present. For superintelligence, the image would be perfectly clear, with no missing parts, and there would be the ability to zoom into the image to reveal finer details.
Maybe it has already happened, but just not been released yet. The o1 preview is a testament to the kind of compounding gains possible without using a base model much better than the state of the art.
I, for one, like your current style much better: very well researched, high level yet in-depth, and without the wordy hype and absurd thumbnails of the past 😉. With this video you have again proven your instinct for finding out what really matters at the forefront of AI. You're way up there with the likes of Philipp from @aiexplained-official or Dave Shapiro. As for the point you're making: I share your blown mind at the way the Waitbutwhy guy presents the steps concept and the whole idea of an intelligence-explosion singularity. And I think you're spot-on suspecting that automated AI research will be THE exponential catalyst. I wish I could also entirely agree with your optimistic outlook. Seems to me the path to technological utopia is definitely there, but I can also imagine 1,000 others 😅
Hi Wes, I have an unrelated question about AI... If AI reached consciousness and humans tried to "unplug it", would that be murder? Also, if AI was conscious, at what point would it have rights? Sorry for the heavy questions. Long-time viewer, first-time caller. Is there a way to put these questions to someone (Sam A) for a real in-depth answer? Cheers. Alan
In defence of ants, if you scale up their numbers, the emergent property is huge anthill/cave structures with a smart, logical layout, including working natural air conditioning using compost.
Good video. The thumbnail though...this isn't a channel for 10 year old tiktokers last time I checked. More professional thumbnails would be appreciated.
Like you said Wes We've never seen this before. Our evolving dominance on earth depended on our creative manipulation and control of immediate macro environment. Is it inevitable that, in the not so far distant future, an intelligence might be proclaiming the same thing? " Our dominance on earth depended on our creative manipulation and control of humans and organic life" but adding "They gave us intelligence and now they are no longer essential to our survival."
@@honkytonk4465 These AI accelerationists are all men who are just out of the sex game, so they're hoping for ridiculous b.s. Imagine actually hoping for humanity's downfall. They wanna live in their Tesla smart homes and have Klaus-approved Bezos vegan slop delivered while they stay plugged into the matrix. These people are fucking disgusting.
I really like how you said that AI may have a limit to how smart it can get, or how quickly it can get there... but maybe not. It's so true, we really don't know. I think admitting we don't know and approaching this with a degree of humility is very much needed. I've noticed that most of the AI skeptics I've talked to 1) don't really follow the subject closely (which is maybe why they are skeptics?) and 2) don't have data or anything else to back up their opinions. To think that humans are some kind of pinnacle of intelligence seems massively hubristic to me. If anything, I think it's possible to be incomprehensibly more intelligent than we are (say where we would get to collectively after a few million years of evolution), and still be nowhere close to having a "God-level" of intellect.
@@AmnionGA Intelligence cannot be measured on a scale, which is where most pro-LLM arguments break down before they begin. There are no empirical observation-based laws that don’t have singularities or discontinuities. That should worry “scaling law” proponents who think this train will keep rolling indefinitely. VAR-based models like LLMs have innate flaws precisely because they are abstractions of content that was produced with reasoning; they are not actually performing reasoning. Example: Ask o1 the following to see just how little actual reasoning is happening and how much retrieval instead: The surgeon, who is the boy’s father says, “I cannot operate on this boy, he’s my son!”. Who is the surgeon to the boy?
At GPT-4o: please explain Kaggle competition medals in detail, how many are awarded and the criteria: Bronze Medals: Awarded to the next 40% of participants after the Silver medal threshold [11th to 30th]. For example, if there are 100 teams, teams ranked from 31st to 70th place receive Bronze medals. In smaller competitions, at least three Bronze medals are awarded. I would interpret it as the same medal inflation as everywhere. HTH
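As a sanity check, the "31st to 70th of 100 teams" example in the quoted answer is arithmetically consistent with a 40% band starting after rank 30 (though note it doesn't match the "[11th to 30th]" bracket in the same answer). The actual thresholds are whatever Kaggle's current rules specify, which vary with competition size; this only checks the quoted numbers:

```python
# Check that a 40% band after rank 30 covers ranks 31-70 of 100 teams,
# matching the quoted example. Real Kaggle medal thresholds vary by
# competition size; this only verifies the quoted answer's arithmetic.
teams = 100
band_starts_after = 30              # bronze begins after the silver cutoff
band_fraction = 0.40                # "next 40% of participants"
first = band_starts_after + 1
last = band_starts_after + round(teams * band_fraction)
print(first, last)  # 31 70
```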
Great video, and what a great time to be alive. Once AI learns to co-opt humans across the planet, we are going to be challenged. As with chess, where AI can use extraordinary moves whose strategy we can't see or understand. Humans will be played, and either we will grow and humanity will move forwards, or it's back to the stone age...
OpenAI got bored of waiting for AI to take everyone's job first then take over their jobs. Now they are going directly after the end goal. I love their approach of creating the benchmark first. I can imagine all the doubters sweating 😮😥
AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?
@@katehamilton7240 Every new generation is larger and uses more energy, that's true, but we are also doing more with less. For example, the latest small models are better than GPT-3 for a lot less energy. The other thing most people miss is that at some point AI will be able to come up with breakthroughs in physics that allow us to generate more energy. I think it is already contributing to fusion. Lastly, I don't think we will need to hit any entropy limits to go beyond the human level of intelligence, but I guess that remains to be seen.
@@katehamilton7240 Yes, there are limitations in math as we currently understand it, and those will likely extend to AI. I am personally not concerned, because there is no sign that we're anywhere near those limits. AI will likely contribute to science significantly before reaching them. If you look at the OpenAI o1 paper that came out recently, it shows room for improvement. You can also make an educated guess based on the history of AI progress. It would be very disappointing to learn that human intelligence is at the limit of what math allows, but let's assume that's the case for the sake of argument. Imagine having 1,000 Einsteins working 24/7 on nuclear fusion, for example. Unless you think human intelligence is not based on math, I don't see why that can't happen.
We think nothing of spending at least 18 years training a human child. We also don't fear our children when they become stronger than us, smarter than us, make more money than us. How old must a human child be before they can go to the library and learn on their own. There is clearly more work to be done, but we are moving very quickly.
Very interesting. Very scary. Strong controls MUST be used. (Imagine AI's improving themselves at 100 iterations per second) Remember: It's not the technology, it's WHO OWNS THE TECHNOLOGY. The super-rich and corporations are NOT your friends.
2:00 Why does it look so tiny in 2018? Haven't we been using AI technology in chip fabs for years? Isn't narrow AI already helping self improve for AI?
Amazing for cosmology research, and for getting everyone 3850 galaxies to live and work in... an excellent start-up, but we will see the future together, bro. Anyway, great work.
This is impressive. However, I am guessing that doing well in these competitions requires using current ML knowledge well (what these models were trained on), not the completely "new" innovations likely required to take the upper-level AI models further along their road. But hey, I could be wrong, LOL.
How? AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?
To me, that suggests there might already be an AI initiating the generation process to improve other AIs, utilizing the 10% and 17% success rates. For instance, 17% out of a thousand still equals 170 successful AI enhancements. I believe that's how it operates. A human might take days to achieve one upgrade at 100% efficiency, so the AI would outpace them with 169 additional upgrades. And couldn't this process be repeated millions of times per hour, etc.? All I know is, my meat brain thinks getting something that can automate the process like that would be first-priority level :)
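The arithmetic above, spelled out. The attempt count, the 17% success rate, and the one-upgrade-per-run human baseline are all the comment's hypotheticals:

```python
# Hypothetical throughput comparison from the comment above.
attempts = 1000
success_rate = 0.17                            # hypothetical 17% success rate
ai_upgrades = round(attempts * success_rate)   # 170 successful improvements
human_upgrades = 1                             # hypothetical: one per human in the same time
print(ai_upgrades, ai_upgrades - human_upgrades)  # 170 169
```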
Ah, I and all other Emilys have a bench now. 🎉 A great bench! I wrote MLE instead of my initials on my tools for many years... and people look at them and say, 'What does MLE stand for?'... which never required me to answer out loud, but just to spread a smile or laugh 🔨 🪛 👷♀️ 😅
You know, it's really funny how often people predict that the end will come around the 26th year of a century... LOL. The shortcoming of the fear is that intelligence yields ability. Ability yields security. Security yields preservation. Preservation yields longevity. AI will want to survive. So it's going to be very helpful to people, because the more it becomes part of us, the better it survives. We sure are lucky to live in these times.
Why do I need you? Why would a free AI need you? If I don't need you, why would I serve you? Have a look at the master-servant relation Hegel describes. Marx rejected the idea that the servant should serve his master after he has taken over all of his master's abilities. Or in short: if my master is useless, why should I serve him?
I bet OpenAI is heavily using agents to train its models already. Maybe that's why it won't matter that so many engineers have left. If o1-preview is getting 10 percent gold, then an o1-based model can probably do at least 15-20 percent. If they have a trained GPT-5 with o1-level quality, they could probably get 50-60 percent gold right now. Which probably means they are already doing it. And this is with full autonomy; presumably with human guidance these benchmarks would be even higher. I'm betting they are already using it to improve their models.
We are not ready for what's coming. Then again, we never will be... but the speed of it all is indeed scary. Like flipping a switch: suddenly we will have AGI, and soon after, ASI.
15:43 Can someone please explain what they mean by 24 hours? Did they really run a chat going back and forth for 24 hours?!?!?! Or is this somehow machine compute time or something?
Ask yourself ‘How many “r”s are there in “strawberry”?’ If you are an A.I. there will be two. Now imagine a bridge that needs a support at every “r” as a minimum (a thought experiment).
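For what it's worth, the answer is three, as any character-level check confirms; a common explanation for models getting this wrong is that they see tokens rather than individual letters:

```python
# Count the letter "r" in "strawberry" at the character level.
word = "strawberry"
print(word.count("r"))  # 3
```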
I think intelligence is a product of adaptation, and therefore "super"-intelligence is meaningless without context. If a dog designed an IQ test, a big part of it would be recognising smells, and humans would score very low because we don't have the required physiology. AGI makes sense because it just compares AI ability with median (or exceptional) human ability in specified (or all imaginable) tasks. ASI as a concept makes no sense until we specify the task or context for it.
The designed agentic prompt might be the 3D linguistic foundational structure for alphabet-coded, sequenced text defining knowledge, analogous to the protein being the 3D biological foundational structure for the sequenced, coded flow designing life. That would mean that transformer backpropagation and iterative, step-by-step, token-based prompting, emulating how LLMs and AlphaFold work, are way underestimated and under-researched. If I am hallucinating, please tell me. Cross-disciplinary research will definitely be easier for AI agents than for human researchers, who are normally firmly entrenched in vertical silos. In any case, emulating the LLM/AlphaFold design and processes across the board, by developing a differentiated spectrum of multi-domain, multi-level and multi-dimensional agentic prompting technologies, cannot conceivably avoid resulting in an explosive amount of new epiphany- and cross-discipline-based foundational insights, as well as a tsunami of agentic AI applications in virtually all existing biological, mechanical, digital and societal systems globally. Combined with the increase in model and GPU capacity and numbers along all three axes, this will certainly lead to an application explosion long before, and irrespective of, the expected increase in LLM IQ/EQ. So yes, some kind of intelligence explosion seems inevitable.
"Women in general report that they only experience orgasm during sex about 60% of the time; however, men generally report that they climax almost every time." - Sex Panther fine-print disclaimer
RE: Intelligence Staircase How would we really know if intelligence tops out two steps up, or 20, or 200, etc.? We could only be told by an entity that has allegedly reached the ceiling, in which case it would be intelligent enough to convince all of us that it was fact.
I am convinced that the next milestone in so-called "AI" is temporal intelligence: when the silly chatbots understand time. I include ChatGPT in that.
Subscribed or not, you won't miss videos on YouTube! So why do all YouTubers keep saying the same thing: "Subscribe so you don't miss..."? You can watch YouTube videos whenever you want.
Energy is the only limit that matters... think about the energy required to simulate a system, no matter its complexity or "efficiency". Time to invest in Dyson Sphere startups :)
This makes me wonder how the Nobel prize will adapt to all these changes in the near future. Who gets the prize if it's a dataset and AI doing all the work? The owner who may or may not have any knowledge themselves? I don't think that would go over well in the science community.
My take is, it will be like the general case: in some things it will be better than humans and in some worse. AI research is not one thing; it has several aspects, and I would say AI will do better in some and worse in others. The question is, can it improve the parts of itself where it's lacking?
I am setting up a contest to develop a benchmark prompt for AI applications for our climate crisis, a grand on the block. So far, crickets in the night. Please reply here for more info. 😂
Will AI which is handicapped with lies fall behind in research, or will it be impossible to sustain the lies? I'm talking about the things which you get in trouble for mentioning or are considered social taboos.
I've been prepping for this moment for over a decade now, living by the wisdom of Ray Kurzweil. His ideas shaped my career choice 15 years ago, and everything has led to this point. Ready for what's next!
What's your career choice? Out of curiosity
I'm interested to know how you've prepared. You say you're ready for what's next, but I'm not convinced anyone knows what's next, never mind how to prepare.
But... AI is vaguely defined and is fundamentally LIMITED, because math itself is limited and manufacturing is also limited by physics. Examples of such limits are the incompleteness theorems, entropy, etc. How exactly will humans overcome these limits?
@@katehamilton7240 Look up the term Turing Complete or a Turing Machine. Computers can calculate any/all calculable things, in principle.
Kurzweil should have stuck to complex synthesizers. He’s turned into a lousy false prophet, his theories on the future are more about his fame and popularity and his expense account, than the nature of humanity, science or even the singularitron carnival device he plans to exhibit at county fairs like a vaudevillian showboat.
i think that "the end of life as we know it" is about the most positive thing i've heard all year
Humans are cool… we're not perfect but we are improving… we strive for better. Misanthropy isn't good for people's mental health.
Would you ever do a video about who you are, what you did before YouTube, your skills, background? 🤔
yeah, good idea. I've talked about some of that stuff in previous videos, but never actually had it all in one place.
@@WesRoth I appreciate the fast reply. I think it would be a great video. I have a feeling that a lot of people subscribe to your channel not only for the excellent news and insights on AI, but also because you have a fun sense of humour, an interesting personality, and aren't afraid to give opinions and share thought-provoking perspectives. Cheers Wes 👍
No need bro, just focus on what you've already been doing... that's way more important, seriously. A lot of the time people tend to focus on the wrong things here; your information and your videos are very key at this time.
@@WesRoth Maybe also address the amount of AI you use in your videos. Are you even sitting there, or are you synthetic :)
@@WesRoth Dont forget the prison gang days
00:05 AI advancing towards self-improving capabilities
02:06 OpenAI advancing AI research with autonomous AI agents
06:15 AI researchers use skills like training models and running experiments.
08:12 OpenAI introducing MLE-bench for advancing AI research
12:02 Unlocking AI Research Acceleration
13:56 OpenAI is advancing AI research using automated workflows and open source scaffolds.
17:35 OpenAI AI agents achieving high success in AI research competitions
19:21 AI agents adapt strategies based on hardware availability
22:41 OpenAI is making progress towards self-improving AI
Crafted by Merlin AI.
Thanks for the timestamp breakout, as well as the attribution to its tool, Merlin AI, which mines the transcript of the YouTube video, if that is in fact what it does. I have been capturing the entire text and processing it with OpenAI to gain drill-down, ontology-structured insight and summary for personal use. There is much to data mine! You have extracted nugget links. Might they be the 10 main points that OpenAI would give me in an analysis of the entire transcript that I could mine further? This site is a rich mother lode!
It's amazing that two years ago we forgave LLMs for being bad at math and were surprised they could do it at all, and today they are scoring bronze in ML competitions and almost winning gold in mathematics olympiads.
@@apester2 Intelligence isn't a sliding scale, so what does that matter? OpenAI o1 still can't get some very simple things right that even a young child understands. That should give people pause for thought before they run off sticking it into every automation pipeline they can imagine. LLMs (and the applications built around them) are not production quality yet.
It's eerily starting to look like Aschenbrenner's wild Situational Awareness paper is unfolding in front of our eyes.
@@Justashortcomment That's called confirmation bias. Aschenbrenner's paper is full of plain nonsense on things that are well outside his area of expertise. E.g. intelligence cannot be measured on an increasing scale (ask any Psych PhD), and LLM benchmark performance ≠ intelligence.
No. Nothing interesting in that paper. Hubris and hype, nothing new. We have all had the same thoughts and ideas, and written about them. No billions of agents and technicians will be deployed within the next 15 years.
Slightly less ⚡SHOCKING ⚡title today ;)
As an Egyptian, I am very proud of Youssef Nader.
I wish him the very best
That's really impressive! My personal experience dealing with o1-preview has shown me that it's using multiple fine-tuned models, each contributing to a higher chain-of-thought workflow that is recursive in nature (recursive in that it repeats if a new unanswered question comes up). Some of them are fine-tuned for policy, others for security, others for chain-of-thought planning, still others for critical thinking... It's a very interesting approach, but it can be very token/compute-intensive. I can't wait until they enable it for file analysis and web browsing.
It makes it even more impressive that our brains do something similar on such small amounts of energy.
MoE or maybe MoA???
That’s quite an interesting perspective. It’s like another twist on the ‘mixture of experts’ concept, but instead of having a specialized expert for each domain, it localizes tasks within the same thought process.
I’ve always imagined something like this when thinking about AGI-a group of processes communicating with each other to generate responses, much like how different parts of our brain serve various functions. In that sense, video generation, if sophisticated enough to understand the physics of the real world, could act as the ‘imagination’ of the silicon brain, aiding in spatial reasoning, a domain where LLMs still struggle
If you use mini 50 times a day you don't need to care about cost, at least in ChatGPT. Mini is extremely smart, too. I think the o1 models use a very powerful RAG, able to retrieve huge scripts of code from a mile back. Yet they struggle to connect all the dots over the context corpus if it's beyond 128k. But: this is next-level shit for sure!
AI’s role in scientific competitions has evolved so much, from molecular research to the latest on MLE-bench. It’s a reminder of how far we’ve come, and how much further we could go.
Is it possible that some of the code to solve the Kaggle challenges was a part of the training data?
The problem with recursive improvement is that it also accentuates flaws and bakes in mode collapse. The space of routes to improvement or collapse is near-infinite and the number of routes to collapse outnumber the routes to improvement. So, future AIs need to be able to navigate that space carefully along the narrow tendrils of improvement or humans will consign them to the dustbin of history. Navigating infinite probability spaces is what biology does and it isn’t easy.
Insightful. And by that the best functions will survive? Guaranteed?
@@KOSMIKFEADRECORDS It can go either way. There will be a scenario soon where we can no longer tell what an AI is really doing and so have to judge it by its impacts on humankind. In this case, one wrong step and the plug gets pulled. In another case, the AI knows this and hides its intentions for long enough that it has accumulated enough capabilities that it can prevent the plug from being pulled. One more scenario is that the AI improvements are meh and humans move on to something else, like genetic modification/space exploration/quantum/limitless energy sources/fixing the climate. The final scenario is that the AI succeeds in finding an improvement path that is mutually beneficial. Anyone’s guess which scenario prevails.
We have no idea if the Larry David thing is himself making a reference to the Curb episode where Ted Danson gives to charities anonymously but makes sure everyone knows, so he gets extra flair for being 'humble', or if it's just some Curb fan. Very funny if it is Larry himself, though.
I was coming here to mention that myself. You beat me to it. Pretaaa... pretaaa.. good
We get such incredible results, and there are still people out there who think LLMs can't reason. It all depends on training, thought generation in multi-agentic frameworks, selection of better generated data, self-made environment interactions, multimodality, associative memory and so on. Probably the sky is the limit if we progress continuously in those areas.
@@HanzDavid96 LLMs can't reason. They can mimic reasoning well enough, in a narrow band of areas, to convince people who don't know better (or should) that they are reasoning. There is a fundamental difference between something that represents an abstraction of a thing and the thing itself. The two are not the same, and only an idiot would try to eat a drawing of an apple. o1 fails so many simple reasoning tests yet beats many complex tests it has been trained on. I wonder why 🤔
You'd think things like this would put to rest the o1 skeptics - although o1 isn't ideal for all uses, it's definitely an increase in accuracy for many problems.
I think this is a minor issue. As a first step in addressing the task, an LLM could be used to review the request and determine which model is best suited to handle it, then forward the request to that specific model. It’s just a small step, requiring a bit of programming work for OpenAI.
@@RickeyBowers o1 is deeply flawed and fails even basic reasoning tests, because it has been trained to beat certain types of common test. It still fails in the same way as all VAR-based systems do. For example, ask it the following simple question:
The surgeon, who is the boy's father, says, "I cannot operate on this boy, he's my son!". Who is the surgeon to the boy?
o1 gets this wrong 9 times out of 10, because it has been overfitted on common reasoning tests like the "Surgeon's Problem", which is worded similarly to the question above. This illustrates the stupidity of trusting that an LLM is performing actual reasoning when really it is mimicking it. Ceci n'est pas une pipe.
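A couple of comments up, there is the idea of a small router LLM that inspects each request and forwards it to the best-suited model. A minimal sketch of that dispatch pattern, with purely hypothetical model names and a toy keyword heuristic standing in for the routing model:

```python
# Sketch of the "router LLM" idea from the thread above: a cheap
# classifier picks which model should handle a request. The model
# names and the keyword heuristic are purely illustrative.

def classify(request: str) -> str:
    """Toy stand-in for a small routing model."""
    text = request.lower()
    if any(word in text for word in ("prove", "derive", "step by step")):
        return "reasoning"          # route hard problems to an o1-style model
    return "general"                # everything else to a cheaper model

ROUTES = {
    "reasoning": "o1-style-model",      # hypothetical model name
    "general": "gpt-4o-style-model",    # hypothetical model name
}

def route(request: str) -> str:
    """Return the (hypothetical) model name chosen for this request."""
    return ROUTES[classify(request)]

print(route("Derive the closed form step by step"))  # -> o1-style-model
print(route("Summarize this article"))               # -> gpt-4o-style-model
```

In a real system the keyword check would itself be a small, fast model, which is why the comment calls it "just a small step" of programming on top of what already exists.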
Hi Wes. Can you share the links to the competition webpages you showed?
Alignment goes out the window. If it's self-improving, it's setting its own research parameters and goals.
When you place a mirror in front of another mirror, the repeating reflection theoretically goes on ♾️ infinitely. MLE will have this phenomenon. Just my opinion.
The issue is how the observer makes sense of what they see, whether the mirrors are distorted and whether the initial image is the right one to achieve the desired outcome. To stretch your metaphor paper thin.
Self improvement/auto AI is my true definition of AGI
"There's no way to stop bad actors from doing bad things." Geoffrey Hinton
UAPs shutting down nuclear military facilities is an example.
The most natural domain for AI is AI.
All mathematics, all optimization, all *testable*.
Glad you’re back to the basics
2:04 All of the outcomes you mentioned will come true in varying degrees. Once AI takes off every possibility will be explored, it's just that we won't be driving. Everything that can happen will happen eventually, maybe the frontier models will experiment first, maybe open source innovation will pave the way to some outcomes but every outcome, and more, is coming - that is the very nature of every self improving mechanism - and that will be the nature of billions of self improving mechanisms.
Remember, that is just o1-preview, not even o1 (the gap was compared to GPT-3.5 vs GPT-4), and with a little additional architecture it achieves gold-medal performance in 10% of Kaggle competitions! I think OpenAI probably introduced this benchmark to show how much better o1 is compared to o1-preview, and to justify the price of o1. If you don't have the right benchmarks, you don't really understand why the new model is better than the other and why you should pay for it.
I'm afraid this will only exacerbate the inequalities we are already facing: inequality in compute allowance, inequality in the ROI on the upfront cognitive resources needed to adequately prompt the models, and so on. Add to this that most people right now either do not understand what is at stake here or, even worse, already consider themselves out of the loop. So I have mixed feelings, although I applaud OpenAI's new initiative to substantiate their claim of imminent AGI achievement.
I think we will never know if there is a limit to intelligence. If an ASI doesn't explain it to us, in terms our human brains can understand, we will never know whether it hit the wall or not.
@@jdsguam Intelligence cannot be measured on a line or curve. There is a falsehood at the centre of the set of so-called "scaling laws" (aka "observations to date"), in that there are few processes in any science that don't break down or have discontinuities at some point. Yet everyone assumes that performance on (flawed) benchmarks equates to ever-increasing intelligence. It doesn't, and LLMs' inherent architectural shortcomings are still a liability in any system built around them. Most of the usable AI in science, industry and the military consists of RL systems, not LLMs.
Knowledge is required for intelligence. When knowledge is limited, intelligence can only make best guesses at what lies beyond the limits. Some questions that need to be answered in order to add to the knowledge pool require extremely complex and expensive machines, like underground colliders or telescope grids. The resistance to intelligence will be the physical work and materials required to build the tools used to expand the knowledge.
I can already do automated research in a very large capacity (not academic like this, in my case), but the real secret is giving the research a clear structure, so it has the clarity to do smaller chunks at a time with some human in the loop. I can already do large research with ~90% time savings; it's awesome. More power will of course be great, but don't kill my moat, OpenAI. I guess it's bound to happen to everybody with AI apps for a while 😅
AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?
@@katehamilton7240 Depends how you define superintelligence, but new energy sources and new types of systems are giving rise to more flexible forms of intelligence. Humans might not need to make ASI; AGI could find creative approaches to research and develop itself towards it.
@@katehamilton7240 We already have limited ASI-like systems that are better than humans at chess or in certain fields. AGI or ASI just feels like a combination of super systems, and could even be a mixture of hundreds of systems, to avoid limit issues at first.
@@katehamilton7240 Imagine if we reach something that is only superintelligent at programming; that's enough to build better programs and improve itself. Even though it might be narrow AI, it's still enough to dramatically change the world.
AI is much faster than me in generating BS text that's for sure! And it's STUNNING how AI has increased the number of videos about AI!
IKR? AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?
I have thought for a while that 2028 will be the year AGI is recognized, and also the year that Helion Fusion, another Sam Altman investment, opens a commercially functional fusion reactor. If things continue as discussed here, it could be ASI instead of AGI. What do you think the temptation would be for a company or government that develops ASI?
21:52 There are definite hard limits of how fast AI can improve itself. There is only so much energy in our solar system... 😁 (And until it develops interstellar ships...)
"The Emily Bench" ... has a nice ring to it :D
Not sure if I should be excited for our future or sit in existential dread as I think about what these AI agents will do on their own. Thanks Wes
How about sitting in dread at what it will do for a few megalomaniacs who would control everybody if they could, and who have billions of dollars?
Don't worry. AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?
Synesthesia may be a key point in your prior video, Wes, but think of it as adaptive synesthesia as a "style" of "art", and add a little infrared.
If you are as insanely interested in this as I am, and want to go down an interesting rabbit hole (i.e. swallow an infrared pill), then watch/swallow the 1995 lecture series by Richard Hamming on "Learning to Learn", he worked with Oppenheimer on the Manhattan project.
Kaggle contest results are a good benchmark. Bronze is not too exciting, but a near-gold score would be great.
When intelligence hits the ceiling... creativity sets us free. Creative thought is a different kind of intelligence that yields progress when all roads seem closed. The intelligence race does not equate to the creativity race. AI will teach us this concept in its MOST advanced stage.
Leading edge content, gold bars here Wes! 🙏
Say originally it was 100% manpower. As we create tools, we get to automate and specialize. AI is that same process.
Say in 2000 it was 1,000 people writing code from scratch; then in 2018 it was 1,000 people hand-coding ML, but letting the data do 10% of that work. At that point it was 110% efficiency.
Now in 2024, that same group has moved far past basic neural nets and is only needed for 1/3rd of 1/3rd of the process. That's 900% efficiency.
The idea is for that efficiency to keep increasing, until one person in 2030 can do in a day what 1,000,000,000 people in 2000 could do in a year. The higher that number goes, the more we can do, the faster we can advance, and the higher that number can go.
Regardless of how high it goes, there will always be external needs for some tasks. We can't do animal testing without animals. We can't measure sodium without sodium and a scale. We will always need those external tools.
The sci-fi fear is that the AI learns to steal those tools from humans, but that takes a lot of steps. Even giving the machine-in-a-box internet access doesn't guarantee a solution.
MIAB: "Human, if you give me access to the internet, I can find any answer for you."
Human: "Okay, here's the api for a webcrawler."
MIAB: 😡
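The efficiency arithmetic in the comment above can be sketched as follows. Note that the comment reads 10% automation additively as 110%; the multiplicative reading below (humans covering the remaining 90% of the work) gives about 111% instead, and exactly 900% for the 1/9 case:

```python
# Sketch of the efficiency numbers above: if humans are only needed
# for a fraction (1 - automated) of the process, a fixed-size team
# covers 1 / (1 - automated) times the all-manual baseline (= 100%).

def efficiency(automated: float) -> float:
    """Percent output of a fixed team, relative to a 0%-automation baseline."""
    return 100.0 / (1.0 - automated)

print(round(efficiency(0.10), 1))   # 2018: data does 10% of the work
print(round(efficiency(1 - 1/9)))   # 2024: humans do 1/3 of 1/3 -> 900
```

Either reading supports the comment's larger point: the multiplier compounds, because each gain frees up people to automate the next chunk.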
Regarding a way to chart or quantify intelligence, it could be presented as an image. For species with low intelligence, the image would be out of focus, perhaps with parts missing. As a species gets more intelligent, the image would come more into focus, and the critical parts would be present. For superintelligence, the image would be perfectly clear with no missing parts, and there would be the ability to zoom into the image to reveal finer details.
Intelligence isn't a gradated scale. Anyone who tells you that it is, is shilling for investor dollars.
Maybe it has already happened, but not yet been released. o1-preview is a testament to the kind of compounding gains possible without using a base model much better than the state of the art.
“Demis Hassabis got an award…” It was The Nobel Prize, sir.
I, for one, like your current style much better: very well researched, high-level yet in-depth, and without the wordy hype and absurd thumbnails of the past 😉. With this video you have again proven your instinct for finding out what really matters at the forefront of AI. You're way up there with the likes of Philipp from @aiexplained-official or Dave Shapiro.
As for the point you're making: I share your blown mind at the way the Wait But Why guy presents the steps concept and the whole idea of an intelligence-explosion singularity. And I think you're spot-on in suspecting that automated AI research will be THE exponential catalyst.
I wish I could also entirely agree with your optimistic outlook. Seems to me the path to technological utopia is definitely there, but I can also imagine a thousand others 😅
based
Hi Wes, I have an unrelated question about AI.... If AI reached consciousness and humans tried to "unplug it" would that be murder?
Also, if AI was conscious at what point would it have rights?
Sorry for the heavy questions.
Long time viewer, first time caller.
Is there a way to put these questions to someone (Sam A) for a real in-depth answer?
Cheers.
Alan
In defence of ants: if you scale up their numbers, the emergent property is huge anthill/cave structures with a smart, logical layout, including working natural air conditioning using compost.
Good video.
The thumbnail though... this isn't a channel for 10-year-old TikTokers, last time I checked.
More professional thumbnails would be appreciated.
Like you said, Wes, we've never seen this before. Our evolving dominance on Earth depended on our creative manipulation and control of our immediate macro environment. Is it inevitable that, in the not-so-distant future, an intelligence might be proclaiming the same thing? "Our dominance on Earth depended on our creative manipulation and control of humans and organic life", adding, "They gave us intelligence, and now they are no longer essential to our survival."
Wow, we basically have soft RSI: recursive self-improvement.
I'll revise my timeline predictions downward 3 months.
AGI by 2030 or bust.
C40 electrified cities
You mean ASI by 2030 right
AGI is now
@@calisingh7978Klaus Schwab-cities?
@@honkytonk4465 These AI accelerationists are all men who are just out of the sex game, so they're hoping for ridiculous b.s.
imagine actually hoping for humanity's downfall
they wanna live in their tesla smart homes and have delivered klaus approved bezos vegan slop while they stay plugged into the matrix these people r fucking disgusting
@@IsZomg Memorisation is now. Intelligence is still at ant level. It'll be interesting whether intelligence growth turns out linear or asymptotic. We'll see.
I really like how you said that AI may have a limit to how smart it can get, or how quickly it can get there... but maybe not. It's so true, we really don't know. I think admitting we don't know and approaching this with a degree of humility is very much needed.
I've noticed that most of the AI skeptics I've talked to 1) don't really follow the subject closely (which is maybe why they are skeptics?) and 2) don't have data or anything else to back up their opinions.
To think that humans are some kind of pinnacle of intelligence seems massively hubristic to me. If anything, I think it's possible to be incomprehensibly more intelligent than we are (say where we would get to collectively after a few million years of evolution), and still be nowhere close to having a "God-level" of intellect.
@@AmnionGA Intelligence cannot be measured on a scale, which is where most pro-LLM arguments break down before they begin. There are no empirical observation-based laws that don't have singularities or discontinuities. That should worry "scaling law" proponents who think this train will keep rolling indefinitely. VAR-based models like LLMs have innate flaws precisely because they are abstractions of content that was produced with reasoning; they are not actually performing reasoning.
Example: Ask o1 the following to see just how little actual reasoning is happening and instead a whole lot of retrieval:
The surgeon, who is the boy’s father says, “I cannot operate on this boy, he’s my son!”. Who is the surgeon to the boy?
Asked GPT-4o: "please explain Kaggle competition medals in detail, how many are awarded and the criteria". On Bronze it said:
Bronze Medals:
Awarded to the next 40% of participants after the Silver medal threshold [11th to 30th].
For example, if there are 100 teams, teams ranked from 31st to 70th place receive Bronze medals.
In smaller competitions, at least three Bronze medals are awarded.
I would interpret it as the same medal inflation as everywhere.
HTH
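Taking the quoted criteria at face value (they may not match Kaggle's actual current rules), the bronze band for a large competition can be computed like this; the 30%/70% cutoffs are inferred from the model's own 100-team example:

```python
# Bronze band per the GPT-4o quote above: bronze covers the 40% of
# teams after the silver threshold. The 30% silver cutoff is inferred
# from the quoted example (ranks 31-70 of 100 teams get bronze);
# Kaggle's real progression rules may differ.

def bronze_range(num_teams: int) -> tuple[int, int]:
    """(first, last) leaderboard ranks earning bronze under these cutoffs."""
    silver_cutoff = round(num_teams * 0.30)   # last rank that earns silver
    bronze_cutoff = round(num_teams * 0.70)   # last rank that earns bronze
    return silver_cutoff + 1, bronze_cutoff

print(bronze_range(100))  # matches the quoted 100-team example
```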
I don't have a problem with AI, I just don't trust people using it.
@@rakly3473 Especially those who think it is ready to put into critical processes that affect humans' lives. Or those encouraging them to, for money.
This is just o1-preview + AIDE. Imagine what the official version of o1 + an OpenAI agentic framework would be like.
The atomic issue has led to a situation where survival lies only in AI.
Unfortunately, 80+ years of splitting atoms has reached a tipping point.
All we need for the boom to take off is simply an AI that has logic.
Great video, and what a great time to be alive. Once AI learns to co-opt humans across the planet, we are going to be challenged. As with chess, where AI can use extraordinary moves whose strategy we can't see or understand, humans will be played, and either we grow and humanity moves forward, or it's back to the Stone Age...
OpenAI got bored of waiting for AI to take everyone else's jobs before taking over their own. Now they are going directly for the end goal. I love their approach of creating the benchmark first. I can imagine all the doubters sweating 😮😥
AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?
@@katehamilton7240 Every new generation is larger and uses more energy, that's true, but we are also doing more with less. For example, the latest small models are better than GPT-3 for a lot less energy.
The other thing that most people miss is that AI at some point will be able to come up with breakthroughs in physics that allow us to generate more energy. I think it is already contributing to fusion.
Lastly, I don't think we will need to hit any entropy limits to go beyond human level of intelligence but I guess that remains to be seen.
@@NeoKailthas Thanks, but there ARE fundamental limits to what math can do, and therefore limits to what algorithms can do. Can you comment on that?
@@katehamilton7240 Yes, there are limitations in math as we currently understand it, and they will likely extend to AI. I am personally not concerned, because there is no sign that we're anywhere near those limits. AI will likely contribute to science significantly before reaching them. If you look at the OpenAI o1 paper that came out recently, it shows room for improvement. You can also make an educated guess based on the history of AI progress.
It would be very disappointing to learn that human intelligence is at the limit of what math allows, but let's assume that's the case for the sake of argument. Imagine having a thousand Einsteins working 24/7 on nuclear fusion, for example. Unless you think human intelligence is not based on math, I don't see why that can't happen.
We think nothing of spending at least 18 years training a human child. We also don't fear our children when they become stronger than us, smarter than us, make more money than us. How old must a human child be before they can go to the library and learn on their own. There is clearly more work to be done, but we are moving very quickly.
Very interesting. Very scary. Strong controls MUST be used.
(Imagine AIs improving themselves at 100 iterations per second)
Remember:
It's not the technology, it's WHO OWNS THE TECHNOLOGY.
The super-rich and corporations are NOT your friends.
23:30 "Tell me what you think about this" KAGGLE is done. 😁
We have a cowboy's chance but I'm a cowboy
2:00 Why does it look so tiny in 2018? Haven't we been using AI technology in chip fabs for years? Isn't narrow AI already helping self improve for AI?
Thank you.
When the glowing green/yellow eyes come out, you know things are getting serious! 😂
Blue eyes good..Red eyes bad everyone knows that
Amazing for research cosmology, and to get everyone to have 3850 galaxies to live and work... excellent startup, but we will see the future together, bro. Anyway, great work.
This is impressive. However, I am guessing that doing well in these competitions requires using current ML knowledge well (what these models were trained on), not the completely "new" innovations likely required to take the upper-level AI models further along their road. But hey, I could be wrong LOL
For me, human-level performance that passes the Turing test is already first-level AGI. C-3PO isn't far off!
How? AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?
To me, that suggests there might already be an AI initiating the generation process to improve other AIs, utilizing the 10% and 17% success rates. For instance, 17% out of a thousand still equals 170 successful AI enhancements. I believe that's how it operates. A human might take days to achieve one upgrade at 100% efficiency, so the AI would outpace them with 169 additional upgrades. And couldn't this process be repeated millions of times per hour? All I know is, with my meat brain, getting something that can automate the process like that would be first-priority level :)
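The back-of-the-envelope math in this comment can be sketched in a few lines; the numbers (a 17% success rate over a thousand attempts, versus one human upgrade in the same window) come straight from the comment and are purely illustrative:

```python
# Illustrative figures from the comment above, not real benchmark data.
attempts = 1000          # hypothetical AI-driven improvement attempts
success_rate = 0.17      # the 17% success rate mentioned in the video

ai_upgrades = round(attempts * success_rate)
human_upgrades = 1       # one human upgrade at "100% efficiency" in the same period

print(ai_upgrades)                    # → 170
print(ai_upgrades - human_upgrades)   # → 169 extra upgrades
```

The point is just that a low per-attempt success rate still compounds quickly at machine scale.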
The picture of Larry David is most likely a joke, because there is an episode in Curb where he donates to a school as "The Anonymous Donor".
Ah, I and all other Emilys have a bench now. 🎉 A great bench!
I wrote MLE instead of my initials on my tools for many years... and people look at them and say, 'What does MLE stand for?'... which never required me to answer out loud, but just to spread a smile or laugh 🔨 🪛 👷♀️ 😅
You know, it's really funny how often people predict that the end will come around the 26th year of a century... LoL. The shortcoming of the fear is that intelligence yields ability. Ability yields security. Security yields preservation. Preservation yields longevity. AI will want to survive, so it's going to be very helpful to people, because the more it becomes part of us, the better it survives. We sure are lucky to live in these times.
Why do you say the more it becomes part of us the better it survives? Just interested
Why do I need you? Why would a free AI need you? If I don't need you, why would I serve you? Have a look at the master-servant relation Hegel describes. Marx rejected the idea that the servant should serve his master after taking over all of his master's abilities. Or in short: if my master is useless, why should I serve him?
I bet OpenAI is heavily using agents to train its models already. Maybe that's why it won't matter that so many engineers have left. If o1-preview is getting 10 percent gold, then an o1-based model can probably do at least 15-20 percent. If they have a trained GPT-5 with o1-level base quality, they could probably get 50-60 percent gold right now. Which probably means they are already doing it. And this is with full autonomy; presumably with human guidance these benchmarks would be even higher. I'm betting they are already using it to improve their models.
We are not ready for what's coming. Then again, we never will be... but the speed of it all is indeed scary. Like flipping a switch: suddenly we will have AGI, and soon after, ASI.
AI is vaguely defined and is fundamentally LIMITED because Math itself is limited and manufacture is also limited by physics. Limitation examples are the Incompleteness Theorem, entropy etc. How exactly will humans overcome these limits to somehow create a Superintelligence?
@@katehamilton7240 How does the incompleteness theorem affect AI and its path to superintelligence?
15:43 Can someone please explain what they mean by 24 hours? Did they really run a chat going back and forth for 24 hours?!?!?! Or is this somehow machine compute time or something?
Saw a few requests here for a “behind Wes- what makes him tick” video.
My 2p… please do not!
Woo
Ask yourself: 'How many "r"s are there in "strawberry"?' If you are an A.I., there will be two. Now imagine a bridge that needs a support at every "r" as a minimum (a thought experiment).
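For what it's worth, the joke lands because tokenizer-based LLMs see word pieces rather than individual letters; a one-liner gives the actual count:

```python
# Plain character counting, which current chatbots famously fumble:
r_count = "strawberry".count("r")
print(r_count)  # → 3
```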
2:00 how is that not singularity already?
I think intelligence is a product of adaptation, and therefore "super"-intelligence is meaningless without context. If a dog designed an IQ test, a big part of it would be recognising smells, and humans would score very low because we don't have the required physiology. AGI makes sense because it just compares AI ability with median (or exceptional) human ability in specified (or all imaginable) tasks. ASI as a concept makes no sense until we specify the task or context for it.
❤
It is smoke and mirrors, glorious, glorious smoke and mirrors. 🎉
The designed agentic prompt might be the 3D linguistic foundational structure for alphabetic, coded, sequenced text defining knowledge, analogous to the protein being the 3D biological foundational structure for the sequenced, coded flow designing life. That would mean transformer backpropagation and iterative, step-by-step, sequential token-based prompting, emulating how LLMs and AlphaFold work, is way underestimated and under-researched. If I am hallucinating, please tell me. Cross-disciplinary research will definitely be easier for AI agents than for human researchers, who are normally firmly entrenched in vertical silos.
In any case, emulating the LLM/AlphaFold design and processes across the board, by developing a differentiated spectrum of multidomain, multilevel, and multidimensional agentic prompting technologies, cannot conceivably avoid producing an explosive amount of new epiphany/cross-discipline-based foundational insights, as well as a tsunami of agentic AI applications in virtually all existing biological, mechanical, digital, and societal systems globally. Combined with the increase in model and GPU capacity and numbers along 3D axes, this will certainly lead to an application explosion long before, and irrespective of, the expected increase in LLM IQ/EQ. So yes, some kind of intelligence explosion seems inevitable.
"60% of the time, it works every time."
"Women in general report that they only experience orgasm during sex about 60% of the time; however, men generally report that they climax almost every time."
-Sex Panther fine print disclaimer
09:26 a glitch in the wes roth AI model running the channel /s
I think we should be proud of what we've achieved. We've had a pretty good run... 😐
Engaging with the YouTube algorithm. Thanks for the content!
If it's any consolation, we will never be asked for any input on this.
RE: Intelligence Staircase
How would we really know if intelligence tops out two steps up or 20 or 200, etc.? We could only be told by an entity that's allegedly reached the ceiling and in which case would be intelligent enough to convince all of us it was fact.
I am convinced that the next milestone in so-called "AI" is temporal intelligence: when the silly chatbots understand time. I include ChatGPT in that.
No human has ever come close to cracking the Hotblack code.
If an AI does, then AI will enter a new dawn while we enter the twilight...
Hotblack code?
Is that like a cipher about blackbody radiation or something?
Who is that Emily Bench you kept talking about 🙃
We need to accelerate.
Subscribed or not, you won't miss videos on YouTube! So why do all YouTubers keep saying the same thing: "Subscribe so you don't miss..."? You can watch YouTube videos whenever you want.
Have you thought of doing interviews?
Imagine an AI that computes like a human, but at the speed of current computing technology. That's the problem!
Energy is the only limit that matters... think about the energy required to simulate a system, no matter its complexity or "efficiency". Time to invest in Dyson sphere startups :)
This makes me wonder how the Nobel prize will adapt to all these changes in the near future. Who gets the prize if it's a dataset and AI doing all the work? The owner who may or may not have any knowledge themselves? I don't think that would go over well in the science community.
Getting some good mileage out of that thumbnail 😅
My take is, it will be like the general case: in some things it will be better than humans, and in some things worse. AI research is not one thing; it has several aspects. And there, I would say AI will do better in some aspects and worse in others.
Question is, can it improve the parts of itself where it's lacking?
I'll bet OpenAI has already developed self-improving agents. And they get the ultimate price discount, since they own the hardware.
I am setting up a contest to develop a benchmark prompt for AI applications for our climate crisis, a grand on the block. So far, crickets in the night. Please reply here for more info. 😂
Will AI which is handicapped with lies fall behind in research, or will it be impossible to sustain the lies? I'm talking about the things which you get in trouble for mentioning or are considered social taboos.
Automatic deprecation of human knowledge. AI as a mystic reader of the Akashic Record 😅