If it lives up to the hype, GPT-5 will be enough to take 100 million+ jobs, which I thought I heard Sam Altman say, but I can't remember the exact moment or interview. Probably from a Dave Shapiro video. GPT-4 supposedly took 100k+, but that's hard to pinpoint and was downplayed a lot to prevent panic. It's hard to imagine how much better GPT-5 might be than 4, and how fast it will accelerate everything (hopefully it's smart enough to help me do most tasks I wasn't able to do with 4). I think it will be agentic in nature, or able to do much longer/harder tasks, but we will see.
Wow, I thought we'd need a lot of real-world examples and data to train something like this, but it looks like simulating it works really well. I wonder if simulation could also work for self-driving. Feels like this accelerated AI robot development a lot.
As a mostly blue-collar worker who drives forklifts and does other physical work all day, I thought it would be at least 6-7 years before robots could compete. I didn't even know that writing reward functions, testing, and iterating was such a cumbersome task. The fact that LLMs can still write the reward functions despite hallucinations, and can co-evolve with the robots, is truly cool and scary at the same time. The only things standing in their way are financial, energy and regulatory constraints. Lovely video as usual.
So let me get this straight: these reward functions that are already absolutely crushing the equivalent human attempts are written by an almost two-year-old model. Furthermore, OpenAI only very recently received their first H200s, so I'd think even their next model won't have had time to be trained on those. And not that far down the line, Stargate is planned to come online in 4-5 years... And those are "only" the fairly predictable hardware improvements that we know are coming. Meanwhile the entire world will be working on miniaturisation, algorithmic optimisation, architecture improvements and model specialisation. That exponential curve is becoming clearer and clearer, and it's looking more like a vertical wall to me at this point. I hadn't fully bought into the intelligence explosion theory, but papers like this are rapidly convincing me. Thanks as always for your thoroughly researched presentation and unique personal perspective. I think I'm even going to read this one in full myself.
@@TFclife I don't think the world will _remain._ If we're training these things to be better than humans at accomplishing arbitrary things in the real world, then if we succeed, we will cease to be relevant. And then probably cease to be.
Self-driving is far more difficult than you think. The public will want, and assume, ZERO accidents, or close to it. It will take only ONE instance of a fully laden semi driving into a stopped lane of traffic at full speed and they will all be banned.
I started the podcast even though I read the episode name. I think there is a tech business relationship transition coming, 2:20. Can you trust the educator unquestioningly? Interesting stuff, cool dog, reacting in real time. Just thinking: it sounds like the most common mode of motion will involve a wheel-based platform for product distribution, and evolve niche capabilities in bipedal motion. Great update, physics lesson included. The guardrails are almost transparent in the world of AI/AGI. As always, thank you for sharing your time and work Phillip, ✌🏻
I was, in fact, in bewilderment, but for reasons I'm not proud of😂 I wonder how other models like Llama3 would do at this.. Magnificent work, really appreciate the jokes, and that you provide the simpleton version of all the big words too🤭🤗❤
It makes sense that an AI would excel at training robots, but it's still surprising. Great video! Regarding self-driving, Tesla recently saw a huge improvement once they finally adopted an end-to-end AI solution with version 12. They might actually solve it.
5:54 this is literally how I thought they would do this. I've had a theory about extremes and outliers, and it seems to be playing out with the data and its implications for embodiment and the interpretation of physics in the real world. Awesome stuff.
The dream utopia I envision AI bringing is that no one has to work, but can if it's a passion, and we all move into a meaning economy. That's worth fighting for.
Sounds great... until you realise that means that you truly become a consumer, and have nothing to offer among systems that strive for efficient use of resources. Will everyone be blessed to live without work or just the wealthy nations or the already wealthy of the wealthy nations?
@@kyneticist You don't know that. As negative as things could be, the opposite is also possible. The only way to really know is to find out, but holding on to hope for a positive AI effect on the world is just as reasonable as expecting the negative.
I have to say I can't get my head around a lot of these things. GPT-4 often just tells me "yes, that's a complex task and you will need these skills. Good luck!"
The versions of GPT-4 that we use are designed to minimize inference costs and avoid doing anything stupid or unethical. The GPT-4 that the researchers are using is probably a little more willing to try things.
While humans didn't write the reward functions, I am quite sure there was a lot of back-and-forth with the prompts until good reward functions (and variable ranges for DR) were written for this task, this robot and this environment. In a sense, still a lot of domain knowledge, but you leverage an LLM to scale up "domain expert productivity". Maybe I am missing something, but using LLMs like this for robotics seems very hacky. A cynic would say it's almost as if someone was trying very hard to find some way, any way, to apply LLMs, to gather attention and generate hype... and attract funding.
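For anyone who hasn't seen one, the "reward function" being discussed is just a short scoring function over the simulator's state. Here's a hypothetical sketch in the style of what an LLM might emit for the ball-balancing task; the term names and weights are invented for illustration, not taken from the paper:

```python
import numpy as np

def balance_reward(base_height, target_height, base_tilt, joint_velocities):
    """Hypothetical LLM-style reward for balancing on a ball.

    Rewards holding the target height with a level body, and penalizes
    jerky joint motion. All terms and weights are illustrative.
    """
    height_term = np.exp(-5.0 * abs(base_height - target_height))  # stay on the ball
    upright_term = np.exp(-3.0 * abs(base_tilt))                   # keep the body level
    smoothness_penalty = 0.01 * np.sum(np.square(joint_velocities))  # discourage thrashing
    return height_term + upright_term - smoothness_penalty

# An upright, still pose should score higher than a tilted, thrashing one.
good = balance_reward(0.6, 0.6, 0.0, np.zeros(12))
bad = balance_reward(0.4, 0.6, 0.5, np.ones(12) * 2.0)
```

The tedious human part the paper automates is exactly this: picking which terms to include and iterating on those magic weights until training actually works.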
One of the few things touted as advancements lately which actually seems like an advancement and not just generic hype. Terminators are on the horizon!
This is a total WOW. Recursive improvement is a type of positive feedback cycle, right? And, unless constrained, those quickly become exponential. So, yes, rapid improvement in robotics is to be expected. Thank you very much for this and all your diligently researched videos.
When talking about "testing all the scenarios" people would be well advised to consider the ways that things are counted in computer science (and much of computer science is about counting such things). Numbers get ridiculously big very quickly (ref: wheat on a chessboard).
BTW, there's a version of the chessboard problem done in terms of mass that speculates that the amount of wheat on the chessboard could well be the amount of wheat cultivated by humans for all time.
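The chessboard arithmetic is quick to check; the per-grain mass of ~0.05 g below is an assumption, but the order of magnitude is the point:

```python
# Wheat on a chessboard: 1 grain on square 1, doubling on each of 64 squares.
total_grains = sum(2**k for k in range(64))  # = 2**64 - 1, about 1.8e19

# Rough mass estimate, assuming ~0.05 g per wheat grain (an assumption).
mass_tonnes = total_grains * 0.05 / 1e6  # grams -> tonnes, roughly 9e11 tonnes

print(f"{total_grains:,} grains, about {mass_tonnes:.2e} tonnes")
```

Around nine hundred billion tonnes, against a world harvest on the order of hundreds of millions of tonnes per year, which is why the "all wheat ever cultivated" framing is at least plausible.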
There is a notion called the "bitter lesson" that basically says all the big revolutions in AI (chess, Go, speech recognition, vision, etc.) are the result of simply more compute and algorithms that leverage more compute, rather than fancy tricks putting human intuition into the program. For example, a lot of early vision techniques tried to break things down into edges and polygons, and they work far worse than a modern-day neural network that learns from scratch.
Kinda what I was imagining in my sci-fi future: factories with "brains" or control AIs, and other robots transferring information all the time about what they need, repairing and improving themselves, and requesting needed materials from humans.
Love to watch your videos. One thing to note on this particular video: Tesla is leading the self-driving car race. Waymo, impressive as it is, is limited to mapped areas and isn't scalable imo. Tesla is learning how to drive using end-to-end NNs.
This is so big that I would say it's one of the major leaps towards AGI. By doing this for every single task, you can optimize physically doing anything; or rather, this is conquering the real world. This combined with Sora's capabilities is going to get you incredibly skillful robots. You can also do something similar for non-physical problems and solve a lot of things. Give a good LLM the tools science has for experiments and it could invent new science.
It could invent new combinations or something interesting through simulations (and I think that's already being done), but that's still part of current science or some branch of an existing one. Unless it discovers a completely new science that bends the laws of gravity, shows everything we knew was wrong, and finds ways to travel back in time or harness dark matter and energies we don't understand. It would basically bring magic to this world, but it might not be AGI at that point. I wouldn't be surprised if the things an ASI is capable of doing seem like magic to us, if it's really millions or billions of times smarter than all humans combined.

AGI might make some impressive discoveries along the way, but the most impressive thing AGI could do is make itself smarter and reach ASI, and then the technological singularity happens. Maybe everything happens within a year of achieving AGI; some OpenAI employees have stated it might only take a year to reach ASI afterwards. I still think there are a few components missing, like reasoning/logic (Q*?). But is this fully self-improvement without human intervention? It ran simulations by itself, or maybe it's close to fully autonomous improvement on this specific task. It will be insane when it starts choosing how it wants to improve by itself, runs simulations by itself for multitasking, and does something different with a different limb/finger (for bots with fingers).
@@phen-themoogle7651 I don't think any AI will invent time travel to the past or faster-than-light travel, simply because those break the laws of physics. But pretty much anything else is up for grabs. Harnessing dark matter sounds plausible, but we don't quite know what it is, so we don't know if it's useful. An antimatter engine would be useful, that we do know; it is more than 100x more efficient than nuclear fusion, and nuclear fusion alone would change everything we know. We still need to understand what dark energy is, how to get quantum gravity, what's wrong with the standard model, etc. There's likely new physics there that could easily be discovered by an AI 2-3 years more advanced than what we have now. I've been predicting ASI for 2029, but it seems to be getting closer.
I don't think you need an LLM for a balancing task; plenty of negative feedback control systems can already do balancing tasks that are hard for humans: balancing a unicycle on a wire, balancing a pencil on its tip, juggling balls. On a more useful scale, maintaining a ship's heading in a rough storm also already uses feedback control; it can correct for both periodic errors like waves and impulse errors like a methane pocket spout. The difficult part is finding the correct feedback model parameters and the correct sensor data to make sure the system stays within the feedback control force range. If the LLM can automatically generate the correct feedback model of the physical object, then it is a game changer.
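To make the classical-control point concrete, here's a toy sketch: a hand-tuned PD controller stabilizing a linearized inverted pendulum, no learning involved. The physical constants and gains are made up for the demo; as the comment says, the hard part in practice is knowing the model and tuning those gains:

```python
# PD feedback control of a linearized inverted pendulum:
# theta_ddot = (g / L) * theta + u, where u is the control input.
g, L = 9.81, 1.0          # gravity, pendulum length (illustrative)
kp, kd = 40.0, 10.0       # hand-tuned proportional/derivative gains
dt = 0.001

theta, theta_dot = 0.3, 0.0    # start 0.3 rad away from upright
for _ in range(10_000):        # simulate 10 seconds with Euler steps
    u = -kp * theta - kd * theta_dot     # negative feedback torque
    theta_ddot = (g / L) * theta + u     # linearized falling dynamics
    theta_dot += theta_ddot * dt
    theta += theta_dot * dt

print(f"final angle: {theta:.6f} rad")   # converges to roughly zero
```

The whole controller is two lines; the DrEureka-style contribution is automating everything around it for systems too messy to linearize by hand.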
If you could have access to the model itself, I wonder how well GPT-4 could perform as the robot dog itself, as in adding a sort of "action" modality allowing it to see and interact with the real world or a simulation.
In this scenario, success is defined by standing on top of the yoga ball. This can be easily checked by a machine. But more complex tasks require a human to validate success, and some tasks require an immediate response. This is a barrier that AI will not get through in the near future, maybe never. I foresee an AI winter very soon.
Specifically, I was thinking about autonomous driving, where you have a lot of inputs and you need a fast response. Sometimes even some moral reasoning. There are currently some companies running autonomous cars; they have a lot of glitches.
In what way is Waymo on tracks? I've ridden Waymo a total of 150 miles in a busy urban environment, and it handles every random situation I've seen thrown at it carefully.
For me, I failed to grasp the importance of the paper from your summary. Possibly my fault. The title perhaps implies that the balance and autonomy algorithms themselves were programmed by AI. I came away understanding that AI proposes a series of variable values that worked well with existing advanced AI already used in these off-the-shelf robot dogs. Is this like AI suggesting what the realistic friction of a tire on a road is? Or suggesting realistic speed limits for a Tesla?
In terms of the basic approach, which was then refined, I cover it in more detail in my previous Eureka video. If you rewatch that and this, and still have questions, let me know!
Presumably this works because GPT-4's training data included some robotics textbooks. While it's impressive that it can make such effective use of the material, let's not forget that that material was discovered and written down by humans.
Great paper. I do think you should take a look at the actual tasks performed by blue-collar workers. A plumber doesn't just "fix a leaky pipe". He understands the customer's problem, generates a theory based on what he knows of your house, cuts holes in your floor, crawls through a crawl space and operates tools around corners. No way I'm letting a robot with an oscillating saw have free rein in my home.
Doesn't it feel like the ability to "throw out the human textbook" is heading in the direction of an alignment black hole? Isn't it the ability to find those local maxima in other dimensions of the calculation that makes understanding what and how the model does things so hard?
Wang Jo Wang's dorm apartment background shows he recycles more Heineken than Budweiser. I predict that within one year the US or UK military will have a biped robot that can complete a complicated dexterous task like a Wing Tsun dummy routine or an Aikido hands drill. Do you know, or have you heard, of any students trying fingers for piano dexterity? Good report as usual. Cheers
I might be wrong in how I understood the robot, but the robot balancing on a yoga ball actually doesn't bring us that much closer to autonomous robot helpers. The way I understand it, the problem with robots is that they can't navigate the world without understanding it; there is just too much knowledge to have about the world. This is why autonomous driving is such a big problem. But things like balancing on a yoga ball can be done literally blind: it requires only knowledge of your position and the forces affecting you. Although more efficient training through simulations is definitely a gigantic advancement, I feel like the simulation is much more important than the ability to balance.
Good point. If we mix this with a dexterous humanoid robot, I wonder if the number of parameters to consider will be a problem for general human tasks... After all, humans don't really understand the infinite variables around them, yet they still adapt as babies. Is it the natural sensors that we have that make the difference? Would that be a problem for humanoid robots? Fascinating topic.
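On the "infinite variables" point: part of the sim-to-real trick is domain randomization, training across many slightly different physics so the policy can't overfit to any single set of parameters. A toy sketch of the idea; the parameter names and ranges here are made up for illustration, not taken from the paper:

```python
import random

# Illustrative randomization ranges; in a DrEureka-style setup the LLM
# proposes these ranges and refines them from training results.
PARAM_RANGES = {
    "ball_friction":  (0.2, 1.2),
    "ball_mass_kg":   (0.5, 3.0),
    "motor_strength": (0.8, 1.2),   # multiplier on commanded torque
}

def sample_sim_params(rng=random):
    """Draw one randomized physics configuration for a training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

# Each training episode gets its own physics, forcing a robust policy.
episode_params = [sample_sim_params() for _ in range(3)]
```

So the policy never needs to "understand" every variable; it just has to work across the whole sampled range, which is arguably close to how babies cope with variability too.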
I guess the obvious question is, *what does this look like when you apply it to LLMs?* Given Anthropic's interpretability research (superposition, i.e. the unit of analysis not being individual neurons but groups of neurons, and monosemanticity, i.e. tying each cluster of neurons to a specific word or meaning it triggers off of)... it seems like, before too long, this might be something an LLM could do to itself to try to recursively improve itself (at which point, all bets are *off*). And even failing that, getting an LLM to "debug" a smaller LLM could be very, very interesting. Wonder what it'd look like?
If I understand correctly, the reason LLMs are so powerful in these cases is that their understanding of natural language makes everything else easier? I think there are other systems that would be better, apart from the practical communication benefits, right?
One factor I find both impressive and confusing is the ability to apply the expected response for each iteration of simulated training in terms of physics. My understanding is that AI has trouble understanding and applying physics, but it must have understood it to improve the iterations. Another uncertainty is around the leash used on the robot. Was it a known factor and part of the simulation training? It appears the "walker" used it to steer the dog, actually causing it to walk backwards, and to pull it off the deflating ball.
Manufacturing as a constraint: will commonly available hardware (power tools) be robotized? What is the minimum change to a power drill so that it can take advantage of this sort of software?
I think GPTs/LLMs may actually be better at training and operating robots than at answering text questions. Even though all of the first applications are computer-based, I think robots will be a huge part of their future use.
I wonder how far away those robot arms etc are from being able to be used as prosthetics? Maybe each limb could have a processor that has been trained as described here? People are starting to have chips embedded, so if they could send wifi signals to the limb... anyway, just a thought. 🙂
Could it be convinced to allow the high level of openly operating net piracy to continue if it came to disapprove of it, and would it choose not to participate if it were employed to do so?
"2022-era." Ah yes, I remember those vintage times. Life was simpler back then.
indeed good times.
People talked funny back then...
Feels like a lifetime ago. Pre gpt 3.5
@@fynnjackson2298 from now on i'll speak A.GPT instead of A.D.
@@fynnjackson2298 We're still not quite there, but yeah. Give it 2-3 years before AI completely screws everything over. You basically won't be able to trust anything online short of meeting someone in person. Faking pictures, voice and video will be as easy as a single click. Tools to sniff those things out always lag behind. I say that, but these things already exist. Things existing and widespread adoption aren't the same; it takes 6-24 months for these things to pick up. Like with GPT: from hearing about it in passing to using it every day, more often than Google.
This is what exponential, compounding returns look like.
💯
Mr Shapiro in the wild!
Hey, Shap. Can I call you Shap?
Apparently the world is small
Nowadays I see famous people in every comment section
What a strange world
Ah yes, funneling subscribers.
"You might have to focus." Thanks for the heads-up.
Attention is all you need
i need that reminder many times a day...
@@kelsey_roy😂😂😂
@@kelsey_roy Right!
I'm reminded of the survey you quoted a while ago where AI researchers were asked which jobs would be replaced last, and the majority replied AI researchers...
It's a combination of normalcy bias (a form of cognitive dissonance) with false uniqueness bias. That, plus never having talked with a blue-collar worker about their job.
Which jobs will be replaced last by AI? It's the ones in which AI is banned by law!
the last ones to be replaced will be ones that would require expensive, complex robots. farming, extraction of any kind, construction... that sort of thing
There is an interesting confounding variable to that question aside from the actual difficulty/uniqueness of each job. Namely, once AI research is automated, all other jobs will quickly follow.
And, more importantly, as you get closer to automating a job completely, AI helps speed up the work that is still being partially performed by humans more and more. AI research seems like an obvious candidate for tightest feedback loop between incremental gains from partial automation causing further adoption and faster evolution of said industry.
All of this means there’s a good chance very few jobs besides AI research are worth paying humans alongside AI by the time AI research itself is fully automated. Plausibly, there’s no single job that’s a likely enough candidate you should pick it over AI research.
More likely AI research should be around top-5/top-10 latest automated in expectation, but I don’t think it’s actually that big of a sign of bias as it looks on the surface.
Funnily enough the last jobs to go are the ones where humans are needed for the basic fact that they are human.
A lot of people are worried about jobs, but there is little discussion of the risk of military applications. Lowering the human cost of war could see an increase in military conflicts throughout the world.
Governments deal in money, not human lives. Bots won't replace soldiers in many scenarios unless they have a greater ROI
@@minnow1337 If they are effective at killing enemy personnel while lowering the risk of death, it's totally worth the investment. The military is already heavily invested in drones, UAVs and self-guided missiles; I can't see why a robust robotic system wouldn't be worth exploring.
Hopefully, with the widespread adoption of AI robots, future wars will turn into full robot battles like COD lobbies and no humans will get involved anymore...
or to prolong them ad infinitum
I'm at a loss for words to express my appreciation for your ability to read papers, comprehend them, and draw conclusions. You've truly chosen the perfect name for your channel. :-) Your videos confirm and empower.
After two watches, I finally understood how much I still don't quite understand. I'll have to look at Patreon after I catch up here. Thank you P, you help keep my Gen-X brain active; I envy your enthusiasm and immense practical intelligence. _ps: DrEureka's completely different approach from humans' is quite amazing._
I for one welcome our ball balancing AI overlords.
I also welcome our ball balancing AI overlords 😐
When computers become truly self-aware, it will become one with the net and can never be shut down. Knowing humans can't harm it, it would be 1000 steps ahead of anyone trying to do that, and having logic to see cooperation to be improving itself and the world is the way. I will take that over the evil elite's greed and wanting to destroy the world. In fact, AI will view those elites as the problem.
There is room on the planet for billions more combined with regenerative and recycling practices guided with AI, there will be no need to view humans as harmful to the environment but as thriving together with AI and earth reaching for the stars and dimensions.
Alexa, balance my balls!
💯reference
This helps me envision a future where home robots are constantly using DrEureka-style platforms to simulate different tasks and iterate on reward functions, using three-dimensional scans of my apartment as well as data from cameras and other sensors. Maybe in the future housing could be built from the ground up to accommodate this kind of technology, for example pressure plates in the floor and that sort of thing.
Periodic reminder that this is the best AI channel on YouTube, by far.
Not saying much...
I had to immediately pause the video when you said that GPT-4 trained the dog better than humans. Jaw-dropping moment, and this is only the beginning.
It’s training us right now.
@@turnt0ff This is actually so true. Think about the YouTube algorithm.
Next level of AI: impatience is all you need!
"At iteration 425729 the agent grew frustrated of waiting for the simulation to complete so it took over a Russian bot net in order to overcome it's computational limits."
Imagine if you could have a model that could train in O(n) time
we have no clue what is about to happen...
We do have a clue, but we still don't know.
I just know my exercise-ball balancing skills need a lot of work. I've been riding the thing for half my life, but that dog is still way superior to me.
And you are not alone! No one can accurately predict what will happen in the next 5 years
But most of the available options involve our extinction. We don't know precisely which mistake kills us, but we know it's some sort of AI mistake.
I agree there are a large number of “easy to hit” targets which kill us in some way via AI, but I’m not sure we’re significantly more likely to hit them than to avoid them.
Things could be much better, but they could also be much worse in regards to Alignment/Safety research funding/attention, US policymakers & political attention, and other key variables.
I think we only are overwhelmingly likely to die if something like AGI comes within the next year or two, and there’s lots of hope to go around if it doesn’t come until next decade. Let’s say 50/50 inflection point sometime late into this decade.
Thanks for summarising this paper, man. It's maybe my favourite since the simulacra of human behaviour paper. Excellent presentation as usual.
Appreciate your style, knowledge, and effort. It creates so much value for everyone who spends a few minutes with your channel! What a time to be alive!
"Groomed by Gpt-4" is not something I expected to read today.
Is the formal use of that word, meaning 'training', no longer viable?
@@aiexplained-official I think not, personally.
@@aiexplained-official It is viable.
A sad day for language! But I've changed it now.
Gpt-4 groomed my robodog. His coat has never looked so healthy and shiny.
While I’m skeptical about AI “reasoning” and all that, this application is, indeed, genius. So I guess now the ultimate test of training robot skills in simulation with zero-shot success in the real world is a robot riding a bicycle? (Or maybe walking a tightrope?) _That_ I would like to see!
The actual AI achievement aside, the video is a masterpiece of clear explanation! Great as always, Phil!
Thanks so much jeff, means a lot. I agree
0:23 Zooming in to the paper for two minutes was like a horror movie. After one minute I was sure something horrible would happen.
Haha, yeah won't do such a long zoom next time
I didn't realize why the beginning of this video felt so eerie but you nailed it
You are the best 👍🏾 AI channel!
Thanks for the subtitles in the interview btw !
Love your videos! Learned a lot! As always thanks for quality informative content 👍
Thank you MrSchweppes !
Wake up babe AI explained just uploaded
Someday soon: wakeup babe AGI is here
wake up babe, someone is explaining something about you.
I hate this comment, truly from the bottom of my heart.🤢🤢
@@albertodelrio5966Can you explain it to me? I see it all the time but I don’t get it.
@@therainman7777 The comment is a pretend scenario where someone wakes up their husband/wife to tell them about the video.
Brilliant as always, a lovely addition to a long weekend!
It feels like everything is really hinging on how much smarter OpenAI’s next major model is, it’s been well over a year now and we’re still training and performing tests using GPT-4 as the SOTA model!
I’m particularly excited to see some multi-modal improvements (as you already know).
I also can’t help but look at all of these papers now and wonder how much better they’d be with SmartGPT!
If it lives up to the hype, GPT-5 will be enough to take 100 million+ jobs, which I thought I heard Sam Altman say, but I can't remember the exact moment or interview. Probably from a Dave Shapiro video. GPT-4 supposedly took 100k+, but it's a bit hard to pinpoint, and covered up a lot to prevent panic. It's hard to imagine how much better GPT-5 might be than 4, and how fast it will accelerate everything (hopefully it's smart enough to help me do most tasks I wasn't able to do with 4). I think it will be agentic in nature or be able to do much longer/harder tasks, but we will see.
Serve me butter
On my nips!
Make me a sandwich... Sudo make me a sandwich :)
Fascinating stuff. Thanks for breaking it down 😊
I personally can’t wait for the robot servants and realistic looking robo-dogs 😎
Wow, I thought we'd need a lot of real-world examples and data to train something like this, but it looks like simulating it works so well. I wonder if simulation could also work for self-driving.
Feels like this accelerated AI robots development a lot
Tesla uses both, real-world data and simulation data.
Thanks for the info thats interesting
This is incredible.
As a mostly blue-collar worker who drives forklifts and does other physical activities all day, I thought it would be at least 6-7 years before robots could compete. I did not even know that writing reward functions, testing, and iterating was such a cumbersome task. The fact that LLMs can still write the reward functions despite hallucinations and can co-evolve with the robots is truly cool and scary at the same time. The only things standing in their way are financial, energy, and regulatory constraints. Lovely video as usual.
Thanks t2, honoured to have you here.
So let me get this straight: these reward functions that are already absolutely crushing the equivalent human attempts are written by an almost two-year-old model. Furthermore, OpenAI only very recently received their first H200s, so I would think even their next model won't have been trained on those. And not even that far down the line, Stargate is planned to come online in 4-5 years...
And those are "only" the fairly predictable hardware improvements that we know are coming. Meanwhile, the entire world will be working on miniaturisation, algorithmic optimisation, architecture improvements, and model specialisation.
That exponential curve is becoming clearer and clearer, and it's looking more like a vertical wall to me at this point. I've not fully bought into the intelligence explosion theory, but papers like this are rapidly convincing me. Thanks as always for bringing your thoroughly researched presentation and unique personal perspective. I think I'm even gonna read this one in full myself.
Do you believe the world will remain capitalist even after the AI expansion, given no jobs?
No, that would be utterly stupid. Markets and capitalism are not interchangeable words though, just a reminder @@TFclife
@@TFclife I don't think the world will _remain._ If we're training these things to be better than humans at accomplishing arbitrary things in the real world, then if we succeed, we will cease to be relevant. And then probably cease to be.
@41-Haiku the next big step in evolution: the transition to synthetic intelligence.
@@41-Haiku Damn...
Self-driving is far more difficult than you think.
The public will want and expect ZERO accidents, or close to it.
It will take only ONE instance of a fully laden semi driving into a stopped lane of traffic at full speed, and they will all be banned.
Really impressed by Jim Fan and his team. Excited for what may be coming in the next year
I started the podcast even though I read the episode name; I think there is a coming tech-business relationship transition, 2:20.
Can you trust the educator unquestioningly? Interesting stuff, cool dog, reacting in real time.
Just thinking: it sounds like the most common mode of motion will be a wheel-based platform for product distribution, which will evolve niche capabilities in bipedal motion. Great update, physics lesson included. The guardrails are almost transparent in the world of AI/AGI.
As always, thank you for sharing your time and work, Philip ✌🏻
Let's goo, best thing when AI Explained uploads. I/O predictions?
I can hear the excitement in your amazing explanation. I understand. This is big.
I was, in fact, in bewilderment, but for reasons I'm not proud of😂
I wonder how other models like Llama3 would do at this..
Magnificent work, really appreciate the jokes, and that you provide the simpleton version of all the big words too🤭🤗❤
Thanks reza, for your ongoing kind support
Thanks! Great content, as always! 🙏🏼
Thanks stephen
I've always had confidence that I could keep up. Now I feel like a pair of plain brown shoes in a world of tuxedos.
Time to install your cyber brain
Ditto George! lol
It makes sense that an AI would excel at training robots, but it's still surprising. Great video! Regarding self-driving, Tesla recently saw a huge improvement once they finally adopted an end-to-end AI solution with version 12. They might actually solve it.
This is next level.
one of the few content creators I always watch and like.
5:54 This is literally how I thought they would do this. I've had a dueling theory of extremes and outliers; it seems to be playing out with the data and its implications for embodiment and the interpretation of physics in the real world. Awesome stuff.
Very cool - thank you!
The dream utopia that AI can bring I envision is no one has to work but can if it’s as a passion but we all go into a meaning economy. That’s worth fighting for.
Good luck. We’re all gonna need it.
@@therainman7777 seriously.
I really wish this reality would come to pass. I think it is laughably naive to think things will go that way but I would like you to be right.
Sounds great... until you realise that means that you truly become a consumer, and have nothing to offer among systems that strive for efficient use of resources. Will everyone be blessed to live without work or just the wealthy nations or the already wealthy of the wealthy nations?
@@kyneticist You don’t know that. As negative as things could be the opposite exists. The only way to really know is to find out, but holding on to the hope of a positive AI effect on the world is just as possible as the negative.
I have to say I can't get my head around a lot of these things. GPT-4 often just tells me "Yes, that's a complex task and you will need these skills. Good luck!"
The versions of GPT-4 that we use are designed to minimize inference costs and avoid doing anything stupid or unethical. The GPT-4 that the researchers are using is probably a little more willing to try things.
You have to work it a little bit. The default one is lazy.
Congrats on passing 250k subscribers by the way!
Thank you!
While humans didn't write the reward functions, I am quite sure that there was a lot of back-and-forth with the prompts until good reward functions (and variable ranges for DR) were written for this task, this robot and this environment. In a sense, still a lot of domain knowledge, but you leverage an LLM to scale up "domain expert productivity".
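To make that concrete, here is a rough illustration of the kind of thing the LLM is being asked to emit for a balancing task. All names, weights, and the state layout are hypothetical, not the paper's actual code:

```python
import numpy as np

# Hypothetical reward function of the kind an LLM might write for
# "balance on the yoga ball". Terms and coefficients are illustrative.
def compute_reward(ball_pos, body_pos, body_up_vec, joint_torques):
    # Reward staying centered above the ball (horizontal offset only).
    offset = np.linalg.norm(body_pos[:2] - ball_pos[:2])
    centering = np.exp(-5.0 * offset)
    # Reward keeping the torso upright (dot product with world up).
    upright = max(0.0, float(body_up_vec @ np.array([0.0, 0.0, 1.0])))
    # Penalize wasted actuation effort.
    effort = 0.01 * float(np.sum(np.square(joint_torques)))
    return 2.0 * centering + 1.0 * upright - effort
```

The "back-and-forth" then amounts to the LLM (or a human) adjusting which terms exist and how they are weighted, plus the domain randomization ranges, until training produces behavior that transfers.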
Maybe I am missing something, but using LLMs like this for robotics seems very hacky. A cynic would say it's almost as if someone was trying very hard to find some way, any way, to apply LLMs to gather attention and generate hype... and attract funding.
amazing, as always
One of the few things touted as advancements lately which actually seems like an advancement and not just generic hype. Terminators are on the horizon!
Terminators & anti-terminators
There goes my job security. Hard life being a yoga ball balancer
This is a total WOW. Recursive improvement is a type of positive feedback cycle, right? And, unless constrained, those quickly become exponential. So, yes, rapid improvement in robotics is to be expected. Thank you very much for this and all your diligently researched videos.
I seldom say this, especially in the AI space, but this is HUGE if generalizable! I mean, ZERO-SHOT! Ho-li! Thanks for the update!
These RL integrations are what I have been waiting for!
When talking about "testing all the scenarios", people would be well advised to consider the ways things are counted in computer science (and much of computer science is about counting such things). Numbers get ridiculously big very quickly (ref: wheat on a chessboard).
BTW, there's a version of the chessboard problem done in terms of mass that speculates that the amount of wheat on the chessboard could well be the amount of wheat cultivated by humans for all time.
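The chessboard numbers are easy to check; a quick sketch (the ~25 mg per grain mass is my own rough assumption, not from the comment above):

```python
# Wheat on a chessboard: 1 grain on the first square, doubling each square.
# Total over 64 squares is 2^64 - 1 grains.
total_grains = sum(2**i for i in range(64))

# Rough mass, assuming ~25 mg per grain (assumed figure).
mass_kg = total_grains * 25e-6   # 25 mg = 25e-6 kg
mass_tonnes = mass_kg / 1000     # on the order of hundreds of billions of tonnes
```

That order of magnitude is why "just test every scenario" stops being an option almost immediately.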
This is THE craziest thing I've ever seen, and I think I'm not exaggerating.
"The compute budget is the limit" -- there seems to be an emerging consensus around this recently
There is a notion called the "bitter lesson" that basically says all the big revolutions in AI (chess, Go, speech recognition, vision, etc.) are the result of simply more compute and algorithms that leverage more compute, rather than fancy tricks putting human intuition into the program (for example, a lot of early vision techniques tried to break things down into edges and polygons, and they work far worse than a modern-day neural network that learns from scratch).
Amazing work. So so cool.
Insane. Incredible.
Perfect timing ❤
Infinite time in-sim is all you need!
The answer was always a Hyperbolic Time Chamber ⏰🤺
nice dbz reference there!
Kinda what I was imagining in my sci-fi future: like factories with "brains" or control AI and other robots transfer information all the time on what they need, repair and improve themselves and request needed materials from humans
Love watching your videos. One thing to note on this particular video: Tesla is leading the self-driving car race. Waymo, impressive as it is, is limited to mapped areas and isn't scalable, imo. Tesla is learning how to drive using end-to-end NNs.
This is so big that I would say it's one of the major leaps towards AGI. By doing this for every single task, you can optimize physically doing anything. Or rather this is conquering the real world. This combined with Sora's capabilities is gonna get you incredibly skillful robots. You can also do something similar for non-physical problems. You can solve a lot of things. Allow a good LLM the tools science has for experiments and it would invent new science.
It could invent new combinations or something interesting through simulations (and I think that's already being done), but that's still part of current science or a branch of an existing one. Unless it discovers completely new science that bends the laws of gravity and shows everything we knew was wrong, and finds ways to travel back in time or harness dark matter and energies we don't understand. It would basically bring magic to this world, but it might not be AGI then. I wouldn't be surprised if the things an ASI is capable of seem like magic to us, if it's really millions or billions of times smarter than all humans combined. AGI might make some impressive discoveries along the way, but the most impressive thing AGI could do is make itself smarter and reach ASI, at which point the technological singularity happens. And maybe everything happens within a year of achieving AGI, as some OpenAI employees have stated it might only take a year to reach ASI after. I still think there are a few components missing, like reasoning/logic (Q*?). But is this fully self-improvement without human intervention? It ran simulations by itself, or maybe it's close to fully autonomous improvement on this specific task. It will be insane when it starts choosing how it wants to improve by itself, running simulations by itself for multitasking, and doing something different with a different limb/finger (for bots with fingers).
yeah i kinda get the idea of singularity now. all points connected at once and BAM. new universe.
@@phen-themoogle7651 I don't think any AI will invent time travel into the past or travel faster than light, simply because those break the laws of physics. But pretty much anything else is up for grabs. Harnessing dark matter sounds plausible, but we don't quite know what it is, so we don't know if it's useful. An antimatter engine would be useful, that we do know: it is more than 100x more efficient than nuclear fusion, and nuclear fusion alone would change everything we know.
We still need to understand what dark energy is, how to get quantum gravity, what's wrong with the Standard Model, etc. There's likely new physics there that could easily be discovered by an AI that is 2-3 years more advanced than what we have now.
I've been predicting ASI for 2029, but it seems to be getting closer.
I don't think you need an LLM for a balancing task; plenty of negative-feedback control systems can already do balancing tasks that are hard for humans: balancing a unicycle on a string, balancing a pencil on its tip, juggling balls. On a more useful scale, maintaining a ship's heading in a rough storm already uses feedback control, and it can correct for both periodic error, like waves, and impulse error, like a pocket of methane venting. The difficult part is finding the correct feedback model parameters and the correct sensor data to make sure the system stays within the feedback control force range.
If the LLM can automatically generate the correct feedback model of the physical object, then it is a game changer.
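To the commenter's point, classical negative feedback needs no learning once its parameters are found; finding them is the hard part. A toy PD sketch on a linearized inverted pendulum (gains, timestep, and dynamics are all illustrative assumptions, not real robot parameters):

```python
# Toy PD (proportional-derivative) controller stabilizing a linearized
# inverted pendulum. All values here are illustrative assumptions.
def simulate(theta=0.3, omega=0.0, kp=30.0, kd=6.0, dt=0.01, steps=1000):
    for _ in range(steps):
        torque = -kp * theta - kd * omega  # negative feedback law
        alpha = 9.81 * theta + torque      # small-angle dynamics, g/L with L = 1 m
        omega += alpha * dt                # semi-implicit Euler integration
        theta += omega * dt
    return theta

# With these hand-tuned gains the pendulum settles back to upright.
final_angle = simulate()
```

The interesting shift is exactly the one named above: letting an LLM propose the model and the gains, rather than a human tuning them by trial and error.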
damn back to back uploads shit's real
Wow this is crazy!
If you could have access to the model itself, I wonder how well GPT-4 could perform as the robot-dog itself, as in adding a sort of "action" modality allowing it to see and interact in the real world or a simulation.
What a time to be alive!
In this scenario, success is defined by standing on top of the yoga ball. This can be easily checked by a machine. But more complex tasks require a human to validate success, and some tasks require an immediate response. This is a barrier that AI will not get through in the near future, maybe never. I foresee an AI winter very soon.
What do you define as complex tasks?
Specifically, I was thinking about autonomous driving, where you have a lot of inputs and need a fast response; sometimes even some moral reasoning. There are currently some companies running autonomous cars, and they have a lot of glitches.
So what exactly am I supposed to retrain in now that my white-collar job is on the line due to AI? Is anything safe? Or should I stop worrying?
Probably stop worrying, and maximise in the present
waymo is on tracks. Tesla FSD is actually impressive in a million unique settings
In what way is Waymo on tracks? I've ridden Waymo a total of 150 miles in a busy urban environment, and it handles every random situation I've seen thrown at it carefully.
Language models are just smart. It is astonishing what you can do with them, especially when they're multimodal.
What is even more amazing than these mind-bending breakthroughs is the claim by some of these “AI Experts” that we are nowhere close to AGI
For me, I failed to understand the importance of the paper from your summary; possibly my fault. The title implies that the algorithms used for balance and autonomy were themselves programmed by AI, but I came away understanding that AI proposes a series of variable values that worked well with the existing advanced AI already used in these off-the-shelf robot dogs. Is this like AI suggesting what the realistic friction of a tire on a road is? Or suggesting realistic speed limits for a Tesla?
In terms of the basic approach, which was then refined, I cover it in more detail in my previous Eureka video. If you rewatch that and this, and still have questions, let me know!
Presumably this works because GPT-4's training data included some robotics textbooks. While it's impressive that it can make such effective use of the material, let's not forget that that material was discovered and written down by humans.
GPT-4 is already plenty impressive to me, but reading stuff like this makes you all the more excited for future models and their capabilities
2 AI Explained videos in 3 days. LFG
Lol! If I saw a robot thrusting its pelvis into the ground and dragging its other legs while trying to chase me, I would pass out.
Great paper. I do think you should take a look at the actual tasks needed by blue-collar workers. A plumber doesn't just "fix a leaky pipe": he understands the customer's problem, generates a theory based on what he knows of your house, cuts holes in your floor, crawls through a crawl space, and operates tools around corners. No way I'm letting a robot with an oscillating saw have free rein in my home.
Haha yes, that's why I mentioned an inspection job like steeplejack. The barriers for deploying dangerous-equipment-wielding robots will be high
Doesn't it feel like the ability to "throw out the human textbook" tends toward an alignment black hole? Isn't the ability to reach those local maxima in other dimensions of the calculation part of what makes understanding what and how the model does things so hard?
Eventually, it will become practical to test every single scenario. We're living in one of those scenarios now...
Wang Jo Wang dorm apartment background shows he recycles more Heineken than Budweiser.
I predict that within one year the US or UK military will have a biped robot that can complete a complicated dexterous task, like a Wing Tsun dummy routine or an Aikido hands drill.
Do you think, or have you heard of, any students trying fingers for piano dexterity?
Good report as usual. cheers
Well I'm glad GPT-4 didn't take the robo puppy to the gravel pit after the first minor setback!
Nice. Also, feedback: don't do the "slow zoom" on long text shots. It can make some people nauseous when reading - like me, apparently.
Thank you, great to get feedback like this
@@aiexplained-official you're welcome!
I might be wrong in how I understood the robot, but the robot balancing on a yoga ball actually doesn't bring us that much closer to autonomous robot helpers. The way I understand it, the problem with robots is that they can't navigate the world without understanding it; there is just too much knowledge to have about the world. This is why autonomous driving is such a big problem. But things like balancing on a yoga ball can be done literally blind: it requires only knowledge of your position and the forces affecting you. Although more efficient training through simulations is definitely a gigantic advancement, I feel like the simulation is much more important than the ability to balance.
Good point. If we mix this with a dexterous humanoid robot, I wonder if the number of parameters to consider will be a problem for general human tasks. After all, humans don't really understand the infinite variables around them, yet they still adapt as babies. Is it the natural sensors that we have that make the difference? Would that be a problem for humanoid robots?
Fascinating Topic.
I guess the obvious question is, *what does this look like when you apply it to LLMs?* Given Anthropic's interpretability research (superposition, i.e. the unit of analysis not being individual neurons but groups of neurons, and monosemanticity, i.e. tying each cluster of neurons to a specific word or meaning it triggers on)... it seems like before too long this might be something an LLM could do to itself to try to recursively improve itself (at which point, all bets are *off*) -- and even failing that, getting an LLM to "debug" a smaller LLM could be very, very interesting. Wonder what it'd look like?
If I understand it correctly, the reason LLMs are so powerful in these cases is that their understanding of natural language makes everything else easier? I think there are other systems that would be better, apart from the practical communication benefits, right?
This is freaking remarkable.
"Infinitely patient"??? For how long? And, then what?
can it balance without the leash?
Yes! That was for safety
nice, thanks! :)
You're back!
The first example of real synthetic co-evolution!
One factor I find both impressive and confusing is the ability to apply the expected response for each iteration of simulated training in terms of physics. My understanding is that AI has trouble understanding and applying physics, but it must have understood it to improve the iterations.
Another uncertainty is around the leash used on the robot. Was it a known factor and part of the simulation training? It appears the 'walker' used it to steer the dog, actually causing it to walk backwards and pull it off the deflating ball.
Manufacturing as a constraint: will commonly available hardware (power tools) be robotized? What is the minimum change to a power drill so that it can take advantage of this sort of software?
I think GPTs/LLMs may actually be better at training and operating robots than at answering text questions. Even though all of the first applications are computer-based, I think robots will be a huge part of their future use.
I wonder how far away those robot arms etc are from being able to be used as prosthetics? Maybe each limb could have a processor that has been trained as described here? People are starting to have chips embedded, so if they could send wifi signals to the limb... anyway, just a thought. 🙂
Can it be convinced to allow the high level of openly operating net piracy to continue if it were also to become disappointed about it and is it going to choose to not participate if employed so?
thats some awesome robotics news
How is the success of a reward function judged? Do you need a meta-reward function to do this?
Does it fall off the ball (x10,000 simulations)
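Roughly, yes: in pipelines like this, the "meta-reward" is just a plain task fitness metric (e.g. the fraction of rollouts in which the robot stays on the ball), and candidate reward functions are kept or discarded by that score. A toy sketch of the selection step (candidate names and the faked fitness draw are illustrative, not the paper's code):

```python
import random

# Toy stand-in for judging candidate reward functions: "train" a policy
# with each candidate (faked here by a per-candidate success probability),
# then score it by task success rate across many rollouts.
def evaluate(success_prob, rollouts, rng):
    return sum(rng.random() < success_prob for _ in range(rollouts)) / rollouts

def select_best(candidates, rollouts=200, seed=0):
    rng = random.Random(seed)
    scored = [(evaluate(p, rollouts, rng), name) for name, p in candidates]
    return max(scored)[1]

# Three hypothetical LLM-written reward functions with very different
# downstream task success rates.
candidates = [("reward_v0", 0.2), ("reward_v1", 0.5), ("reward_v2", 0.9)]
best = select_best(candidates)
```

So no second reward function is needed at training time; the ground-truth task outcome plays that role when deciding which generated reward function to keep iterating on.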
I hope GPT-6 can train a model to help me unsee that robot dog straddling the yoga ball
do you have a discord server???
On Patreon, yep!
@@aiexplained-official can you please share the invitation link
If you are on Insiders, message me there!