This is great! The fact that we can train these digital sets of legs to (almost) realistically run in just a matter of minutes is insane! Maybe two more papers down the line we might be able to make anything move realistically in just a few seconds. Thank you for the showcase 2mp!
This is the start of proper robotic assisted exosuits for handicapped and injured people.
@@Chef_PC Didn't even think about that. You're absolutely correct!
What a time to be alive!
Yep, all within 30 minutes!*
*assuming you have the highest end specs in existence
I suspect the apparent lack of "realism" has a lot to do with the fact that it's only a pair of legs and not a full humanoid body.
A lot of learning how to run is learning how to do it safely with low risk of injury. It'd be amazing to see a future paper where the learning environment accounted for impact on joints and ligaments, and see if it takes a more natural running posture. This does get more into the realm of researching the biology of running than it is AI research, but really cool all the same!
Oh right, that's what is missing! It would hurt like a b***ch to run like that, after all. Resulting in joint damage and whatnot
The 100kg deadlift looks like it would result in herniated disks. It hurt my back just to look at it...
It would definitely be interesting if they added some more realistic restrictions. From what I could tell, it looks like the "legs" are running with the feet landing sideways, which is not realistic, so it would be interesting to see the result if the landing angle of the foot was limited.
@@BP-328 top ten secret running tricks the illuminati doesn't want you to know
@@BP-328 yes, or the concept of pain.
When I tried to do reinforcement learning with raptors, they wouldn't hesitate to include the head as a running support. And they always ended up rolling Sonic style.
I can't wait for them to put stress calculations on the muscles, so that either the muscle breaks if pushed past a certain point, the agent loses points, or there's some other penalty for straining muscles, both for using too much power overall and for loading a muscle too hard over a short time.
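A minimal sketch of what such a penalty could look like as an extra reward term. Every name, threshold, and weight here is a made-up assumption for illustration, not something from the paper:

```python
import numpy as np

def strain_penalty(activations, forces, dt,
                   force_limit=1200.0,  # hypothetical max safe muscle force (N)
                   power_weight=0.01,   # hypothetical weight on overall effort
                   spike_weight=0.1):   # hypothetical weight on brief overloads
    """Penalize both overall effort and brief over-straining of any muscle."""
    # Cost for total effort this step (squared activation as a power proxy).
    power_cost = power_weight * float(np.sum(np.asarray(activations) ** 2)) * dt
    # Extra cost whenever any muscle's force spikes past its safe limit.
    overload = np.maximum(np.asarray(forces) - force_limit, 0.0)
    spike_cost = spike_weight * float(np.sum(overload)) * dt
    return -(power_cost + spike_cost)

# Hypothetical use inside an RL reward:
# reward = forward_velocity + strain_penalty(muscle_acts, muscle_forces, dt=1/60)
```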
It would be super interesting to eventually get customized recommendations for athletic postures based on your unique body proportions.
@@egdm1235 it could also adjust to preexisting injuries
@@egdm1235 I want to see an AI develop unknown, superior athletic maneuvers. In 1968, Dick Fosbury introduced the Fosbury Flop, which was a new way to perform the high jump. The Fosbury Flop is still the most effective known method of high jumping. Who's to say that you couldn't trade it for something better, though?
I imagine a prosthetic that predicts where you want to go and moves the foot in the best calculated direction
You could have briefly mentioned what technique they used to accelerate the learning process, maybe another video to explain it?
10000 GPUs
One of the few complaints I have about this channel: it completely leaves out all the details. You don't need to go through the whole paper ofc (I understand it's for a general audience), but at least give a few sentences explaining the techniques used or what was improved.
@@Beyondarmonia He used to explain a bit more in earlier videos, but nowadays it's basically just stealing the showcase video that the paper's authors did and stripping it down to the fun visualizations without any technical details or explanations. I think he is more careful with his commentary than before since there have been multiple videos where he completely missed the point of the paper. I guess he has found the best views/effort ratio where most people just want to see the results, and that's the easiest part to show since someone else already did the work.
@@Ginsu131 it runs on only one GPU :)
Hi! The work they did is enormous (they created an environment which calculates Q values every second for you and makes gradient updates based on that, diminishing/eliminating some cons), but they say that SAC and PPO require a lot of samples. Can you try my model-free algorithm, which beats records on the OpenAI leaderboard (without parallelization)? It is called Symphony, and it is present on the OpenAI Gym leaderboard.
I would love to see it train with a given amount of energy to spend, to see how it would try to optimize the motions!
Right now it looks like it's frantically running for its life in full panic mode 😅
I think the lack of real energy usage is a problem in almost all of these concepts and models. Energy gathering and transformation are the key concepts in life and everything around us. Still, there's no energy accounting for the "frantic" caffeinated models, and even the computation time for different parts of the model could vary across platforms, etc., but such things are rarely, if ever, considered or dealt with.
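For what it's worth, energy accounting like this is easy to prototype as a wrapper around a reinforcement learning environment. A rough sketch, assuming a hypothetical env whose info dict exposes joint torques, joint velocities, and the timestep (none of this is from the paper):

```python
class EnergyBudgetWrapper:
    """Ends the episode when a fixed energy budget is spent (all names hypothetical)."""

    def __init__(self, env, budget_joules=500.0):
        self.env = env
        self.budget = budget_joules
        self.remaining = budget_joules

    def reset(self):
        self.remaining = self.budget
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # Approximate mechanical work this step: sum |torque * angular velocity| * dt.
        work = sum(abs(t * w) for t, w in zip(info["torques"], info["joint_vels"])) * info["dt"]
        self.remaining -= work
        if self.remaining <= 0.0:
            done = True  # out of energy, episode over
        # Trade raw speed for efficiency: reward per unit of energy spent.
        return obs, reward / (1.0 + work), done, info
```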
0:29 “realistic human movements”
Nobody says the word "amazing" quite like you do. When you say it, we know it means something. Your enthusiasm is infectious, and your research inspiring. Keep up the great work!!
I've been watching Lex Fridman's clip on who would win a fight: a gorilla, a lion, or a bear, and I wondered when we would be able to simulate biologically accurate movement and tactics of different species in a combat simulation. Seeing this, and the speed at which things evolve nowadays, I can see that happening sooner than I ever thought I could imagine.
It sure beats putting lions and bears and gorillas in cages and forcing them to fight to the death to satisfy our curiosity, that's for sure.
Lol this would make DEATH BATTLE! so much more exciting.
@@General12th To be fair, we wouldn't be forcing them to do anything, we'd be putting them together in a cage. They would most likely choose to fight each other since they are wild animals. They could choose to just sit in the cage and do nothing, and we wouldn't be able to do anything about that, but instead they would want to fight each other as predators. There are many calmer animals that do not choose violence and can live together in zoos or ranches, etc.
3:02 I love how it learned to run using the Naruto run stance...
😂😂😂😂😂
Maybe this kind of AI will find its way to sports to arrive at the optimal technique in various disciplines.
Excellent, thanks again, Dr.!
All your content is so interesting, I think this is my favorite YouTube channel ever!
I'm not an esper, but without watching the video I can predict we will be pleased with a classic "What a time to be alive!" Or "I - love-it!" :)
Now it should try to minimize the energy consumption of the muscles. Maybe then it will run more like a real human.
I am impressed the AI didn't just flop over and bug the code into moving quickly
I imagine that Nvidia has a fancy physics engine that doesn't succumb to that kind of error
2:48 AI is pretty good at learning how to Naruto-run 🤣
Two Minute Papers, you are my inspiration for falling in love with AI and for starting my channel. Thank you for being an amazing "one way friend" for me ❤😁
Next gen video games are going to have some crazy good NPCs!
That's amazing! Training under an hour for that kind of movement is insane!
I wanna see AI figure out the most optimal way for a person to move their body
Great videos as always.
Man I can't wait for these learning AIs to really find their place in gaming. So much potential
Any kind of animation. Humans are truly awful at animating, but really good at telling when an animation looks wrong. So much so that animations and films are often done at 24 FPS, an awful, headache-inducing, stuttering slideshow, because stuttery motion will hide that the sword the actor is holding is kinda rubbery and the dragon moves unphysically/unconvincingly. Hand animation is even worse and is usually done at 12 FPS, not just because of labour cost but because the motion at even 24 FPS would look hilariously wrong. 24 FPS was chosen back in the early 1900s as the minimum framerate that would sync up sound (recorded onto the sides of the film with amplitude modulation as a squiggly line) with the action in "the talkies"; it is truly laughable in 2022.
DLSS3 is also a nice step on the way to ending persistence blur. At 100 Hz, an object moving at 1000 pixels per second (quite slow) will appear blurred by 10 pixels. This happens because eyes move in a continuous motion while the image updates in discrete steps. CRT monitors accidentally found a good solution to this: just flash the image briefly and let the afterimage on the retina follow the movement of the eye. If you can adapt DLSS3 to sit monitor-side and take a variable input framerate of maybe 100-200 FPS, then upscale it to around 1000 FPS, you will get a near-perfect image without having to cheat like a CRT (the raster beam is effectively a rolling-shutter BFI). This is easily within the capabilities of an OLED, and you don't need more bandwidth because you could integrate something like DLSS3 into the frame interpolation on the monitor side. This would effectively consign LCD screens to the dust heap of history where they belong.
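The persistence-blur arithmetic above is easy to check yourself. A back-of-the-envelope sketch (the persistence parameter is my own simplification, not anything from NVIDIA):

```python
def persistence_blur_px(speed_px_per_s, refresh_hz, persistence=1.0):
    """Perceived blur width on a sample-and-hold display.

    persistence is the fraction of the frame the pixel stays lit:
    about 1.0 for a typical LCD/OLED, closer to 0.1 for CRT-style strobing."""
    return speed_px_per_s / refresh_hz * persistence

print(persistence_blur_px(1000, 100))       # 10.0 px: the example above
print(persistence_blur_px(1000, 1000))      # 1.0 px at an interpolated ~1000 FPS
print(persistence_blur_px(1000, 100, 0.1))  # 1.0 px with brief CRT-like flashes
```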
What was the trick in the neural network that allowed it to learn quickly?
Me trying to escape the simulation: 0:18
I know it's early in the day, but this is by FAR the coolest thing I'll be seeing this weekend! I can't wait to see how future athletes can use this and to see just how much better they'll be able to perform! so exciting!
But why does the 17-minute-trained jog look like a zombie nearly falling over?
That's amazing, considering that one of the GPUs it has been tested on was a 1080 Ti. Very accessible!
You could probably train this on a worse GPU if you have the patience
QWOP flashbacks.....
This will be amazing when combined with Boston Dynamics’ robotic systems. Should see how many iterations it needs to cycle and fire a weapon.
Cyberdyne you mean
I believe they're forgetting the above and only generating the below. It's completely different to run with just legs and feet, with no core or head
2:58 proof that naruto running is mathematically optimal
Imagine the world of lifting with info like this; you would be able to hit every single muscle group perfectly
These 17 minutes: do we know which hardware this was trained on?
I always hoped AI would work together with us 3D animators. But I fear now that it will end up replacing us...
Don't despair!
It will replace all of us.
If ur animating stuff you're done
@@greendholia5206 I am a 3D animator, let's hope it won't steal my job.
I wouldn't call that running. More like "trying not to fall" than running. But impressive all the same.
I think the problem is that the learning environment doesn't quite match reality enough for the model to learn how to run like humans. If we tried to run like that we'd hurt ourselves, too much stress on joints and ligaments. They're probably not accounting for this to keep it simpler
For most human beings learning to run essentially is trying not to fall. Walking is really just controlled falling.
we live in a simulation
Would be great to see if adding a torso will make the feet point forward
The guy is talking like an AI himself
1:44 A missed opportunity to call it "falling with style".
Can't wait to see this applied in games. Imagine fighting a giant spider & whittling it down leg by leg, if not muscle by muscle.
It would be interesting to see how this would combine with prosthetics and exosuit technology
If you could scan somebody into this model and scale it appropriately, then you could build the chair around that model to perfectly fit their body, and you could even sell them multiple chairs over a few years in order to correct posture.
crazy we might be able to get some realistic animation for games without mocap now too
Is this desirable? Do we want AI-driven-motion animation? I understand why we would want to train an AI to move muscles for real life robots, but for games, is mocap inadequate for any particular task?
@@michaelleue7594 Unfortunately for me, I want to animate animals instead of humans. Mocap will not do for me. This seems to work for anything from humans to animals and machines.
That's what I was thinking. In 3 years' time, this AI could cut the need for mocap in half or even eliminate it completely. Any normal person, from their home PC, would be able to move their objects with ease: running, swimming, dancing, fighting, etc.
Combine this AI with a camera-based tracking AI, like a phone or webcam that can fill in the blanks for close-up movements, and we've got ourselves a winning formula.
@@masterkc yup and think of the sometimes awkward transitions for movements we still get in games.. or things that physically aren't possible to recreate.. take something like spiderman.. yea we can mocap someone swinging and doing a motion.. but if this gets good enough we could literally have it run... jump and thwip to a building and get proper muscle reactions to it / some dynamic movements for compensation.. or say for instance the mocap performer is a completely different build/ weight... yea you can do certain things to try to match like adding weight to the performer and such but when someone deals with that every day they compensate for things differently..
realistically it would also make for more realistic crowd sims as well.. npc don't get the greatest animation sets but using ai it could literally make up idle animations.. think of say .. this plus stable diffusion.. once it gets trained enough.. and knows what x movement looks like.. we could type in.. pacing on phone then walks away and get multiple versions of the same animation and eventually in the even future future possibly games directly using it instead of having it for preset animation.. the game can be given different phrases of things characters can do and it could procedurally produce the animation for npcs essentially making a completely dynamic world
i don't see it replacing things for actual film making but it could be HUGE for games if executed properly
@@digital0785 I completely agree about NPCs using this AI... this would be a huge leap for gaming. You could have the most graphically beautiful game running at 4K 60 frames per second... but the one thing that always kills the experience is the limited and dull NPC movements.
Let's hope we see all this new tech get used in the next couple of years...besides a select few games, the AAA industry has been stagnant for a very long time.
So, basically, this opens up the possibility of a future where people bet, in a digital world, on who's the best digital athlete in any kind of discipline, based on how developers train the AI and what kind of data they used. Incredible. And who knows what kind of discoveries could be brought from those experiments to the real world.
It's been cool learning more about the art related papers, but I'm glad we took a break for this video. It's awesome seeing other areas using AI so well!
What a time to be alive!
Is it really 15 times faster, or did they use much better computers?
Would this AI be able to help the game and movie industries make their characters move 100x faster than the normal pipeline work?... which takes a team of 3D artists moving each individual body part...
This man is more supportive of the AI than my parents ever were of me.
Very. Nice. Video. I. Appreciate it.
The no-training part made me laugh. It looks so funny how the legs just straighten out and fall over
Thank you for sharing the most recent development in this area! These are indeed remarkable times to be alive!
But would it be possible to go a bit deeper into what made the increased learning ability possible? Is it hardware or software?
I propose setting up a new channel called 'Two-hour papers' for those interested in the science behind the results ;-) *Who's with me?*
Hmm, question: is it me, or does it look like it's not running correctly? I.e., the feet are pointing outwards in the wrong direction, so it's running on the insides of the right and left feet?
Elegance was not part of the objective function.
@@cbuchner1 Oh ok, so it's not my eyes. Maybe one or two more papers down the line we might be able to see it run for real.
Walking humans are humans falling in a controlled way. That is so very accurate.
We need:
An untrained neural net as a file (see the sketch after this list)
An environment to train the net, (maybe a game like Minecraft)
The ability to take the trained neural net and put it on different characters in different games
The ability to take the neural net and apply it to a robot in reality
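A minimal sketch of the "neural net as a file" part of that wishlist, using PyTorch. The layer sizes and file name are illustrative assumptions; transferring to other games or robots would additionally require matching observation and action spaces:

```python
import torch
import torch.nn as nn

# A hypothetical controller: observations in, muscle activations out
# (layer sizes are placeholders, not from any real game or robot).
policy = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 16), nn.Tanh(),
)

# "An untrained neural net as a file":
torch.save(policy.state_dict(), "untrained_policy.pt")

# Later, anywhere else (another character, another game, a robot controller),
# the same architecture can load those weights back:
policy.load_state_dict(torch.load("untrained_policy.pt"))
```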
I wonder if it learns to run in totally different ways each time you restart the training
It's possible; these models are trying to optimise for a given goal, so there may be different optimisations for the same goal.
Metal Gear Solid 4's Gekkos are seeming more practical with AI-driven muscle control.
Amazing... try the full human body next.
How can we use these movements?
17 min sounds great, but with what kind of hardware? One GPU? 4 GPUs? An entire data center?
2:58 he really turned into Naruto mode 😂
0:43 Those lifts look painful. Probably needs stress limits for bones, too
We also need to teach it to walk more efficiently in terms of energy consumption. Otherwise, sure, it looks like it's running fast, but in a super awkward way that takes up a lot of energy
Me teaching my pet robot how to run:
Someone please help me, I'm looking for software for visual details like this. I mean, I can use Python code to create objects and stuff like this, but what software is used in this video?
When will we get to play QWOP hard mode?
What a time to be alive!
Now use synthetic bones and muscles to construct an IRL replica, load up the model, and we have a robot
I wonder if it would be harder for an AI to learn to walk under the same conditions.
Thoughts on doing anything on RLGym for Rocket League AI? They have huge neural networks learning to play the video game Rocket League.
I always wondered how a pair of legs without an upper body would run and now thanks to computer science I know.
I'm very curious about the monster computer that trained this
I'm waiting for the team to introduce an AI that teaches humans how to swim and survive drowning.
Imagine if it runs like the fastest athlete.
...and once one model has been trained you simply copy it to another so each generation automatically starts with ALL the learning of the previous ones...
I'd love to see this with the Tesla Bot's actuators and hinges
Runs like the Titans from Shingeki no Kyojin
LOL.. more like controlled falling, but still amazing
Meta just called. They want to buy a billion pair of legs.
This looks like an amazing way for new people at the gym to learn the proper muscle-mind connection!
I feel like this century will be good.
Well, there goes all of Boston Dynamics's efforts for the last five years.
I get more dopamine from this man saying “what a time to be alive” than drugs
What would be the benefits of teaching a program how to walk?
training done in a software environment can be applied to hardware in a physical environment
I really want a game like this
damn those legs TOOK OFF!!
Documenting the birth of skynet - one 2mp at a time :)
The way those "legs" were running was painful to watch! 😬
With this, teaching robots will become a lot easier: no more of these slow movements
Awww, I hoped they'd raise training time from 1 hour to 1 month
what a time to be alive
Amputee puts on their new body. Yeah, wait a mo'. It takes about 30 seconds to calibrate first time. Great, it's initiated. Try walking now.
Incredible! Few more papers down the line and I'm sure we can combine this with Boston Dynamics to create Fully Realistically Comfortable Mechanical Legs for people without legs!
Amazing things ahead! Thanks for the Video 👏
Such a model will likely be at the source of realistic robots; this is insane. Now add an upper body for equilibrium, plus pain "sensors" and maximum-flexion parameters for the articulations, and you can get a really good walk.
Science is going to contribute in a great way to making artificial-intelligence NPCs for games. I can already see a game you can fully interact with as if it were real in our near future.
Hi Dr.!
So basically, if I'm not wrong, this could be used to teach robots to walk, no?
I held on to my papers too hard. Now they are crinkled :(
Early training 2 on the legs was zesty as shit
Wait, but how? I feel like this video has been unusually superficial/short. Are they just showing off results, or is there some understanding of what's different from previous techniques?
It's a cool start, but as other commenters have said, it's going to need a more complex system to control in order to get a more realistic running gait; the difference between 200 cycles and 2000 cycles there was almost negligible, if we're being honest.
I also find that almost all evolutionary techniques have a similar issue, which is that the AI is not necessarily incentivized to find the most natural or robust solution, just the most effective one. This results in an AI that can control well in the insulated bubble of the simulation it's been trained in, but it's also often impractical or sometimes flat-out impossible to transfer the movements it's doing to the real world, especially because 9 times out of 10 it's simply exploiting physics glitches or the overly simplified model of reality it was trained within. Even if you could make a realistic enough model to get a realistic result out of it, it feels like you're going against the grain instead of with it.
Evolutionary algorithms feel like trying to push water through a pipe with 50,000 leaks: you *could* just patch it up every time you find a new leak, but it would be a much better use of your time to find a better pipe with fewer leaks to begin with.
Dani has done that