Came back to enjoy this again after watching Lao Gao's livestream~
Me too
Lao Gao waving from the mountains 😄
After 90 attempts, I was waiting for pepper to throw the ball and cup against the wall and smash it.
Agreed.
But he can't do that because he has no choice but to obey.
that'll be happening when he's on SNL
Imagine with a little practice how good they'd be at capturing humans for their people zoo.
Or dancing.
With the quick reflexes of the robot, it seems it would have been easier to teach it to locate the ball in motion and move the cup into position based on the ball's trajectory, instead of trying to create an optimal swing. So is the robot really learning?
The point isn't to make a robot that can catch a ball in a cup. The point is to make a robot that can learn how to play ball-in-a-cup by practicing.
So it shouldn't actually need to use the camera?
AYYYYYYYYYY YOOOOOOOOO patty ima big fan of you, didn't think you would be here XD
Poor Pepper, 90 trials... but after that it never fails... cute and scary
Something so simple... and yet so scary. The Robots Cometh!
For some reason I forget that Pepper is just a robot, a machine. That makes it seem very impressive after the 100 attempts.
It's a bit slow at learning. Once it knows the way to move, it's always going to get the ball in the cup. What would be impressive is if it were moving around while doing this!
Just a few questions. It seems the only thing modified for each trial is the starting position, while the joint movements are the same each time, right? If you were using only cameras attached to the robot, then I assume the arm was moved along only one axis? I have trouble imagining how the algorithm knew whether its adjustments were bringing it closer to or further from achieving the goal. Also, recursive algorithms have been around for decades now; is this approach any different, or just a unique implementation?
Impressive. Even scary...
I wish she could reset my ball and cup.
This is amazing.
While it is cool that in the end the optimal trajectory pretty much guarantees a success, it would be nice if this learning process could be abstracted and applied to different ball/cup/string dimensions. I'm guessing in this case the optimal trajectory only holds for the same test conditions?
Yes, but we have another intern who is now working on generalizing the solution to different starting conditions.
Magnificent, congratulations on your work
Will it ever learn to get the ball out of the cup by itself? Or untangle the string? That would be awesome, but I am impressed nonetheless
Getting the ball out of the cup would have been simple to do, but untangling the string would be a little harder :)
Hello Asya,
Are you publishing this method?
Also, technically, does Pepper use its own camera to check how close the ball is to the cup? Or do you use external tracking?
We used two external cameras, one at the top and one at the side. When the side camera detected that the ball was moving downward and very close to the cup, the top camera took a picture, and from this picture we measured the distance from the ball to the cup.
I have also been able to successfully optimize the ball-in-cup task with user feedback (a score between 0 and 10) depending on how good the throw was.
Thank you for your answer! I hope I have time to try this with Nao soon.
I see, good to know for the cameras.
User feedback is such a simple yet elegant idea! Perfect for Pepper.
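The reply above describes two reward signals: a ball-to-cup distance measured from the top camera's picture, and a user-given score between 0 and 10. As a minimal sketch of how the measured distance could be mapped onto the same 0-10 scale (the pixel coordinates and the `max_dist_px` cutoff are illustrative assumptions, not values from the video):

```python
import math

def distance_reward(ball_xy, cup_xy, max_dist_px=200.0):
    """Map the ball-to-cup pixel distance from the top camera's
    picture into a 0-10 score, matching the user-feedback scale.
    A throw landing in the cup (distance ~0) scores 10; anything
    at or beyond max_dist_px scores 0."""
    dist = math.hypot(ball_xy[0] - cup_xy[0], ball_xy[1] - cup_xy[1])
    return 10.0 * max(0.0, 1.0 - dist / max_dist_px)

# Example: ball detected 50 px from the cup centre in the top image
print(distance_reward((120, 80), (160, 110)))  # 50 px away -> 7.5
```

Using one scale for both signals would let the automatic measurement and the human score plug into the same optimizer without any other change.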
Is the source code for your implementation public?
impressive 😉
What algorithm and parameters were used?
Still better at it than I am!
Most didn't realize what the f! this means: it learns! And it won't get bored or tired of the process, like us humans... Imagine the possibilities (like Skynet kkk)
Is that bilboquet taped to Pepper's hand?
yes! :)
wow, great work!
That's awesome! Can this robot be programmed by users to test other algorithms and make it smarter?
Rand Lee Yes.
Can it do the housework?
I wonder if this could be used to learn to play Snooker.
Those creepy soulless chibi eyes belie its sinister plan for the singularity and the end of mankind itself.
Well played, coach.
fantastic
a curious game... the only winning move is not to play...
Amazing...
EVERYBODY RUN!!!!
Haha, we did that back in 2008!!!
It's Amazing!
but I feel so sad
Get outta here man!, really! :O
Wow, that gives me goosebumps
I can do that. Stupid robots.
lol, "learning"... more like programmers in the background making adjustments. And the second half of the video is downright funny. SoftBank is a joke
lol wut?
It's an evolutionary algorithm: the AI is making the adjustments, not the programmers. Why are you so hostile?
Aggrieved AI?
Everything is learned automatically, except stabilizing the ball before each throw.
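The reply above says an evolutionary algorithm makes the adjustments. The video doesn't name the exact method, so here is a hedged sketch of the simplest family member, a (1+1) evolution strategy that perturbs the throw parameters and keeps a mutant only when the trial scores better; the parameter count, toy reward, and 90-trial budget are illustrative assumptions:

```python
import random

def one_plus_one_es(reward, theta, sigma=0.1, iters=90, seed=0):
    """Minimal (1+1) evolution strategy: mutate the trajectory
    parameters with Gaussian noise, keep the mutant only if the
    throw scores better. `reward` would come from the camera
    measurement; any callable mapping parameters -> score works."""
    rng = random.Random(seed)
    best = reward(theta)
    for _ in range(iters):
        candidate = [t + rng.gauss(0.0, sigma) for t in theta]
        r = reward(candidate)
        if r > best:  # successful trial: adopt the mutated parameters
            theta, best = candidate, r
    return theta, best

# Toy stand-in for a real throw: best score when theta hits the target
target = (0.3, -0.5)
score = lambda th: -sum((a - b) ** 2 for a, b in zip(th, target))
theta, best = one_plus_one_es(score, [0.0, 0.0])
```

No gradients or background programmer intervention are needed; the loop only ever asks "was this throw better than the best one so far?", which matches the trial-and-retry behavior seen in the video.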
This is amazing.