I don't understand what the actual research here is. It looks like someone just found the AI hammer and tried hammering something. The confidence map needs to be at such a high resolution that it isn't efficient to feed into a multi-agent model for motion control, and there's no special hidden-layer representation that could make this data more digestible for downstream agents. And it's all done in good lighting. We've already solved this problem: use special end effectors/robot feet. Try to be more innovative, please. This research paper is literally: "Hey, what if I put this into the machine? Oh! It did something!"
"I'm sorry, Dave. I'm afraid the floor is indeed not made of lava. I can safely cross to the next room."
I only understand bits and pieces, but it's awesome.
Finally, truly autonomous robot dogs are getting closer and closer.
Cool! I think the link in the description is broken though
Great work.
Hello! Can you share your work?