General Robotics Lab
HUMAC: Enabling Multi-Robot Collaboration from Single-Human Guidance
Project website (paper, code, video): generalroboticslab.com/HUMAC
Abstract: Learning collaborative behaviors is essential for multi-agent systems. Traditionally, multi-agent reinforcement learning solves this implicitly through a joint reward and centralized observations, assuming collaborative behavior will emerge. Other studies propose to learn from demonstrations of a group of collaborative experts. Instead, we propose an efficient and explicit way of learning collaborative behaviors in multi-agent systems by leveraging expertise from only a single human. Our insight is that humans can naturally take on various roles in a team. We show that agents can effectively learn to collaborate by allowing a human operator to dynamically switch between controlling agents for a short period and incorporating a human-like theory-of-mind model of teammates. Our experiments showed that our method improves the success rate of a challenging collaborative hide-and-seek task by up to 58% with only 40 minutes of human guidance. We further demonstrate our findings transfer to the real world by conducting multi-robot experiments.
Views: 129

Videos

The Duke Humanoid: Design and Control For Energy Efficient Bipedal Locomotion Using Passive Dynamics
Views: 712 · 12 hours ago
Project website (paper, code, video): generalroboticslab.com/DukeHumanoidv1 Abstract: We present the Duke Humanoid, an open-source 10-degrees-of-freedom humanoid, as an extensible platform for locomotion research. The design mimics human physiology, with minimized leg distances and symmetrical body alignment in the frontal plane to maintain static balance with straight knees. We develop a reinf...
WildFusion: Multimodal Implicit 3D Reconstructions in the Wild
Views: 138 · 12 hours ago
Project website (paper, code, video): generalroboticslab.com/WildFusion Abstract: We propose WildFusion, a novel approach for 3D scene reconstruction in unstructured, in-the-wild environments using multimodal implicit neural representations. WildFusion integrates signals from LiDAR, RGB camera, contact microphones, tactile sensors, and IMU. This multimodal fusion generates comprehensive, contin...
CREW: Facilitating Human-AI Teaming Research
Views: 358 · 2 months ago
Project website (paper, code, video): generalroboticslab.com/CREW Abstract: With the increasing deployment of artificial intelligence (AI) technologies, the potential of humans working with AI agents has been growing at a great speed. Human-AI teaming is an important paradigm for studying various aspects when humans and AI agents work together. The unique aspect of Human-AI teaming research is ...
[VCC-ALIFE 2024] Text2Robot: Evolutionary Robot Design from Text Descriptions
Views: 785 · 2 months ago
Virtual Creature Competition Submission: Text2Robot: Evolutionary Robot Design from Text Descriptions. Duke General Robotics Lab. Authors: Ryan P. Ringel∗, Zachary S. Charlick∗, Jiaxun Liu∗, Boxi Xia, Boyuan Chen. (* denotes equal contribution) Full Project website (paper, code, hardware manual, video): generalroboticslab.com/Text2Robot/ Abstract: Robot design has traditionally been costly and ...
ClutterGen: A Cluttered Scene Generator for Robot Learning
Views: 242 · 2 months ago
Project website (paper, code, video): generalroboticslab.com/ClutterGen Abstract: We introduce ClutterGen, a physically compliant simulation scene generator capable of producing highly diverse, cluttered, and stable scenes for robot learning. Generating such scenes is challenging as each object must adhere to physical laws like gravity and collision. As the number of objects increases, finding ...
Text2Robot: Evolutionary Robot Design from Text Descriptions
Views: 1.2K · 3 months ago
Project website (paper, code, hardware manual, video): generalroboticslab.com/Text2Robot/ Abstract: Robot design has traditionally been costly and labor-intensive. Despite advancements in automated processes, it remains challenging to navigate a vast design space while producing physically manufacturable robots. We introduce Text2Robot, a framework that converts user text specifications and per...
Perception Stitching: Zero-Shot Perception Encoder Transfer for Visuomotor Robot Policies
Views: 246 · 3 months ago
Project website (paper, code, video): generalroboticslab.com/PerceptionStitching Abstract: Vision-based imitation learning has shown promising capabilities of endowing robots with various motion skills given visual observation. However, current visuomotor policies fail to adapt to drastic changes in their visual observations. We present Perception Stitching that enables strong zero-shot adaptat...
SonicSense: Object Perception from In-Hand Acoustic Vibration
Views: 420 · 3 months ago
Project website (paper, code, video): generalroboticslab.com/SonicSense Abstract: We introduce SonicSense, a holistic design of hardware and software to enable rich robot object perception through in-hand acoustic vibration sensing. While previous studies have shown promising results with acoustic sensing for object perception, current solutions are constrained to a handful of objects with simp...
Robot Studio Class - Tutorial Video on Fusion 360 Export
Views: 196 · 8 months ago
Tutorial video on Fusion 360 design history export from Robot Studio class at Duke University. Course website: generalroboticslab.com/RobotStudioSpring2024/index.html Code: github.com/general-robotics-duke/FusionHistoryScript Credit: Teaching Assistant: Zach Charlick
Policy Stitching: Learning Transferable Robot Policies
Views: 690 · 1 year ago
Conference on Robot Learning 2023 (CoRL 2023). Project Website: generalroboticslab.com/PolicyStitching/ Abstract: Training robots with reinforcement learning (RL) typically involves heavy interactions with the environment, and the acquired skills are often sensitive to changes in task environments and robot kinematics. Transfer RL aims to leverage previous knowledge to accelerate learning of ne...
Discovering State Variables Hidden in Experimental Data
Views: 4.1K · 2 years ago
Project website: www.cs.columbia.edu/~bchen/neural-state-variables/ Abstract: All physical laws are described as relationships between state variables that give a complete and non-redundant description of the relevant system dynamics. However, despite the prevalence of computing power and AI, the process of identifying the hidden state variables themselves has resisted automation. Most data-dri...
Full-Body Visual Self-Modeling of Robot Morphologies
Views: 3.2K · 2 years ago
The project website is at: robot-morphology.cs.columbia.edu/ Author: Boyuan Chen, Robert Kwiatkowski, Carl Vondrick, Hod Lipson. Abstract: Internal computational models of physical bodies are fundamental to the ability of robots and animals alike to plan and control their actions. These "self-models" allow robots to consider outcomes of multiple possible future actions, without trying them out ...
(Data Collection) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models
Views: 15K · 3 years ago
Data collection video. To appear at ICRA 2021. The project website is at: www.cs.columbia.edu/~bchen/aiface/ Full overview video: th-cam.com/video/fYURp2OaGn0/w-d-xo.html Hardware description video: th-cam.com/video/STx2HMHJFY8/w-d-xo.html Demo video: th-cam.com/video/L5ZJ8zKJXlk/w-d-xo.html Abstract: Ability to generate intelligent and generalizable facial expressions is essential for building...
(Demos) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models (ICRA 2021)
Views: 8K · 3 years ago
Demo video. To appear at ICRA 2021. The project website is at: www.cs.columbia.edu/~bchen/aiface/. Full overview video: th-cam.com/video/fYURp2OaGn0/w-d-xo.html Hardware description video: th-cam.com/video/STx2HMHJFY8/w-d-xo.html Data collection video: th-cam.com/video/Ws-me3gYZ74/w-d-xo.html Abstract: Ability to generate intelligent and generalizable facial expressions is essential for buildin...
(Overview) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models (ICRA 2021)
Views: 1.4K · 3 years ago
(Hardware Animation) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models
Views: 14K · 3 years ago
The Boombox: Visual Reconstruction from Acoustic Vibrations
Views: 1.5K · 3 years ago
Visual Perspective Taking for Opponent Behavior Modeling
Views: 699 · 3 years ago

Comments

  • @stimpyfeelinit · 3 days ago

    give it some big meaty cheeks

  • @Flourish38 · 3 days ago

    That's sooo cool! I love stuff like this, and I especially like the integration of passive dynamics, since that seems to be often neglected. Heck, I didn't even know that was the word for it until seeing this video... Great work!

  • @Alice8000 · 29 days ago

    can i buy that?

  • @wuliwuli241 · a month ago

    Using the 3D bounding box of objects for positioning with a bin packing algorithm, or even just using the 2D bounding box information in the xy plane to perform bin packing, could solve this problem. Why design such a complex system to address such a basic issue?

    • @yj-th2hw · a month ago

      This is a very good question! There are several limitations of bin-packing algorithms compared to our method. First, our task is to determine a physics-compliant stable pose for the queried object, even one with an irregular shape. The main challenge is finding the desired position in a cluttered environment where collisions are sometimes acceptable, such as in stacking, which is not allowed in packing algorithms. You can refer to some generated scene setups at 2:17. Second, our method also considers the diversity of the generated scenes, which is essential for robot training, whereas packing algorithms always place objects in fixed or heuristic ways. Finally, our method can zero-shot generalize to different queried regions after training, while even the most efficient 2D bin-packing algorithm still requires O(n log n) time; moreover, our task operates in 3D with object rotation. Let us know if you have further feedback!

  • @nowymail · 3 months ago

    They have knees but don't use them at all. Their joints are too big and blocky, making them look awkward instead of cute. Making them bigger while leaving the hardware the same size might look better. Overall a nice concept, but it still needs a lot of work.

  • @nahum8240 · 3 months ago

    Tremendous, a step forward toward the movie robots we've been expecting, haha. Good video.

  • @Joshi_Bhai · 3 months ago

    Very insightful.

  • @fire17102 · 3 months ago

    This is awesome ❤ Lately I've been thinking that I wouldn't want a humanoid robot in the house, but I would definitely go for a small monkey-like robot, like an AI animal companion haha

    • @NuntiusLegis · 3 months ago

      Humans are monkeys. :-)

  • @VMasotti · a year ago

    awesome

  • @lucamatteobarbieri2493 · a year ago

    You could also make it produce sounds from the visual clues.

  • @lucamatteobarbieri2493 · a year ago

    The internal representations of the self and the outer world, in which we enact thought experiments such as imagination and anticipation, are present in humans and could be expanded on in future works like this. Also, to pursue goals through action, internal networks that can learn motivation, desires, and repulsion are needed to get to consciousness.

  • @Shengineer · 2 years ago

    Good stuff!

  • @nottoday2131 · 2 years ago

    Incredible.

  • @redpandachannel7981 · 2 years ago

    This might be the discovery of the century; the potential is almost infinite!

  • @corpsdharmonie · 2 years ago

    Incredible discovery! Can't wait to follow the evolution of your research!! Your AI seems very promising 😍

    • @redpandachannel7981 · 2 years ago

      I just discovered this concept in the Journal de l'Espace, and indeed it is simply incredible!

    • @corpsdharmonie · 2 years ago

      @@redpandachannel7981 Likewise! And thanks to Quentin and his team if he passes by. Studying the results of this AI opens the way to incredible perspectives and major discoveries that could revolutionize our understanding of the world and give humanity the keys to suddenly expand into a happier future with multiple possibilities... Let's dream!!

  • @pietrodifrancescantonio · 2 years ago

    This is insane. Congratulations!

  • @pion137 · 2 years ago

    I fell in love with this concept back when I used Nutonian Eureqa (and lots of good pointers from M Schmidt). This is next level! There are few ML endeavors more beneficial to humanity than discerning physical theorems directly from data with no initial primer. Very excited to see where you take this next. Makes me want to use my physics education a bit more and get coding!

  • @therobotstudio · 2 years ago

    Total genius, I love that fire has by far the highest ID.

  • @therobotstudio · 2 years ago

    Oh that's just fantastic work!

  • @sonic-1968 · 2 years ago

    First we'll make them look like us, then give them intelligence, but not too smart (otherwise the robots will realize they're better than us and that we're superfluous in this world), then robots will replace people (after all, robots don't get sick, don't ask for benefits, and work around the clock; just keep them charged, obedient in everything)... Meanwhile they keep jabbing us with that COVID brew who knows how many times, until our immunity drains to zero and we kick the bucket... Millions are already in the next world, and that's not the limit... So they really do need those robots now...

  • @МихаилМихайлович-ь5ч · 2 years ago

    An amazing project👍

  • @МихаилМихайлович-ь5ч · 2 years ago

    Super! I'd like to find out what this artificial skin is made of, and whether you can make it yourself. I'd try it for prosthetics.

    • @sonic-1968 · 2 years ago

      th-cam.com/video/b-eWgphPG3g/w-d-xo.html how about this one)

  • @quantumfineartsandfossils2152 · 2 years ago

    my selfie experiment is a hit

  • @isaacsantos8865 · 2 years ago

    The expressions are more or less okay, but most of them are negative; you hardly see any joy, and I barely saw a smile.

  • @IkumonTV · 2 years ago

    Wonderful!

  • @mycelleismybffl · 2 years ago

    wow so good! what did you use to animate this!?

  • @Ms.Robot. · 2 years ago

    ❤️😍❤️

  • @adityapurushotham982 · 2 years ago

    Can you please share the STL files?

  • @mk8_it · 2 years ago

    can we get a how to video soon?

  • @juzhanxu4237 · 2 years ago

    At 2:30, when you show the examples of computing the value map, I saw 3 colors representing the value. If blue represents 0 and red represents 1, what does the gray color represent?

  • @franksiam2975 · 3 years ago

    give me more vids pleaaase

  • @zerosugarmatcha7348 · 3 years ago

    Good job

  • @Nano12123 · 3 years ago

    Why the heck does this not have more views?

  • @JasmineSurrealVideos · 3 years ago

    Why is she blue? Is there any logical reason for this colour choice? Is it to prevent the uncanny valley? Or can you see the expressions better? It's utterly fascinating the way it replicates human expression; so far it hasn't quite got the exact facial expressions, but like a baby it will learn. It's the development factor that holds my interest, how it could be in time.

    • @lucamatteobarbieri2493 · a year ago

      I imagine blue is good to prevent the uncanny valley and is not specific to any human population's skin color.

  • @margheritapryor8479 · 3 years ago

    Of course, EVA is the name of the robot in WALL-E, which everyone could completely interact with despite almost zero human face similarities. Same with WALL-E, whose "humanity" was presented by its camera eyes and most importantly, it's voice. Maybe we don't need all the bells and whistles, fascinating though they may be!