Augmented Reality Assembly Demo

  • Published 29 Sep 2024
  • We work on marker-less tracking for augmented reality (AR) applications with a focus on AR for assembly assistance. This video shows the latest improvements. We work on robustness, and we have (almost) achieved our goal: regardless of what one is doing, we find the part and keep track of it. There are still ways to break it and, of course, the camera must see a portion of the object(s) of interest.

Comments • 12

  • @nirajkarki9314 • 8 months ago

    Hello! Have you published a paper for this work? If so, what is the title of the paper? Thank you!

  • @yangbert7288 • 6 years ago +1

    Hey, how did you achieve "part" detection and recognition, I mean the display of the green box? Machine learning? Would you give me some advice about that? Many thanks.

    • @rafaelradkowski1329 • 6 years ago +2

      I actually have three videos in the queue that explain the techniques behind it. There were some complications in getting them out, and I still have to wait for a paper to be accepted. It should not take much longer, since I have already been working on that for a year. Also, there is a good chance that all of this will become open source at the end of this year. No guarantee, but I am working on that too.
      To your questions. It is a three-step process: detection -> registration -> tracking.
      Detection: feature descriptor matching using pairs of principal curvatures to match geometric properties, e.g., edges.
      Registration (and pose estimation): it is just plain ICP.
      Tracking: Kalman filter.
      Of course, there are plenty of details that make it work, and the essential steps run on a GPU to maintain real-time performance.
      We also work on a CNN solution for detection in point clouds, which will replace or supplement the feature detector. (A rough sketch of such a pipeline is outlined below.)
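
A minimal C++/Eigen sketch of such a detection -> registration -> tracking loop, under assumptions: the names (alignSVD, PoseKalman), the synthetic model/scene data, and the noise constants are hypothetical, the correspondence-given SVD alignment stands in for one iteration of a full ICP loop, and the Kalman filter only smooths the translation. It illustrates the idea, not the implementation shown in the video.

    // Sketch of a detection -> registration (ICP-style) -> tracking (Kalman) loop.
    // Hypothetical, self-contained example; not the code used in the video.
    #include <Eigen/Dense>
    #include <iostream>
    #include <vector>

    using Cloud = std::vector<Eigen::Vector3d>;

    // Registration: one correspondence-given Kabsch/SVD alignment, i.e. the core
    // step that a full ICP loop would repeat after re-matching nearest neighbours.
    Eigen::Matrix4d alignSVD(const Cloud& src, const Cloud& dst) {
        Eigen::Vector3d cs = Eigen::Vector3d::Zero(), cd = Eigen::Vector3d::Zero();
        for (std::size_t i = 0; i < src.size(); ++i) { cs += src[i]; cd += dst[i]; }
        cs /= double(src.size()); cd /= double(dst.size());
        Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
        for (std::size_t i = 0; i < src.size(); ++i)
            H += (src[i] - cs) * (dst[i] - cd).transpose();
        Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
        Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
        if (R.determinant() < 0.0) {                          // guard against reflections
            Eigen::Matrix3d V = svd.matrixV(); V.col(2) *= -1.0;
            R = V * svd.matrixU().transpose();
        }
        Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
        T.block<3,3>(0,0) = R;
        T.block<3,1>(0,3) = cd - R * cs;
        return T;
    }

    // Tracking: constant-velocity Kalman filter that smooths the translation only.
    struct PoseKalman {
        Eigen::Matrix<double,6,1> x = Eigen::Matrix<double,6,1>::Zero(); // [position; velocity]
        Eigen::Matrix<double,6,6> P = Eigen::Matrix<double,6,6>::Identity();
        void predict(double dt) {
            Eigen::Matrix<double,6,6> F = Eigen::Matrix<double,6,6>::Identity();
            F.block<3,3>(0,3) = dt * Eigen::Matrix3d::Identity();
            x = F * x;
            P = F * P * F.transpose() + 1e-3 * Eigen::Matrix<double,6,6>::Identity();
        }
        void update(const Eigen::Vector3d& z) {
            Eigen::Matrix<double,3,6> Hm = Eigen::Matrix<double,3,6>::Zero();
            Hm.block<3,3>(0,0) = Eigen::Matrix3d::Identity();
            Eigen::Matrix3d S = Hm * P * Hm.transpose() + 1e-2 * Eigen::Matrix3d::Identity();
            Eigen::Matrix<double,6,3> K = P * Hm.transpose() * S.inverse();
            x += K * (z - Hm * x);
            P = (Eigen::Matrix<double,6,6>::Identity() - K * Hm) * P;
        }
    };

    int main() {
        // Detection (feature-descriptor matching) would supply correspondences and a
        // coarse pose; here a tiny synthetic model/scene pair stands in for it.
        Cloud model = {{0,0,0}, {1,0,0}, {0,1,0}, {0,0,1}};
        Eigen::Matrix3d Rgt = Eigen::AngleAxisd(0.1, Eigen::Vector3d::UnitZ()).toRotationMatrix();
        Eigen::Vector3d tgt(0.2, -0.1, 0.05);
        Cloud scene;
        for (const auto& p : model) scene.push_back(Rgt * p + tgt);

        Eigen::Matrix4d T = alignSVD(model, scene);   // registration / pose estimation
        PoseKalman kf;
        kf.predict(1.0 / 30.0);                       // per-frame prediction
        kf.update(T.block<3,1>(0,3));                 // feed the measured translation
        std::cout << "filtered translation: " << kf.x.head<3>().transpose() << std::endl;
        return 0;
    }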

  • @Sumitanad • 3 years ago

    Is it possible to achieve this functionality through Vuforia?

    • @rafaelradkowski1329 • 3 years ago

      I think so. With model-based tracking, it should be possible to realize something like this.

  • @PooPaPaw • 5 years ago

    Great job!
    Have you published a paper related to this work? Thanks

  • @nishadnazar8757 • 6 years ago

    Hi, this looks fantastic. Would you share which AR platform is used as the SDK? Unity3D?

    • @rafaelradkowski1329 • 6 years ago

      Thanks. The graphics are based on OpenSceneGraph. Detection and tracking are mostly written in C++ (no tracking SDK). We use some support libraries such as Eigen, CUDA, and OpenCV for basic functions. (A rough sketch of how these pieces fit together follows after this thread.)

    • @nishadnazar8757 • 6 years ago

      Rafael Radkowski Thanks for your response. Much appreciated. Okay, it's not totally different from what I thought. I have worked on ThingWorx, so I was thinking this was somehow related to AR in IoT.
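
A rough sketch, under assumptions, of how that stack could be wired together: OpenCV grabs camera frames, a tracker (omitted here) would estimate the part pose, and OpenSceneGraph renders the model under a MatrixTransform. The model file name ("part.obj") and the fixed placeholder pose are hypothetical, and rendering the camera image as the AR background is left out.

    // Sketch: OpenCV capture loop driving an OpenSceneGraph overlay.
    // Hypothetical wiring only; pose estimation and video background omitted.
    #include <opencv2/videoio.hpp>
    #include <osg/MatrixTransform>
    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>

    int main() {
        osg::ref_ptr<osg::Node> part = osgDB::readNodeFile("part.obj");  // placeholder model
        if (!part.valid()) return 1;

        osg::ref_ptr<osg::MatrixTransform> xform = new osg::MatrixTransform;
        xform->addChild(part.get());

        osgViewer::Viewer viewer;
        viewer.setSceneData(xform.get());
        viewer.realize();

        cv::VideoCapture cap(0);
        cv::Mat frame;
        while (!viewer.done() && cap.read(frame)) {
            // A tracker (detection + ICP + Kalman filter) would turn `frame` into a
            // 4x4 pose here; a fixed translation stands in for that result.
            xform->setMatrix(osg::Matrix::translate(0.0, 0.0, -1.0));
            viewer.frame();   // render one frame with the updated transform
        }
        return 0;
    }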

  • @정소희-w4b • 7 years ago

    Good job. I have a question: where did the Unity engine parts assets come from?

    • @rafaelradkowski1329 • 5 years ago

      Thanks. Sort of. We published some aspects but are still short of the entire story due to grant restrictions. I am working on an open-source version + paper (for two years now, and I have already promised it for two years). But we are close... 2019 will be the year.

  • @liquor-dtz8426 • 6 years ago

    Do you have a paper? Can you teach me?