NVIDIA NERF vs Reality Scan iOS App from Reality Capture + Epic Games

  • Published 10 May 2024
  • Comparing the 3D OBJ export capability of NVIDIA NERF vs RealityScan.
    The RealityScan app was released on the 4th of April for beta testing, and the first 10 000 people who sign up will also get access to the Sketchfab Pro app, including 50 uploads, as long as you hand over all rights to any content you create using the app in beta. So this is currently purely for testing and feedback purposes.
    Join RealityScan Beta www.capturingreality.com/intr... (Limited to 10 000 spaces)
    I thought I would do this video first because, really, if you want to create usable 3D objects, NERF is not currently the way to do it; photogrammetry is.
    Meshroom - alicevision.org/ (Free Photogrammetry software)
    If you found this useful and want to show some appreciation you can buy me a coffee - www.buymeacoffee.com/dirkteucher
    0:00 Start
    1:21 NERF problems
    2:46 Reality Scan iOS 3d photogrammetry
    4:00 The best software for photogrammetry
    5:13 iPhone masking with depth sensor
  • Film & Animation

Comments • 15

  • @ItsYBwhoa 2 years ago +1

    Noooooooo, I'm late for the TestFlight beta lol.
    Probably gotta wait till late spring for early access.
    Interesting, looking forward to the next tests!

  • @nicolasportu 1 year ago +1

    Outstanding research! I agree with you about the poor OBJ/FBX export, but what about the PLY format? Can we use that and get depth in Unity or Unreal? Thanks!

  • @RussianO1eg 2 years ago +1

    Aaaand with RealityCapture you will have a much better result, especially with custom markers and a suitable background (some newspapers or magazines on the table, for example).

    • @DirkTeucher 2 years ago

      Exactly my thoughts as well. RealityCapture and Meshroom both produce excellent results, but RealityCapture in my experience is a little faster and better.

  • @Instant_Nerf 1 year ago

    Whatever happened to the iOS app? It's been 4 months.

  • @clintonferns4u 2 years ago +1

    How far do you think Nerf is from achieving quality like Reality Capture?

    • @DirkTeucher 2 years ago +5

      In theory, if NERF could be applied to the photogrammetry algorithms, then a good developer who understands both, plus CUDA, could probably implement it in a few months or maybe a year.
      From what I can see in the code, a great deal of this uses COLMAP to generate the data and the sparse point cloud, and then the bulk of the code is CUDA, which is what makes this process so ridiculously fast on the GPU. It looks to me like what this currently does is show developers how to use a GPU to process images into 3D the same way photogrammetry currently does on the CPU. So if a photogrammetry engineer looked at the CUDA code to see how the NVIDIA engineers utilised the GPU, they should be able to figure out a way to utilise NVIDIA GPUs for photogrammetry, either with a new tech stack or with existing code like RealityCapture or Meshroom.
      I could be wrong here, as I do not understand the math behind light fields or photogrammetry, so I do not know exactly how interchangeable they are or how feasible this is. But my hope is that instead of just having a CPU crunch the data for an hour, as it currently works, we should at some point be able to use both the CPU and the GPU to do the work, which would make things potentially 10-20 times faster if NERF is anything to go by.
      And hopefully the code here will help someone to figure out how to do that. It sure won't be me :D
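The hand-off described above (COLMAP produces camera poses and a sparse point cloud, which are converted into a transforms.json file that the CUDA trainer consumes) can be sketched at the data level. This is a minimal, hypothetical sketch: the field names (`camera_angle_x`, `frames`, `file_path`, `transform_matrix`) follow instant-ngp's transforms.json layout, but the camera value and file name here are made up for illustration.

```python
import json
import os
import tempfile

def write_transforms(path, frames, camera_angle_x=0.69):
    """Write a minimal instant-ngp-style transforms.json.

    Each frame pairs an image path with a 4x4 camera-to-world matrix,
    which is what the COLMAP conversion step produces for the trainer."""
    data = {
        "camera_angle_x": camera_angle_x,  # horizontal field of view, radians
        "frames": [
            {"file_path": fp, "transform_matrix": matrix}
            for fp, matrix in frames
        ],
    }
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
    return data

# Hypothetical example: a single frame with an identity camera pose.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
tmp = os.path.join(tempfile.mkdtemp(), "transforms.json")
write_transforms(tmp, [("images/0001.jpg", identity)])

with open(tmp) as f:
    loaded = json.load(f)
print(len(loaded["frames"]))  # 1
```

A photogrammetry pipeline could in principle consume the same pose file, which is why the conversion layer, not the renderer, is the natural bridge between the two approaches.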

  • @kasali2739 1 year ago

    What's the output of NERF? Is it a coloured point cloud?

    • @DirkTeucher 1 year ago

      The devs call it a light field, but to me it looks like a coloured point cloud. I think there is a distinction between the two, as the "light field" in NERF output appears to change based on the angle you are viewing it from.

  • @zyxwvutsrqponmlkh 1 year ago +2

    I've been extremely frustrated with NVIDIA NERF. Despite the hype, the thing is crap, and the developers are actively antagonistic about fixing even the simplest folder-structure bugs, which cause a lot of difficulty on Windows.

    • @DirkTeucher 1 year ago

      Check out Luma AI. They use NERF, and the results are the best I have seen. But it depends on what you want to use NERF for. If you want to produce 3D models, then yeah, NERF is still no good for that because it's a point cloud of light approximation. I think it will require more research and some new breakthrough to be able to improve on photogrammetry. But for what it does do, capturing a scene from video, it is pretty incredible, and I consider it a completely new tool.
      Also, I watch the NERF GitHub repo and they seem pretty active to me; I see updates virtually daily. Though it is frustrating when something you want added is not worked on; I get that myself from time to time. Did you raise a pull request for that Windows folder-change improvement? Drop the name of it here and I will support it if I can. But don't post a link here, as YouTube blocks comments with external links quite often; it looks like it's posted, but if you refresh, the comment is gone.

    • @zyxwvutsrqponmlkh 1 year ago

      @@DirkTeucher Someone already did it, explained the problem in detail, and made the fix, but Tom94 said no because he does not want Windows normies to be able to use the tool without extreme pain; he only wanted it to work well for the three people that use it on Linux. instant-ngp/pull/1051 (Yes, YouTube deleted the comment like you said, because they are morons.)
      He says we can't have something that works at all because IF you zipped up your output and tried to import it into Linux, the folder structure would need to be adjusted. And because he doesn't want that, the whole thing is broken for Windows, will remain broken for Windows, and he will fight to keep it broken for Windows. I just want the thing to run at all; I am not zipping up results and trying to re-import them on Linux, but he hates Windows and Windows users so much he would intentionally break things for us.
      I spent two days getting the thing installed and running, and all I got was the fox to run once with the original JSON. I can't even replace the fox pictures with my own in the fox directory and have it still work; the whole thing is just broken beyond belief. I don't even know how many instances of Visual Studio 2019 or Python are installed on my computer right now.

    • @DirkTeucher 1 year ago

      @@zyxwvutsrqponmlkh I subscribed to that pull request to show my support. I agree with you; it does seem to me that a functioning --out argument should be possible :D . I found that quite annoying at first too. The way I got around it was to just grab the transforms.json file and then find-and-replace the path automatically using a script.
      You cannot replace the fox images with your own and get it to work. Check out this NERF video where I walk through the entire process: th-cam.com/video/aVZO0r16S5U/w-d-xo.html . Hope that helps.
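The workaround mentioned above (find-and-replace the paths inside transforms.json with a script) could look something like this. This is a hedged sketch, not Dirk's actual script: the `rewrite_image_paths` helper and the example path prefixes are hypothetical; only the transforms.json "frames"/"file_path" layout is taken from instant-ngp's format.

```python
import json
import os
import tempfile

def rewrite_image_paths(transforms_path, old_prefix, new_prefix):
    """Rewrite the file_path prefix of every frame in a transforms.json,
    e.g. after moving a dataset to a different folder or OS."""
    with open(transforms_path) as f:
        data = json.load(f)
    for frame in data.get("frames", []):
        if frame["file_path"].startswith(old_prefix):
            frame["file_path"] = new_prefix + frame["file_path"][len(old_prefix):]
    with open(transforms_path, "w") as f:
        json.dump(data, f, indent=2)
    return data

# Demo on a throwaway file with one hypothetical Windows-style path.
tmp = os.path.join(tempfile.mkdtemp(), "transforms.json")
with open(tmp, "w") as f:
    json.dump({"frames": [{"file_path": "C:/scans/fox/0001.jpg"}]}, f)

updated = rewrite_image_paths(tmp, "C:/scans/fox/", "images/")
print(updated["frames"][0]["file_path"])  # images/0001.jpg
```

Rewriting the JSON once, right after the COLMAP conversion step, avoids having to touch the trainer's own path handling at all.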

  • @randomforest3007 1 year ago

    Guys, please help: how do I get a RealityScan invitation code?

    • @DirkTeucher 1 year ago

      There is a link in the description ☝️