Nerf was new to me but looks like it has a lot of room to grow as the ML models get better over time. Very clear explanation, thanks!
Great to hear Merijn. It’s definitely worth checking out and might be of huge help in some projects. Glad you liked the explanation. Definitely subscribe if you don’t want to miss other topics about AR and VR.
One day machine learning will become so good that you could feed it all the photos on the internet and get an accurate 3D model of the entire planet with all the people on it (at least those with Facebook/Instagram images...)
Great video!! I'm working at an architecture firm and we'd like to create a 3D model of our surrounding buildings and topography. I've seen examples of that in both Polycam and Luma AI. Do you have any suggestions? I believe we need to fly a drone for that purpose; which app do you think has better results? The goal is to download the result in OBJ format and import it into the architecture software we're using (Revit + Enscape).
The main interesting thing about NeRFs is the ability to capture view-dependent lighting (reflections).
And then Luma Labs goes, "Look, you can export NeRFs to your favorite 3D software like Blender and Unreal!" The trick? They never mention that all the reflection information is gone once you do that. A waste of time.
But after seeing the video, you know why the reflections are gone!
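For anyone wondering why exported meshes lose reflections: a NeRF predicts color from both the 3D position and the viewing direction, while a baked mesh texture stores a single color per surface point, so the direction-dependent part gets averaged away. Here's a minimal toy sketch in Python; the functions are made up purely for illustration, not taken from any real NeRF library:

```python
import numpy as np

def nerf_color(position, view_dir):
    """Toy radiance function: a diffuse base color plus a
    view-dependent specular highlight (a stand-in for a trained NeRF)."""
    base = np.array([0.2, 0.4, 0.8])                     # diffuse term
    normal = np.array([0.0, 0.0, 1.0])                   # surface normal
    specular = max(np.dot(view_dir, normal), 0.0) ** 32  # view-dependent term
    return base + specular

def bake_to_texture(position, n_samples=64):
    """Baking to a mesh texture keeps only one color per point:
    averaging over view directions flattens the specular term away."""
    dirs = np.random.randn(n_samples, 3)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return np.mean([nerf_color(position, d) for d in dirs], axis=0)

p = np.zeros(3)
print(nerf_color(p, np.array([0.0, 0.0, 1.0])))  # bright highlight head-on
print(bake_to_texture(p))                        # highlight averaged away
```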
How can we create a 3D asset from a product image and insert it into an existing video?
Creating a 3D model from a single image is really experimental. There are different tools for that, but I haven't found anything that I really like.
After watching a few videos I think I'm finally understanding what NeRFs are. From the videos I've seen, it's within Luma that you get real-time "3D models" like in Unreal, with lighting and reflections.
It has gotten a lot better since this video!
Thanks for this video. It has come a long way in less than a year! Also, Luma AI has a new UE5 plugin!
Indeed, that looks very interesting. A lot has happened since we shot this video.
Thanks for the explanation!
You’re welcome! Make sure to subscribe of course!
which one can we use for scanning environments like office or home interior for using in VR?
Both, actually, nowadays. Polycam would be a little easier to light and change later, but Luma recently announced a new version which makes it easier to include in VR projects.
Both NeRF and photogrammetry start from a point cloud. Both can be meshed.
But still very different technologies!
@@wintorartour Yes. Photogrammetry is like a brute-force approach and has been around for quite a while. It will stay that way, because NeRF's data augmentation is not always desired and can work against accuracy in some cases. Both are brilliant technical ideas and implementations.
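For anyone wanting to try the point-cloud-to-mesh step mentioned a couple of comments up, here's a minimal sketch using the open-source Open3D library. The file names are placeholders; it assumes you already have a point cloud exported from a photogrammetry or NeRF tool:

```python
import open3d as o3d

# Load a point cloud (e.g. exported from a photogrammetry app).
# "scan.ply" is a placeholder path.
pcd = o3d.io.read_point_cloud("scan.ply")

# Poisson reconstruction needs normals; estimate them from local neighborhoods.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

# Mesh the point cloud with Poisson surface reconstruction.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Save as OBJ for import into other 3D software.
o3d.io.write_triangle_mesh("scan_mesh.obj", mesh)
```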
Will NeRF be able to generate a 3D environment from Midjourney 2D art?
I am not sure about that. Probably not.
LUMA is not on Android yet, right?
I don’t think so :(
Short and informative. Thank you! Great video👍
Thank you! Keep an eye out for new videos and subscribe so you don't miss anything :)
Those two apps seem to not be available on android, what are some alternatives for both types of 3d scanning?
Interesting that they're not available on Android. I found this online; maybe it will help you: all3dp.com/2/best-3d-scanner-app-iphone-android-photogrammetry/
@@wintorartour Luma AI is not on Android, but Polycam is. My phone doesn't seem to support it; that's why it wasn't on the Play Store for me.
Thank You!
3D career 😁
Here I come!
#Ikuzo #Yoshi
woof
Wow great video!
Glad you enjoyed it. Have you also subscribed for our future content?
Do you guys use Unity software at all?
Yes we do! It’s a great tool for everything related to AR/VR and game design. You too?
@@wintorartour No. I just own stock. But I think AR/VR is the future. Good to know developers actually use the software.
I'm sorry if this is dumb, but does this mean we can use NeRF-created images and volumes to create a more detailed 3D model using the classic photogrammetry method? I'm sure we'll see more creative uses of it once it's open for people to use, but other than that and potential social media usage, NeRF's area of utilization seems pretty narrow compared to photogrammetry.
I was thinking that too, and I believe the exporting tools to get a GLB, for example, are already doing that. However, photogrammetry algorithms don't work nicely with reflections and the like, so it might not work as expected. One great use case is using NeRFs to create video shots!
@@wintorartour I think work is already going on with NeRF for video shots.
What's that app you used for AR?
To view the AR content we used our own app, Wintor. It launches in three weeks, but you can already get the beta by going to wintor.app/
What's the name of the app, and can I find it in the iOS App Store?
I used the following apps: Polycam, LUMA (invite only) and Wintor AR Tours. Polycam and LUMA may be iPhone-only, I'm not sure. Wintor AR Tours is available on any device. Have a great day!
Err, no. "Photogrammetry" means _any_ method that measures things with light. NeRF is a new photogrammetry technique.
Thanks for the comment. With NeRF you don't actually measure anything. It's basically a trained AI model that aims to show a 3D representation based on an image sequence as input, as the video explains.
@@wintorartour What NeRF measures is the radiance field, which is a continuous model of which parts of the volume emit or absorb light. The MLP acts as a function approximator, i.e. a convenient way to represent and fit the model, by minimizing the photometric error between the observed training images and the predicted images generated by the radiance field that the MLP represents.
NB 1) A voxel grid, a depth map, a radiance field, or a collection of smoothed particles are all models that measure the geometry and properties of a space.
NB 2) AI is an application of mathematics. There is no magic (even if it may feel otherwise ;).
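To make the "MLP as a function approximator" point concrete, here's a heavily simplified PyTorch sketch of the core idea: a network maps (position, view direction) to (density, color), and fitting minimizes the photometric error against observed pixels. A real NeRF adds positional encoding, ray sampling, and volume rendering; this is just the skeleton:

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Maps a 3D position + 3D view direction to (density, RGB color)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density channel + 3 color channels
        )

    def forward(self, position, view_dir):
        out = self.net(torch.cat([position, view_dir], dim=-1))
        density = torch.relu(out[..., :1])    # non-negative density
        color = torch.sigmoid(out[..., 1:])   # RGB in [0, 1]
        return density, color

model = TinyNeRF()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

# One toy training step: random sample points/directions and target pixels
# stand in for samples along camera rays and the observed training images.
positions = torch.randn(1024, 3)
view_dirs = torch.randn(1024, 3)
target_rgb = torch.rand(1024, 3)

density, predicted_rgb = model(positions, view_dirs)
loss = ((predicted_rgb - target_rgb) ** 2).mean()  # photometric error
loss.backward()
optimizer.step()
```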
NeRF scans also convert into 3D models and work in AR/VR applications... with better output :)
It's getting better, but those models can't be called NeRFs anymore. The process uses NeRF to get there, and indeed with better results nowadays!
Thanks!
You're welcome!
I wish to get good 3D models with these...
You should try; they might have gotten better by now!