📌Would you consider using Luma AI for your 3D assets? What are your thoughts about using this technology for early prototyping phases?
I believe this could have real potential considering these are such early days. Luma AI requires an invite code or I would be trying it out myself!
The sick part is, Luma is partnering with Polycam, meaning we will get incredible photogrammetry for geometry and crazy radiance fields with reflections, transparency, roughness, etc.
In 10 years I think 3D capture will be so good that Google Street View will be converted to 3D (with a driving simulator on top).
Do you have a resource for this?
@@jimj2683 They're already getting ready to turn it into a NeRF now, not in a decade.
How does it handle transparency and specularity compared to photogrammetry and does it create textures besides diffuse (such as metallic, roughness, etc.)?
It uses neural networks, and yes, it does. You can tell by looking at a reflective material such as a TV while it's off, or a chrome ball, which Corridor Crew did in their video.
Good to know that it takes 20-60 minutes to complete. I guess it still requires COLMAP camera pose estimation behind the scenes; there would be a huge speedup if that bottleneck were cleared.
I am glad this was helpful to you!
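For readers curious about that COLMAP step: the pose estimation the comment above suspects is running behind the scenes can be reproduced locally with the pycolmap bindings. A minimal sketch, assuming a folder of frames extracted from a capture video; the paths here are placeholders, not anything Luma actually exposes:

```python
# Minimal COLMAP camera-pose-estimation sketch via the pycolmap bindings.
# "colmap.db", "images/", and "sparse/" are placeholder paths.
import pycolmap

database_path = "colmap.db"
image_path = "images/"   # folder of frames pulled from the capture video
output_path = "sparse/"  # where the sparse reconstruction gets written

# 1. Detect SIFT features in every frame.
pycolmap.extract_features(database_path, image_path)

# 2. Match features across all image pairs.
pycolmap.match_exhaustive(database_path)

# 3. Incrementally recover per-frame camera poses plus a sparse point cloud.
#    These poses are what a NeRF is trained against.
reconstructions = pycolmap.incremental_mapping(
    database_path, image_path, output_path
)
for idx, rec in reconstructions.items():
    print(idx, rec.summary())
```

Steps 1-3 are the part the comment suspects dominates the 20-60 minute wait, which is why faster pose estimation would be such a big win.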
Thank you for showing us this pretty cool tech! I would totally like to try it out when available and I'm positive it will be widely adopted, provided enough marketing/educational efforts like this! Keep up the great work guys and congratulations!
You are very welcome and thank you for your kind message, best to you as always!
Dilmer, is this useful for scanning a room that could be used as a set in a digital production?
how many verts does the model have? - 2 million?
Hi, is it possible to export in STL format and then upload it to a 3D printer?
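If the export is an OBJ, converting it to STL for printing is easy to script. A minimal sketch using the trimesh library, with made-up file names:

```python
# Hypothetical OBJ-to-STL conversion for 3D printing using trimesh.
# "scan.obj" and "scan.stl" are placeholder file names.
import trimesh

# force="mesh" flattens a multi-part scene into a single mesh
mesh = trimesh.load("scan.obj", force="mesh")

# STL stores geometry only, so any textures are dropped here
mesh.export("scan.stl")
```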
I exported my scans and I have a lot more of the scene than I really wanted. Have you encountered this with other scans you have exported for use in a processing application? I uploaded the OBJ straight to Sketchfab and really wish I were more experienced at removing areas I don't want.
Yes, that's normal since they currently don't provide a slicing feature; you would need to bring the asset into Blender, Maya, or another similar tool to clean it up (a script can handle this too, see the sketch below). I did submit a feature request to add a slicing-type feature to their app, but no ETA yet.
Great question and thanks for watching!
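Until a slicing feature ships, the cleanup can also be done in script form. A rough sketch with trimesh's plane slicing, where the file names and plane values are illustrative assumptions:

```python
# Rough cleanup sketch: keep only geometry above a chosen plane with trimesh.
# File names and plane parameters are illustrative assumptions.
import trimesh

mesh = trimesh.load("luma_export.obj", force="mesh")

# Discard everything below z = 0; the normal points toward the kept side.
cleaned = mesh.slice_plane(plane_origin=[0, 0, 0], plane_normal=[0, 0, 1])

cleaned.export("luma_cleaned.obj")
```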
You are insanely underrated!
So what's the new thing here, or what's better? Is it just photogrammetry?
you did not show HOW to download the mesh from Luma AI
Get a rotating display stand (USB-powered) and place any object on it. Lock the camera on a height- and angle-adjustable tripod. Point the camera in one direction only, with the same background for all 360° images. This also applies to a green or blue background for instant chroma-key editing of an object.
Additionally: play with some studio lights pointed at the object to match the final background for such video/movie scenes 👌🏻
You do less hard work and get much better, clearer 360° images. Try it 👌🏻
How would you export the scan animation (in the Luma app after scanning) as an MP4? Is that possible? If not, is there another way to do that? Great video
how can we use this for volumetric video?
Nice! Looking forward to trying it out myself.
So how is this different from photogrammetry?
I have problems finding answers to my questions on their website (maybe someone will help). Is it only for iOS? What about pricing in the future?
Yes this is currently only available for iOS, not sure about the pricing model.
Is it only available on Mac?
Currently only available for iOS.
Unreal's Nanite helps make the poly count less of an issue
I am currently working on NeRF and synthesis in robotics
can't wait to try...
Thanks Peter!
Is there an Android app for this?
This is just NVIDIA's NeRF, which is open-sourced, can capture reflections, but is not very good for photogrammetry. Why is Luma not releasing this? Something feels fishy here.
Great video, and can you animate the model?
Has anyone got invite codes for directly using it?
is this ios only?
Yes currently it is.
Love it
Amazing.
Thank you for your feedback 🙏
NeRF is the future! How can I get an invite code?
awww baby yoda lol
I downloaded the app and I haven't received the so-called invitation for three days; it stinks a bit to me.
I recommend engaging with them on Twitter; that's how they are currently inviting people.
@@fsstudiodev Thank you... but I really hoped that the app was not this good
this is crazy
Takes a long time to process
Yes, we are hoping it improves within the next few years to a level where it is instant!
😊
I don't get this; other, older photogrammetry software seems like a better choice. Why capture backgrounds? Is it able to capture things other than small objects? Is it able to fill the gaps in models? I mean, this was possible 8-10 years ago. Luma's software isn't out yet and it seems like it has fewer features, so why not use software that's already there? Major game franchises already use photogrammetry (yes, it's not that simple, you can't just take some photos with your phone), but I'm curious: why bother with this toy?
it captures the lighting of the space much more accurately.
As far as I can tell from having done photogrammetry and looked at videos on what "NeRF" is, NeRF seems to let you generate a really-low-quality-compared-to-properly-done-photogrammetry 3D model with just a few pictures (I'm sure it'll improve and maybe match photogrammetry at some point, but it's still far from it if this video is anything to go by). As a wild guess, maybe with NeRF you can do 5 or 10 pics to capture a complete object vs. say 70-100 for basic photogrammetry. The AI model fills in the gaps. As to the lighting Roy mentions, I've no clue. For engineering and 3D printing purposes I care about capturing high-resolution, surface-accurate meshes (which I find hard to do really well... maybe AI will help there at some point too). I'm glad this video creator showed the actual meshes. Too many photogrammetry videos show a final model with the texture/source images mapped to it, and it looks fantastic, but it hides the mesh underneath that looks like it's been worked over with an ugly stick and could outdo oatmeal for being coarse and lumpy. I guess for game assets it's OK since you reduce polygon count as much as feasible anyway and only care how the object looks in the game?
@@perspectivex thanks for this bro
@@perspectivex great summary
@@perspectivex For good results, NeRF generally still requires hundreds of images
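On the polygon-count point raised above: reducing a dense scan for game use is also easy to script. A sketch with Open3D's quadric decimation, where the file names and the 50,000-triangle target are arbitrary assumptions:

```python
# Quadric-decimation sketch with Open3D to shrink a scanned mesh for game use.
# File names and the target triangle count are arbitrary assumptions.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan_high.obj")
print("before:", len(mesh.triangles), "triangles")

# Collapse edges until roughly 50,000 triangles remain.
low = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)
low.compute_vertex_normals()  # recompute shading normals after decimation

o3d.io.write_triangle_mesh("scan_low.obj", low)
print("after:", len(low.triangles), "triangles")
```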
Only for Mac monkeys?... Will just watch from my window