I've used Meshroom with an AMD graphics card (FirePro W5100). (Edit: I must be mistaken, because I can't get it to work now.) For documentation you can try OpenMVG, which Meshroom is based on, if I understand right. I haven't been able to learn much from that documentation, but maybe you can. Besides taking photos, you can also take video and then extract stills. If the process isn't working well, another option may be to include/take more photos. You can also manually guide Meshroom into recognizing similar features between photos, but I've yet to try that.
@gamefromscratch My depth map error is: "[21:04:47.956281][error] This program needs a CUDA-Enabled GPU (with at least compute capability 2.0)." Do you know what I could do to fix this?
It would be cool if the program could accept a short video file as the series of images. I bet some cameras can export video as such anyway, makes sense, but a video would be fast and simple to produce and then the program would automatically know the exact sequence the images were made and therefore know the camera path. I have yet to even try this out so forgive my ignorance on the subject, it was just a thought. I very much appreciate the video. It's pretty amazing what software is available these days. I can't even imagine where we'll be in 20 more years.
Reality Capture was able to create a 3D model from photos extracted from a video; the generated mesh wasn't amazing or anything, but it managed to do it from just a video. Also, when taking photos it's good to have overlapping shots, so the software can find similarities between photos. A phone can do photogrammetry, but a DSLR will be better; it can depend on how many photos were taken and how they were taken.
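If you want to try the video route, a common approach is to sample still frames with ffmpeg before feeding them to the photogrammetry software. A minimal sketch in Python (the video filename and output folder are made up; this just builds the ffmpeg command, which you could equally run by hand):

```python
def ffmpeg_extract_cmd(video, out_pattern, fps=2):
    """Build an ffmpeg command that samples `fps` frames per second
    as lightly compressed JPEGs, suitable as photogrammetry input."""
    return [
        "ffmpeg", "-i", video,
        "-vf", f"fps={fps}",   # keep only a few frames per second
        "-qscale:v", "2",      # low JPEG compression, preserves detail
        out_pattern,
    ]

# Hypothetical filenames; run with subprocess.run(cmd, check=True)
cmd = ffmpeg_extract_cmd("walkaround.mp4", "frames/img_%04d.jpg")
```

Fewer, sharper frames generally beat dumping every frame: adjacent video frames barely differ, which adds processing time without adding parallax.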
I haven't finished watching yet but the question with open source is... Can you use it for commercial work? A lot of great open source software is for personal use only. Like Voodoo Camera Tracker which I really love, but it's not for commercial use.
@@Shiniiee sorry, but AFAIK that is wrong. The GPL "affects" certain works (I think they call them derivative works), but it doesn't affect the 3D model made with the software, much less an image/clip made with that 3D model.
PLEASE HELP! I have a MacBook Pro (Catalina) and my graphics card is an Intel Iris Pro. I need to use CUDA in order to use the program Meshroom, but I found out that CUDA works only on Nvidia GPU, which I don’t have. I cannot install Meshroom without Nvidia GPU. I have no idea what to do and Google isn’t helping at all. PLEASE HELP THANK YOU!
I grabbed and used that set of rock images, and Meshroom does take a while to run because I have an oldish graphics card, but it is Nvidia and does support CUDA. In Meshroom's 3D viewer window I do have something that resembles the rock shape. It is almost like none of the image content has been assigned to be background. The .obj file imports into Blender (I am using version 2.79), where the result appears as a single mesh, none of which is rock shaped. When I look at the textured view I can see that the surface of this distorted sheet is made of 9,310 faces in a single skin, and that the texture is made from the original images: I can make out parts of the brick floor surface and the van that appears in some of the background. But it ain't no rock shape. I am going to review the tutorial to see if there is some step I have missed. I have a couple of projects pending, but I do know this rock set of images is supposed to work.

FIRST EDIT: I cleared the cache and re-ran the process from scratch; the whole thing took circa 20 minutes, the largest portion being the DepthMap section. I noticed that what appeared to be the rock was the point cloud; I needed to click "import model" to actually see the mesh, and this was the shape I saw when I imported the mesh .obj file into Blender. So it does look like Blender is importing the mesh as generated by Meshroom; it is just that the mesh I am generating is not the same as the mesh generated in the video from the same sample set. All of the images have a green check mark and the point cloud does appear to be roughly in a rock shape, just that the process is not breaking the resulting mesh into foreground and background sections. There is a possibility I have not installed it properly, so I am going to take time to revisit this and try again.

SECOND EDIT: Fresh install, no change. I am going to take everything to some alternative hardware I have access to and try it there.

THIRD EDIT: Tried with a newer PC with a much more recent graphics card, and the rock image set worked fine, so it was obviously an older version of CUDA that was incompatible. I will find out the spec of the PC and graphics used and update when I have the data.

FOURTH EDIT: I am now using my best camera, a Fuji S8650. The metadata for pictures taken with this camera specify the camera as "S8600 S8650 S8630", so to get Meshroom to recognize the photographs it is necessary to edit the cameraSensors.db file. You find this file in the folder Meshroom-2018.1.0\aliceVision\share\aliceVision (note the version digits may differ). Edit the .db file with a text editor and change the line "Fujifilm; Fujifilm Finepix S8600;6.16" so that it becomes "Fujifilm; Fujifilm Finepix S8600 S8650 S8630;6.16".

FIFTH EDIT: The graphics card that worked was an Nvidia GTX 1050.
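The FOURTH EDIT above boils down to a one-line find-and-replace in cameraSensors.db. If you prefer to script it, a sketch (the path and the exact lines come from the comment above and will differ per Meshroom version; back the file up first):

```python
from pathlib import Path

def add_camera_alias(db_path, old_line, new_line):
    """Replace one entry in Meshroom's sensor database, in place.
    Returns True if the old line was found and rewritten."""
    db = Path(db_path)
    text = db.read_text()
    if old_line not in text:
        return False
    db.write_text(text.replace(old_line, new_line))
    return True

# Hypothetical install path; the version digits will differ.
# add_camera_alias(
#     "Meshroom-2018.1.0/aliceVision/share/aliceVision/cameraSensors.db",
#     "Fujifilm; Fujifilm Finepix S8600;6.16",
#     "Fujifilm; Fujifilm Finepix S8600 S8650 S8630;6.16",
# )
```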
I think the main difference between a smartphone and a DSLR is the quality of the lens, how it handles shadows, and the much greater control over ISO and aperture values. The DSLR is better at this!
@Gamefromscratch Of course, there are tons of sources on how to acquire the photos; photogrammetry has been around for many years. Just see the DICE conference talks for the Battlefront games, Gnomon online, Pluralsight, the Unity forums, and so on... there are many tutorials everywhere.
Question: Can you stop the render to save the progress and then restart it again? Unwittingly loaded 2975 photos and Meshroom has been going non-stop for 6 days...
I ran the rock photos with all the settings on default and I can see the rock shape, in points, in the 3D viewer, but the mesh is different (when I hit 'load model'). It looks like the basic shape of the rock, except bigger and with lots of holes, and doesn't look like a rock at all. I tried changing a few settings, but I don't really know what most of the terms mean.
Can this be used with 3D printing? Seems like it could be really useful, like if you need to replace small parts to make repairs or something, and they are no longer manufactured. You could take photos, scan in Meshroom, and print them out. Is this possible?
chilichamelion As long as they are manifold (no holes in the mesh). I export from Blender to an STL file and import it into Simplify3d. From there to the printer. No problems there.
Actually, there is this GDC talk about procedural generation where the speaker talks about a board game whose creator procedurally generated the rules and procedurally generated players to play with those rules, and when he saw an interesting version he just published it... And the name was also procedurally generated...
It's already the case... games today look like each other... no creativity, nothing new... Call of Duty, for example, could be made by an AI; it's almost the same each episode.
Just a note on Blender and seeing the texture: import the .obj from the folder Meshroom generates, with the texture in there as well. If you copy the texture and .obj into another folder and open it in Blender (as of the time of writing this comment), Blender will not recognize the texture. At least for me.
The best application would be a topology-based puller based on sections the user can pull up or down, similar to the old Kai's Power Tools where you pulled sections of the photo and smeared them away from their location. We really need a tool that can do topology extraction from only one photo.
Would be nice to see an updated tutorial using the latest versions of the software. I am finding I am spending a lot of time trying to locate menus and features that the presenter is using but since have been moved/hidden/changed/renamed.
Does it matter how I take the photos? For example, can I rotate the object about its central axis rather than move the camera? Or does the object have to remain stationary while the camera moves?
Great tutorial! Every time I used to picture myself doing photogrammetry capture, I thought of using a blank background. It made complete sense to me why this wouldn't work as soon as you mentioned it, but it didn't occur to me prior to this. I tried it out with some drone footage and it came up quite nice. Thanks :)
The reason SLR images are better than smartphone images is that SLR cameras usually use much less compressed JPEG files. E.g. my old 8-megapixel Olympus makes files that are over 5MB each, whereas my Samsung Galaxy S7 Edge with its 12-megapixel camera makes files between 2.8MB and 4MB. Knowing how JPEGs lose quality when compressed harder gives you the answer.
My photos taken with the camera show the focal length in the file properties, but when I try to process them in Meshroom it says the focal length is known for too few pictures. What could I be doing wrong?
Noise in the images is bad when finding matching points between images. A DSLR has large sensor pixels; a cell phone has small sensor pixels. Large sensor pixels can collect more light, which results in less noise. The cell phone should work, but you need a lot, a LOT more light than you think you need. Like, try 4x more light than you think you need.
Hi, I enjoyed your video. I am new to this, and after trying it I have a problem I can't solve; I hope you can help. I'm trying to scan a house. I use a DJI Mavic Air 2 to take pictures with a map mission, then manually fly and take some closer pics of the house, about 50 pictures each way. When I import into Meshroom, if I import either set on its own it works as it should. If I import both groups, the ones from the map mission are accepted and it works, but the ones from the manual shoot are rejected, i.e. a red icon in the top right corner of the thumbnail. Either group alone works, but the two together don't. I tried the same project scanning the second set with the iPhone 10, with the same results. All Meshroom settings are left at default. I would appreciate any suggestions. Thanks
I was thinking of using a turntable to create small models, also, up until you started talking about needing the background in the picture so that the software is able to reference the position of the object. Something on a turntable would have the same background in each image, so that drastically reduced my expectations for that idea. I may still try it, though. Maybe if the camera positions in the software can be manually set, then perhaps it could work. The turntable could be rotated a specific amount for each photo until it goes the full 360° around, raise the camera 30° or so, and repeat the process. Perhaps in this way, the object could even be turned upside down to get the bottom, as well. If that worked out, and if a large number of objects are needed to be scanned, which would be the case for me, then automating the scanning process with a few servos could easily be done. Definitely worth trying.
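For what it's worth, the capture schedule described above (rotate the turntable a fixed step, raise the camera, repeat) is easy to generate programmatically if you ever automate it with servos. A small sketch, with made-up step sizes:

```python
def capture_schedule(step_deg=15, elevations_deg=(0, 30, 60)):
    """List every (elevation, rotation) pair for one full turntable
    pass at each camera elevation -- e.g. 24 shots per ring at a
    15-degree step, covering the full 360 degrees."""
    return [(elev, rot)
            for elev in elevations_deg
            for rot in range(0, 360, step_deg)]

shots = capture_schedule()  # 3 rings x 24 positions = 72 shots
```

Each pair would map to two servo positions plus a shutter trigger; a second pass with the object flipped upside down would cover the bottom.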
I've seen setups that have a grid pattern on a turn table with computer vision markers on the edges and corners for the software to use as landmarks. I've noticed better results from my scans when objects are sitting on surfaces with highly detailed patterns. The background becomes less critical for a good scan.
@@speculart In other words, you must inscribe your circle with magic runes to please the machine spirit so that it may pull the shadow from the object in question. ;)
This doesn't work for me. I've tried multiple times with different photos. It will only produce the front of my figure. How do you make it work with your photos?
If Meshroom doesn't use a photo, does that mean it didn't find any spots/texture features it recognized? Parts of my mesh don't appear after Meshroom does its work, and I notice that it doesn't use those pictures, so I'm just wondering how to fix that.
Hello, and thanks for sharing your knowledge about photogrammetry; I think it is really interesting. I'd like to use Meshroom, but can you please tell me the minimum requirements to run it? I'm about to buy a computer with Win 10, 8GB DDR4 RAM, an Intel Core i7 4th gen, and a 1GB Nvidia graphics card... Do you think I can use it with a PC with these specs? Thank you
Hi, great video. I have been using this software and have unfortunately run into a problem. I am trying to scan one particular thing I really want to (it is quite complex, a piece of coral from my fish tank), and I have to get a lot of photos from a lot of different angles. It almost finishes, then it stops and the error basically says it has exhausted the available memory. Am I stuck until I get a better computer/buy more RAM? Is it saying I don't have enough RAM to do this? Is there any workaround?
Hi, I'm using Meshroom and the process goes well up to the last node, but the mesh button in the 3D view won't work, and its color is grey, not white. What should I do? Is there some mistake? Thank you.
I downloaded the Mac version...... but I HAVE NO IDEA how to install it or run it at all. Can someone help me out? There is no DMG file or anything like that.
I can't seem to get past the DepthMap stage. Tried lots of times, lots of locations, lots of objects, still won't get past the DepthMap stage, any ideas?
Dude, I have an NVIDIA Card, Meshroom crashes all the time- can't get past one of the steps- is there a setting I have to change for the program to recognize the card? Thanks.
What do you do about textures for the rock? I found meshroom's UVs are brutal and after cutting out my object I'm left with textures that are 90% unused.
Good software; so far it has a better workflow (a good UI) than the other free options. Still, spending money on photogrammetry is inevitable when it comes to acquiring a good camera...
If I use this on a diorama, could I make a 3D map that way? Would be awesome to be able to take my modeling and gamedev interests and mash them together.
So what do you do with the render? 'Cause my 3D scan just looks like a bunch of blocks clumped together. I wanted to see if I could get it into Cura to 3D print, but you can't 3D print if you can't even properly see what it is, and I haven't got anything other than Meshroom to open it with. i.imgur.com/b1QOl4Z.jpg
One question I have: I used Meshroom a week ago and created a project on my D drive, but the processing was using space on my C drive and it got low on disk space (took about 15GB). How and where can I change the computing folder so it uses my D drive?
Brute force solution, but moving the system temp directory should do the trick. Just change the %TEMP% environment variable to point at your other drive, reboot, and you should be good to go.
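A less invasive variant of that idea, if you'd rather not move the system-wide temp folder: launch Meshroom with TEMP/TMP overridden just for that one process. A sketch below; the scratch folder and executable path are made up:

```python
import os
import subprocess

def env_with_temp(temp_dir):
    """Copy the current environment with TEMP/TMP pointed elsewhere,
    so a child process writes its scratch files to that folder
    without changing the system-wide setting."""
    env = dict(os.environ)
    env["TEMP"] = env["TMP"] = temp_dir
    return env

# Hypothetical paths -- adjust to your install and your bigger drive:
# subprocess.run([r"C:\Meshroom\Meshroom.exe"],
#                env=env_with_temp(r"D:\MeshroomTemp"))
```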
So would this be a good program to use for getting into drone photogrammetry? It's a field I'm looking to get into and while I would not be doing any surveying or data interpretation I would be taking the photos and providing the 3d models to a licensed surveyor to do those readings. Most programs cost thousands of dollars and I think this might work but want to make sure that if I sent the 3d model to a surveyor they would be able to do their analysis with the files.
Thanks for this dopest video! I have a question. Can I get a low-poly photoscan of buildings and rooms? The problem is I can't optimize the buildings; the scan has 3,500,000 polygons, and it's unrealistic to optimize that in Unreal Engine 4 for game development. How can I optimize it? Or can I make it low poly in Meshroom? Thanks
Hey guys, I just tried Meshroom today and had some sort-of successful results. But now I have finished a project, and for some reason the mesh option is grey and does not show. Does this mean that it couldn't make a mesh? And if so, why do you think that happened?
Hi there, awesome video, but you completely forgot to mention that you can't use the same material data for the newly decimated (cleaned-up) version of your mesh. After decimating using Instant Meshes, all the material data disappears. I don't suppose you know a workaround for this, do you? I've been using Substance Painter and crazy baking methods in Blender for hours with no results thus far :/
I think I've just come up with a solution by decimating in Blender; the only issue now is that I have a lot of UVs strung right across my generated UV map to link all the different parts, and unfortunately it's causing black/stretched texture on some polygons.
Best solution I have atm is manually selecting the polygons of each affected part, then in the UV editor logically choosing which side should join which and dragging it across to the rest of the island. Might take a while lol
Excuse me, I didn't understand how exactly I can download your rock test images. I hope somebody can help, because I've already tried all the links, but nowhere did I find a download button or link. ^^
Hello, I am an undergraduate student in an ECE department, and for my thesis I have to develop a tool that generates a 3D model of a human body using photogrammetry... Do you think I can modify this code or use it as a reference for my thesis? Thank you.
I think I'm going to try to see what happens if I invert the procedure into a panorama. For example, I would like to model the inside of some houses, and I'll see if this works... I'll let you know. Has anyone else seen or tried that?
Links (including step by step GFS tutorial)
www.gamefromscratch.com/post/2018/10/18/Creating-3D-Models-From-Photos-Using-Meshroom.aspx
alicevision.github.io/#meshroom
blenderartists.org/t/meshroom-free-photogrammetry/1120286/98
th-cam.com/video/PR4KrKHqVTI/w-d-xo.html
btw. not trying to hijack the video topic. but the dev from pixelmash actually gave a promo code for 20% off lol
(the code is in the other video)
it seems it was not limited to 1 person. so if you are interested, can use it or promote it in another video.
(no idea how long it will last lol)
XD
How, bro? Where do I find photos or 3D models to download?
When using photogrammetry with a smartphone camera, does it matter whether the orientation is vertical or horizontal?
Sorry, I have another important question. Do you think the Samsung Note 9 camera or iPhone X is capable enough to produce great quality photogrammetry, like scanning a human? I don't know if it's my current smartphone camera that needs improvement or if I'm just bad at taking the photos in the specific way photogrammetry needs.
Thanks for the response! I really hope I get one.
man, screw this github garbage, omg, why is it so convoluted and complicated??, why cant these damn people just make a download, what is all this other random crap all over the place everywhere, what in the hell is clone/download mean, this is irritating to me :( i dunno where to get the file, why is it buried in this other junk? (sorry i freak out when i can't figure anything out...i figured it out anyways tho) 0.0
@@seanc8054 use 3DF Zephyr (free edition), much easier to use, and it works fine with phone camera photos... pretty sure it will work for you :)
11:55 It's because Pixel phones do a lot of post-processing. You can download Open Camera to get good results.
Yes oc is awesome
@@billbergen9169 there is also Footej Camera 2 now
3:40 When he says it will work for 99% of people then you start modeling and after 1 hour you realise that you are in the 1%.
red 123741 wdym?
@@oozly9291 That means you hit a lot of bugs, the program is not able to use your photos, or after hours of work you find out that you need an NVIDIA graphics card. And no, there is no other option besides having an NVIDIA graphics card.
Kai Brendel ah rip lmao, thank god I have an RTX card, but for me it's when I put the photos from my iPhone onto the PC: like 80% of them fail when scanned.
@@oozly9291 Need to use a camera mode that doesn't auto change the settings.
@@kaibrendel9387 Is this still the case?
6:40 "ok let me undo that" LOL
your videos are natural, i enjoy!
That voice startled me so bad. Anyway, thanks for the game dev news, us indie devs appreciate it.
Haha, yeah... scared the crap out of me when I uploaded it too... it was much quieter when I was editing the video... sorry about that. ;)
I was wearing headphones.... -_- and it was 2x the volume of the video.
You can absolutely get fantastic results with a smartphone camera. You need to switch most of your camera's settings to manual in whatever "pro" mode your camera app has.
I am so glad you mentioned clay models. We did that at my day job using recap for a proposed sculpture
I haven't been following but I've always wondered what tools were available to do this nowadays, for those people that want to do a little more than just 2D photography editing with their photos. Nice to know that there are now comprehensive, free, open source tools for that!
As for the mug. I haven't used this software but from your description I don't think it was the background. All photogrammetry software has trouble with objects that are smooth, clear, glossy, or cylindrical. A mug fits most of those issues.
Yeah, I was wondering about this too as I've done quite a lot of photogrammetry without the background. Surely his mug was white and textureless too, which won't help at all if you're trying this technique.
Best thing to do with those is to get a sharpie and make small dots over it then manually remove them from the generated texture afterwards. You can just easily make a mug in Blender tho.
13:00 - Wrong, you should use a green screen approach if you can. You should also use contrasting stickers to add features to your item.
You can definitely do an imaging session in a white box, and macro lenses work fantastically (if you know how to use them and the limitations of this technique). It seems the mug didn't work for you because photogrammetry has issues with shiny, reflective, textureless, transparent, filamentous, etc. objects, and a mug has several of those qualities. You need a constant, well-defined texture on your object, because SfM needs visual features to correlate with features in the rest of the images. It does help to use a highly textured surface underneath your object in a white box shoot, to aid the global alignment; you then crop it out in editing. And a macro lens tends to reduce the depth of field in close-up shots, so if a large part of your image is blurry, it won't recognize any features there either. Extend the depth of field by closing your aperture or using stacking software. The details you can get from macro photography look fantastic in photogrammetry if done well.
with the huawei p20 pro phone camera the 3d output is incredible
I was gonna mention this software in your "Materialize" video, but you're already doing it. Keep up the good work.
Always be sure to mention it though... all of your (collective you) leads help me discover software I may not already be aware of, or I can often link you to videos I may have already done.
I have done some photography for photogrammetry use (not claiming to be a pro or anything, it's just part of my job). First of all, the cameras on mobile phones are really not suitable for this: low aperture (means less Depth of Field), noisy in lower light conditions, usually low focal lengths (wide). Ideally you need a DSLR, and it doesn't have to be a top-notch model; even today's basic models will do fine (I used a Nikon D3200, if I remember correctly). Prime lenses (no zoom) are the best because of picture quality, and also because you need a fixed focal length (you can bump or brush against a zoom lens and change the focal length accidentally). Also, a large F-stop will help you keep everything in focus (larger DOF), and of course ISO should be kept as low as possible.

Next, you have to have the background present, because every photogrammetry program uses it to piece back together the information about the object. We used the PhotoScan software; I think you were able to manually correct the camera and drop points to help it stitch the model when it couldn't identify what it was looking at.

Another thing is lighting, which needs to stay as constant as possible, ideally scattered with soft shadows. Big shadows and underexposed photos = no info. If you shoot indoors, moving the light source is catastrophic.

You need pictures of your object from a longer distance to get it all in frame as much as possible; then you can go closer and closer until you are also shooting details of surfaces (cracks, scratches etc.). The software will take those and heighten the level of detail of your model.
Ummm, lower aperture means *greater* DoF, not less. It also means less light collected by the sensor.
Maybe you are getting the numbers the wrong way round: an aperture of, say f/3.5 is greater than, say, f/7, not less.
@@lawrencedoliveiro9104 yeah, I meant low (small) aperture - high F-stop. Sorry for the confusion, but you get what I meant. :)
I've done photogrammetry with a Galaxy S8 and a Nexus 5, and got good enough results working with mostly ideal, and some less than ideal, objects. It's definitely more work, and there are probably some objects that just won't work with a phone camera. The rock featured in the first half of this video would work well with phones; the shiny, clean fire hydrant wouldn't. So if the object has a lot of trackable detail and is pretty matte, a good cell phone camera is perfectly fine to get started with.
0Jebus0
No.
When using photogrammetry with a smartphone camera, does it matter whether the orientation is vertical or horizontal?
Sorry, I have another important question. Do you think the Samsung Note 9 camera or iPhone X is capable enough to produce great quality photogrammetry, like scanning a human? I don't know if it's my current smartphone camera that needs improvement or I'm just bad at taking the photos in the specific way photogrammetry needs.
Thanks for the response! I really hope I get one.
Phone cameras and macro lenses are similar in the way they distort images. If you have a DSLR you kinda want to have a 100mm lens usually for undistorted perspective. Otherwise you'll get a kind of fish-eye look to images, especially if your macro or phone camera is close to something.
Haven't used Meshroom but the other software I've used has a filter for this distortion. Likewise a lot of higher end phones have the ability to also remove the distortion and even the action cams that have extreme fields of view can "flatten" an image too!
100mm can get pincushion distortion. 45-60mm is the ideal standard focal length.
@@MidnightMarrow image correction is a processing power hog, so if you're processing, say, 200 photos, it'll take a bit longer. Unless, of course, those photos are pre-processed
Frodo-Grammetry...Lord of the Rings speech pathology for hobbits lmao
LOLOL
The Meshroom manual recommends using a smooth, plain background. That contradicts what you're recommending, which is to use a rough background.
Thanks for this. I've got a green screen room and was gutted to hear this. I'll give it a go and let u know the results
I've used Meshroom with an AMD graphics card (FirePro W5100). (Edit: I must be mistaken, because I can't get it to work now.)
For documentation you can try OpenMVG, which Meshroom is based on, if I understand right. I haven't been able to learn much from that documentation, but maybe you can.
Besides taking photos, you can also take video and then extract stills.
If the process isn't working well, another option may be to include/take more photos. You can also manually guide Meshroom into recognizing similar features between photos, but I've yet to try that.
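For anyone trying the video route mentioned above: here's a rough sketch of pulling stills out of a clip with ffmpeg. The file names and frame rate are just placeholders — tune `fps` to how fast you move the camera, since you want plenty of overlap between frames:

```shell
# Extract one still per second from a clip into numbered JPEGs.
# input.mp4 and frames/ are placeholder names; -qscale:v 2 keeps
# the JPEGs near maximum quality so SfM has features to work with.
mkdir -p frames
ffmpeg -i input.mp4 -vf "fps=1" -qscale:v 2 frames/frame_%04d.jpg
```

Then point Meshroom at the frames/ folder as if they were ordinary photos. Note that frames extracted from video usually carry no EXIF focal-length data, so you may need to set the sensor/camera info manually.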
I have an RX 580 8gb graphics card, will this program run on my rig?
@@crossfarm4146 I don't think it will, sadly.
Thank you for the deep explanations, I also thought to try the white background but you changed my mind :)
Wondering what the best meshroom alternative is for Mac users or AMD graphics cards???
@gamefromscratch My depthmap error is: "[21:04:47.956281][error] This program needs a CUDA-Enabled GPU (with at least compute capablility 2.0)." Do you know what I could do to fix this?
So, does your GPU use CUDA? If not, try this github.com/alicevision/meshroom/wiki/Draft-Meshing
It would be cool if the program could accept a short video file as the series of images. I bet some cameras can export video as stills anyway, but a video would be fast and simple to produce, and then the program would automatically know the exact sequence the images were taken in and therefore know the camera path. I have yet to even try this out, so forgive my ignorance on the subject; it was just a thought. I very much appreciate the video. It's pretty amazing what software is available these days. I can't even imagine where we'll be in 20 more years.
RealityCapture was able to create a 3D model from photos extracted from a video; the generated mesh wasn't amazing or anything, but it managed to do it from just a video. Also, when taking photos it's good to have overlapping shots, so the software can find similarities between photos. A phone can do photogrammetry, but a DSLR will be better; it can depend on how many photos were taken and how they were taken.
I haven't finished watching yet but the question with open source is... Can you use it for commercial work? A lot of great open source software is for personal use only. Like Voodoo Camera Tracker which I really love, but it's not for commercial use.
You CAN use software under the GPL for commercial use. You just have to also GPL the product or acknowledge the OSS libraries you used.
@@Shiniiee sorry, but AFAIK that is wrong. The GPL "affects" certain, I think they call it derivative works, but it's not like it affects the 3d model made by it, or even less, an image / clip made with a 3d model made by the software.
PLEASE HELP! I have a MacBook Pro (Catalina) and my graphics card is an Intel Iris Pro. I need to use CUDA in order to use the program Meshroom, but I found out that CUDA works only on Nvidia GPU, which I don’t have. I cannot install Meshroom without Nvidia GPU. I have no idea what to do and Google isn’t helping at all. PLEASE HELP THANK YOU!
I grabbed and used that set of rock images, and Meshroom does take a while to run because I have an oldish graphics card, but it is Nvidia and does support CUDA.
In Meshroom's 3D viewer window I do have something that resembles the rock shape. It is almost as if none of the image content has been assigned to be background.
The .obj file imports into Blender (I am using version 2.79), where the result appears as a single mesh, none of which is rock shaped. When I look at the textured view I can see that the surface of this distorted sheet is made of 9,310 faces in a single skin, and that the texture is made from the original images: I can make out parts of the brick floor surface and I can see the van that appears in some of the background.
But it ain't no rock shape.
I am going to review the tutorial to see if there is some step I have missed. I have a couple of projects pending, but I do know this Rock set of images is supposed to work.
FIRST EDIT
I cleared the cache and reran the process from scratch, and the whole thing took circa 20 minutes, the largest portion of this being the DepthMap section. I noticed that what appeared to be the rock was the point cloud; I needed to click "import model" to actually see the mesh, and this was the shape I saw when I imported the mesh .obj file into Blender.
So it does look like Blender is importing the mesh as generated by Meshroom; it is just that the mesh I am generating is not the same as the mesh generated in the video from the same sample set.
All of the images have a green check mark, and the point cloud does appear to be roughly a rock shape; it is just that the generated mesh is not being broken into foreground and background sections.
There is a possibility I have not installed it properly, so I am going to take time to revisit this and try again.
END FIRST EDIT
SECOND EDIT
Fresh install no change
I am going to take everything to some alternative hardware I have access to and try it there.
END SECOND EDIT
THIRD EDIT
Tried with a newer PC with a much more recent graphics card, and the Rock image set worked fine, so it was obviously an older version of CUDA that was incompatible.
I will find out the spec of the PC and Graphics used and update when I have the data.
END THIRD EDIT
FOURTH EDIT
I am now using my best camera, a Fuji S8650. The metadata for pictures taken with this camera specifies the camera as "S8600 S8650 S8630", so to get Meshroom to recognize the photographs it is necessary to edit the cameraSensors.db file. You find this file in the folder Meshroom-2018.1.0\aliceVision\share\aliceVision (note the digits may differ depending on the version). Edit the .db file with a text editor and change the line
Fujifilm; Fujifilm Finepix S8600;6.16
so that it becomes
Fujifilm; Fujifilm Finepix S8600 S8650 S8630;6.16
END FOURTH EDIT
FIFTH EDIT
The graphics card that worked was a Nvidia GTX1050
END FIFTH EDIT
Jim Clark
What was the graphics card that didn’t work?
I think the main difference between a smartphone and a DSLR is the quality of the lens when dealing with shadows, and the possibility of much more control over the ISO and aperture values. The DSLR is better at this!
@Gamefromscratch, of course, there are tons of resources on how to acquire the photos; photogrammetry has been around for many years... just see the DICE conference talks for the Battlefront games, Gnomon online, Pluralsight, the Unity forums, and so on... there are many tutorials everywhere....
Question: Can you stop the render to save the progress and then restart it again? I unwittingly loaded 2975 photos and Meshroom has been going non-stop for 6 days...
Don't forget to save the fully trimmed, full-resolution blend file, so you can rebake the textures after retopo. Instant Meshes destroys the UV maps.
I ran the rock photos with all the settings on default, and I can see the rock shape, in pixels, in the 3D viewer, but the mesh is different (when I hit 'load model'). It looks like the basic shape of the rock, except bigger and with lots of holes, and it doesn't look like a rock at all. I tried changing a few settings, but I don't really know what most of the terms mean.
Can this be used with 3D printing? Seems like it could be really useful, like if you need to replace small parts to make repairs or something, and they are no longer manufactured. You could take photos, scan in Meshroom, and print them out. Is this possible?
chilichamelion
As long as they are manifold (no holes in the mesh). I export from Blender to an STL file and import it into Simplify3d. From there to the printer.
No problems there.
@@Traitorman..Proverbs26.11 Awesome! I'd love to see a tutorial on this, haven't been able to find one yet (that doesn't use a $20,000 3D scanner)
chilichamelion
www.creativeshrimp.com/free-photo-scanning-tutorial.html
www.sculpteo.com/en/tutorial/prepare-your-model-3d-printing-blender/
Free indie development tools are making things so much easier every day that we'll eventually get to the point of an AI making indie games by itself.
You are not far from the truth; this is an AI-based solution.
Actually, there is this GDC talk about procedural generation where the speaker mentions a board game whose creator procedurally generated the rules and procedurally generated players to play with those rules, and when he saw that there was an interesting version he just published it... and had the name procedurally generated too...
It's already the case...
Games today look like each other... no creativity, nothing new...
Call of Duty, for example, could be made by an AI; it's almost the same each episode.
Actually, there is an AI that has created games
Just a note on Blender and seeing the texture: import the .obj from the folder Meshroom generates, with the texture in there as well. If you copy the texture and .obj into another folder and open it in Blender (as of the time of writing this comment), Blender will not recognize the texture. At least for me.
The best application would be a topology-based puller based on sections that the user can pull up or down, similar to the old Kai's Power Tools where you pulled sections of the photo and smeared them away from their location. We really need a tool that can do topology extraction from only one photo.
Instant Meshes is not for commercial use, if you're wondering
Would be nice to see an updated tutorial using the latest versions of the software. I am finding I am spending a lot of time trying to locate menus and features that the presenter is using but since have been moved/hidden/changed/renamed.
Does it matter how I take the photos? For example, can I rotate the object about its central axis rather than move the camera? Or does the object have to remain stationary while the camera moves?
Great tutorial! Every time I used to picture myself doing photogrammetry capture, I thought of using a blank background. It made complete sense to me why this wouldn't work as soon as you mentioned it, but it didn't occur to me prior to this. I tried it out with some drone footage and it came up quite nice. Thanks :)
The reason SLR images are better than smartphone images is that SLR cameras usually use a much less compressed JPEG format.
E.g. my old 8-megapixel Olympus makes files that are over 5MB each, whereas my Samsung Galaxy S7 Edge with its 12-megapixel
camera makes files that are between 2.8MB and 4MB.
Knowing how JPEGs lose quality when compressed harder gives you the answer.
What happens if you want to model the inside of a place, like an apartment?
Just plain cool, thanks for sharing!
My photos taken by camera show the focal length in the photo properties, but when I try to process the photos in Meshroom it says the focal length is known for too few pictures. What could I be doing wrong?
Great video tutorial, mate! Thank you!
Noise in the images is bad when finding matching points between images. A DSLR has large sensor pixels; a cell phone has small sensor pixels. Large sensor pixels can collect more light. This results in less noise. The cell phone should work, but you need a lot, a lot, A LOT more light than you think you need. Like, try 4x more light than you think you need.
How do you get the little graph up that shows the 5M mesh? I could not find a way to bring it up
Hi, I enjoyed your video. I am new to this, and after trying it I have a problem I can't solve; I hope you can help. I'm trying to scan a house. I use a DJI Mavic Air 2 to take pictures with a map mission, then manually fly and take some closer pics of the house, about 50 pictures each way. When I import into Meshroom, if I import either set alone it works as it should. If I import both groups, the ones from the map mission are accepted and it works, but the ones from the manual shoot are rejected, i.e. a red icon in the top right corner of the thumbnail. Either group alone works, but the two together don't. I tried the same project scanning the second set with the iPhone 10, with the same results. All Meshroom settings are left at default. I would appreciate any suggestions. Thanks
It just stops at the center and says it's not able to set the node... plz help
I was thinking of using a turntable to create small models, also, up until you started talking about needing the background in the picture so that the software is able to reference the position of the object. Something on a turntable would have the same background in each image, so that drastically reduced my expectations for that idea. I may still try it, though. Maybe if the camera positions in the software can be manually set, then perhaps it could work. The turntable could be rotated a specific amount for each photo until it goes the full 360° around, raise the camera 30° or so, and repeat the process. Perhaps in this way, the object could even be turned upside down to get the bottom, as well. If that worked out, and if a large number of objects are needed to be scanned, which would be the case for me, then automating the scanning process with a few servos could easily be done. Definitely worth trying.
I've seen setups that have a grid pattern on a turn table with computer vision markers on the edges and corners for the software to use as landmarks.
I've noticed better results from my scans when objects are sitting on surfaces with highly detailed patterns. The background becomes less critical for a good scan.
I have used this technique. You just stick a whole bunch of crazy symbols onto the surface around the object and it works fine
@@speculart In other words, you must inscribe your circle with magic runes to please the machine spirit so that it may pull the shadow from the object in question. ;)
How about a larger turntable 'ring' that you mount the camera on, so you move it instead of the object?
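If anyone automates the turntable idea above with servos, a tiny Python sketch of the shot plan might help. The function name and the default angles are made up for illustration — and note this only plans the shots; it doesn't address the static-background problem discussed in this thread, so you'd still want the grid/marker surface on the table:

```python
def capture_plan(shots_per_ring=24, elevations_deg=(0, 30, 60)):
    """Return (azimuth, elevation) pairs in degrees for a turntable shoot.

    One full rotation of the table per camera-elevation ring, advancing
    the table by a fixed step between shots.
    """
    step = 360 / shots_per_ring
    return [(i * step, elev)
            for elev in elevations_deg          # raise camera per ring
            for i in range(shots_per_ring)]     # rotate table per shot

plan = capture_plan()
print(len(plan))  # 72 shots: 3 rings of 24, 15 degrees apart
```

Each pair could then be sent to the two servos before triggering the shutter; flipping the object and rerunning the same plan would cover the bottom.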
Great video! Thank you!
This doesn't work for me. I've tried multiple times with different photos. It will only produce the front of my figure. How do you make it work with your photos?
Post the spec of your computer plus your graphics card. Be interesting to see what you got.
Would it work with photos of machine components? Gears or screws or bolts?
If Meshroom doesn't use a photo, does that mean it didn't have any spots/texture that it recognized? Parts of my mesh don't appear after Meshroom does its work, and I notice that it doesn't use those pictures, so I'm just wondering how to fix that.
Hello, and thanks for sharing your knowledge about photogrammetry; I think it's really interesting. I'd like to use Meshroom, but can you please tell me what the minimum requirements are? I'm about to buy a computer that has Win 10, 8GB DDR4 RAM, an Intel Core i7 4th gen, and a 1GB Nvidia graphics card... Do you think I can use it with a PC with these specs? Thank you
Hi, great video. I have been using this software and have run into a problem, unfortunately. I am trying to scan one particular thing I really want to (it is quite complex, a piece of coral from my fish tank), and I have to get a lot of photos from a lot of different angles. It almost finishes, then it stops and the error basically says it has exhausted the available memory. Am I stuck until I get a better computer/buy more RAM? Is it saying I don't have enough RAM to do this? Is there any workaround?
Please, my 2018 version AND my 2019 version get stuck at DepthMap. It shows a red dash and doesn't go on in the process.
I could suggest better, simpler photogrammetry software that works even with mobile cameras, if you want?
@@_Chad_ThunderCock thank you, but now I use the Zephyr free version
how do u do this with command line? anyone?
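In case it helps: the release archives ship a headless pipeline binary alongside the GUI. A hedged sketch, since the binary name varies by release (it's `meshroom_photogrammetry` in the 2019-era builds and `meshroom_batch` in newer ones, as far as I know) and the paths below are placeholders:

```shell
# Run the default photogrammetry pipeline without the GUI.
# Adjust the binary name to match your release; paths are placeholders.
./meshroom_photogrammetry --input /path/to/photos --output /path/to/output
```

Run it with `--help` first to see the exact options your version supports.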
Hi, I'm using Meshroom and the process goes well up to the last node, but the mesh button in the 3D view won't work and its color is grey, not white. What should I do? Is there any mistake? Thank you.
FrodoGandalfTree?
Thanks for the tutorial.
I downloaded the Mac version...... but I HAVE NO IDEA how to install it or run it at all. Can someone help me out? There is no DMG file or anything like that.
Thanks so much for this!
I can't seem to get past the DepthMap stage. Tried lots of times, lots of locations, lots of objects; it still won't get past the DepthMap stage. Any ideas?
My loading stops with a red line, can anyone help?
Dude, I have an NVIDIA card, and Meshroom crashes all the time; it can't get past one of the steps. Is there a setting I have to change for the program to recognize the card? Thanks.
What do you do about textures for the rock? I found meshroom's UVs are brutal and after cutting out my object I'm left with textures that are 90% unused.
Is there any reason to use Instant Meshes instead of Meshmixer?
So, this won't work with Intel Graphics? What's a good card for an Optiplex 790?
Good software; so far it has a better workflow (a good UI) than the other free options. Now, with photogrammetry, spending on a good camera is inevitable...
Or you could have a good camera from 10 years ago. Or one might be in your pocket.
Is there a site where you can download images like the rock for testing?
If I use this on a diorama, could I make a 3D map that way? Would be awesome to be able to take my modeling and gamedev interests and mash them together.
Yeah! You may want to "cut" the mesh into pieces though, rather than have the map be one 3d mesh. Better for optimisation, that way.
Has potential, but I haven't been able to render anything with the way it works now, even when it gives green check marks to every image
Did you say Froto gametry?
So what do you do with the render? 'Cause my 3D scan just looks like a bunch of blocks clumped together. I wanted to see if I could get it into Cura to 3D print, but you can't 3D print if you can't even properly see what it is, and I haven't got anything other than Meshroom to open it with. i.imgur.com/b1QOl4Z.jpg
One question I have: I used Meshroom a week ago and created a project on my D drive, but the processing was using my C drive and it got low on space (took about 15 GB). How and where can I change the computing folder so it uses my D drive?
Brute force solution, but moving the system temp directory should pull the trick off. Just change the %TEMP% environment variable to your other drive, reboot and you should be good to go.
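To make that concrete on Windows, something like this should work (a sketch; D:\Temp is a placeholder, and `setx` only affects newly started programs, so re-log or reboot afterwards):

```shell
mkdir D:\Temp
setx TEMP D:\Temp
setx TMP D:\Temp
```

Alternatively, my understanding is that saving the Meshroom project first ("Save As") makes it write its MeshroomCache folder next to the .mg file instead of into the temp directory, which avoids touching the environment at all.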
I just did it with a fruit bowl and an iPhone 6s. The result was pretty good
Thanks for a great tutorial
So would this be a good program to use for getting into drone photogrammetry? It's a field I'm looking to get into and while I would not be doing any surveying or data interpretation I would be taking the photos and providing the 3d models to a licensed surveyor to do those readings. Most programs cost thousands of dollars and I think this might work but want to make sure that if I sent the 3d model to a surveyor they would be able to do their analysis with the files.
Hey Eric, i'm looking into the same, have you tried meshroom for drone photogrammetry and found it worked?
How do you get 50/50 images accepted??? Mine keeps rejecting around half of the photos :(
Thanks for this dopest video! I have a question: can I get a low-poly photoscan of buildings and rooms? The problem is I can't optimize the buildings; they have 3,500,000 polygons, and it's unrealistic to optimize that in Unreal Engine 4 for game development. How can I optimize it? Or can I make it low-poly in Meshroom? Thanks
How did you cut the day from the rock?
Hey guys, I just tried Meshroom today and I had some sort-of successful results. But now I have finished a project and for some reason the mesh option is grey and does not show. Does this mean that it couldn't make a mesh? And if so, why do you think that happened?
Can glasses be modelled in 3d ?
Can this be used in Lightwave 3D and photoshop? If so how do I download the software to be used in these softwares?
Can you do this in meshroom from a video?
Hi there, awesome video, but you completely forgot to mention that you can't use the same material data for the newly decimated (cleaned-up) version of your mesh. After decimating using Instant Meshes, all the material data disappears. I don't suppose you know a workaround for this, do you? I've been using Substance Painter and crazy baking methods in Blender for hours with no results thus far :/
I think I've come up with a solution just now by decimating in Blender; the only issue now is that I have a lot of UVs strung right across my generated UV map to link all the different parts, and unfortunately it's causing my texture to be black/stretched on some polygons
The best solution I have atm is manually selecting the affected polygons and then, in the UV editor, logically choosing which side should join which and dragging it across to the rest of the island. Might take a while lol
Looking at an HP with the 32 gig ram recommended but the GOU is an intel iris processor. Is that still sufficient?
GPU… is an intel iris
i can't add pic ?!
Is it possible to add RTK control points in a terrain map?
I've seen in another video that 3DF Zephyr makes it easier. And it has a free version
Excuse me. I didn't understand how exactly I can download your rock test images. I hope somebody can help, because I already tried all the links but nowhere did I find a download button or link. ^^
I am looking for those files too.
It's definitely pretty sweet, but it's really suffering from a lack of documentation atm. :/ There isn't really any great place to discuss it either.
Thanks for your great video, but what on earth is a "Pixel Phone"?
The “Pixel” phone is by Google. It had the distinction of “best camera” for about five minutes a couple of years ago.
@@mbunds - Thanks!
Does it use only the CPU??? My GPU usage is 3%
When does the video start?
Hello, I am an undergraduate student in an ECE department, and in my thesis I have to develop a tool that generates a 3D model of a human body using photogrammetry... do you think I can modify or use this code as a reference in order to do my thesis?
Thank you.
It could work with this code as long as the subject is completely still...
I like how your sofa looks though. It might work for some horror game.
Meshroom only works if you have an NVIDIA graphics card.
If you have another one, there is no chance to create a 3D model.
Wow, it's amazing.
May I know which version of Blender was used?
And what is it used for?
Why is there no Godot in that toolbox?!
I think I'm going to try to see what happens if I invert the procedure to a panorama. For example, I would like to model the inside of some houses, and I'll see if this works... I'll let you know. Has anyone else seen or tried that?