Alchemist seemed like it didn’t create enough height difference between the bricks and grout. I’ve only dabbled with it a little bit and don’t know all of the tools which are available, but it seems like it needs a way to selectively isolate areas of the image (such as the bricks) for processing apart from the unselected (grout) areas. Selecting the bricks based on their hue and then manipulating the height would make a big difference. I suppose you could always go to Substance Painter to do that after exporting the textures from Alchemist, but it would be nice to be able to do it all in one application.
I totally agree. Imo there is no way for the AI to do everything, and in my opinion it should - like it does - process the initial pass and ask the user about the result. And the HUMAN user, who knows what to expect from the surface, can tell the AI what is correct and what isn't.. and based on that input the AI should make another, full reconstruction pass .. and loop this process until it is done. So imo the best way to reconstruct a single image and turn it into a PBR material should be based on communication between the USER and the AI :). I know it might sound simple but imo there is no other way. If the AI was correct.. the communication would be short.. if it wasn't .. it would learn from the user :). Cheers!
@@GrzegorzBaranArt I agree! You should reach out to the Substance 3D team and suggest that. Based on the credibility of your channel, they may listen. I’ve had quite a bit of interaction with the After Effects team from attending several years of the After Effects World conference, and have found them to be very friendly, approachable, and receptive. I’m sure the 3D team is no different. Might be worth a try!
@@GrzegorzBaranArt I agree that you should at least suggest this! Of course I'm sure having some kind of back-and-forth between the AI algorithm and human intervention probably wouldn't be a small thing for them to implement, but I feel like it could vastly increase Alchemist's usefulness and get it past the point of a lot of imperfect materials.
@@deastman2 I just did. Here are the answers, I will add them to the description in case they are useful for someone else too:
- th-cam.com/video/XDAUw_dSmt8/w-d-xo.html We're working on it to have that done automatically
- th-cam.com/video/XDAUw_dSmt8/w-d-xo.html You can use the geometry equalizer parameter instead of using the equalizer later, but I understand that you want the equalizer after all the clone patches too.
- th-cam.com/video/XDAUw_dSmt8/w-d-xo.html Tiling layer would be better in this case to align patterns/bricks
- th-cam.com/video/XDAUw_dSmt8/w-d-xo.html The delighting was already done in the Image to Material (AI-powered) layer. It's like you're doing it twice here. And on the Image to Material, you have a delighting intensity parameter. You can also use the Adjustment layer to play with the saturation and luminosity of the base color.
- Ctrl + Z doesn't work on painting actions yet. That is a known issue / not developed feature.
- as well as layers OPACITY to blend it with the previous state: This is asked often indeed!
Cheers :)
@Grzegorz Baran The video is interesting, however the comparison could have been more "fair". Why? The photogrammetry one receives a larger texture as input [X photos of X size], while the Alchemist one just received one... To be fair you could have stitched all photos within Photoshop and input that into Alchemist. [However maybe there is a limitation on the texture size that can be processed]
Thanks :). Regarding the image stitching, it isn't as easy as it seems to be. I gave it a try a few times already using PTGui, Photoshop etc. and it didn't work very well due to the consistency loss. The results I got were always patchy to some degree and it was very hard to connect edge details.. just imagine matching grass blades across images where each has some barrel distortion :). I would rather increase the resolution by using a larger camera.. like 50Mpx? I mentioned the resolution issue as someone might consider using a mobile or a drone with a small sensor and I didn't want anyone to be surprised by the resolution limit. Bear in mind that I cropped the image. The result I got was 4k though, as I upscaled it with Alchemist. I would say it can be solved by a proper AI based 'rescale tool'. As far as I know ArtEngine offers one but I haven't tested it yet. As I understand it, it upscales textures and brings details back using AI, not by sharpening. I am sure there are more tools like this one.. might even be worth researching them and making a separate video when done.. dunno if it would be too far from the core of this channel and whether anyone would be interested :)
@@GrzegorzBaranArt If you do find software that stitches properly I'm interested, something better than Photoshop. I already did it for some stuff and it worked, but you're correct, it could not work for all cases. For bricks I think it could work, maybe I'm wrong. For grass no doubt it will not work. I used Photoshop stitching for a dam texture.
@@GrzegorzBaranArt Hi! Very nice comparison! I have tried Topaz AI to upscale images... mostly searched via Google Images... and then processed them in Alchemist. It doesn't always work but it makes a decent result. Maybe with a more "planned capture" and a good camera the result can be good.
Nice overview of the Alchemist process, but photogrammetry is still king. The only annoying thing about photogrammetry is taking 100+ pictures, at least for me. Not really concerned about computation time since I can just drop it onto my other PC.
Thanks, I totally agree. But AI based reconstruction brings single image based reconstruction to a new level and can be useful in many ways. I believe it is good to know where it is and what it offers. I am truly amazed with the result of the AI powered reconstruction, especially when I compared the result to the B2M based solution, and I won't be surprised if soon this is going to be a very common way to create materials. Just imagine what might happen if that AI takes more complex feedback from the user or even.. learns from it :). Or what if AI can be used for the photogrammetry.. and instead of 100 images we can take just 3? So yeah, it's still the king but I can already imagine the Image to PBR technique as a quite reliable option for a few things :D
@@GrzegorzBaranArt Well, I am not saying that AI powered texture creation will always be inferior, it's just not there YET imho. Maybe in the near future we will have a button "make art"
@@snowinchina5531 Sure, I got it :). I just realised that the photogrammetry reconstruction itself might also change and someone might consider using AI in their software. And regarding the AI powered surface reconstruction, I also agree that it's not there yet when compared to photogrammetry, but I meant that it's really close and at this stage it can be a very useful tool for material reconstruction at real production quality. The Alchemist devs definitely made a freaking huge step forward with this tool and pushed Single Image based PBR reconstruction to the next level :D. At least this is what I have found because.. who knows.. maybe it was just an accident ;). This is why I guess I need to process a few more totally different materials with the AI powered reconstruction, similar to what I did in this video: th-cam.com/video/mYUMQj0cXgg/w-d-xo.html Cheers! :)
I said that because they all provide pretty similar results. While they were good enough in the early PBR era, nowadays they are way behind what ArtEngine's AI or Substance Sampler's AI based algorithms can offer. Please don't get me wrong, I used to use them and I really loved what CrazyBump could offer, but that was years ago. The versions from the time I made this video couldn't even compete with modern applications regarding both the quality and the functionality level. ArtEngine or Sampler can generate a really decent texture set from a single image - still not perfect but good - the other listed apps can't generate texture sets which are comparable. So it's a no brainer for me to drop them and pick something that outputs way more reliable data.
Since it is in constant development it has received some improvements. It is still the best AI based 'single image to PBR' reconstruction system but it didn't change much recently. It also got a few very handy tools to support the photometric stereo approach, but the reconstruction algorithm is very poor when compared to the competition. Seam removal tools are still below my expectations and I don't use them on a daily basis. Unity ArtEngine does the 'seam removal' job much better in comparison.. but of course it also has some pros and cons. Adobe has added a few HDRI tools we can use to create 360 HDRI panoramas which some might find useful. So, it's better than it was but it's not a mind-blowing type of change. I would say that it is worth owning a copy of it, especially now that we can purchase a perpetual (subscription independent) license on Steam, so we can keep it forever with 1 year of updates, but personally I don't use it too much in my current workflows. I plan to record a video about some of the new features and compare how they do the job against other similar apps next year, so stay tuned.
@@GrzegorzBaranArt Thanks for the heads up Grzegorz. I guess it is a tool worth looking into. Considering the advancements in AI since 2020, material creation really shouldn't be as time or labour intensive as it still is. I look forward to your comparison video. Cheers!
Great comparison. I watched the whole thing, even though I usually skip ahead to the final conclusions. Loads of useful content: timings, software, workflow. Thanks.
I'm really glad to hear you found something here for yourself. I try to make every next video better than the previous one :) but unfortunately at the cost of the amount of material I am able to release. Greetings, and see you at the next one, which I hope you will enjoy too :)
Your attention to detail is very appreciated. It is sometimes easy to get lost in the details. That wall will be viewed from mid to far away. With one square image the repetition will be very apparent. It would be far more useful at, say, 1024x4096 with a phone. In your case, with limited overall area, I would have flipped one image and added it to the end of the first. Far easier to tile. Another attention-to-detail point overall would be the pattern of how the bricks were laid (Common Bond in this case). You must choose wisely where you cut the vertical image. Google brick pattern/type. Another helpful thing would be to place a 1cm cube in an inconspicuous area, telling everyone what size the bricks are. I've watched almost all of your vids and find them very useful. From your first spray paint to the last was such a leap. I hope I didn't come off too harsh. gl on your journey.
Thanks, to be honest there is a way better solution for this, which is to create two square textures of a similar pattern and simply blend them together using a large, low resolution mask. This way you get a super high level of detail from a close distance and a lack of tiling from mid to long range. You can also apply decals to help with the final authoring. A 1:4 ratio material for this would rather be a waste of memory and a quality drop. Of course shaders with a blending option are more expensive, so it depends on the platform used etc. What you suggested might work well for mobiles or VR, where performance constraints are much more demanding and PBR quality has to be simplified. Cheers!
@@GrzegorzBaranArt can you make a video with that solution?
@@khov Sure I can, that's actually a pretty good idea, but I don't know how many folks would be interested in it since these are very basic things. I would rather suggest searching for mask based or vertex color based texture blending, as well as splatting in Unreal Engine etc. Cheers!
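(A minimal sketch of the mask based blending mentioned above, in Python with numpy and Pillow; the file names are hypothetical, and in a real project this lerp would live in the engine's material/shader rather than on the CPU.)

import numpy as np
from PIL import Image

# Hypothetical inputs: two tiling albedo maps of a similar brick pattern,
# plus a blend mask authored at low resolution to stay cheap in memory.
albedo_a = np.asarray(Image.open("bricks_a_albedo.png").convert("RGB"), dtype=np.float32) / 255.0
albedo_b = np.asarray(Image.open("bricks_b_albedo.png").convert("RGB"), dtype=np.float32) / 255.0

# Upscale the low-res mask smoothly to the albedo resolution; it only has to
# break up repetition at mid/long range, so something like 256x256 is plenty.
mask_img = Image.open("blend_mask.png").convert("L").resize(
    (albedo_a.shape[1], albedo_a.shape[0]), Image.BILINEAR)
mask = np.asarray(mask_img, dtype=np.float32)[..., None] / 255.0

# The same per-pixel lerp a blending shader would do between the two texture sets.
blended = albedo_a * (1.0 - mask) + albedo_b * mask
Image.fromarray((np.clip(blended, 0.0, 1.0) * 255).astype(np.uint8)).save("bricks_blended_albedo.png")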
Amazing comparison, thank you!
Glad to hear that, you are welcome :)
Excellent comparison.
Thank you :)
The delighter is also included in the AI powered image to material layer. There you can also adjust its strength. Normally there should be no use for another run and that's probably why the image gets too saturated. Really enjoy your content, nice job!
Thanks
Thanks
Thank you
Great video Grzegorz! :) It was intriguing to look at the comparison. Special thanks for making the comparison as close and accurate as possible.
Thank you a lot
Great breakdown and comparison!
Thanks Mike :). Appreciated
Alchemist could be really good for small teams that need to do quick turnaround with projects that need to have a decent level of likeness to a real location, like Archvis of an addition to an existing building perhaps, but doesn’t need huge detail, just needs to match what the client knows about their space so the render isn’t distracting.
That's correct. Alchemist.. which was rebranded to Sampler recently.. has a lot of additional tools which many might find useful, but I have a feeling that none of them is final and they all feel like they need polishing. It means that it does a lot of stuff but doesn't shine at anything and needs additional support. Yeah, it can be very useful for small teams where quality isn't the key and a quick 'good enough' is often just enough.
A while ago I tried a demo of some software that creates a PBR tileable material out of two photos. So I guess it used parallax to calculate surface bumps. But I don't remember the name and couldn't find it ever since.
I played with software with similar functionality about 7 years ago, but as far as I remember, even if it was fun, quick and promising, the results I got were way too glitchy and I had better results with a simple CrazyBump based single image reconstruction :). But to be honest.. technically you really just need to provide 2 images to start a photogrammetry reconstruction and get some results. Of course if based on 2 images they are going to be glitchy as well, since 3D space is made of 3 dimensions and only by adding a 3rd image to the equation can you cover them all :)
Substance Alchemist seems to fit a niche usage area. As a game developer, I could only see Alchemist materials being used for a surface the player camera could not get close to. Substance Designer materials are definitely much stronger for my usage. I can see myself using photogrammetry materials if I am looking for 100% realism, but they will also offer lower customizability than a 100% Designer material.
I totally agree. Each tool offers something different, but don't forget there are also other tools on the market which can customise scans, like ArtEngine etc. Also, in reality you don't have to limit yourself to a single tool. You can use a scan as a base and process it with Substance Designer to bring the control you need to it. You can use Sampler to tile it.. or even Painter. You can also use ZBrush and add the data other tools can't. You can even consider Photoshop or Affinity Photo. The sky is the limit here. So I would suggest considering a mix of options and not getting trapped with just a single tool.
@@GrzegorzBaranArt I've seen such a workflow using 3D models with Substance Designer before although never done it myself.
Will have to check out ArtEngine since I am unfamiliar with it.
This video is what I was looking for man, thanks a lot for such a great explanation. Subscribed, liked, and definitely will share it with friends.
Thank you Rikardo, I am really glad to hear that .... and welcome to my channel :)
Hi! Great comparison, you did a great job, really glad that I accidentally stumbled across your channel, it would be just great if you also compared different programs for photogrammetry
Thank you. This is something I have been planning to do for a while and it's in my queue of things to do :). I would say the photogrammetry comparison video is in progress, but it takes time to make a proper and reliable one :). Cheers!
Fantastic! Wonderful comparison!
Thank you! Very appreciated
I can see how, if you have a machine on your network specifically dedicated to handling the 2.5hr reconstruction for each texture, the photogrammetry approach could be faster. Alchemist seems useful for small teams who only have so much time and effort to dedicate to generating materials.
I would risk saying that there are material types which can be handled by Alchemist very well, and material types which shouldn't be. Well made materials have the potential to be re-used in many productions, and overall material quality can have a really big impact on any 3D environment art. So honestly, I would always prefer to get fewer but high quality materials for my scenes over more diverse ones at an average or low quality level. This rule applies to everything else: props, characters, animations, lighting, shaders etc. Photogrammetry is the tool which should be used for every complex surface where the height information is relevant to us in any way. For all flat and simple surfaces the 'single image to PBR' technique (Alchemist) should usually be enough. But if I had two materials to pick for my scene, and one was made with Image2PBR and one with photogrammetry, I would pick the second one :). Cheers!
Hey thanks for the video. I would like to ask for your opinion.
I believe that the "single image to PBR material" is relatively useful if you are scanning an object which only has one "real substance" and this substance is kind of repeating, like a brick wall in your example. But photogrammetry fares much better if you are scanning something more complex, like a piece of machinery. Depending on the complexity of the object, modeling the machinery, UVing, and texturing with multiple layers in Substance, for instance, takes much longer. Also, no matter what you do, no PBR texturing method can capture the variety that can be found in nature.
I would personally use the "single image to material" approach for repeating surfaces like walls and use photogrammetry for anything else.
I would consider the Single Image to PBR material reconstruction technique for secondary materials. For any close-ups and primary materials I would use a procedural technique, photometric stereo or photogrammetry based scans. What I wanted to point out is that the AI powered algorithm Substance Alchemist offers is a very big step forward compared to the old approach, and the AI can do a really decent job. So whatever works for you is a good choice. Bear in mind that you can also mix and combine different techniques together. Last but not least, Single Image to PBR might also be a very useful and quick technique to deal with overall vegetation or to build atlases for scattering.
I also totally agree that it's easier for the AI to properly reconstruct a solid and consistent surface where we have local differences caused by shadows within the same substance type. If you pay attention to this video you should find that the AI struggled to properly interpret bricks which had dark parts.. it means there is still room for AI improvements, but the key step was made and Image to PBR reconstruction has just leveled up to a totally new level.
I don't get the part about prop scanning though. With photogrammetry you can capture albedo, geometry which includes medium and high frequency height information, and also a reflection map. I guess you can capture even more if you wish.. everything depends on what equipment you use. I can imagine a special lighting setup which would allow you to capture even a scatter map.. but usually it is just way easier, cheaper and quicker to generate these things using the available standard data (albedo/height/normal/ao) which comes from a basic scan. So yeah, you can capture more physical features if you build the setup with a capture extension... cross-polarisation is an example of it, but you can go even deeper.
@@GrzegorzBaranArt What I meant in regard to prop scanning is exactly that, like you argue, you can use photogrammetry. But easier props such as rock boulders and stones, which do not have to be a very specific shape or form, are easier done by modeling from the ground up and using an existing library of PBR textures.
For instance, the UE marketplace is full of photogrammetry stones, but I think for single stones photogrammetry is overkill.
@@GrzegorzBaranArt On another note, a suggestion for an interesting video:
Could you make a comparison of using mobile phone camera vs. SLR camera using a simple example, like a prop or a surface?
I know SLR is always better, but sometimes one might find an interesting motif to capture in nature while on holiday, when you do not have your photogrammetry gear with you.
I also think that smartphone cameras are getting better and better, even in badly lit environments, due to developments in AI. So in some situations the quality difference may not be that great, and using your smartphone is more approachable for a lot of people.
I got, in my opinion, pretty decent results using the camera of a Samsung S20 smartphone and I think iPhone12 should be good as well.
@@BreakMaker2904 Hey, yes, it's a very good idea and I already had it in my queue :). The problem with mobiles is that there are different mobiles :).. a better one will give you a better result while a crappy one won't. In theory the drone I use (Dji Mavic 2) is a kind of 'flying' mobile. Usually to shoot with a mobile you need to take more images to compensate for the lower resolution, and there might be a bit more to do at the photo-editing stage to fix mobile issues.. also a few issues with properly mounting the camera on a tripod/monopod to stabilise it.. of course you can shoot handheld.. that's another video in my queue, where I want to compare different ways of camera stabilisation... but yeah.. it's a good idea to make a video to cover this topic. Imo a mobile is fine when you have nothing else to use. It works fine when used to capture basic shapes without high frequency details... but if you can use a DSLR camera you should definitely use a DSLR camera :D as the result is always going to be better :D
@@BreakMaker2904 I would agree to some degree.. there are cases when photogrammetry makes sense and when it is overkill.. I don't want to go into details, but I am planning to make a pretty interesting video soon to present something on this subject :).. it's 2nd in my queue and it's going to be the second part to the next one.. of course if I ever finish and release it :D
Fascinating! What would be interesting is also to compare photogrammetry vs. making your textures by hand 'the old fashioned way' in Zbrush + Designer (or Painter). I reckon an experienced material artist would be able to whip out scan-quality material in the same amount of time it takes to do the photogrammetry pass? Would love your thoughts.
Yeah, that's a very good idea. To be honest it all depends on what tools you plan to use.
Personally, hand sculpted textures with the use of photogrammetry based alpha brushes would be quicker, without significant quality loss but with all the freedom sculpting gives. The same applies to procedural stuff mixed with scans.
I made some of the brushes I am talking about here:
www.artstation.com/artwork/q9xeme
Was even planning to make a video about that subject but it got stuck in a queue of TO DO stuff :).
Cheers!
@@GrzegorzBaranArt nice! Thanks for the link, that's an awesome library for that price. Gonna pick that up for sure.
Grzegorz, big fan of your work! Can you make a breakdown on capturing foliage? Also consider travelling to a Greek island like Milos for cliffs etc, it's awesome
Foliage would be an awesome breakdown :)
Thanks, yeah, I was planning to do it. I even started working on a video which covers it about a year ago, but it was so huge that I dropped it and never managed to finish it. Maybe I should return to it :). Basically, to scan foliage I wouldn't use photogrammetry but the Image to PBR technique and photometric stereo. Photogrammetry though would be useful for some things too.. crap.. yeah.. I guess it is a good idea to cover it with a video :). Regarding Greece.. would be great.. maybe one day when everything calms down a bit and it is safer and easier to travel :D
@@GrzegorzBaranArt Yeah, I tried that with help from a Substance tut and Unity's paper on that matter, and I thought you should do it :) Besides your obvious knowledge in photogrammetry and photometry, I think you have a natural gift for guiding others and teaching them. I was a procedural guy, but after your help I think nature is far more beautiful to capture and use. Ty and take care
Mr Grzegorz, thanks for the breakdowns :)
You're welcome. I hope they are useful for something :)
@@GrzegorzBaranArt Yes, but I need to have more time to weave scanning into my workflow :)
Great and so useful MAN
Thanks for sharing.
Thanks :), I am really glad to hear that
Great comparison, really nice presentation! What renderer did you use when you were flying by the sphere on the plane?
Thanks, I used Marmoset Toolbag 4 for this :)
Is the output resolution of both workflows identical? I would assume the Photogrammetry based texture set to be much more detailed as the photos taken were closer and would have more resolution.
It depends. You can capture a single shot using a camera which has 50-ish Mpx or even twice as many megapixels to work with. On the other hand, you can reconstruct a really low quality surface using photogrammetry. The advantage of photogrammetry is that it gives you much more accurate physical data. So the reconstructed height information isn't a guess but what it actually is in reality. Also, with photogrammetry you can cross-polarise the light and get really accurate colour information. With Image to PBR it is always more or less accurate, but still a guess.
So of course photogrammetry based materials win in every quality aspect against Image to Material based materials, but.. the quality gap with the AI is getting smaller every year.
There are many purposes where accuracy just isn't necessary and time is prioritised. These are cases where Image to PBR makes sense. It can be anything: a quick pattern capture to be used as an alpha for ZBrush to bring in some ornaments, or concept materials, or background materials etc.
Last but not least, there are surface types which can't benefit from photogrammetry too much.. these are usually flat surfaces which lack the cross-surface details that can be used for image alignment in photogrammetry software. To give you an example, it can be a clean, white painted wall or varnished wood etc. These are surfaces where height isn't very relevant, and with surfaces like this you can get even better results if you use an Image to PBR technique instead :).
So bear in mind.. photogrammetry, photometric stereo, image to PBR, procedural material generation, manual sculpting etc... these are just tools to be used. And it's good to know them all so you can pick the right one for the job. And this is the main purpose of my videos.. to present them all so everyone can see the pros and cons of each.. so as a result we know WHY we do things in a certain way and what actual options we have, so we always have a choice. Hope that helps :). Cheers!
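(To illustrate why single-image height is "a guess": below is a minimal sketch of the naive, pre-AI approach in the CrazyBump/B2M style, written in Python with numpy and Pillow and a hypothetical input photo. It simply treats luminance as height and derives a normal map from its gradients, so dark paint reads as a dent even when the real surface is flat, which is exactly the kind of error photogrammetry does not make.)

import numpy as np
from PIL import Image

# Naive 'single image to material' guess: luminance pretends to be height.
photo = np.asarray(Image.open("brick_wall_photo.jpg").convert("L"), dtype=np.float32) / 255.0

strength = 4.0                      # arbitrary tuning factor, not a measured depth
dy, dx = np.gradient(photo)         # slopes of the guessed height field
normal = np.dstack((-dx * strength, -dy * strength, np.ones_like(photo)))
normal /= np.linalg.norm(normal, axis=2, keepdims=True)

# Pack from [-1, 1] into the usual 8-bit normal map encoding.
Image.fromarray(((normal * 0.5 + 0.5) * 255).astype(np.uint8)).save("guessed_normal.png")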
Great vlog, you can learn a lot. Greetings
Thank you very much :)
This was a nice video. Thank you!
Thank you Guilherme :), appreciated
Awesome video. Is there a way to make the bricks get randomized in position somehow?
Thanks. These are scanned bricks from an already existing brick wall :) so to answer your question, no, these bricks can't be randomized. In theory you can shift them around and rearrange them a bit using the Clone Tool that Substance Painter offers, or the new, recently introduced ArtEngine Patch Tool. But to fully generate a brick wall and get all the bricks under full control you would need to pick a totally different technique. Personally, I would generate them procedurally in Substance Designer or would create them manually in ZBrush, brick by brick. You can also mix these techniques or utilise scans to make your life easier and raise the quality. The sky is the limit here. Image to PBR or photogrammetry based scans are the easiest and the most efficient ways to create this type of texture at a very high level of quality.
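(A toy sketch of the "generate every brick yourself" idea, in Python with numpy and Pillow. Substance Designer or ZBrush would be the real tools; this just shows that when bricks are generated rather than scanned, each one can be offset or randomized individually. All sizes and the file name are made up.)

import numpy as np
from PIL import Image

rng = np.random.default_rng(7)
size, brick_w, brick_h, mortar = 1024, 128, 64, 8

height = np.zeros((size, size), dtype=np.float32)       # mortar stays at 0
for row in range(size // brick_h):
    offset = (row % 2) * (brick_w // 2)                  # half-brick shift = running bond
    for col in range(-1, size // brick_w + 1):
        x0 = col * brick_w + offset
        x1 = x0 + brick_w - mortar
        y0, y1 = row * brick_h, (row + 1) * brick_h - mortar
        x0, x1 = max(x0, 0), min(x1, size)
        if x1 > x0:
            # per-brick random height variation: the 'full control' a scan cannot give
            height[y0:y1, x0:x1] = 0.8 + 0.2 * rng.random()

Image.fromarray((height * 255).astype(np.uint8)).save("procedural_bricks_height.png")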
nice
Cheers!
Hmm, interesting subject matter :) Leaving a thumbs up! :)
Thank you very much, Ania :)
Nice explanation and process video, thank you. You should not forget that travelling and preparing your journey to shoot photos also counts as time spent :P.
Thanks :). Yeah, you are right and I totally agree.. but this is something that has to be done for both techniques, so I didn't mention it :)
@@GrzegorzBaranArt Interesting comparison, though I feel how you value and compare the time spent on each isn't entirely correct. Doing a photogrammetry capture requires a huge set of photos, you can't usually find those online easily (not for free), so you have to go out and capture them yourself. Grabbing my camera, walking somewhere, shooting, walking back, getting pictures off the SD card would take me at least one extra hour, maybe more, and I'm still limited to materials you can find nearby. With Alchemist you can just use any highres image from online, or pay a little extra to get good quality ones from textures.com. Huge timesaver compared like that.
@@Xoliul thats a very good point :)
Really good and deep comparison. I suppose it's down to how much time you have to produce something, the budget of the overall project etc. And whether it needs that level of accuracy or not, as the Alchemist wall was still a believable wall, it just wouldn't hold up as well if it were for closer renders.
Thanks a lot. I think it's more about the cost, user interface and overall efficiency. Marmoset Toolbag handles lowpoly in a totally different way to Substance Designer or Knald. It can be beneficial in certain situations.. planning to show a good example of this pretty soon. On the other hand, being limited to video memory is a huge bottleneck. The Substance Designer baker for example offers a choice and can bake using the GPU, which is very quick, but if you don't have enough VRAM you can always switch to the CPU. Knald lacks updates; it's been a couple of months since I tested the albedo-as-a-texture feature but it still hasn't been added to the official release, and even if the baker itself is great, it doesn't offer much beyond the baker at the current stage. XNormal... it's a very solid but slow and outdated piece of software.. but that's normal. So regarding the output quality, I believe all tools deliver a similar level of quality, which depends on the detail settings for each map. I think there is no winner here as each app has pros and cons.. but even though I have each of these apps, I still prefer Substance Designer, as with the baker we get a super powerful PBR material package.
@@GrzegorzBaranArt Agreed, it's basically a game of trade-offs. I've only begun diving into Designer recently (really the whole pipeline) and the sheer level of control it affords is next level. I also much prefer its baker. Its only issue is baking textures when UDIMs are involved, but I understand Marmoset has good tools for that, so that's likely what I'll be trying next, and I look forward to your video on it! :)
Great comparison, thanks for your effort. Do you think the Alchemist AI would do a good job on an ornament texture? I mean, getting the volume of the ornaments in an accurate way?
Thanks a lot :). I believe it can, I was even thinking about this. I would be careful with shiny, reflective surfaces though, as highlights or reflections might be misinterpreted. Also bear in mind that Alchemist simplifies things a bit, so you won't get 100% of the accuracy you get from photogrammetry or photometric stereo, but I think it should be close enough. I would estimate something between 75-95%, and how close you get would depend on the image. It should also help if you introduce photo-editing software to the pipeline, as this way you can calm down some speculars and fix some blacks. Looks like another video to present more examples with fewer settings details would be beneficial and help to present the tool in action :)
It would be good to see how the multi-image to material option would work in Alchemist
It's a very good idea. I was planning to cover the subject of photometric stereo reconstruction with Substance Designer, as it gives way more control, but it might actually be worth giving Alchemist a chance too, to compare results.
@@GrzegorzBaranArt are there any other apps that can do photometric stereo? I would love to see a comparison
@@dainjah Of course there are, and I am planning to record a video to present a few in action. I started the actual research for it over a year ago and the video about it (photometric stereo) is currently 3rd in my to-do queue :)
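(For anyone curious what photometric stereo boils down to: a minimal Lambertian sketch in Python with numpy and Pillow. The file names and light directions are hypothetical placeholders for a real capture rig, and this is the textbook formulation, not what Designer or Sampler does internally.)

import numpy as np
from PIL import Image

# One fixed camera, N shots of the same surface lit from N known directions.
light_dirs = np.array([[0.0, 0.6, 0.8],
                       [0.6, 0.0, 0.8],
                       [-0.6, 0.0, 0.8],
                       [0.0, -0.6, 0.8]], dtype=np.float32)

images = np.stack([
    np.asarray(Image.open(f"shot_{i}.png").convert("L"), dtype=np.float32) / 255.0
    for i in range(len(light_dirs))])                     # shape: (N, H, W)

n, h, w = images.shape
intensities = images.reshape(n, -1)                       # (N, H*W)

# Least-squares solve L * g = I per pixel; g = albedo * normal for a Lambertian surface.
g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
albedo = np.linalg.norm(g, axis=0)
normals = (g / np.maximum(albedo, 1e-6)).T.reshape(h, w, 3)

Image.fromarray(((normals * 0.5 + 0.5) * 255).astype(np.uint8)).save("ps_normal.png")
Image.fromarray((np.clip(albedo, 0.0, 1.0).reshape(h, w) * 255).astype(np.uint8)).save("ps_albedo.png")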
Very interesting comparison, excellent video to explain it. I would choose the quicker easier method :)
Thank you Barny
Hello, thank you for this superb video. Does Substance Alchemist still exist, or has it changed its name? I'm having trouble finding a link.
Hello Laura, yes, but it has been renamed Substance Sampler. It is available on Steam with a perpetual license you can keep forever, or through Adobe's subscription model on their web page. By the way, unfortunately I don't speak French :) so I'm not sure my answer is informative enough.
@@GrzegorzBaranArt merci beaucoup votre réponse est parfaite
Hi Grzegorz, I am trying to export a 50M-poly photoscan from Metashape. I see you use FBX, so I tried it myself, but I've waited for half an hour and it still wasn't done. Is that normal? Do I have to wait more than an hour for that amount of polys?
Yes, it's normal. FBX is a well-organised and well-compressed format for storing geometry. It is supported by almost everything, but the cost is the conversion time. If you want to save time and use a format which carries vertex color information but works faster, I would suggest using PLY instead. This is a format designed for really heavy geometry :). Just make sure the software you use supports it. To give you some time context, I was exporting a 40-million-polygon mesh from Metashape and these are the times. They depend on the computer specs, of course, so a better computer will do it quicker, but the proportions stay the same:
Binary PLY
Export time: 8 seconds
File size: 978,726 KB
------------------------------------
Non-binary (ASCII) PLY
Export time: 2.5 minutes
File size: 2,351,891 KB
------------------------------------
Binary FBX
Export time: 9.5 minutes
File size: 357,586 KB
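If you want to script that export instead of clicking through the UI, here is a rough sketch using the Metashape Python API. Treat it as an assumption-based example: the exportModel parameter names (binary, save_colors) and the need for an explicit format flag can differ between Metashape versions, and the paths are placeholders.

import Metashape

# Open the project and grab its active chunk (paths are placeholders)
doc = Metashape.Document()
doc.open("project.psx")
chunk = doc.chunk

# Binary PLY: fastest to write and keeps vertex colors
chunk.exportModel(path="scan_highpoly.ply", binary=True, save_colors=True)

# Binary FBX: much slower to write but supported almost everywhere
# (some API versions may also want an explicit format=Metashape.ModelFormatFBX)
chunk.exportModel(path="scan_highpoly.fbx", binary=True)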
Hope that helps. Cheers!
@@GrzegorzBaranArt Thanks a lot, Grzegorz, your feedback is much appreciated.
I didn't know PLY was so useful with high-poly meshes.
In the end I went with a single OBJ that took less than 10 minutes, which is no problem for me.
One thing that I didn't know, and couldn't find any info on, concerns the Collada file format (.dae): from my tests it doesn't seem to handle high poly counts at all. I tried it with a decimated 4M-poly mesh and it all went well, but when I tried using it to export a still-decimated 25M-poly mesh, the file was unreadable.
Kind of unfortunate, since Collada exports and imports really fast (almost as fast as Alembic does).
In the end, thank you a lot for all of your videos and the information that you're giving away for free!
Have a nice day!
Why are you using Zephyr instead of Agisoft? Is Zephyr somehow better than Agisoft for making a 3D scan of a brick wall?
I use many apps to do the same job, as this way I get to know them better. This industry changes very quickly and there is a never-ending competition between them. For example, 3DF Zephyr is much better than it was 2 years ago. To be honest, all of them.. Reality Capture, Metashape and 3DF Zephyr.. are roughly on the same level regarding quality and speed. They differ from each other, but I would say the actual pick is a matter of individual preference, budget and the actual job.
So in this video I presented a workflow with 3DF Zephyr, while in another one I used Metashape, etc. For a few years I have been planning to record a comprehensive comparison video, but these apps change too fast and I still have it under construction :)
nice video bro, learned a lot.
Thanks dude :). Cheers!
Alchemist seemed like it didn’t create enough height difference between the bricks and grout. I’ve only dabbled with it a little bit and don’t know all of the tools which are available, but it seems like it needs a way to selectively isolate areas of the image (such as the bricks) for processing apart from the unselected (grout) areas. Selecting the bricks based on their hue and then manipulating the height would make a big difference. I suppose you could always go to Substance Painter to do that after exporting the textures from Alchemist, but it would be nice to be able to do it all in one application.
I totally agree. IMO there is no way for the AI to do everything, and in my opinion it should - like it does - process the initial pass and ask the user about the result. And the HUMAN user, who knows what to expect from the surface, can tell the AI what is correct and what isn't.. and based on that input the AI should make another, full reconstruction pass.. and loop this process until it is done.
So IMO the best way to reconstruct a single image and turn it into a PBR material should be based on communication between the USER and the AI :). I know it might sound simple, but IMO there is no other way. If the AI was correct.. the communication would be short.. if it wasn't.. it would learn from the user :). Cheers!
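Outside of Alchemist, the hue-based selection idea from the comment above can be prototyped quickly. Below is a rough Python/OpenCV sketch that builds a brick mask from the base color and pushes the masked bricks up in the height map. The file names, hue thresholds and the 0.15 offset are placeholder values to tune per material; this is not how Alchemist works internally, just an illustration of the idea.

import cv2
import numpy as np

# Load the base color and the reconstructed height map (file names are placeholders)
basecolor = cv2.imread("bricks_basecolor.png")  # BGR image
height = cv2.imread("bricks_height.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Select brick-like pixels by hue/saturation; these thresholds are made up and need tuning
hsv = cv2.cvtColor(basecolor, cv2.COLOR_BGR2HSV)
brick_mask = cv2.inRange(hsv, (0, 60, 40), (20, 255, 255))  # reddish hues in OpenCV's 0-179 hue range
brick_mask = cv2.morphologyEx(brick_mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

# Push the masked bricks up relative to the grout and renormalise to the 0..1 range
mask_f = brick_mask.astype(np.float32) / 255.0
height = np.clip(height + 0.15 * mask_f, 0.0, 1.0)

cv2.imwrite("bricks_height_adjusted.png", (height * 255.0).astype(np.uint8))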
@@GrzegorzBaranArt I agree! You should reach out to the Substance 3D team and suggest that. Based on the credibility of your channel, they may listen. I’ve had quite a bit of interaction with the After Effects team from attending several years of the After Effects World conference, and have found them to be very friendly, approachable, and receptive. I’m sure the 3D team is no different. Might be worth a try!
@@deastman2 Thanks, the substance guys are amazing too and super friendly :). I had the pleasure to meet them personally already.
@@GrzegorzBaranArt I agree that you should at least suggest this! Of course, having some kind of back-and-forth between the AI algorithm and human intervention probably wouldn't be a small thing for them to implement, but I feel like it could vastly increase Alchemist's usefulness and get it past the point of producing a lot of imperfect materials.
@@deastman2 I just did. Here are the answers; I will add them to the description in case they are useful for someone else too:
- th-cam.com/video/XDAUw_dSmt8/w-d-xo.html
We're working on it to have that done automatically
- th-cam.com/video/XDAUw_dSmt8/w-d-xo.html
You can use the geometry equalizer parameter instead of using the equalizer later but I understand that you want the equalizer after all the clone patches too.
- th-cam.com/video/XDAUw_dSmt8/w-d-xo.html
Tiling layer would be better in this case to align patterns/bricks
- th-cam.com/video/XDAUw_dSmt8/w-d-xo.html
The delighting was already done in the Image to Material (AI-powered) layer. It's like you're doing it twice here. And on the Image to Material layer, you have a delighting intensity parameter. You can also use the Adjustment layer to play with the saturation and luminosity of the base color.
- Ctrl + Z doesn't work on painting action yet. That is a known issue / not developed feature.
- as well as layers OPACITY to blend it with the previous state: This is asked often indeed!
Cheers :)
@Grzegorz Baran The video is interesting, however the comparison could have been more "fair". Why? The photogrammetry one received a larger texture as input [X photos of X size], while the Alchemist one just received one ... To be fair, you could have stitched all the photos in Photoshop and fed the result into Alchemist. [However, maybe there is a limitation on the texture size that can be processed]
Thanks :). Regarding the image stitching, it isn't as easy as it seems to be. I gave it a try a few times already using PTGui, Photoshop, etc., and it didn't work very well due to the consistency loss. The results I got were always patchy to some degree and it was very hard to connect edge details.. just imagine matching grass blades between images where each one has some barrel distortion :).
I would rather increase the resolution by using a larger camera.. like 50 Mpx? I mentioned the resolution issue because someone might consider using a mobile or a drone with a small sensor, and I didn't want anyone to be surprised by the resolution limit. Bear in mind that I cropped the image. The result I got was 4K though, as I upscaled it with Alchemist. I would say it can be solved by a proper AI-based 'rescale' tool. As far as I know, ArtEngine offers one, but I haven't tested it yet. As I understand it, it upscales textures and brings detail back using AI, not by sharpening. I am sure there are more tools like this one.. it might even be worth researching them and making a separate video when done.. I don't know if that would be too far from the core of this channel and whether anyone would be interested :)
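For anyone who wants to experiment with stitching programmatically before feeding the result to Alchemist, here is a minimal sketch using OpenCV's high-level Stitcher. It suffers from the same issues I described (lens distortion, blending seams, consistency loss), and the file names are just placeholders.

import cv2

# Load the overlapping source photos (placeholder file names)
paths = ["wall_01.jpg", "wall_02.jpg", "wall_03.jpg"]
images = [cv2.imread(p) for p in paths]

# The high-level stitcher handles feature matching, warping and blending internally
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)

if status == 0:  # 0 means success
    cv2.imwrite("wall_stitched.jpg", panorama)
else:
    # Typical failures: not enough overlap/features or inconsistent exposure between shots
    print("Stitching failed with status", status)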
@@GrzegorzBaranArt If you do find software that stitches properly, better than Photoshop, I'm interested. I already did it for some stuff and it worked, but you're correct, it could not work for all cases. For bricks I think it could work, but maybe I'm wrong. For grass, no doubt it will not work. I used Photoshop's stitching for a dam texture.
@@GrzegorzBaranArt Hi! Very nice comparison! I have tried Topaz AI to upscale images... mostly ones found via Google Images... and then processed them in Alchemist. It doesn't always work, but it gives a decent result. Maybe with a more "planned capture" and a good camera the result can be good.
Nice overview of the Alchemist process, but photogrammetry is still king. The only annoying thing about photogrammetry is taking 100+ pictures, at least for me. I'm not really concerned about computation time since I can just drop it onto my other PC.
Thanks, I totally agree. But AI-based reconstruction brings single-image reconstruction to a new level and can be useful in many ways. I believe it is good to know where it is and what it offers. I am truly amazed by the result of the AI-powered reconstruction, especially when I compared it to the B2M-based solution, and I won't be surprised if this soon becomes a very common way to create materials. Just imagine what might happen if that AI takes more complex feedback from the user or even.. learns from it :). Or what if AI could be used for photogrammetry.. and instead of 100 images we could take just 3? So yeah, it's still the king, but I can already imagine the image-to-PBR technique as a quite reliable option for a few things :D
@@GrzegorzBaranArt Well, I am not saying that AI-powered texture creation will always be inferior, it's just not there YET imho. Maybe in the near future we will have a "make art" button.
@@snowinchina5531 Sure, I got it :). I just realised that photogrammetry reconstruction itself might also change, and someone might consider using AI in their software. And regarding AI-powered surface reconstruction, I also agree that it's not there yet when compared to photogrammetry, but I meant that it's really close and at this stage it can already be a very useful tool for material reconstruction at real production quality. The Alchemist devs definitely made a freaking huge step forward with this tool and pushed single-image-based PBR reconstruction to the next level :D. At least this is what I have found, because.. who knows.. maybe it was just an accident ;). This is why I guess I need to process a few more, totally different materials with the AI-powered reconstruction, similar to what I did in this video:
th-cam.com/video/mYUMQj0cXgg/w-d-xo.html
Cheers! :)
3:10 "...quite crappy..."
I just felt sorry for the developers of good, affordable, and time saving tools...
I said that because they all provide pretty similar results. While they were good enough in the early PBR era, nowadays they are way behind what ArtEngine's AI or Substance Sampler's AI-based algorithms can offer. Please don't get me wrong, I used to use them and I really loved what CrazyBump could offer, but that was years ago. The versions available at the time I made this video couldn't compete with modern applications in either quality or functionality. ArtEngine or Sampler can generate a really decent texture set from a single image - still not perfect, but good - while the other listed apps can't generate comparable texture sets. So it's a no-brainer for me to drop them and pick something that outputs way more reliable data.
No worries, I got it... a scan is a scan, it's not one picture that would beat that. 😄
How good is Alchemist now in (almost) 2022?
Since it is in constant development, it has received some improvements. It is still the best AI-based 'single image to PBR' reconstruction system, but it hasn't changed much recently. It also got a few very handy tools to support the photometric stereo approach, but that reconstruction algorithm is very poor compared to the competition. The seam removal tools are still below my expectations and I don't use them on a daily basis. Unity ArtEngine does the 'seam removal' job much better in comparison.. but of course it also has its pros and cons. Adobe has added a few HDRI tools we can use to create 360 HDRI panoramas, which some might find useful. So, it's better than it was, but it's not a mind-blowing type of change. I would say it is worth owning a copy, especially now that we can purchase a perpetual (subscription-independent) license on Steam, so we can keep it forever with 1 year of updates, but personally I don't use it much in my current workflows. I plan to record a video about some of the new features and compare how they do the job against other similar apps next year, so stay tuned.
@@GrzegorzBaranArt Thanks for the heads up, Grzegorz. I guess it is a tool worth looking into. Considering the advancements in AI since 2020, material creation really shouldn't be as time- or labour-intensive as it still is. I look forward to your comparison video. Cheers!
cyborg
This is a very helpful video. Thanks!
You are welcome :)