What is the purpose of this, though? It can be used for distant objects, *maybe*, but there are easier ways to make those. For general-purpose assets you really can't pass the quality standard of modern games with this tech, not to mention this is just the base color. And you throw away aesthetic consistency between models too: AI either makes nearly identical images if you ask, or it just can't understand what you are trying to do at all. Plus, if you want symbolism in your game, there are extra steps to fix that, which I think is way more cumbersome and boring than actually making the asset. I didn't even mention cinema, since these kinds of assets are pretty low quality even for games. (Just to add, it is still ethically questionable to use these in a profit-driven project.) Oh, one more thing: games usually need some procedurality in the textures of some of their assets, and this can't produce that flexibility either.
The only thing that's beneficial is that depth map thing, I guess. That is kinda cool.
Yeah, of course nobody here is saying these models can be used in AAA games or cinema as-is, and I'm not some brainless "AI bro" claiming they will be; I've worked in gamedev for a while myself. But there are fields for 3D graphics other than games and cinema, for example abstract psychedelic video art, music videos, heavily stylized indie games, maybe some surreal party poster, etc. You know what I'm saying.
As for cinema and gamedev, I think it can be used in some cases as kitbash parts for concept art, and with proper knowledge of how to build prompts, plus custom-made LoRAs and such, you can get really consistent results from AI generations.
This is a proof of concept; it's brand new. The process can 100% be sped up and streamlined, with better results as AI art improves.
The description of this very video says it's a proof of concept, and people were asking for details. This type of video is for professionals who want to explore different techniques, build off each other's work, stay informed about new techniques, and because it's just interesting.
For instance, I make digital art, and one thing I have been experimenting with is building a 3D environment and characters as close as possible in style to some AI art I've already generated, without taking very much time or effort, then rendering it as a video and overlaying AI on top of it for a more cohesive look.
This process could be very useful for that, for multiple reasons. First, if I'm using AI art to make the 3D models, they are going to mesh very well when I overlay the second set of AI art over the 3D render. Second, because the AI art is going to be overlaid on the 3D model, I don't really care if the models don't look perfect; it's kind of irrelevant.
Lastly, look at the game BattleBit, which has gone viral recently. Or look at Amogus, or Minecraft. Not every game is aiming for amazing photorealism.
I think it's valid to criticize the quality of the output, but I think you miss the point if you think this is trying to replace the current traditional methods.
It's just an experimental process playing around with what's currently available.
It's called a creative process for a reason. A true artist enjoys figuring out new and unique ways of combining tools and processes, and this video is just an exercise in that.
If you can't see the purpose of it, then just remove "creative" from anything you do.
Concept art - really great for populating giant scenes with lots of gack and set dressing
Thank you for asking this question as I wondered myself what this could be used for
I had this theory at the start of this year, when I noticed you could generate good displacement maps using ControlNets. Good to see someone putting it into practice.
It's an old idea. I think many of us discover it sooner or later.
So, I've been diving deep into storytelling and creative videos lately. VideoGPT showed up, and it's like having this magical assistant that instantly enhances the quality of my content.
usually I don't find such good music with these tutorials, cheers mate
My jaw literally dropped. This is incredible! Thank you!
This can be a great process to use for a rough starter mesh that you can then refine
I wrote a long-ass comment about this kind of stuff here, but for this too, again: you can produce these maps from regular renders by artists, and either way it's ethically questionable if you don't change it and add your own twist.
@@pygmalion8952 Sure, you can do this from other artists' work, but it's restrictive because you can only use what already exists. If you're generating the images with AI you have much more freedom: you can sketch your idea or use a whole bunch of other tools that are available to control the AI generation, then make the depth map and do this bit.
I just want to point out to the people dissing this: for someone like me who had zero clue about any of this, being enticed into trying something I can get actual creative results from is so exciting. I read a few of the technical comments and they're way over my head, which really shows how specialized a viewpoint this is compared to what most people know in terms of general knowledge. OK, weird rant over.
How can we generate 3D models from multiple depth maps of the same character from different angles? I have a ComfyUI workflow that produces identical characters from multiple angles, so I should be able to combine these to avoid things like mirroring and sculpting, right?
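One possible starting point for fusing several views (not from the video): back-project each depth map into a point cloud under an assumed pinhole camera, transform each cloud by its camera pose, merge them, and then mesh the result (Poisson reconstruction in Open3D or MeshLab, for example). Note that SD/ZoeDepth-style depth maps are relative rather than metric, so the clouds will need scaling and alignment by hand. A minimal sketch of the back-projection step, with a guessed field of view and placeholder file names:

```python
import numpy as np
from PIL import Image

def depth_to_points(depth_path, fov_deg=60.0):
    """Back-project one grayscale depth map into camera-space points."""
    depth = np.asarray(Image.open(depth_path).convert("L"), dtype=np.float32) / 255.0
    h, w = depth.shape
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2.0)   # assumed focal length in pixels
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    x = (xs - w / 2.0) * depth / f
    y = (ys - h / 2.0) * depth / f
    # camera looks down -Z; flip Y so image "up" becomes +Y in 3D
    return np.stack([x, -y, -depth], axis=-1).reshape(-1, 3)

# hypothetical file names, one depth map per rendered angle
front = depth_to_points("character_front_depth.png")
side = depth_to_points("character_side_depth.png")
print(front.shape, side.shape)   # each is (H*W, 3)
```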
You can't just plug the color output of a normal map texture into the Normal slot of the Principled BSDF; you need to put a Normal Map node in between.
you can actually
@@albertobalsalm7080 Yes, but that will lead to horrible results. You can also plug it straight into Roughness if you want.
Thanks, I missed that part while recording.
I don't use Blender, but my guess for why this is, is that the color needs to be interpreted as linear data, versus sRGB or whatever color profile is usually slapped on top of the image when rendering for your screen.
Also, the image needs to be set to Non-Color.
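For anyone wiring this up by script rather than by hand, here is a minimal Blender Python sketch of the setup described above, assuming the object already has a node-based material with the default Principled BSDF and using a placeholder image path:

```python
import bpy

mat = bpy.context.object.active_material
nodes, links = mat.node_tree.nodes, mat.node_tree.links

img_node = nodes.new("ShaderNodeTexImage")
img_node.image = bpy.data.images.load("//normal_map.png")   # placeholder path
img_node.image.colorspace_settings.name = "Non-Color"       # treat pixels as data, not sRGB

normal_node = nodes.new("ShaderNodeNormalMap")
normal_node.inputs["Strength"].default_value = 1.0          # intensity control lives here

bsdf = nodes["Principled BSDF"]
links.new(img_node.outputs["Color"], normal_node.inputs["Color"])
links.new(normal_node.outputs["Normal"], bsdf.inputs["Normal"])
```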
ok, I'm speechless... just wow!
Thank you for this amazing tutorial!
For BG objects like murals on walls and ornaments this can give a nice 2.5D feel. Maybe it can also speed up design, finding form from a first idea.
I think using Mirror is a nice idea, but it may not be applicable to all objects. How about using SD and a LoRA to create 2x2 or 3x3 grids of the same object from multiple different POVs, then connecting them together instead of using a mirror?
Nice one! Got to try this! Thanks for sharing.
Very cool, thanks for sharing the workflow!
@DIGITAL GUTS, I really like this workflow. I also wanted to know, can I use this same strategy for humanoid AI characters? You're the only person I have seen use this workflow. Thanks in advance :) Also subbed.
Yeah, since this video I've tried a couple of things and it's kinda OK for characters in certain cases, especially for weird aliens )
How did you get the animated face? That seems completely different from what you showed us in this demo.
Looks like the MetaHuman facial animator, possibly.
Yes it is, and it's not the point of this video. There is tons of content about MetaHuman on YouTube.
@@digital-guts I've heard of metahuman but never tried it. I'll look into it. Thank you.
u are a lying clickbait @@digital-guts
I wouldn't mind learning Blender and learning how to do this. Can you do a tutorial on how to run ZoeDepth locally?
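While waiting for a tutorial, here is a minimal sketch of generating a depth map locally. It is not ZoeDepth itself (that lives at github.com/isl-org/ZoeDepth); it uses Hugging Face's generic depth-estimation pipeline with an example checkpoint, which is already enough to feed Blender's Displace modifier:

```python
from transformers import pipeline
from PIL import Image

# "Intel/dpt-large" is just an example depth model; swap in whichever you prefer
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

result = depth_estimator(Image.open("generated_image.png"))   # placeholder file name
result["depth"].save("depth_map.png")   # grayscale image, usable as a displacement map
```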
Honestly I'm quite impressed, a really cool way to make a lot of kitbash parts, which is really necessary nowadays. I guess now I have to learn how to make AI images, hehe. Cheers from Mexico!
Please ignore the salty comments. This is a game changer, especially for mobile platforms. Jaw-dropping result and a pragmatic pipeline.
When I do this with a depth map in 16:9 format, the Displace modifier applies the map as a small 1:1 repeating pattern. Why?
Note: I made my plane 16:9 ratio and applied scale before adding the Displace modifier.
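A guess at what is happening: the Displace modifier defaults to Local texture coordinates, so a non-square plane samples outside the 0-1 range and the image tiles. Switching the modifier to UV coordinates maps the depth map exactly once. A rough Blender Python sketch (file path, subdivision level and strength are placeholders):

```python
import bpy

bpy.ops.mesh.primitive_plane_add()
plane = bpy.context.object
plane.scale = (1.6, 0.9, 1.0)   # 16:9 aspect
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)

subsurf = plane.modifiers.new("Subdiv", type='SUBSURF')   # enough geometry to displace
subsurf.subdivision_type = 'SIMPLE'
subsurf.levels = subsurf.render_levels = 6

tex = bpy.data.textures.new("DepthTex", type='IMAGE')
tex.image = bpy.data.images.load("//depth_map.png")       # placeholder path
tex.image.colorspace_settings.name = "Non-Color"

disp = plane.modifiers.new("Displace", type='DISPLACE')
disp.texture = tex
disp.texture_coords = 'UV'   # key line: use the plane's UVs, no tiling
disp.strength = 0.5
```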
What about using the CharTurner LoRA to create the front, back, left and right sides, so that merging all four sides gives a better and smoother object instead of correcting sides manually?
It's just an idea, but I haven't seen anyone try it, so if you can, could you please give it a try and share a tutorial? 3:36
I'll give it a try and take a look. I've done some tests with AI characters and it looks OK-ish and weird; maybe I'll share the results later.
Sorry if this is a newbie question... but is this DreamStudio some component of SDXL?
That actually is a pretty decent little quick workflow. Pop that out to something like zbrush and go to town refining.
Is it really good enough on its own? For previz and posing with a quick rig, absolutely. That's pretty fast tbh and simple.
I love this! Plus (because of the horror-related prompts that I've been using), I'll probably give myself nightmares 😅
Thank you for sharing ❤
This is good enough for some indie game companies honestly. Might really help some folks out there get some assets done faster.
Fantastic content and video mate,very useful ,subbed ! Keep it up !
How would one do this with a front-facing character? Or does this technique demand the profile view of them?
Woah, i think i will try to see if i can remake this tomorrow, would be a nice way to spend some time, thanks !
Very interesting nonetheless. Thanks for your time, man; this technique sure has its uses.
Wow... Just wow.
Nice trick.
Now you are literally working for the machine, for free! :)
This is awesome! BUT you lost me at mirroring the image and then bisecting to get rid of the extra geometry. I'm still a noob at Blender and don't know how you did that. Was it a shortcut key you used? At 3:35 in the video.
Oh, that's a sped-up part and there are quite a few hotkeys in it, but it's very basic usage of Sculpt Mode in Blender. There are many videos on YouTube where this stuff is explained; try this one: th-cam.com/video/Cmi0KoFtc-4/w-d-xo.htmlsi=mKSHWz8SCE8evM6M
@@digital-guts Thank you! And I have been using this; most images work, but some images invert when I mirror them. Have you ever had this problem?
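Not necessarily how it is done in the video (the author says it is basic Sculpt Mode work), but if you would rather script the mirror-and-bisect step than do it by hand, a Mirror modifier can give the same result:

```python
import bpy

obj = bpy.context.object   # the displaced mesh
mirror = obj.modifiers.new("Mirror", type='MIRROR')
mirror.use_axis[0] = True          # mirror across X
mirror.use_bisect_axis[0] = True   # cut away the geometry on the far side first
mirror.use_clip = True             # keep the seam vertices welded to the mirror plane
```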
Absolutely amazing! Thank you for the tutorial! :D
Amazing, awesome. Thanks for sharing.
Great video. Do you have a Cinema 4d tutorial on this?
Yeah, and what software or website did you use in the first minutes?
This is the Automatic1111 web UI for Stable Diffusion.
You "accommodate" yourself by sculpting something random from a not-so-accurate mesh; the mirrored thing doesn't look anything like the original image... Do you have a workflow to get a real mesh from something representative (like a character or a landscape)?
The whole process is sub-any-standard. Kitbashing some weird crap together, that's all this will work for. Maybe in two or three years we will see something more generally usable. Good luck getting any meaningful model data from AI models these days; it's hard enough to prompt them into what you actually want, let alone transfer that into a working 3D environment.
You are crazy 😆😆😆😆🥰🤩😍❤❤❤❤❤❤, I love you bro, keep it up!
The more people experiment with new technology, the more cool ideas we come up with and the better uses we figure out for it. This particular workflow may not be usable for anything meaningful, but maybe it inspires someone to try something different, and that person inspires someone else, and so on, until really cool uses come out of this.
You get the point of this video. I'm just messing around with this tech and trying things. Actually now making a full game using only this and similar approaches to meshes. It won't be anything of industry-standard quality, of course, just a proof-of-concept experiment. Having a lot of fun.
@@digital-guts I've been experimenting with ways to create a character in Stable Diffusion and turn it into a 3D model for months. The first few attempts were awful, but without those I wouldn't have the current workflow (which is getting really close). I also know the technology is getting better every week, so all my experimenting should help me figure out how to do things once it gets to that point.
Such a jimmy-rigged way to do things.
Do any of these AI generators offer an option to export or download the 3D mesh file with maps, lighting, etc. (.3ds, .max, .dxf, .fbx, .obj, .stl)? It seems the AI generators are just composing highly elaborate 3D scenes and rendering flat image results anyway?
Same for vector-based files: can they export native vector formats such as .svg, .ai, .eps, .cdr, or vector .pdf?
AI is a career killer.
So, is it a jimmy-rigged way or a career killer? You decide. Neither, I think; of course it depends on your goals. Meshes like this can only work as quick kitbash parts for concepts, not as a final polished product anyway. Did kitbashing kill 3D careers, or photobashing kill matte painting in concept art? I don't think so.
No, AI like Stable Diffusion doesn't work in a 3D space or with vectors; it works on random pixels (noise) and applies denoising steps learned from a huge set of images with descriptions. Your prompt text guides the denoising steps so they can "hallucinate" something out of the noise... The fact that the final image looks like a 3D render or vectors or a photograph or a painting (etc.) is just pure coincidence! :)
@@zephilde Any video links on that exact process?
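To make the point above concrete: a Stable Diffusion pipeline only ever denoises a latent into a flat image, so there is no mesh or vector scene behind it to export. A minimal sketch with the diffusers library (the checkpoint name is the usual SD 1.5 example; use whatever you have locally):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")   # drop .to("cuda") and float16 to run on CPU

image = pipe(
    "ancient ornate stone mask, front view, studio light",
    num_inference_steps=30,   # the denoising steps mentioned above
).images[0]
image.save("mask.png")   # a flat PNG is all you get; any "3D look" is learned, not modelled
```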
Very interesting for concept generation, thanks for sharing! I'm assuming you can also upscale the various images in SD to keep more close-up detail...? Maybe with appropriate LoRAs...
Amazing, bro! But... how did you get the 2nd (BW) image? My SD only generates one image.
This is the ControlNet depth model, you can get it here: github.com/Mikubill/sd-webui-controlnet, or use ZoeDepth online from the link in the description.
@@digital-guts Thanks for the answer! Yes, I have it installed, but it gives only one result and it's different from what is needed.
@@digital-guts ZoeDepth actually works, but I'm trying to do this in SD.
My question is: can I get a diffuse map and turn this into a printable model? I'd love to at least use it to make a base model and modify from there, for things like masks and such.
very cool concept
Would you say this works better with black and white images?
I don't think so. Today I'm recording a new video with this technique; it could be useful.
@@digital-guts Awesome! Because I've gotten it to work with about 60% of my images, but some get destroyed when I bisect the Z axis while mirroring. But all the info you've given is useful; this technique is mind-blowing and a major day saver. One last question: you kind of sped through the part where you clean up the mesh after mirroring. Me being a noob to 3D software, I could really use some clarification on how you cleaned it up. You made it look so simple.
nice and effective video
Can you make a character model with this?
no.
Great tutorial thank you
I'm using InvokeAI. How do I get ControlNet v1.1.224 to run with it, or where do I find ControlNet v1.1.224?
This is really good. I like this workflow, thanks for sharing.
Yo, how did you get Stable Diffusion and ControlNet running locally? Is it local?
th-cam.com/video/d1lPvI0T_go/w-d-xo.html check this link
@@digital-guts thanks
where is your hoodie from?
I don't remember; I think something like H&M or Bershka, nothing special.
This is neat. Cool technique.
Hi, this is very cool! Please tell me, was the facial animation done in Unreal with Live Link, or is it all Blender?
That's MetaHuman Animator inside Unreal, yes, but the recording itself is still done through Live Link; it's just interpreted with much better quality.
Wait, can you now just plug the normal map into the "Normal" socket without the extra Normal Map node? I have to check that.
OK, you can, but it looks quite bad compared to a proper connection through a Normal Map node. The intensity seems way lower without it, and you can't control it without the Normal Map node.
This works so much better than ZoeDepth's image-to-3D.
thanks for the video, very insightful
Amazing! Thanks for sharing
Nice, but I will wait for 360 3D AI models :X
this has many, possibilities...
Hello, thanks for the video. Please, please make a tutorial on tracking 3D armor from Stable Diffusion onto a video of a person. It's urgent. Sorry for the bad English, I am French.
th-cam.com/video/bKO_nVGKgLA/w-d-xo.htmlsi=j7BOrRMU_8AeXrUe
@@digital-guts Thanks, my friend. Sorry, I mean onto a 2D video, not a 3D man. Sorry.
Something like the Wonder Studio software.
th-cam.com/video/frVLAJjkHf0/w-d-xo.htmlsi=2uVjuK6HK8WwhQWX
Simple and cool
Thanks !!!! I will try it !!!
Please make more on this topic, with more examples. Thank you.
It's not bad at all (it's impressive, actually) and you gave me very good ideas. Although I suppose this wouldn't be very applicable to non-symmetrical images, right?
What about the eye animation and the smile, though? That's the most important part, tbh.
Most definitely the face mesh was a separate, ready-made model. The assets were made with AI.
Excellent
very interesting
Amazing , thank u
I did well until the part where I had to sculpt the stuff out; I couldn't get to a solution as easily as you did.
thanks for sharing! it is inspiring
Thanks for sharing!
thanks man
what about meshes to ai images?
th-cam.com/video/GSW3m79tsqU/w-d-xo.html
For normal maps, the color space on the image node should be Non-Color!
"Linear" is pretty much the same, although he missed the Normal Map node in between.
It still needs time, but it's a cool start.
I was waiting to see animation like your intro video😢
AI texturing skills constantly amaze me.
My pc would explode with those mesh 😭
Fooking genius you are..
This shows that AI images aren't just a combination of random images, but a depiction of an actual 3D figure, and that's amazing.
AI images are just a combination of images. You can extract depth from any image and turn it into a 3D figure.
I haven't seen a single person use AI to texture a model using its individual UV maps, and I can't understand why. AI can dramatically speed up the texturing process. I also haven't seen anybody take an AI-generated image and turn it into a 3D model, and I can't understand why...
The Mirror tool becomes a life changer LOL
What? Amazing!
I think the opposite direction (mesh to AI) is more interesting as it can then be used for AI training
nice method
Great for kit bashing!
I've seen this technique before, but at this stage it looks very limited. The mesh without any textures on it doesn't look representative of the object. I feel like adding the textures fools the eye into thinking it is more detailed than the mesh actually is.
And that is the main point of this approach: to trick the eye.
@@sashamartinsen I do 3D printing, so this technique wouldn't work for my application.
God damn it, you look like that guy who helped Walter cook drugs in Breaking Bad. By the way, I like your tutorial, keep it up!
the R.U.R. is coming
wow!
It's the end of the world for VFX.
Question: how is this geometry usable for animation, for a game or anything? It just seems like a cheap way to get something moving on screen in a hacked-together way, meant to be viewed from one angle, versus something that's actually usable for production or a game. And don't you feel like you're not learning anything about art creation? You're just kind of hacking everything together and not even creating the original art yourself. Doesn't that feel weird?
What is art? I could easily push this topic to the point where making commercial meshes for AAA games and cinema is not art at all, and weird naive or outsider art is art in its purest form, but not this time. Basically, if you don't find this useful then it's not for you, and that's perfectly fine. (Also, you can find the answer to your question below the pinned comment.)
Changing to double sided vertices, is the way to remove and double the texture map data.😂
Goodness me… how and why are slow eye movements in the female eye ball-brain so deeply, directly connected to the male brain.???? 🧠 😂❤
sheeesh
There's something a bit misleading about the video. At the start, you show a piece of work you've made that has an animated realistic character in it, but the only thing you actually show how to make is the extra geometry on her head.
I do think it's a cool shortcut for extra props and background objects, but the start of the video makes it seem like you're using AI generated images to make animation-ready models, which you clearly aren't.
I do wonder if it's possible to have an AI generate orthographic views of a character to make a base mesh which will be manually re-meshed for animation later on. Then, the AI generated images would be used as a base for the model's textures.
This video was not meant to be a step-by-step tutorial on how to replicate the example from the beginning, only the most interesting part that I was asked about. That realistic character is an Unreal Engine MetaHuman, and I think in 2023 everybody in this field knows what that is; there are thousands of tutorials on YouTube about it. Making this, I'm just having fun playing with tech tools to try new ideas and sharing them. I'm not planning to do a full, step-by-step, beginner-friendly explanation of every button.
Answering your second question: yes, it's possible (and with not-so-bad topology). I've seen such tools from my colleagues, but they're not public yet, as far as I know.
@@digital-guts "Yes, it's possible (and with not-so-bad topology). I've seen such tools from my colleagues, but they're not public yet, as far as I know."
Can you make a video explaining how to fix the verts to be game-ready? I'm developing for UE 4.27/5, making character armor parts. I'm not exactly sure how far I can push my vert count in UE since I don't have 10 characters in a level at once yet.
But if you show your method (about how far I need to go to reduce verts for games), that would be helpful. Thank you.
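Not the author's method, but one common way to knock the vert count down for a game mesh is a Decimate (Collapse) modifier; a quick Blender Python sketch with an arbitrary ratio:

```python
import bpy

obj = bpy.context.object
dec = obj.modifiers.new("GameLOD", type='DECIMATE')
dec.decimate_type = 'COLLAPSE'
dec.ratio = 0.2   # keep roughly 20% of the faces; tune per asset
bpy.ops.object.modifier_apply(modifier=dec.name)

print(len(obj.data.vertices), "verts after decimation")
```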
I could hug you through the computer!!!!
wow.
Goodbye, my future career in 3D modeling :')
Don't give up! AI is just another powerful tool you'll have to learn, like Photoshop back in the day.