That's crazy. Love seeing these practical workflows start to manifest
🥰🥰thank you!
I was about to build this whole VR Blender renderer - to Comfy, back to Blender, use camera projection... then I found this video. Thank you, just saved me days of noodling
You deserve an award just for how quick it is.
No, give the AI the reward
An award for what exactly? I don't even think studios will get awards if they rely too much on AI for VFX. If anyone can do it, it's not real art.
@@oysterenjoyer for making a video that is short and straight to the point. The keyword in your message is "anyone". Not everyone can use AI to produce pro-level results. In fact, almost nobody can.
@@oysterenjoyer were you able to do it before watching this tutorial? They are complimenting how streamlined the video and workflow were
I came for the Blender part, which was helpful, but just a note about GPT: there's no need to click 'like' or 'dislike' if you don't want to - it won't affect your session personally. Clicking simply provides feedback to OpenAI for future training and doesn't influence the outcome of the next response. (A lot of people have assumed/placeboed themselves into thinking otherwise.)
Very good to know!
@@Scarlett_vfx I actually think it affects your personal session, Bailey - you can observe it when generating images
A legend. Explained it in under 4 min. That's crazy.
Thanks, Amelia! 🙌 This video is super clear and helpful! Really appreciate the tips!
Fast explanation and to the point. Great job!! 😊
thanks so much!
Your channel is a gem! Really great tutorial, and you have a nice voice to listen to.
haha thank you friend!
I've never seen Marigold before - absolutely incredible, that by itself is a game changer
Impressive! Quick, concise explanation. Looking forward to additional videos.
EXCELLENT WORKFLOW. subscribed.
Wooow, amazing tut! Changed my entire modeling process
Wow. powerful workflow, explained quickly and clearly 👍
glad you like it! 🥰
I've been experimenting on something similar, but from non-frontal images. Basically turning any generated image or photograph into a bas relief with a decent parallax effect. Didn't think about generating front facing modular buildings for kit bashing. Thanks for sharing this!
I was doing the same thing - just searching Google for images - if you want a better way to search for front-facing images, check out my other video: th-cam.com/video/q9LbLCUAwCQ/w-d-xo.htmlsi=i-IKvp6SRmNMYINU - you can level up your search game with AI too!
holy shit, this looks great! At first I thought you were gonna cut out everything (like I currently do)
this is an extremely good technique. As an artist you need to adapt to new tools to survive; that is art too
Wow! Super efficient and well explained. Thank you!
Thank you!
Work smart not hard! Awesome tut!
Love your work ❤ This is so helpful and I have learned soo much. Thank you soo much. Keep it coming!
thank you!
Thank you for sharing! Very smart approach and potentially very helpful
nice, very interesting. Make more tutorials like this one!
You are amazing! Thanks for this simple, to-the-point tutorial
Thank you!🤘🙌
😮 I am speechless. Great tutorial thanks.
🥰🥰🥰
Wow! Thank you so much for this tutorial 😊 Great job!!!!!! 👍👍👍👍
🥰thank you!
great tutorial, thanks!
this is amazing , so , good on you.
Now that's a tutorial. I used AI for my background model. The difference is, you made it better 😂 New subscriber here
One simple word: Brutal. 😁
Great, but I think it only works when you use it for renders. There are still some performance problems if you use it for games. I agree with using AI to generate textures, but it would be better if I modeled it myself - you'd have more control over your model.
Fantastic, thank you so much!
Thank you for sharing!!!😁
awesome tutorial. thanks Amelia.
🥰🙌
2:46 Can you give me the blender file that appears in this section?🙏🙏🙏
wow this is brilliant thanks!
smart...
this is amazing!
Well done teacher
WOW This is amazing! I can't believe how good you make it look! You are so talented, I can't wait to watch more of your channel!! How did you learn this way of working, by the way - are you self-taught?
Just a graduate of TH-cam university 😜
awesome stuff!
Great, quick and easy! Thank you
thats what i like to hear! :)
Cool video!
would be interesting to try this out
This is cool, i definitely wanna try some type of simulation
Thanks, nice workflow - this works in Unity too with a depth map shader.
Thanks for sharing this.
Very nice!! Thank you!
Is there a more detailed video?
It would be great if you could also extract the albedo so that you have real-time light interaction within the CGI Blender environment.
This might be a quicker way of doing things, but the amount of geometry needed for displacing the plane is way too much compared to modelling the actual geometry
true! i have another video on how to build low poly models from ai images here th-cam.com/video/9nEuOhluxZU/w-d-xo.htmlsi=Pbfd20jpDdoQWob8
Well, the ChatGPT texture generation part seems to not work anymore. It doesn't generate a square or front-facing texture for me that can be used this way - it's always curved or from some angle. I suppose OpenAI has tuned this on purpose so you can't use it this way?
Kitbashers United !
woah that's crazyyy
Looks interesting - I would assume it would save a lot of memory too for lower-end builds.
This is great.
well that's great, but one small issue - you have to subdivide it so many times that it ends up unoptimized
Nah
for sure - you can try adaptive subdivide for larger scenes
@@Scarlett_vfx ok thnx
this is insane
Could be good for quick visualization!!
amazing
What about poly count values? Retopology?
It's perfect for making trim sheets, or as a base mesh to model over with retopology tools and make your own changes.
does this work with faces and complex sculptures, too?
Awesome indeed
What site are you using for your reference images?
Ignore me - i rewatched and you mentioned another video where you cover that :)
Nice video!!! What shortcut did you use to create the node setup with Texture Coordinate and Mapping in combination with the image input?
It's the Node Wrangler add-on included with Blender! Turn it on and then you can just press Ctrl+T and it will create a texture node with UV coordinates
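(Side note for anyone who'd rather script it than use the shortcut: here's a rough bpy sketch of the kind of chain that texture setup wires together - Texture Coordinate into Mapping into Image Texture feeding the Base Color. The image path and node names are placeholders, not files from the video.)

```python
# Rough sketch of the Texture Coordinate -> Mapping -> Image Texture chain
# that the Node Wrangler shortcut builds. Assumes the active object already
# has a node-based material; "facade.png" is a placeholder path.
import bpy

mat = bpy.context.active_object.active_material
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

coord = nodes.new("ShaderNodeTexCoord")
mapping = nodes.new("ShaderNodeMapping")
img_tex = nodes.new("ShaderNodeTexImage")
img_tex.image = bpy.data.images.load("//facade.png")  # placeholder image

# UV -> Mapping -> Image Texture, then into the Principled BSDF's Base Color
links.new(coord.outputs["UV"], mapping.inputs["Vector"])
links.new(mapping.outputs["Vector"], img_tex.inputs["Vector"])

bsdf = nodes.get("Principled BSDF")
if bsdf is not None:
    links.new(img_tex.outputs["Color"], bsdf.inputs["Base Color"])
```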
Wo... how'd you do the kitbashing? I'd love to see a tutorial on that. Looked like you had to cut the figures out
Could you do a longer, more in-depth one for beginners? Or send me a link to one, please
For better results use geometry nodes. :D
Nice sharing
🙌🙌🙌
Nice one. Are there any tools (like that depth-map tool) to remove shadows from the original AI image so the lighting isn't baked into the image?
Can't test this out yet, I'm trying to make clothing, but it looks great. Tell me:
Does this work with game engines? Specifically Unreal? This would be great for making specific landscapes and areas in case I don't really feel like going the route of manually creating it in Unreal.
If so, would it just be better to do it in Unreal 5?
Can I make clothing with this?
What about other accessories, even weapons?
Genius thx!
How do you make that base color node setup at 1:31??
Make sure you have "Node Wrangler" enabled in Preferences, then press Ctrl+T and it will auto-populate the texture coordinate setup
Do you have to pay for that version of ChatGPT?
I am using the paid version here, but you can now get very good results online here: huggingface.co/spaces/black-forest-labs/FLUX.1-dev or huggingface.co/spaces/black-forest-labs/FLUX.1-schnell - you can also search for other free image generators like Bing Image Creator!
this is freakin awesome. Can you use these methods for characters as well?
I tried with a couple of characters and the sides still give me trouble - the best free AI tool I've found is this demo: huggingface.co/spaces/Wuvin/Unique3D - but you'd have to remap the textures manually if you want more detail/roughness
You can do this with any picture, no need for AI
Is there a local install for AI that will create the depth map? Similar to how i can install Stable Diffusion on my PC and avoid uploading to a website.
Also wondering if the multires modifier would be less taxing on a large scene vs the regular subdivisions.
Great video! I subscribed. Luma AI is also cool.
Yes, Marigold Depth Estimation can be used locally in ComfyUI. Another benefit of that workflow is that it can output a 32-bit .exr file for the depth map, which produces much higher-quality displacement in the 3D program. A regular 8-bit image only has 256 shades of grey, but a 32-bit image has a huge range of values.
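(If you'd rather hook that depth map up by script than by hand, a minimal bpy sketch could look like the one below - the file path, subdivision level and strength are placeholders rather than values from the video, and the image is set to Non-Color so the raw depth values aren't gamma-shifted.)

```python
# Minimal sketch: displace a subdivided plane with a 32-bit EXR depth map.
# Paths and numbers are placeholders - tweak for your own scene.
import bpy

obj = bpy.context.active_object  # the plane that holds the AI image

# Simple subdivisions give the displacement geometry to push around;
# raise the levels as far as your machine tolerates.
subsurf = obj.modifiers.new("Subdivision", type='SUBSURF')
subsurf.subdivision_type = 'SIMPLE'
subsurf.levels = 6
subsurf.render_levels = 6

# Load the depth map and keep it Non-Color so the values stay linear
depth_img = bpy.data.images.load("//depth.exr")  # placeholder path
depth_img.colorspace_settings.name = 'Non-Color'

depth_tex = bpy.data.textures.new("DepthMap", type='IMAGE')
depth_tex.image = depth_img

disp = obj.modifiers.new("Displace", type='DISPLACE')
disp.texture = depth_tex
disp.texture_coords = 'UV'
disp.strength = 0.5  # placeholder; adjust to taste
```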
Hi Scarlett, nice video. What about texturing them?
Also, I want you to realize that some people who come here know nothing about Blender (or are just learning). I had learned about displacement and subdivision earlier, so I was able to catch up on those, but other things such as the "array modifier" are just new to me and I am learning them.
All of this to say that any other tutorial that starts just AFTER the AI part is done, and explains to newcomers to Blender how to create the 3D asset, is very welcome.
Subdividing it makes insanely high poly counts which my laptop can't handle. Is there a way to reduce it, like remesh or something, to clean up the extra unnecessary geometry?
Retopologize or model it, you can't run away from modeling.
you can unsubdivide later or decimate by planar, if you don't want to model it yourself
@@pansitostyle Agree
Model it yourself - very hard, but it is what you need to do if you want a really good quality model. Don't expect AI to do those parts of the work. It cuts corners, like the people who use it.
you can try an adaptive subdivide or apply the modifiers and decimate the object. If you are looking for a way to make a low poly model using an AI image, you can also check out my other tutorial here: th-cam.com/video/9nEuOhluxZU/w-d-xo.htmlsi=Pbfd20jpDdoQWob8
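(To make the "apply the modifiers and decimate" route above concrete, here's a rough bpy sketch - the decimate ratio is only a starting point, not a value from the video. The adaptive subdivide option is a Cycles render-time setting and isn't scripted here.)

```python
# Rough sketch of "apply the modifiers, then decimate" on the displaced plane.
# Run in Object Mode with the plane active; the ratio is a placeholder.
import bpy

obj = bpy.context.active_object

# Bake the Subdivision + Displace modifiers into real geometry
for mod in list(obj.modifiers):
    bpy.ops.object.modifier_apply(modifier=mod.name)

# Collapse away geometry you don't need. 'COLLAPSE' with a ratio is the
# blunt tool; 'DISSOLVE' (planar) works well on flat areas like walls.
dec = obj.modifiers.new("Decimate", type='DECIMATE')
dec.decimate_type = 'COLLAPSE'
dec.ratio = 0.1  # keep roughly 10% of the faces; raise it if detail breaks
```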
Does this method work for Unreal Engine as well???
if it has a displacement node, yes
yes, I've been able to pull these into Unreal and kitbash with them there!
@@Scarlett_vfx wow maybe there could be a tutorial on that next
@@MythicMasters-m5v great suggestion!
i want more
it doesnt work
it does?
i just wanted to learn 3d modeling but now😅😅😅😅😅😅 i have changed my mind
if it gets the job done, and efficiently, why not
When Ai explains how to use Ai
❤
Holy shiet
I was an animation student. Because of all these AI developments, I hated this industry and dropped out of school. I decided to study psychology. Because art is about to die. Everything has turned into a few written words, and after a certain period of time, stupid humanity will not have the ability to even write that and will have it written by artificial intelligence. I wish I were born in 16th century Italy. Not in this freak system where everything turns into profit
I love how it is now, finally I can create entire games myself
I agree with you, everything I see is AI, and people are exploiting AI to make a profit.
And you think that psychologists are safe? Wait a few more years...
@@roniricardosystem people need people. The most important service offered by psychologists is not treatment, but reminding or making the other person feel that they are human and valuable. So I'm sure psychologists and prostitutes will be safe (:
@@dilayyilmaz4254 nope. It can be emulated by advanced AI that eliminates the uncanny valley of voice, facial texture, muscular facial expressions, etc. Some people are already using it.
"People need people"
This literally happens with art too. Art made by humans for humans. And the same with psychologists.
And now make the 3D character model :>
Next step, shadow removal
sometimes i add "even lighting" to the prompt to reduce shadows
Please increase your this business USA please sir please
Holy fucking shit I am shocked….
great, more lifeless slop
This is just image projection modeling. Been around for 25+ years
Now you too can tell your friends that you are a 3D artist... but you don't really know how to model and you are looking for the quickest and easiest way to post on ig... so you won't find work in the industry... but all the followers who can't even turn on a computer on IG will think you are an important artist and that you do things that even the best architect can't do
Same for djs using the sync button and taking lines the whole night
Nice but unusable - millions of polygons for a simple scene, not suitable for gaming and interactive use
true, I mostly do short animations so it works for me - you can always use an adaptive subdivide for larger scenes. I've also got another tutorial on how to build out a low poly model from an AI image: th-cam.com/video/9nEuOhluxZU/w-d-xo.htmlsi=Pbfd20jpDdoQWob8
It is usable, especially for concept art, this is amazing for that
Please just stop using AI. It would build your skills to actually model, and sooner or later you will yield far better results than the mess AI makes. By yourself you can make neat, optimised models that you don't have to subdivide an insane amount of times to look better, and that can be game-ready - learn how to do things instead of relying on AI. I think the most anyone should use AI for is idea generation, but even then that is still not true creativity and doesn't build your skill of being able to come up with ideas on your own.
I've been building my own models for years and I agree it's important to learn the fundamentals first - this was just an interesting workflow I came up with for a project I'm working on for fun :)
AI is great to use for textures; it would be better to just model it from the photo without all the subd, of course, but still a great walkthrough.
@@Chodree check out my video where I do just that! th-cam.com/video/9nEuOhluxZU/w-d-xo.htmlsi=0W_5rsJr9WVUM-5c
The 3-minute video didn't seem to skip any steps, so in like 10 minutes it is possible to build a neat-looking diorama with the help of gen-AI. Those who try to use it for more complex applications will run into the limitations of this approach and see the necessity of learning Blender properly, but for newcomers it is a tool that gets their foot in the door of 3D modeling, and for experienced 3D artists it is another tool they can make use of to optimize their workflow.
And yes, this approach doesn't seem to be any good for building game-ready assets, but it can still be used in static renders and videos
I'm sure you would be using stone tools still if you began with them. 😂 Advanced intelligence is here and you throw stones at it smh
wwww
Please stop grating your voice at the end of your sentences.