Share your thoughts below, and if you’d like to support my work, buy me a coffee: buymeacoffee.com/archviz007. Thank you!
Hello, if I like the image from the prompt that is 700x500 pixels, how can I get a higher resolution of the same image with more details? If I use Extras, it only upscales all the imperfections and it doesn't look good. Thx.
Hi! To get a higher resolution with more details, enable 'Hires. fix' in the txt2img tab (just below the sampling method). Once enabled, you can choose an upscaler and adjust the upscaling parameters to refine the result. This helps avoid simply upscaling imperfections. Hope this helps!
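For anyone who prefers to script this: below is a minimal sketch of the same Hires. fix settings sent through the AUTOMATIC1111 WebUI API. It assumes the WebUI was launched with --api on the default port, and the field names follow the /sdapi/v1/txt2img schema as I know it, so verify against your version.

import base64
import requests

# Assumes a local AUTOMATIC1111 WebUI started with --api (default port 7860).
payload = {
    "prompt": "modern house exterior, photorealistic, golden hour",
    "width": 704,                # base size; the WebUI wants multiples of 8
    "height": 512,
    "steps": 25,
    "enable_hr": True,           # the 'Hires. fix' toggle
    "hr_scale": 2,               # upscale factor, so output becomes 1408x1024
    "hr_upscaler": "Latent",     # any upscaler you have installed works here
    "denoising_strength": 0.4,   # how much new detail the second pass adds
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()
with open("hires_output.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))

In my experience a denoising strength around 0.3 to 0.5 usually keeps the original composition while still adding detail, rather than just magnifying the flaws.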
Hi, thx for the fast reply. Found some videos on it on YT, will try it. Thx so much!
WOW, amazing tutorial! I was looking for this for a long time.
Thanks @KADstudioArchitect! Take care
Thank you for your hard work in creating this video and for sharing your valuable experiences. Although I’m not involved in architecture, I feel it has greatly helped me expand my knowledge of Stable Diffusion.
gr8 @cri.aitive :) Take care!
Very soon, most rendering engines will adopt AI in their workflow. tyDiffusion is an AI plug-in designed to work with 3ds Max, and it gives very nice results. Thank you for the video!
Yup, I agree! It's the only way to gain total control. The ultimate destination for useful AI in architecture and design is a 'render engine'; then we're back to something like V-Ray or Corona 2.0. That way, we can return to being designers and humans piloting a tool again. Full circle!
Nice video! We've been using this kind of workflow in our office since the start of the year, but with PromeAI instead. However, I'll be implementing the Stable Diffusion methods going forward, as the control you have is just so much better...
Thanks, exactly! I've been looking into PromeAI for some time now, and the owners have been pushing it to me for promotion. It's just not up to par with SD + ControlNet, at least not yet. Are you working in the US? Anyway, take care, @brettwessels1283.
Gr8!
Thank you for your videos... this is very useful for our architects.
Gr8! :) Are you US based?
Hi! I'd love to see how you can change the perspective of a model while keeping all other parameters the same. It would be great to create multiple images like this for our clients.
Yes, I'll look into that! Take care, @LukasRichter.
You need to improve the prompt quality: start by writing it in a global sense and then go into detail. Furthermore, you never specified any sky info, e.g. 'a light blue sky above'.
It's better to start with a simple rendering in the viewport (I use Blender, and it can be done quickly with Eevee) to help SD and give it direction.
The goal in using this gen AI is consistency in the results, which also comes from greater context in the prompt: for example, having the same structure rendered from multiple views.
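To make the global-to-detail ordering concrete, here is a purely illustrative way to assemble such a prompt (the keywords are made up for the example, not a recipe):

# Illustrative only: ordering prompt terms from the global scene description
# down to specifics, ending with the sky/lighting info mentioned above.
prompt = ", ".join([
    "exterior architectural visualization, modern single-family house",  # global
    "set on a hillside surrounded by pine trees",                        # context
    "large glass facade, flat roof, warm interior lighting",             # details
    "light blue sky above, soft morning sun",                            # sky/light
])
print(prompt)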
Thank you for this great video!
Thanks @rakeshyadav-mz6kk! Take care.
IDEA: how about using lineart in ControlNet to create sketches from images, and then feeding those sketches into the same processes shown?
I’ve already tried it and found that the 'lineart' control type more or less doesn’t affect the generation (txt2img) or re-generation (img2img) process. Interestingly, Canny seems to do what one might expect lineart to handle. Please post again if you discover anything different. Thanks, @erlinghagendesign!
I have set up everything exactly the same with the same UI, ControlNet, Model, settings, etc., but it's like it is totally ignoring the ControlNet image. So weird.
Found the error: I added the ControlNet extension, and it already has a "canny" option, but you actually need to download the .pth model files and put them in the correct folder. It doesn't show an error or anything in the SD UI.
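If anyone hits the same silent failure, here is a small sketch that checks whether any ControlNet model weights are actually installed. The path assumes a default stable-diffusion-webui folder with the sd-webui-controlnet extension; newer builds also read models from models/ControlNet, so adjust to your setup.

from pathlib import Path

# Hypothetical install root; adjust to where your WebUI actually lives.
models_dir = Path("stable-diffusion-webui/extensions/sd-webui-controlnet/models")
weights = sorted(
    p.name for p in models_dir.glob("*") if p.suffix in (".pth", ".safetensors")
)
if weights:
    for name in weights:
        print("found:", name)
else:
    print("no model weights found; download e.g. control_v11p_sd15_canny.pth")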
Have you enabled it? Same resolution? Pixel Perfect checked? Stable Diffusion can be a bit fiddly...
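For those driving this over the API, that same checklist can be expressed as a ControlNet unit in the txt2img request. The arg names below follow the sd-webui-controlnet API as I understand it (older versions used input_image, newer ones also accept image), so treat this as a sketch and verify against your installed version.

import base64
import requests

# Assumes a local AUTOMATIC1111 WebUI with --api and the sd-webui-controlnet
# extension installed; the model name must match a file you downloaded.
with open("sketch.png", "rb") as f:
    sketch_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "modern house exterior, photorealistic",
    "width": 704,
    "height": 512,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "enabled": True,                      # 'Have you enabled it?'
                "input_image": sketch_b64,
                "module": "canny",                    # the preprocessor
                "model": "control_v11p_sd15_canny",   # an installed model file
                "pixel_perfect": True,                # match preprocessor res to output
            }]
        }
    },
}
requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)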
@@ArchViz007 After downloading Canny and placing it in the correct folder, it worked. Yes, I noticed the fiddliness, but got some good results in the end. It seems the ControlNet image has to be quite specific as well. I noticed the architecture model you use doesn't really do perspectives other than from eye height very well. When you give it a sketch from an aerial view, it usually tries to interpret that as being from eye level, leading to some great Escher-style images :)
@letspretend_22 Yeah, I've discovered the same. Maybe we can use civitai.com/models/115392/a-birds-eye-view-of-architecture for those pictures. I think I'll look into it! By the way, love Escher, what a master!
wow!!!
IDEA: try to use Nvidia Canvas as a base concept, then use the Nvidia image in SD ;-)
Good idea @cecofuli! Think I'll explore that in the next video :)
Very tedious. I had to fast forward a lot. Anyways, you are still better off doing your whole scene in a 3D app. Much more control and more predictable output. I don't get this AI rage in archviz. It's hot garbage. Maybe it needs another 5-10 years to become sort of useful.