Stable Diffusion + ControlNET in Architecture

  • Published 24 Dec 2024

Comments • 32

  • @ArchViz007  24 days ago

    Share your thoughts below, and if you’d like to support my work, buy me a coffee: buymeacoffee.com/archviz007. Thank you!

  • @vinbarg  3 days ago  +1

    Hello, if I like an image from the prompt that is 700x500 pixels, how can I get a higher resolution of the same image with more details? If I use Extras it only upscales all the imperfections and it doesn't look good. Thanks.

    • @ArchViz007  2 days ago

      Hi! To get a higher resolution with more details, enable 'Hires. fix' in the txt2img tab (just below the sampling method). Once enabled, you can choose an upscaler and adjust the upscaling parameters to refine the result. This helps avoid simply upscaling the imperfections. Hope this helps!
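
      The same two-pass idea can also be sketched outside the WebUI with the diffusers library; this is only an illustrative approximation of what 'Hires. fix' does (model id, prompt, and sizes below are placeholders):

      ```python
      # Hires-fix-style two-pass generation with diffusers (sketch).
      # Assumes an SD 1.5 checkpoint and a CUDA GPU; the prompt is hypothetical.
      import torch
      from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

      txt2img = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      prompt = "modern concrete villa at golden hour, photorealistic"
      # SD wants dimensions divisible by 8, so 700x500 becomes 704x512.
      base = txt2img(prompt, width=704, height=512).images[0]

      # Reuse the already-loaded weights for a second, img2img pass.
      img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)
      upscaled = base.resize((1408, 1024))  # plain 2x resample first
      # Low strength = low 'denoising strength': keep the composition and
      # re-synthesize detail instead of magnifying the base image's flaws.
      final = img2img(prompt=prompt, image=upscaled, strength=0.35).images[0]
      final.save("highres.png")
      ```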

    • @vinbarg  2 days ago

      Hi, thanks for the fast reply. Found some videos on it on YouTube, will try it. Thanks so much!

  • @KADstudioArchitect  13 days ago  +1

    WOW, amazing tutorial. I'd been looking for this for a long time.

    • @ArchViz007  12 days ago

      Thanks @KADstudioArchitect! Take care

  • @cri.aitive  28 days ago  +1

    Thank you for your hard work in creating this video and for sharing your valuable experiences. Although I’m not involved in architecture, I feel it has greatly helped me expand my knowledge of Stable Diffusion.

    • @ArchViz007  28 days ago

      gr8 @cri.aitive :) Take care!

  • @zainfadhil2588  22 days ago  +1

    Very soon, most rendering engines will adopt AI in their workflow. tyDiffusion is an AI plug-in designed to work with 3ds Max, and it gives very nice results. Thank you for the video.

    • @ArchViz007  21 days ago  +1

      Yup, I agree! It's the only way to gain total control. The ultimate destination for useful AI in architecture and design is a 'render engine': then we're back to something like V-Ray or Corona 2.0. That way, we can return to being designers and humans piloting a tool again. Full circle!

  • @brettwessels1283  a month ago  +2

    Nice video. We've been using this kind of workflow in our office since the start of the year, but with PromeAI instead. However, I'll be implementing the Stable Diffusion methods going forward, as the control you have is just so much better...

    • @ArchViz007  a month ago

      Thanks, exactly! I've been looking into PromeAI for some time now, and the owners have been pushing it to me for promotion. It's just not up to par with SD + ControlNet, at least not yet. Are you working in the US? Anyway, take care, @brettwessels1283.

    • @ArchViz007  a month ago

      Gr8!

  • @SivaMaharana  a month ago  +2

    Thank you for your videos. This is very useful for our architects.

    • @ArchViz007  a month ago

      Gr8! :) Are you US based?

  • @LukasRichter-p7n  a month ago  +1

    Hi! I'd love to see how you can change the perspective of a model while keeping all other parameters the same. It would be great to create multiple images like this for our clients.

    • @ArchViz007  a month ago

      Yes, I'll look into that! Take care, @LukasRichter

  • @Aristocle  16 days ago  +1

    You need to improve the prompt quality: start by writing it in a global sense and then go into detail. Furthermore, you never specified any sky info, e.g. 'on top, a light blue sky'.
    It's better to start with a simple rendering in the viewport (I use Blender, and it can be done quickly with Eevee) to help SD and give it direction.
    The goal in using this gen AI is to have consistency in the results, also through greater context in the prompt: for example, to have the same structure rendered from multiple views. (See the example prompt below.)
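
    To make that global-to-detail structure concrete, here is a hypothetical example (wording invented for illustration, not taken from the video):

    ```python
    # Coarse-to-fine prompt per the suggestion above: global scene first,
    # then context, then the sky, then fine detail. All wording is made up.
    prompt = (
        "exterior architectural photo of a two-storey timber house, "  # global
        "surrounded by a landscaped garden with a gravel path, "       # context
        "on top a light blue sky with scattered clouds, "              # sky info
        "detailed facade joinery, 35mm lens, photorealistic"           # detail
    )
    negative_prompt = "blurry, distorted geometry, oversaturated"
    ```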

  • @rakeshyadav-mz6kk  a month ago  +1

    Thank you for this great video

    • @ArchViz007  a month ago

      Thanks @rakeshyadav-mz6kk! Take care

  • @erlinghagendesign  a month ago  +2

    IDEA: how about using Lineart in ControlNet to create sketches from images, and then using them further with the same processes shown?

    • @ArchViz007  a month ago

      I’ve already tried it and found that the 'lineart' control type more or less doesn’t affect the generation (txt2img) or re-generation (img2img) process. Interestingly, Canny seems to do what one might expect lineart to handle. Please post again if you discover anything different. Thanks, @erlinghagendesign!
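
      For anyone who wants to run the same comparison outside the WebUI, a minimal diffusers sketch looks like this (the repo ids are the public ControlNet 1.1 checkpoints; the input sketch and prompt are placeholders):

      ```python
      # ControlNet Canny conditioning with diffusers (sketch).
      import cv2
      import numpy as np
      import torch
      from PIL import Image
      from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

      # Swap in "lllyasviel/control_v11p_sd15_lineart" (plus a line-art
      # control image instead of Canny edges) to test the Lineart model.
      controlnet = ControlNetModel.from_pretrained(
          "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
      )
      pipe = StableDiffusionControlNetPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
          torch_dtype=torch.float16
      ).to("cuda")

      sketch = np.array(Image.open("facade_sketch.png"))  # hypothetical input
      edges = cv2.Canny(sketch, 100, 200)                 # low/high thresholds
      control = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 3-channel map

      image = pipe("minimalist brick house, overcast daylight", image=control).images[0]
      image.save("canny_result.png")
      ```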

  • @letspretend_22  a month ago  +1

    I have set up everything exactly the same with the same UI, ControlNet, Model, settings, etc., but it's like it is totally ignoring the ControlNet image. So weird.

    • @letspretend_22  a month ago

      Found the error: I added the ControlNet extension, and it already shows a 'canny' option, but you actually need to download the .pth model files and put them in the correct folder. It doesn't show an error or anything in the SD UI. (See the download sketch below.)
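
      For anyone hitting the same problem, here is one way to fetch the Canny weights into the extension's models folder (the repo and filename are the standard ControlNet 1.1 release; the target path assumes a default AUTOMATIC1111 install):

      ```python
      # Download the ControlNet Canny weights into the extension folder (sketch).
      from huggingface_hub import hf_hub_download

      hf_hub_download(
          repo_id="lllyasviel/ControlNet-v1-1",
          filename="control_v11p_sd15_canny.pth",
          # Assumed default install location of the ControlNet extension:
          local_dir="stable-diffusion-webui/extensions/sd-webui-controlnet/models",
      )
      ```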

    • @ArchViz007  a month ago

      Have you enabled it? Same resolution? Pixel Perfect? Stable Diffusion can be a bit fiddly.

    • @letspretend_22  a month ago

      @ArchViz007 After downloading Canny and placing it in the correct folder, it worked. Yes, I noticed the fiddliness, but got some good results in the end. It seems the ControlNet image has to be quite specific as well. I noticed the architecture model you use doesn't really do perspectives other than from eye height very well. When you give it a sketch from an aerial view, it usually tries to interpret that as being from eye level, leading to some great Escher-style images :)

    • @ArchViz007  a month ago

      @letspretend_22 Yeah, I've discovered the same. Maybe we can use civitai.com/models/115392/a-birds-eye-view-of-architecture for those pictures; I think I'll look into it! (Rough usage sketch below.) By the way, love Escher, what a master!
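
      A rough sketch of applying such a LoRA with diffusers (the WebUI applies it via the <lora:...> prompt tag instead; the local filename is hypothetical, download it from the civitai page first):

      ```python
      # Load a bird's-eye-view LoRA on top of a base SD 1.5 pipeline (sketch).
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")
      pipe.load_lora_weights("birds_eye_view_architecture.safetensors")  # hypothetical file

      image = pipe("aerial view of a courtyard housing block, midday sun").images[0]
      image.save("aerial.png")
      ```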

  • @samuelbonilla1110  a month ago  +1

    wow!!!

  • @cecofuli  a month ago  +2

    IDEA: try to use Nvidia Canvas as a base concept, then use the Nvidia image in SD ;-)

    • @ArchViz007  a month ago  +1

      Good idea @cecofuli! Think I'll explore that in the next video :)

  • @metternich05  11 days ago

    Very tedious. I had to fast forward a lot. Anyways, you are still better off doing your whole scene in a 3D app. Much more control and more predictable output. I don't get this AI rage in archviz. It's hot garbage. Maybe it needs another 5-10 years to become sort of useful.