How I Created This Render From Google Maps with AI (sketch into photo)

  • Published on Nov 18, 2024

Comments • 33

  • @abdoulayeidrissa9965 • 1 year ago +3

    Great insight! I think Stable Diffusion and ControlNet have great potential to replace certain rendering engines. I use them almost every day, but the biggest challenge is fine-tuning and finding the perfect parameters for a really crisp and accurate generation in SD and ControlNet: which model, sampling steps, CFG scale, LoRAs, etc.

    • @designinput • 1 year ago

      Hey, thanks a lot for your comment! Absolutely agree, it is amazing for the initial idea visualization, but as you said, I can't say the same for final views with all the details and different materials. I am working on that, and hopefully I will share a video about that case too :)
      I have used the Realistic Vision V2.0, V3.0, and epiCRealism models, between 24-40 sampling steps and a 5-9 CFG scale, and I didn't use any additional LoRAs. Which model or LoRAs do you use?

    • @abdoulayeidrissa9965 • 1 year ago

      @@designinput Are you getting consistent results with that? I used Realistic Vision 2.0, 3.0, XArchitectural, Deliberate, and Reliberate, between 25-50 steps. Out of all of these, I get good results from Realistic Vision 3.0 with 40 steps and a CFG scale of 15. Lower CFG scales are good, but sometimes the image comes out inconsistent with my prompt; for example, I want the walls to be white but they sometimes come out with a different material, so upping the CFG fixed that for me.
      As for ControlNet, what settings are you using?
      For LoRAs, I am using XSArchi_129 for glass windows, and the Add-Detail LoRA, which can add some crispness to the image.
      Out of all the samplers, I found DPM++ SDE Karras the best.
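For readers new to these knobs: the CFG scale this thread keeps tuning is the weight in classifier-free guidance. The sampler runs the model twice per step, once with and once without the prompt, and amplifies the difference between the two predictions. A minimal pure-Python sketch of that combination step (illustrative only; real pipelines apply it to noise-prediction tensors):

```python
def cfg_combine(uncond, cond, cfg_scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional output, toward the prompt-conditioned output.
    Higher cfg_scale means stronger prompt adherence; too high and
    images tend to come out oversaturated or artifact-prone."""
    return [u + cfg_scale * (c - u) for u, c in zip(uncond, cond)]

# At cfg_scale=1 you get the conditioned prediction unchanged;
# at 7.5 the prompt's influence is amplified 7.5x.
```

This is why raising the CFG scale (e.g. to 15, as above) makes "white walls" stick more reliably, at the cost of a harsher, more contrasty image.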

  • @keiralx • 1 year ago +2

    Outstanding! Love it, thank you

    • @designinput • 1 year ago

      Hello, thanks a lot :) So happy to hear that, you are very welcome!

  • @yasminehab7513 • 1 year ago +1

    Thank you for sharing this 😍

    • @designinput • 1 year ago +1

      My pleasure 😊 Happy to hear that you liked it!

  • @MathieuDeVinois • 9 months ago

    Looks great. Do you have a way to use a photo as a background but have a CAD drawing instead of a sketch? It's maybe more complicated, as camera views of models are difficult to match onto a photo. Will AI understand the different viewpoints and still merge them?

  • @vontainer5109 • 1 year ago

    That's amazing, I'll use it on my next university project!!

    • @designinput • 1 year ago

      Thank you! Happy to hear you liked it

  • @eng.mo3tasem127 • 1 year ago

    I've been looking for an explanation like this!
    Thank you 🙏🏼❤️‍🩹

  • @pablessLoz • 1 year ago +1

    Great video, thanks

    • @designinput • 1 year ago

      Hi Pablo, thank you, glad to hear that! You are very welcome!

  • @marcinooooo • 1 year ago

    Hey, once again an amazing video! So, for the interior AI rendering, which packages do I need to download? (I want to put them on my RunPod server and see if it works.)

    • @designinput • 1 year ago

      What do you mean by packages, checkpoints? If so, you can try the Realistic Vision 5.1 or epiCRealism ones for the SD 1.5 version

  • @ilaydakaratas1957 • 1 year ago +1

    Great video!

  • @brianz7877 • 1 year ago

    Awesome

  • @musatekdal7135 • 1 year ago

    I appreciate how you describe the process in detail. I wonder if you are able to describe certain materials/furniture in Stable Diffusion to make it more accurate, or is there any script that can help with this? And could you also describe your computer setup a bit, please? I am interested in implementing it in my work.

    • @designinput • 1 year ago

      Hey, you are very welcome, thanks for the feedback! You can try using multi-ControlNet with segmentation for more control, but you still may not be able to describe all the elements in your design. I am planning to share a workflow about that soon, hope that can help!
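A note on the multi-ControlNet idea mentioned here: each ControlNet (e.g. one fed a scribble map, one a segmentation map) produces residuals that are scaled by its own conditioning weight and then summed before being injected into the UNet. A toy sketch of that weighted sum (the function name and plain-list values are hypothetical; real implementations operate on tensors):

```python
def combine_control_residuals(residuals_per_net, scales):
    """Sum the residuals from several ControlNets, each weighted by
    its conditioning scale, the way multi-ControlNet combines
    multiple conditioning inputs before the UNet consumes them."""
    combined = [0.0] * len(residuals_per_net[0])
    for residuals, scale in zip(residuals_per_net, scales):
        for i, r in enumerate(residuals):
            combined[i] += scale * r
    return combined
```

Lowering one net's scale (e.g. the segmentation one) reduces its pull on the image without removing it, which is how you balance a strict sketch outline against looser material hints.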

  • @cekuhnen • 1 year ago

    How can I make this with Fabrie? They don't seem to have direct ControlNet access?

    • @designinput • 1 year ago

      Hi, it wasn't possible to do that when I shared the Fabrie video. But they just added an inpainting feature, where you can edit parts of the images. However, I don't think you can insert your own sketches

    • @cekuhnen • 1 year ago

      @@designinput Yeah, I saw the change; I have to test it. I work mainly with Vizcom.

  • @firatgunesbalci2743 • 1 year ago

    Great video, Ömer. Is the interface Stable Diffusion 2.0?

    • @designinput • 1 year ago

      Thank you very much, Fırat :) I use the Automatic1111 interface, but if you are asking about the model, all the models I use are generally based on the 1.5 base model; I am waiting for SDXL :)

  • @acerol • 1 year ago

    Great video! Is Stable Diffusion free?

    • @designinput • 1 year ago +1

      Hey, thank you :) Yes, it is absolutely free to use locally

  • @MrBoardcube • 1 year ago

    What do you think of Realistic Vision 4? Do you like the features of Vision 3 better?

    • @designinput • 1 year ago

      Hey, apparently V3 got a lot of negative comments, which is why the developer uploaded a new V4. I haven't tested it much yet, so I can't say much about it. But I will share my results as a comparison soon :) Thanks for your comment!
      What do you think about it? Which one did you like more?

  • @mekkoid • 1 year ago

    Is it possible in MJ?

    • @designinput • 1 year ago

      Hey, you can do it for Midjourney-generated images, but not for your own images :/