How To Create Real-Time Renders with AI (synced to 3D model)

  • Published Jul 22, 2024
  • AI for Architects: learn.designinputstudio.com/a...
    Get the workflow: bit.ly/realtimerenderresources
    Runpod - Stable Diffusion on Cloud: bit.ly/48etVvg
    Decoded facades: / decoded.fa
    Please let me know what you think! Thank you :)
    You can find all the resources here: designinputstudio.com/how-to-...
    Don't forget to subscribe if you liked the video :)
    Website: designinputstudio.com
    Single-Click Stable Diffusion Installation: bit.ly/singleclickinstallation
    Join Our Discord Community: / discord
    Join Patreon: / designinput
    Instagram: / design.input
    Subscribe: / @designinput
    Freebies: designinputstudio.com/freebies/
    newsletter.designinputstudio....
    Timeline:
    00:00 Intro
    00:40 3D Model
    01:28 Setting Up RunPod
    04:48 Install ComfyUI in RunPod
    06:21 Real-Time Render Models
    06:20 ControlNet Installation
    06:42 Add new model into RunPod
    09:30 Creating the Workflow in ComfyUI
    25:00 Real-Time Render
    30:05 Adding Reference Image
    36:44 Final Results
    36:57 Outro
    *The description may contain affiliate links, from which, at no additional cost to you, I may earn a small commission.
    #aiart #stablediffusion #aiarchitecture
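
The ComfyUI-on-RunPod steps from the timeline (04:48 Install ComfyUI, 06:20 ControlNet Installation, 06:42 Add new model) can be sketched as a shell script run in the pod's web terminal or Jupyter. This is a minimal sketch, not the exact commands from the video: the `/workspace` path, the `comfyui_controlnet_aux` node pack, and the port number are assumptions.

```shell
#!/usr/bin/env bash
# Sketch: set up ComfyUI plus ControlNet preprocessors on a fresh RunPod pod.
set -euo pipefail

WORKSPACE="${WORKSPACE:-/workspace}"   # RunPod's persistent volume (assumption)

install_comfyui() {
  cd "$WORKSPACE"
  git clone https://github.com/comfyanonymous/ComfyUI.git
  cd ComfyUI
  pip install -r requirements.txt

  # Custom nodes: ControlNet preprocessors (depth, canny, lineart, ...)
  git clone https://github.com/Fannovel16/comfyui_controlnet_aux.git \
    custom_nodes/comfyui_controlnet_aux
  pip install -r custom_nodes/comfyui_controlnet_aux/requirements.txt

  # Checkpoints and ControlNet models go under models/, e.g.:
  #   models/checkpoints/<your-checkpoint>.safetensors
  #   models/controlnet/<your-controlnet>.safetensors
}

start_comfyui() {
  cd "$WORKSPACE/ComfyUI"
  # Listen on 0.0.0.0 so RunPod's HTTP proxy can expose the UI port.
  python main.py --listen 0.0.0.0 --port 8188
}

# Guarded so sourcing the script doesn't trigger a download.
if [ "${RUN_INSTALL:-0}" = "1" ]; then
  install_comfyui
fi
```

New checkpoints dropped into `models/checkpoints/` show up in ComfyUI's loader node after a refresh; on RunPod they survive pod restarts only if they live on the persistent volume.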

Comments • 26

  • @rahuyroman3831
    @rahuyroman3831 4 months ago

    This channel is pure gold. Thank you for sharing so much value!

  • @cosminpaduraru4506
    @cosminpaduraru4506 5 months ago +1

    Great piece of content, Ömer. Thanks and keep it up!

    • @designinput
      @designinput  5 months ago

      Thanks a lot for your support 🧡

  • @ilaydakaratas1957
    @ilaydakaratas1957 5 months ago +3

    This is everything! Thanks for showing a cloud-based alternative; I have a very weak GPU.

    • @designinput
      @designinput  5 months ago

      Thank you, glad you liked it 🧡

  • @puyakhalili
    @puyakhalili 3 months ago

    Amazing!

  • @cgimadesimple
    @cgimadesimple 5 months ago +1

    super powerful!!

    • @designinput
      @designinput  5 months ago

      I agree, this workflow is handy and efficient.

  • @O0Thyrael0O
    @O0Thyrael0O 5 months ago +1

    You're a genius, thank you so much, my friend :)

    • @designinput
      @designinput  5 months ago

      🧡🧡

  • @vahidt1398
    @vahidt1398 5 months ago

    Nice 👌🏻
    Is there an AI render engine that renders the 3D scene exactly, with full detail, to replace common render engines like V-Ray or Corona?

  • @domi6283
    @domi6283 5 days ago

    Big props on your video! Where did you find the rvxl3 checkpoint? I can't find it on Civitai or Hugging Face :(

  • @Antonygraphics
    @Antonygraphics a month ago

    Hi, thanks for the amazing tutorial. Can you help me? I don't understand how to install your workflow in Comfy; which file are you dropping into Comfy? Thanks!

  • @jindongzhu8459
    @jindongzhu8459 4 months ago

    Hello! Let me ask you a question: I create real-time renders with AI (synced to a 3ds Max model), and the generated pictures have a lot of noise! Why is the quality so low? Thanks!

  • @sergeyarhi
    @sergeyarhi 5 months ago +1

    Super!👌✋💪👏👏👏👍👍👍

    • @designinput
      @designinput  5 months ago

      Thanks a lot 🧡

  • @slobodanmrp
    @slobodanmrp 5 months ago +1

    Keep learning and sharing your knowledge! Thank you for that!
    One question: is it possible to use mask passes/channels to control some parts of the image, for example window frames, background, trees, etc., if we have all of that in the input image? Does that go through ControlNet?
    ✌✌✌

    • @designinput
      @designinput  5 months ago +1

      Thanks a lot, I appreciate the support.
      Yes, of course, everything is possible in ComfyUI. We can mask over the areas we want to improve and regenerate them with ControlNet. I actually do upscaling with this technique; it works great, especially on people's faces.
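
The mask-and-regenerate idea in that reply boils down to compositing: only the masked region takes pixels from the regenerated pass, the rest keeps the original render. A minimal Pillow sketch of just the compositing step (the diffusion pass is a stand-in image here; sizes, colors, and the rectangle region are illustrative assumptions, not from the video):

```python
from PIL import Image, ImageDraw

# White areas of the mask are replaced by the regenerated image; black
# areas keep the original render. In ComfyUI the equivalent is done with
# mask/latent-composite nodes; "regenerated" here is just a stand-in.
W, H = 512, 512
original = Image.new("RGB", (W, H), "gray")        # the base render
regenerated = Image.new("RGB", (W, H), "white")    # e.g. an upscaled face pass

# Mask only the region to improve (say, a window frame).
mask = Image.new("L", (W, H), 0)                               # 0 = keep original
ImageDraw.Draw(mask).rectangle([100, 100, 200, 300], fill=255)  # 255 = replace

result = Image.composite(regenerated, original, mask)

assert result.getpixel((150, 200)) == (255, 255, 255)  # inside mask: regenerated
assert result.getpixel((10, 10)) == (128, 128, 128)    # outside mask: original
```

Feathering the mask edge (e.g. a Gaussian blur on `mask`) is the usual trick to avoid a visible seam between the regenerated patch and the original render.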

  • @user-di4vl8np5w
    @user-di4vl8np5w 4 months ago

    Where is the DepthAnything custom node?

  • @user-hq9of7ep3e
    @user-hq9of7ep3e 5 months ago +2

    Is it possible to use it without RunPod? With a mid-range GPU, how much time does a render take? Thanks

    • @designinput
      @designinput  5 months ago +2

      I have a GeForce RTX 3060 GPU with 6 GB of memory, and it took 20 to 30 seconds to get a render. It's not really efficient to wait that long for a real-time render; that's why I chose to rent a GPU on RunPod.

  • @ammartaher5680
    @ammartaher5680 4 months ago

    Hey Omer, I'm trying to download the resources/files you mentioned in the installation on Jupyter, but I can't find a link to them. The link on the page you referred to in the description isn't clickable / doesn't open anything. Am I missing something?

    • @ammartaher5680
      @ammartaher5680 4 months ago

      NVM, found it

  • @lorenzoarena7725
    @lorenzoarena7725 4 months ago +1

    Question: where did you get the workflow template? Is it inside the first link? Can anyone help?

    • @JoepSwagemakers
      @JoepSwagemakers 3 months ago

      This is also unclear to me... I drag an image into the workflow but nothing happens...

    • @JoepSwagemakers
      @JoepSwagemakers 3 months ago

      I found out how this works: you have to use the image that's on his Discord, because it has the node structure information embedded in it.
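
What that last comment describes is standard ComfyUI behavior: the full node graph is saved as JSON in the output PNG's text metadata (under the `"workflow"` key), so dropping such an image into ComfyUI restores the whole workflow. An image that has been re-encoded (e.g. by a screenshot) loses that metadata, which would explain why some images do nothing. A small Pillow sketch to check for and read the embedded workflow (the demo file name and the tiny stand-in graph are illustrative):

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def read_workflow(path: str):
    """Return the embedded ComfyUI workflow as a dict, or None if absent."""
    raw = Image.open(path).info.get("workflow")  # PNG tEXt chunk "workflow"
    return json.loads(raw) if raw else None

# Demo: embed a tiny stand-in workflow the way ComfyUI does, then read it back.
meta = PngInfo()
meta.add_text("workflow", json.dumps({"nodes": [], "links": []}))
Image.new("RGB", (8, 8)).save("demo.png", pnginfo=meta)

print(read_workflow("demo.png"))  # {'nodes': [], 'links': []}
```

If `read_workflow` returns `None` for an image, that image was stripped of its metadata somewhere along the way and cannot restore the workflow by drag-and-drop.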