Comments •

  • @perrymaizon
    @perrymaizon 1 year ago +61

    My younger clients will be able to do all my previous work in arch visualisation within a year. GAME OVER!!!

    • @simonperry5990
      @simonperry5990 1 year ago +17

      All the fulfilling human jobs are on the way out! Really depressing.

    • @Tepalus
      @Tepalus 1 year ago +23

      No, they won't. Assuming you're an actual architect and not just someone who does visualisations: AI has an understanding of architecture, and it looks very good, BUT it doesn't know what makes the most sense physically, what the regulations are, or what does and doesn't work. You can still lead projects and oversee the building process in general.
      I also specialised in visualisations, but I am shifting to a more technical level at the moment. Keep up with technology, or it owns you.

    • @perrymaizon
      @perrymaizon 1 year ago +9

      @@Tepalus AI has, in a way, just been born, and it will already be on a totally new level within a year!!! Beast mode in two years... What about 10 years?

    • @wahedasamsuri9248
      @wahedasamsuri9248 1 year ago +2

      In third-world countries, this is already affecting so many job markets. Those who don't have even a hint of the ability to produce a design can call themselves designers. They say, "We don't need these people to make images of our product now; we can do that with AI." I'm waiting for the day when people left and right start suing over common design interests.

    • @ribertfranhanreagen9821
      @ribertfranhanreagen9821 1 year ago +4

      If this is enough to replace you, I question what you do as an architect. This just helps you make renders more easily.

  • @Fabi_terra
    @Fabi_terra 1 year ago +18

    Thank you for taking the time to show us this fantastic tool and very inspiring ideas. I believe that AI resources are here to stay. All we have to do is figure out the best way to work with them. We are just starting to work with this, and we still have a lot to learn, including improving our writing skills to make better prompts.

    • @designinput
      @designinput 1 year ago

      Hi, thanks for your comment and lovely feedback. Totally agree; soon we will have more ideas on how to use it in a more user-friendly way.
      Regarding prompting, I believe it will have less impact on the overall result in the future. We will be able to explain what we want in plain text without needing any special keywords or phrases.

    • @Fabi_terra
      @Fabi_terra 1 year ago

      🧡

  • @adetibakayode1332
    @adetibakayode1332 1 year ago +5

    PERFECT!!!! That's all I can say about it. Nice work, bro 👍

    • @designinput
      @designinput 1 year ago

      Hey, thanks a lot for your comment!

  • @ThoughtFission
    @ThoughtFission 1 year ago +9

    Thank you so much for sharing this. I am trying to figure out how to do something similar with portraits: keeping the original face and changing the clothes, background, focal length, etc. This is a great starting point.

    • @designinput
      @designinput 1 year ago +1

      Hey, thanks for your lovely feedback and comment! Hmm, interesting idea. I will definitely try it out. Please share your results and experience with us!

    • @krissstoyanoff8853
      @krissstoyanoff8853 1 year ago +3

      @@designinput consider making a video in which you show us how to create a 3D render from a SketchUp JPEG without any changes to the composition or the placement of the objects. That would be really helpful.

  • @tomcarroll6744
    @tomcarroll6744 1 year ago +1

    Nice work. This is clearly the direction concept generation is headed. Probably within another 4 weeks this capability will be available on numerous web apps for free.

    • @designinput
      @designinput 1 year ago

      Hey Tom, thanks for your comment! Totally agree! We will start to see this workflow integrated into many different applications soon.

  • @IDArch26
    @IDArch26 11 months ago +1

    Exactly what I was looking for, thank you!

    • @designinput
      @designinput 11 months ago

      Great to hear! You are very welcome :)

  • @m.a.a.1442
    @m.a.a.1442 1 year ago +1

    It is almost exactly what I was searching for; thank you for your help!

    • @designinput
      @designinput 1 year ago +1

      You are very welcome, thanks for your comment!

  • @designinput
    @designinput 1 year ago +4

    You can find all the resources here: designinputstudio.com/create-realistic-render-from-sketch-using-ai-you-should-know-this/
    ControlNet Paper: arxiv.org/pdf/2302.05543.pdf
    ControlNet Models: huggingface.co/lllyasviel/ControlNet/tree/main/models
    Realistic Vision V2.0: civitai.com/models/4201/reali...
    Install Stable Diffusion Locally (Quick Setup Guide): th-cam.com/video/Po-ykkCLE6M/w-d-xo.html
    Instagram: instagram.com/design.input/

    • @fc5130
      @fc5130 1 year ago

      Do you use Realistic Vision V2.0, or V1.4 like all the tutorials? Thank you!

    • @designinput
      @designinput 1 year ago +1

      @@fc5130 Hey, in this video I used V1.4, because at that time Realistic Vision V2.0 wasn't available yet. I am using V2.0 at the moment.
      You are very welcome :)

    • @fc5130
      @fc5130 1 year ago

      @@designinput Thank you :)

  • @petera4813
    @petera4813 1 year ago +54

    This channel will grow so fast if you can show, either with Stable Diffusion or Midjourney 5.1, how to render a SketchUp or 3ds Max (JPEG) exterior of a building into the render we want, without a lot of distortion, using prompts.
    There is no such video online. And I am positive that if people are not searching for it now, they will very soon!

    • @adoyer04
      @adoyer04 1 year ago +3

      How do you know that? Maybe every architect and other creative person has already heard about AI and is following the topic / using it?

    • @petera4813
      @petera4813 1 year ago +1

      @@adoyer04 maybe... but maybe I am a wizard 🤷🏻‍♂️

    • @michaelbooth90
      @michaelbooth90 1 year ago +9

      @@petera4813 I'm an architect in a firm, and we want it but can't find it.

    • @pappathescooper
      @pappathescooper 1 year ago +1

      @@michaelbooth90 if you find it... let me know!!! ;)

    • @designinput
      @designinput 1 year ago +11

      Hey, thanks a lot for your nice comment! I totally agree; that's where we are headed. A simple one-click render solution isn't quite possible yet without lots of settings, prompting, and a trial-and-error experimentation process. However, I am working on a video about Midjourney and how to use it to render from a sketch or a simple base image. I will share it as soon as I figure out a nice, straightforward workflow.

  • @panzerswineflu
    @panzerswineflu 1 year ago +1

    I didn't know such a thing was possible, from napkin sketch to render. Thanks!

    • @designinput
      @designinput 1 year ago

      Hi, thanks for your comment. You are very welcome, happy to hear it was helpful!

  • @chantalzwingli5698
    @chantalzwingli5698 8 months ago

    WOW, it worked!!! THANKS A LOT!!! I had to download some important stuff like the .pth files and drag them to the right place,
    just to find them afterwards under ControlNet / Model, like in your example. YOU ARE AMAZING WITH THESE TUTORIALS!!! THANKS

    • @designinput
      @designinput 8 months ago

      Hi, you are right, it's a detailed and long process for an architect, but I'm super happy to hear it worked 🧡 Thanks for the lovely comment!

  • @andreaognyanova
    @andreaognyanova 1 year ago

    Very clear explanation, greetings!

  • @tuyenguru
    @tuyenguru 2 months ago

    Great. Thank you very much.

  • @tatianagavrilova2252
    @tatianagavrilova2252 1 year ago

    It is fantastic!!! Thank you so much for sharing!

    • @designinput
      @designinput 1 year ago

      Hi, thanks a lot for your feedback

  • @alexanderburbitskiy4382
    @alexanderburbitskiy4382 1 year ago

    looks amazing!

  • @fabrizioc7644
    @fabrizioc7644 4 months ago

    Thank you for the tips! ;-)

  • @Ramb0li
    @Ramb0li 1 year ago +4

    Hey, I am an architect from Switzerland, and it really amazes me how far we've come. I already gave a presentation in my architectural office, and I am about to implement this in our design workflow... After using Midjourney a lot, I ran into the problem of not having the control to change just one specific thing... I am now trying a combination of Stable Diffusion and MJ. Thank you for your informative video!

    • @Ramb0li
      @Ramb0li 1 year ago

      One question I do have: what computer do you use (graphics card and memory), and how long does it take you to create a picture (the AI render process)? I am working on a recent MacBook Pro, and it takes me up to 10 minutes to get a picture.

    • @designinput
      @designinput 1 year ago +2

      Hey, thank you for your comment :) That's great to hear, because I think our industry is usually not the fastest at adapting to new technologies :|
      Thanks for your kind words, I really appreciated it ❤

    • @designinput
      @designinput 1 year ago +3

      @@Ramb0li I am using a laptop with an RTX 3060 and a 12th Gen Intel(R) Core(TM) i7-12700H CPU. Of course, for this process the most important part is the GPU. Depending on the image resolution, the number of sampling steps, and the sampling method, it takes 2-5 minutes on average.
      I usually test my prompt and settings at a low resolution with fewer sampling steps to make the process faster. Once I find a nice prompt combination and the correct settings, I render a final version at a higher resolution. Maybe that can help to speed up the process.
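
      A minimal sketch of that draft-then-final workflow using the diffusers library; the model ID, prompt, and settings below are illustrative assumptions, not the exact ones from the video:

          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")

          prompt = "modern living room interior, photorealistic, soft natural light"

          # Fast low-resolution draft for iterating on prompt and settings;
          # a fixed seed keeps draft and final comparable.
          gen = torch.Generator("cuda").manual_seed(42)
          draft = pipe(prompt, width=512, height=512,
                       num_inference_steps=15, generator=gen).images[0]

          # Final pass: same prompt and seed, higher resolution and more steps.
          gen = torch.Generator("cuda").manual_seed(42)
          final = pipe(prompt, width=768, height=768,
                       num_inference_steps=40, generator=gen).images[0]
          final.save("render.png")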

  • @davidlaska8248
    @davidlaska8248 1 year ago

    That is really impressive

  • @tailopezbutnolamborghini4862
    @tailopezbutnolamborghini4862 1 year ago

    What settings do you use to make it look exactly like my kitchen model in SketchUp? I tried keeping CFG at 8 and matching the height/width, but the AI keeps generating my cabinets/refrigerator all over the place. My model has the refrigerator on the right side, and it generates one on the left. How do I fix this? Can you show me a video tutorial on this?

  • @PaoloBhA
    @PaoloBhA 1 year ago

    Hi! Thanks for the video, very interesting. How did you convert the safetensors file for Realistic Vision V2.0 to ckpt?
    Thanks, and keep up the good work!

    • @designinput
      @designinput 1 year ago

      Hey, thanks for your comment! You can download Realistic Vision V2.0 here: civitai.com/models/4201/realistic-vision-v20
      You should place it in the Stable Diffusion folder, under the models folder.
      Thanks for your support!

  • @leslie5815
    @leslie5815 1 year ago

    The renderings are niceeeeeeeeeee!

    • @designinput
      @designinput 1 year ago

      Hi, thanks a lot for your lovely feedback

  • @RogerioDec
    @RogerioDec 1 year ago +1

    THIS is a game changer.

    • @designinput
      @designinput 1 year ago

      Hi, it really is... Thanks for the comment!

  • @romneyshipway7161
    @romneyshipway7161 1 year ago

    Thank you for your time

    • @designinput
      @designinput 1 year ago

      Hey, you are very welcome! Thanks a lot for your comment, happy to hear that!

  • @LorenceLiu
    @LorenceLiu 1 year ago +2

    Wow, this looks amazing! Here is what I am thinking: is it possible to turn an image into a sketch with AI, then use AI on that sketch to produce designs that actually fit the real-life object?

    • @motassem85
      @motassem85 1 year ago

      There are a lot of programs that can help turn it into a sketch, but it will not be as clean as rendering it.

  • @Exindecor
    @Exindecor 5 months ago

    Very inspirational

  • @antongerasymovich4876
    @antongerasymovich4876 1 year ago +2

    Thanks for these great instructions! I couldn't figure out how to add models in the ControlNet tab; I have only "none" in the Model dropdown, but you have options with names like "control_sd15_canny/normal/seg", etc. Thanks!

    • @designinput
      @designinput 1 year ago

      Hi Anton, thanks for your great feedback! You must download them separately and place them in the ControlNet folder under the models folder. You can download them here: huggingface.co/lllyasviel/ControlNet/tree/main/models
      Also, you can check this video to use it easily: Use Stable Diffusion & ControlNet in 6 Clicks For FREE: th-cam.com/video/Uq9N0nqUYqc/w-d-xo.html

  • @anagraciela534
    @anagraciela534 4 months ago

    Is there a way we can incorporate specific furniture we might see in an online store?

  • @rasoolrahmani1585
    @rasoolrahmani1585 1 year ago

    Thanks so much, this was helpful for me!

    • @designinput
      @designinput 1 year ago +1

      Hey, thank you for your comment. So happy to hear that, you are very welcome ❤️

  • @mukondeleliratshilavhi5634
    @mukondeleliratshilavhi5634 1 year ago

    Love that you framed it as only a way to help you come up with more ideas. It is only a tool; we are still the masters and still need to match the image to what the client needs... Yes, make more detailed videos!

    • @designinput
      @designinput 1 year ago +1

      Exactly! Thank you very much for your comment!
      I will share a detailed step-by-step tutorial about it very soon.

  • @Constantinesis
    @Constantinesis 1 year ago +2

    I wish some of the prompting could be replaced by inputting additional images and tagging or labeling through sketching, perhaps like in DALL-E. For example, instead of describing the modern-styled green sofa with geometric patterns that I want, I should be able to drop a reference photo of such a sofa, or any other object, into my project. I am sure these kinds of features will come sooner rather than later, but what makes Stable Diffusion amazing is that it's also free and open source.

    • @TJ-ki3gp
      @TJ-ki3gp 1 year ago +2

      Just give it time, and everything you described will be possible.

  • @007vivek11
    @007vivek11 1 year ago

    Hey bro, I followed along and reached some mad-level crazy stuff, thanks! But for this process I couldn't figure out the ControlNet 1.1 preprocessor; I only got ControlNet itself running. If you can help, that would be great!!

  • @kasali2739
    @kasali2739 1 year ago +2

    Not sure, but I believe you don't need to choose anything from the preprocessor menu; just leave it at none, because otherwise you let SD create a sketch from your sketch input.

    • @designinput
      @designinput 1 year ago

      Yes, you are absolutely right. I didn't realize that at that time. Thank you for letting us know about it!
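
      A minimal diffusers sketch of what this exchange describes: feeding a hand-drawn scribble straight into ControlNet with no preprocessor. The model IDs, file names, and prompt are illustrative assumptions:

          import torch
          from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
          from diffusers.utils import load_image

          controlnet = ControlNetModel.from_pretrained(
              "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
          )
          pipe = StableDiffusionControlNetPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5",
              controlnet=controlnet,
              torch_dtype=torch.float16,
          ).to("cuda")

          # The input is already a scribble, so it goes in as-is (no HED/pidinet
          # preprocessing). Scribble conditioning usually expects white lines on
          # a black background, so a pencil-on-paper scan may need inverting.
          sketch = load_image("my_sketch.png")
          image = pipe(
              "photorealistic render of a modern house exterior, golden hour",
              image=sketch,
              num_inference_steps=30,
          ).images[0]
          image.save("controlnet_render.png")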

  • @Ssquire11
    @Ssquire11 6 months ago +1

    Thanks a lot, but it also would have helped to show how to install ControlNet.

  • @SpinnerPen
    @SpinnerPen 1 year ago +2

    Could you please tell me your computer's specs? What graphics card are you using, and does it take a long time to generate each image?

    • @designinput
      @designinput 1 year ago +2

      Hey, I am using a laptop with an RTX 3060 and a 12th Gen Intel(R) Core(TM) i7-12700H CPU. Of course, for this process the most important part is the GPU. Depending on the image resolution, the number of sampling steps, and the sampling method, it takes 2-5 minutes on average.
      I usually test my prompt and settings at a low resolution with fewer sampling steps to make the process faster. Once I find a nice prompt combination and the correct settings, I render a final version at a higher resolution.

  • @michawalkowiak1464
    @michawalkowiak1464 1 year ago

    Is it possible to get an interior render using a 3D model of the lamp?

  • @user-ok2wi2fl9k
    @user-ok2wi2fl9k 1 year ago

    Thank you! Now you are my teacher!

    • @designinput
      @designinput 1 year ago

      Hey, glad to hear you liked it :)
      Haha, thanks a lot for your lovely comment!

  • @ilaydakaratas1957
    @ilaydakaratas1957 1 year ago +13

    I had never heard of Stable Diffusion before, and it looks really helpful!! Please make a tutorial on how to install it!!

    • @designinput
      @designinput 1 year ago +3

      Thank you for your comment! Definitely, I will make one soon.

    • @CoolBreeze39
      @CoolBreeze39 1 year ago +4

      I agree, this would be helpful!

    • @designinput
      @designinput 1 year ago

      @H M 😂😂

    • @knight32d
      @knight32d 1 year ago +1

      Not only is it helpful, it'll save us lots of money and time.

    • @designinput
      @designinput 1 year ago

      @@knight32d haha :) Totally agree! Thanks for your comment!

  • @nopnop6274
    @nopnop6274 1 year ago

    Wow! Fascinating, thank you for making this video.

    • @designinput
      @designinput 1 year ago

      Hey, thanks a lot for your lovely feedback and comment

  • @systemmusic6830
    @systemmusic6830 1 year ago

    thanks a lot❤

    • @designinput
      @designinput 1 year ago

      Thanks for your kind comment! Glad to hear that you liked it :)

  • @cgimadesimple
    @cgimadesimple 9 months ago

    impressive!

    • @designinput
      @designinput 9 months ago

      Thank you!

  • @user-wb8ne7fk7t
    @user-wb8ne7fk7t 1 year ago +2

    Great video, and I'd like to repeat the steps you demonstrate. The link to "Realistic Vision V1.4" appears broken, but I did find a similar download on Hugging Face. However, I do not have the ControlNet option visible when I go to Stable Diffusion after following all of the steps. What am I missing?

    • @designinput
      @designinput 1 year ago +3

      Hey, thanks for letting me know; I replaced it with the updated Realistic Vision V2.0. At the moment, ControlNet doesn't come with Stable Diffusion directly; you need to download it separately and then put it in the ControlNet folder inside the Stable Diffusion folder on your computer.
      You can download the ControlNet models here: huggingface.co/lllyasviel/ControlNet/tree/main/models
      And then you should move them here: C:\SD\stable-diffusion-webui\models\ControlNet
      After you place the files, restart Stable Diffusion, and you should see the ControlNet section. I will upload a detailed step-by-step tutorial about this in the following days.
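
      A small Python sketch of that file placement; the install path and download folder are assumptions, so adjust them to your machine:

          import shutil
          from pathlib import Path

          WEBUI = Path(r"C:\SD\stable-diffusion-webui")  # your webui install
          downloads = Path.home() / "Downloads"
          dest = WEBUI / "models" / "ControlNet"
          dest.mkdir(parents=True, exist_ok=True)

          # Move every downloaded ControlNet model,
          # e.g. control_sd15_scribble.pth.
          for f in downloads.glob("control_sd15_*.pth"):
              shutil.move(str(f), str(dest / f.name))
              print("moved", f.name, "->", dest / f.name)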

    • @robwest1830
      @robwest1830 1 year ago +1

      @@designinput do we need all of the ControlNet files? There are eight 4.71 GB files.

    • @designinput
      @designinput 1 year ago +1

      @@robwest1830 Hey, no, we don't need all of them. If you only want to use your sketches as input, you can download just the scribble model (which is the best for sketches).
      Or you can try the depth model if you want to use views from your 3D model or photos.

    • @Just.Dad.Things
      @Just.Dad.Things 1 year ago

      @@designinput I'm very impressed, and I would like to try it out myself, but I ran into the same problem: the ControlNet option is missing in Stable Diffusion.
      I created the ControlNet folder in stable-diffusion-webui\models\,
      then restarted webui-user.bat, but Stable Diffusion doesn't show ControlNet at all. Am I missing something? I downloaded the scribble model and put it in the ControlNet folder.

  • @motassem85
    @motassem85 1 year ago +1

    Thanks for the tutorial, bro.
    Can you add the link for the Realistic Vision 1.4 .ckpt you used in the video, please? And one more thing: I can't find ControlNet to add a picture. What's the issue?

    • @xsanskar
      @xsanskar 1 year ago

      same issue

  • @islandersean2213
    @islandersean2213 1 year ago

    How do I load control_sd15_scribble into the Model dropdown? Thank you!

  • @adoyer04
    @adoyer04 1 year ago

    Can I upload a floor plan to create scenery for every angle of visualization needed? The views have to match from angle to angle and be consistent with the reality around them. Give it a few years, and you'll just place points on a 3D model to do so: keywords for every surface and a hierarchy for the post-production look. From 3D to prompts, to avoid fine-tuning in specific programs you may not understand.

  • @gelione
    @gelione 1 year ago

    Superb.

    • @designinput
      @designinput 1 year ago

      Hey Berk, thanks a lot for the feedback! ❤

  • @deborasouza3897
    @deborasouza3897 3 months ago

    Is this AI paid or free? Can I make renders online, or do I need to download a program?
    Thanks 😊

  • @jjrendering
    @jjrendering 1 year ago

    Hi there! Amazing info. I've been trying this for the past few days. At first I had a problem with CUDA and VRAM. I thought it was because of my GPU (I have an Nvidia GTX 1050 with 4GB), so I made a few adjustments following another video I'd seen about this (adding medvram or xformers), but they usually change the results from the AI a bit.
    Did you have any problems with CUDA when you tried to generate images? Is there a way to solve this without changing too many parameters?
    Thx a lot for the info!

    • @designinput
      @designinput 1 year ago

      Hey, thanks for the comment! I have a GPU with 6GB of VRAM, so I had issues with that too. As far as I know, xformers can change the result slightly, but I had better results with just medvram or lowvram. They use less VRAM but increase the generation time.

    • @jjrendering
      @jjrendering 1 year ago +1

      @@designinput That's right, I did that too. Testing some results, the best ones came from using only medvram. I've also seen another option called "Token Merging", but that's for when the other things don't work (xformers, medvram, or lowvram).
      Thx a lot again!
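
      For reference, the diffusers library exposes rough equivalents of the webui's --medvram/--lowvram/--xformers flags; a minimal sketch, with an illustrative model ID and prompt:

          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          )
          # Trades speed for lower peak VRAM, similar in spirit to --medvram:
          pipe.enable_attention_slicing()
          # Keeps idle submodules in CPU RAM (requires the accelerate package);
          # note that this replaces the usual .to("cuda") call:
          pipe.enable_model_cpu_offload()
          # Optional, needs the xformers package installed:
          # pipe.enable_xformers_memory_efficient_attention()

          image = pipe("cozy living room, photorealistic",
                       num_inference_steps=25).images[0]
          image.save("lowvram_render.png")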

  • @ovidiupatraus-ub8uq
    @ovidiupatraus-ub8uq 1 year ago +3

    Hello, my problem with this is that I can't find scribble when I press preprocessor, and my generated images are very different from the sketch I upload. Can you help me with that, please? I appreciate your work.

    • @jolopukkii
      @jolopukkii 1 year ago

      I also have that problem! The images it generates are very different (different shapes, window sizes, roof angles, etc.). I also have Realistic Vision V1.4 and ControlNet with MLSD on... but the results are far from what is shown in the video.

  • @michelearchitecturestudent1938
    @michelearchitecturestudent1938 1 year ago

    Great video! I have a question... how do I activate ControlNet in the text-to-image prompt? I don't see this option with my realistic_vision_1.4

    • @designinput
      @designinput 1 year ago +2

      Hi Michele, thanks for your comment. You need to download the ControlNet models additionally; you can find them here: huggingface.co/lllyasviel/ControlNet/tree/main/models
      I will upload a step-by-step tutorial about the whole process soon, hope that will be helpful for you.

    • @michelearchitecturestudent1938
      @michelearchitecturestudent1938 1 year ago

      @@designinput thanks for the reply. I found the video... but I still have problems.

  • @moizzasaeed5132
    @moizzasaeed5132 1 year ago

    I can't figure out how to install it. When I open the webui-user batch file, the console tells me to press any key to continue, and when I do, it just closes the window. I have restarted the PC; it's still not working properly.

  • @user-ik2to2hu3y
    @user-ik2to2hu3y 11 months ago +1

    Thank you!

    • @designinput
      @designinput 10 months ago +1

      :)

  • @yuyuyu9948
    @yuyuyu9948 1 year ago

    Hi, thanks for your video! Quick question: which LoRA model did you use, and where can I download it?

    • @designinput
      @designinput 1 year ago +1

      Hey, thank you! I used the Realistic Vision V2.0 model together with epi_noiseoffset. You can find the links here:
      civitai.com/models/4201/realistic-vision-v20
      civitai.com/models/13941/epinoiseoffset

    • @yuyuyu9948
      @yuyuyu9948 1 year ago

      @@designinput Thank you so much! Really appreciate it!
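
      For anyone wondering how a LoRA like epi_noiseoffset is actually applied: in the AUTOMATIC1111 webui it is triggered from the prompt itself with the <lora:filename:weight> syntax. An illustrative example (the exact file name depends on what you downloaded from civitai):

          a modern living room interior, dramatic lighting, photorealistic,
          dslr, 8k <lora:epi_noiseoffset2:0.5>

      The trailing number (0.5 here) scales how strongly the LoRA influences the result.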

  • @trevorpearson1702
    @trevorpearson1702 3 months ago

    How can I convert a 2D DWG file into a 3D render using AI?

  • @wangshuyen
    @wangshuyen 1 year ago +9

    Great video. You should do one with the same sketches but using Midjourney, as a comparison, please.

    • @designinput
      @designinput 1 year ago +3

      Hi, thanks for your lovely comment and suggestion! I am currently working on that, I will upload a video about it soon!

  • @atlasmimarlik
    @atlasmimarlik 1 year ago

    Hi dude, thanks for sharing ❤

    • @designinput
      @designinput 1 year ago +1

      Hey, thank you for the feedback ❤ Happy to hear that you liked it!

    • @atlasmimarlik
      @atlasmimarlik 1 year ago

      @@designinput Where are you from?

  • @NMPrecedent
    @NMPrecedent 1 year ago

    Is there a way for it to reference real-world materials? For example, if I provide a link for a backsplash, can it use that?

    • @designinput
      @designinput 1 year ago +1

      Hey, unfortunately, not really :/ You can primarily describe it with text; additionally, you can add similar textures to your sketch to mimic a similar material.
      Thanks for your comment!

  • @Amir_Ferdos
    @Amir_Ferdos 1 year ago

    thank you 🙏🙏🙏🙏🙏🙏

    • @designinput
      @designinput 1 year ago

      Thank you, glad that you liked it! ❤

  • @7ckngsane354
    @7ckngsane354 1 year ago +2

    This is amazing. I have a question: what does the <lora:…> tag in your keywords mean? And what does dslr mean as well? Much appreciated!

    • @designinput
      @designinput 1 year ago +1

      Hey, thanks for your nice comment! The <lora:…> tag loads an additional LoRA model to improve the overall quality of the image, but it is not necessary to use it. You can learn more about it here: civitai.com/models/13941/epinoiseoffset

    • @7ckngsane354
      @7ckngsane354 1 year ago

      @@designinput Thank you! Increasing the image quality is an important task for me. Could you be so kind as to explain the meaning of "dslr" in your keywords?

    • @designinput
      @designinput 1 year ago +1

      @@7ckngsane354 Hey, "dslr" refers to DSLR cameras. It is a common keyword in Stable Diffusion prompting, but it is hard to judge the effect of a keyword like this on overall image quality. Even though it can sometimes help, I don't think it has a huge impact. Feel free to experiment with and without it to see the difference, and share the results with us :)
      You are very welcome ❤

    • @7ckngsane354
      @7ckngsane354 1 year ago

      @@designinput 👍

  • @gergelybodnar6002
    @gergelybodnar6002 1 year ago

    Hi, everything is fine, but the Model dropdown under ControlNet says I have none. Where do I get the ones you have?

    • @designinput
      @designinput 1 year ago +1

      Hi, you need to download the ControlNet models separately and then put them in the ControlNet folder: C:\stable-diffusion-webui\models\ControlNet
      You can find all the models here:
      huggingface.co/lllyasviel/ControlNet/tree/main/models
      You don't need all of them; if you want to follow this video, you can download only the scribble model, but feel free to experiment with all of them :)
      Thanks for your comments!

  • @phgduo3256
    @phgduo3256 1 year ago

    Hi there, thanks for the inspiring tutorial! What does the "<lora:…>" tag mean?

    • @designinput
      @designinput 1 year ago +1

      Hey, thanks a lot for your comment, much appreciated

  • @user-kt4kh8he7e
    @user-kt4kh8he7e 9 months ago

    Directly from SketchUp to AI, for testing different looks.

  • @user-rf2so1fv2r
    @user-rf2so1fv2r 1 year ago

    Is there any way we can use the API to create our own app that does this in a more "one click" kind of way with the correct prompts?

    • @designinput
      @designinput 9 months ago

      Hey, there are many web applications that do that right now. You can get the API directly from Stability AI, or just install it on a cloud computing service (like AWS) and run it there.
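
      As a sketch of the DIY route: the AUTOMATIC1111 webui itself exposes a local REST API when started with the --api flag, which a small "one-click" app could call. The prompt and settings below are illustrative:

          import base64
          import requests

          payload = {
              "prompt": "photorealistic render of a modern house, dslr, 8k",
              "negative_prompt": "sketch, cartoon, blurry",
              "steps": 25,
              "width": 512,
              "height": 512,
          }
          # Default local endpoint of a webui launched with --api.
          r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img",
                            json=payload, timeout=300)
          r.raise_for_status()

          # The API returns images as base64-encoded strings.
          with open("render.png", "wb") as f:
              f.write(base64.b64decode(r.json()["images"][0]))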

  • @METTI1986LA
    @METTI1986LA 1 year ago +2

    I actually don't want random results in my designs... and it's really not that hard to texture or model a 3D scene... but it may be useful for finding ideas.

    • @williambrady-is8bd
      @williambrady-is8bd 1 year ago +8

      But with the technology available, people will start using it, and it may become the industry norm to have this quality of rendering early in the design stage. It might become less about what we want and more about what the client/market expects. We are already facing similar things, with clients expecting renders early on so they can visualise the thinking. They don't understand sketches and drawings like we do. The majority don't actually understand the work we do beyond what colour the kitchen bench should be, which is often how they want to express some control/knowledge in the design process. I also would never be able to produce so many variations at this level of detail in the time it would take to sketch five solid ideas, model them in SketchUp or Rhino, and then render them while dealing with V-Ray crashing all the time or too many trees and details slowing things down. I also think this will change architecture schools dramatically in terms of pin-ups. Students who don't have that critical and analytical depth to their thinking will flock to this aesthetic-driven approach to ideation.

  • @aceheart5828
    @aceheart5828 1 year ago +28

    So this needs to be developed with an interactive user interface.
    The word prompts need to become labels. Architects want to be able to draw lines from objects and label them, feeding specific information into the AI generation.
    The architect does not care about multiple options as much as he cares about creating the specific option he desires.
    He must be enabled through the interface to engage in an interactive back-and-forth: erasing parts and redrawing them, developing parts of the drawing, adding more specific labels... all in an endeavour to produce a vision as close as possible to what he sees in his mind's eye.
    This is of utmost importance.
    All said and done, on a positive note, this is the only sphere useful to architects, one I think they may use and be willing to pay for, that I have seen thus far from all the AI-related attempts.
    It would be idiotic not to take it forward to fruition.

    • @designinput
      @designinput 1 year ago +5

      Hey, you are right, and we will soon see more user-friendly interfaces integrated with other software for sure.
      I totally agree; in the case of architecture, accuracy and quality are way more important than the number of alternatives you have. But even a couple of months ago, having this much control over the whole generation process was impossible. And it is getting better every day. I am sure you will be able to fine-tune your final result very soon.
      Thank you very much for your comment!

    • @Constantinesis
      @Constantinesis 1 year ago +1

      I agree with you. Some of the drawing/erasing features of DALL-E would be amazing! You can already use DALL-E to replace parts of an image, but you can't use it for the entire image2image process.

    • @StringBanger
      @StringBanger 1 year ago +1

      Before you know it, AI could take over the entire AEC industry. It could be smart enough to pull code sets from UpCodes, NFPA, etc., and all relevant code models applicable by state and jurisdiction, to construct an entire BIM model that is fully code-compliant based on best engineering practices, all while creating multiple models for clients within minutes.

    • @user-cn9kk8bj4e
      @user-cn9kk8bj4e 1 year ago +1

      I certainly agree with you that text prompts need to become labels. Great idea!

  • @danielummenhofer6120
    @danielummenhofer6120 1 year ago +1

    I followed your steps, but for some reason it won't use the image/sketch and makes a completely new image instead. How do you get Stable Diffusion to use the sketch as the base to create the CGI on?

    • @designinput
      @designinput 1 year ago

      Hey Daniel, thanks for your comment. It is probably related to ControlNet. Did you enable it before you generated the new image?

    • @danielummenhofer6120
      @danielummenhofer6120 1 year ago

      @@designinput Thank you for your reply. Yes, after reading through the comments I saw someone mention turning it on, and I did. It still didn't solve the issue. I'm following your new video now to see if that works.

  • @hyalimy3150
    @hyalimy3150 8 months ago +1

    Well done, the videos are really nice and informative. (I think you're Turkish; I thought "wow, his English is so clear, I understood everything," and then I realized.) I guess there won't be Turkish content at this point :)

    • @designinput
      @designinput 8 months ago +1

      🧡🧡

  • @maiyadagamal8142
    @maiyadagamal8142 9 months ago

    Can you give examples of input text that works?

    • @designinput
      @designinput 9 months ago

      Hey, there is no special formula for the text input. I mostly try to follow the structure suggested for the checkpoint I am using, but you can freely describe the scene you would like to create in your prompt.
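
      For illustration, a typical prompt/negative-prompt pair for an interior render might look something like this (purely an example, not taken from the video):

          Prompt: interior of a modern living room, green sofa, large windows,
          natural light, wooden floor, photorealistic, dslr, 8k, high detail

          Negative prompt: sketch, cartoon, painting, blurry, low quality,
          distorted geometry, watermark, text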

  • @dalegas76
    @dalegas76 1 year ago

    I have not seen any mention anywhere of the resolution of the rendered images. How big, or what size, can you get from this? Thanks 😊

    • @designinput
      @designinput 1 year ago +1

      Hi, by default it generates 512x512, but you can enter custom values up to 2048x2048. I think that's the limit.
      Thanks for your comment :)
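
      A practical note: SD v1.x models were trained around 512px, so asking for very large sizes directly tends to duplicate objects. A common workaround is to generate small and then upscale with an img2img pass; a minimal diffusers sketch, with illustrative model ID, prompt, and sizes:

          import torch
          from diffusers import (StableDiffusionImg2ImgPipeline,
                                 StableDiffusionPipeline)

          base = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")
          small = base("modern kitchen interior, photorealistic",
                       width=512, height=512).images[0]

          # Reuse the loaded weights for an img2img upscaling pass.
          # Widths and heights must be multiples of 8.
          img2img = StableDiffusionImg2ImgPipeline(**base.components).to("cuda")
          large = img2img("modern kitchen interior, photorealistic",
                          image=small.resize((1024, 1024)),
                          strength=0.4).images[0]
          large.save("render_1024.png")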

    • @dalegas76
      @dalegas76 1 year ago

      @@designinput Thanks for your answer... I got the info I needed. These AI tools are developing fast; I believe better ones, more accurate for the architecture branch, will be developed soon. 😊👍

    • @designinput
      @designinput 1 year ago +1

      @@dalegas76 you are very welcome, that's great! Totally agree, I believe it will be very soon :)

  • @jojustchilling
    @jojustchilling 1 year ago

    I’m so happy. Omg

    • @mmkamalraj8931
      @mmkamalraj8931 1 year ago

      Nice room decor video

    • @designinput
      @designinput 1 year ago +1

      Hey, thanks for your comment :) Happy to hear that you liked it!

  • @votrongkhiem2777
    @votrongkhiem2777 1 year ago

    Hi, is the Stable Diffusion checkpoint important for getting that result? I tried using the same settings with the same sketch (your sketch) but couldn't get the same result.

    • @designinput
      @designinput 1 year ago

      Hey, yes, which model you use has a significant impact on the final image. My current favorite model is Realistic Vision V2.0. You can download it from the link in the video description.
      Thanks for your comment!

    • @votrongkhiem2777
      @votrongkhiem2777 1 year ago

      @@designinput I tried; however, the result still didn't follow your sketch. I am using the Google Colab one, though.

  • @mlee9049
    @mlee9049 10 months ago

    Hi, do you know of any A.I. that allows you to change the camera views for interiors and exteriors?

    • @designinput
      @designinput 9 months ago +1

      Hey, unfortunately it's not possible yet, so some manual work is needed. But maybe in the near future, why not?

    • @mlee9049
      @mlee9049 9 months ago

      @@designinput Thank you for your reply. That will be a game changer.

    • @designinput
      @designinput 9 months ago

      @@mlee9049 Absolutely!

  • @cador1624
    @cador1624 1 year ago

    Thanks for sharing...
    I have a problem making my designs as realistic as possible because I don't have the budget to buy a high-performance PC (I can't even open D5 Render, and I get 0 to 5 fps when using Lumion). If only I can master this and somehow make it render my design images, it will be really helpful for my future!

    • @designinput
      @designinput 1 year ago

      Hey, you are very welcome; thanks for your comment! Ah, I feel your pain... Well then, local Stable Diffusion is not a good option in this case, but you can try cloud-based platforms to run Stable Diffusion; for just a couple of bucks, you can use it without any issues. I plan to make a video sharing some options for these platforms.

    • @cador1624
      @cador1624 1 year ago

      @@designinput Ah, thanks for your insight, I'm going to look into that! This video gives me a glimpse of hope that maybe a free AI can render our designs into realistic images and let us adjust the materials/colours too!! I think it will hit the expensive rendering software and its very-high-spec PCs hard, too! 🤣

  • @B-water
    @B-water 1 year ago +1

    Ammmmmaaaaaaaazzzzing

  • @marcinooooo
    @marcinooooo 11 months ago +1

    Hey, thank you soooo much for this video! Your results are amazing, but mine... they s*ck haha...
    I think the problem is that control_sd15_scribble does not load for me. Can you give links to all of the files (models) we need to download? I am using RunPod; maybe you could help me with that?

    • @marcinooooo
      @marcinooooo 11 months ago

      Hey, so I see I have a problem in the "preprocessor x model" area, since I don't see "...Scribble" but this: "control_v11p_sd15_canny [d14c016b]".
      I have uploaded it to workspace/stable-diffusion-webui/models/Stable-diffusion/control_sd15_scribble.pth
      Or should I put it somewhere else?
      Thank you

    • @designinput
      @designinput 9 months ago

      Hey, sorry for the late response :( The ControlNet models belong under workspace/stable-diffusion-webui/models/ControlNet rather than models/Stable-diffusion, so try moving control_sd15_scribble.pth there. Let me know if it still doesn't work, and we can take a look together.

  • @pilardicio7266
    @pilardicio7266 1 year ago

    Hi there! I have a Mac. How can I install Stable Diffusion?

    • @designinput
      @designinput 1 year ago

      Hey, unfortunately I don't have much experience with using it on a Mac, but you can follow this tutorial to install it. Hopefully it will help, thanks :)
      th-cam.com/video/Jh-clc4jEvk/w-d-xo.html

  • @crisislab
    @crisislab 1 year ago

    Forgive my ignorance: how do you install ControlNet?

    • @designinput
      @designinput 1 year ago

      Hey, thanks for your comment! You must download them separately and place them in the ControlNet folder under the models folder. You can download them here: huggingface.co/lllyasviel/ControlNet/tree/main/models
      Also, you can check this video to use it easily: Use Stable Diffusion & ControlNet in 6 Clicks For FREE: th-cam.com/video/Uq9N0nqUYqc/w-d-xo.html

  • @victorfeinstein1815
    @victorfeinstein1815 1 year ago +1

    Thank you!

  • @marcschipperheyn4526
    @marcschipperheyn4526 1 year ago +5

    I would like to see a video that uses both a floor plan and 2D designs, for example of a kitchen from the front. It would be interesting to see whether people like me, with limited drawing and no 3D skills, could use tools like Figma to create 2D arrangements of cabinets and floor plans to produce effective renderings of the environment.

    • @designinput
      @designinput 1 year ago +3

      Hi, thanks for your comment :) There is no tool yet that allows us to use both floor plans and side views as input to create 3D models or renders. But the whole industry is moving and improving incredibly fast, and I am pretty sure someone is working on this right now :)
      When I see something related, I will definitely share it!

    • @amagro9495
      @amagro9495 1 year ago +1

      @@designinput Congrats on the video. Do you know if it is possible to generate, from a single image/design, several others with different perspectives?

    • @designinput
      @designinput 1 year ago +1

      @@amagro9495 Thanks for your comment! Hmm, good question. Changing the perspective of the same space can be challenging if you are only using text-to-image or image-to-image modes. But if you have a basic 3D model to work from, you can manage it. I just uploaded a video about creating renders from a 3D model; feel free to check that out.
      But I will definitely test and experiment with perspective changes!

    • @fervo1991
      @fervo1991 1 year ago

      @@designinput I think he means using a floor plan in SD to generate a "3D rendering".

  • @moodoo3001
    @moodoo3001 1 year ago +1

    I can't find the scribble preprocessor even though I downloaded the scribble model; other scribble preprocessors like scribble_hed and pidinet are available. So what is the problem?

    • @designinput
      @designinput 1 year ago +1

      Hey, if you upload your drawing to ControlNet, you don't need to use a preprocessor. Just choose "none" for the preprocessor and the "scribble" model. Thanks for your comment!

    • @moodoo3001
      @moodoo3001 1 year ago +1

      @@designinput Ok 👍 thanks for your help

  • @tusharpandey858
    @tusharpandey858 1 year ago

    Can I install Stable Diffusion on my home PC? It has an RTX 2060 graphics card and a 10th-gen i7 with 16 GB of RAM; will it work?

    • @designinput
      @designinput 1 year ago

      Hey, I believe you can. It mostly depends on your GPU and the amount of VRAM it has. I am using an RTX 3060 with 6GB of VRAM. So feel free to test it out.
      If you can't, you can check out this video on using it on Google Colab: th-cam.com/video/Uq9N0nqUYqc/w-d-xo.html&lc=Ugxw1pFnOcldtEnPEAt4AaABAg

  • @nevergiveuptrader
    @nevergiveuptrader 1 year ago +1

    Can you tell me how to install the "Realistic Vision V1.4" or "Realistic Vision V2.0" model after downloading it? Thank you ^^

    • @designinput
      @designinput 1 year ago +2

      Hey, sure; after you download the Realistic Vision model, all you need to do is drop the file into the "C:\stable-diffusion-webui\models\Stable-diffusion" folder. After that, when you start Stable Diffusion again, you will find it in the available models menu.
      Let me know if you need any help. Thank you :)
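
      For reference, the relevant part of the webui folder tree ends up looking like this (the file names are examples of downloads, not required names):

          C:\stable-diffusion-webui\
            models\
              Stable-diffusion\
                realisticVisionV20.safetensors   <- main checkpoints go here
              ControlNet\
                control_sd15_scribble.pth        <- ControlNet models go here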

    • @nevergiveuptrader
      @nevergiveuptrader 1 year ago

      @@designinput Yes, I did that, thank you so muchhhhh

    • @designinput
      @designinput 1 year ago

      @@nevergiveuptrader great, happy to hear it worked. You are very welcome :)

    • @robwest1830
      @robwest1830 1 year ago

      I don't even get how to download it :D Please tell me.

    • @designinput
      @designinput 1 year ago

      Hi @@robwest1830, you can find all the necessary resources in the link in the video description.
      For installation, I will share a quick tutorial, but until then, feel free to follow this one:
      th-cam.com/video/hnJh1tk1DQM/w-d-xo.html
      He clearly explains everything you need to install to start using it.

  • @epelfeld
    @epelfeld 11 months ago

    Is there any difference between the sketch and scribble models?

    • @designinput
      @designinput 9 months ago

      Hi, no, there is only one model for sketch inputs, but with different preprocessor options. However, if you upload a sketch directly, you don't need to use any preprocessor.

  • @user-qi3bm8nm4r
    @user-qi3bm8nm4r 3 months ago

    I still think it's hard to control and fine-tune the AI image; it's still better to handle it with 3D software.

  • @Darkcrimefiles9
    @Darkcrimefiles9 1 year ago

    Hey, I need your help. Could you please render one image for my college project? I don't have a laptop.

    • @designinput
      @designinput 1 year ago +1

      Hey, thanks for the comments! How can I help? Let me know please, thanks :)

    • @Darkcrimefiles9
      @Darkcrimefiles9 1 year ago

      I can send you one sketch; could you please convert it into a colour image?

    • @Darkcrimefiles9
      @Darkcrimefiles9 1 year ago

      Please reply as soon as possible.

  • @michelearchitecturestudent1938
    @michelearchitecturestudent1938 1 year ago

    I found out how to install ControlNet, but I can only select the preprocessor, not the model, in the tab. In the video you have multiple options... my only one is "none".
    Do you know how to fix it?

    • @designinput
      @designinput 1 year ago

      Hey, did you download the ControlNet models and place them in the ControlNet folder under the models folder?

    • @michelearchitecturestudent1938
      @michelearchitecturestudent1938 1 year ago

      @@designinput thanks for the reply again. Now it works ❤️

    • @designinput
      @designinput 1 year ago

      @@michelearchitecturestudent1938 you are very welcome ❤

  • @DonVitoCS2workshop
    @DonVitoCS2workshop 1 year ago

    Until we're able to change specific materials on specific objects, I don't see a huge point in this.
    The sketch would be enough to let your agency or even the client imagine the result, and the AI render could be very misleading compared to a handmade render of the sketch.
    Just a couple of papers down the line, though, this will be the new process for how it's done.

  • @jp5862
    @jp5862 1 year ago

    I am an architectural designer. I've been doing this for 15 years. I don't think I need it anymore.

  • @alpahetmk
    @alpahetmk 1 year ago +1

    Nice! By the way, are you Turkish?

  • @pedrodeelizalde7812
    @pedrodeelizalde7812 1 year ago

    Hi, thanks, but how do I install Realistic Vision V2.0?

    • @designinput
      @designinput 1 year ago +1

      Hi, you can download it from here:
      civitai.com/models/4201/realistic-vision-v20

  • @nguyenthithuhieu9501
    @nguyenthithuhieu9501 1 year ago

    Why is my ControlNet not showing?

  • @kebsriad
    @kebsriad 1 year ago

    What is AI, please?

  • @igorstojanov9030
    @igorstojanov9030 1 year ago

    Does any 3D artist use this tool in everyday work? What do you think about the future of AI in 3D work?

    • @designinput
      @designinput 1 year ago +1

      Hey, I am sure many 3D artists use these new tools, too, but creating perfect 3D models is not quite possible yet. But there are lots of developments in this space, so indeed, better tools to create 3D models will be available soon.
      Thanks for your comment!

  • @MrMadvillan
    @MrMadvillan 4 months ago

    There's nothing better than showing a client a completely finished project right at the beginning, so that you have zero wiggle room to change anything and their expectations are sky-high. Haha, an amazing way to f yourself from the beginning.

  • @cr4723
    @cr4723 1 year ago

    I tried it. Constructing/modeling takes up most of the time, while assigning the materials in the render program is quick. With the AI you have to try a lot of prompts and generate a lot of images. That takes longer, and the quality is inferior.

    • @designinput
      @designinput 9 months ago

      Hey, if the goal is to create a final-quality render, you are absolutely right: it can easily become more time-consuming than actually modeling everything and creating the renders. But if the goal is to create something more conceptual for the early phases of the design process, it can be really beneficial and time-saving.

  • @Ssquire11
    @Ssquire11 6 months ago

    The scribble model isn't in my dropdown.

  • @tuynsgoing789
    @tuynsgoing789 1 year ago

    Why does it keep generating a different image, not the same as my uploaded one?

    • @tuynsgoing789
      @tuynsgoing789 1 year ago

      even though I checked "Enable" on ControlNet

    • @designinput
      @designinput 1 year ago

      Hey, what exactly do you mean by different images?
      It is possible to have a certain level of control over the process with ControlNet, but only up to a point. Even if you keep the seed number the same, the final images will probably be very different from each other.
      I am sure we will have more control over it soon with all the new developments, but it is not quite possible to generate exactly the same image multiple times.
      Thanks for your comment!

  • @ABDUCTOLOGY
    @ABDUCTOLOGY 1 year ago +1

    Is there a method to transform photos into drawings?

    • @designinput
      @designinput 1 year ago

      Hi, hmm, good question. I haven't tested that option much yet, but I definitely will now. What kind of drawing do you mean? Like a sketch, or more technical CAD-style drawings?

    • @ABDUCTOLOGY
      @ABDUCTOLOGY 1 year ago

      @@designinput like a sketch

    • @teeambird2079
      @teeambird2079 1 year ago

      @@ABDUCTOLOGY Photoshop has had an effect that does that for a while. It might be called "cartoonize" or something like that.

  • @vaskodrogriski2697
    @vaskodrogriski2697 1 year ago

    How the hell did you get Stable Diffusion to install? After I saw your video, I watched dozens of videos with instructions on how to install it, but not one of them has worked. I've installed Git, Python, and everything that's instructed, but nothing seems to work.

    • @designinput
      @designinput 1 year ago

      Hey, yes, I used a similar process. What is the problem for you? What error do you get? I will upload a new video today showing how you can use it without downloading it to your computer; I hope that can help.

    • @vaskodrogriski2697
      @vaskodrogriski2697 1 year ago

      @@designinput Hi, essentially I run into problems when I launch the webui-user file: it tells me it can't install torch. I therefore cannot get past that point to get the URL.

  • @1insp3ru16
    @1insp3ru16 1 year ago +1

    Will a gaming laptop be able to run it?

    • @designinput
      @designinput 1 year ago +1

      It mostly depends on your GPU and the amount of VRAM it has. But you don't need anything crazy; my laptop has an RTX 3060 with 6GB of VRAM, and I can use it without any issues.

    • @1insp3ru16
      @1insp3ru16 1 year ago

      @@designinput I have an ASUS ROG laptop with a Ryzen 9, RX 6800 X graphics, and 16 GB of RAM; the graphics are roughly equivalent to an RTX 3080.

  • @schenier
    @schenier 1 year ago +1

    Your image is already a scribble, so you don't need to set the preprocessor to scribble; it can be left at none. Use the preprocessor if you want to change your image into a scribble.

    • @designinput
      @designinput 1 year ago

      Yes, you are absolutely right. I didn't realize that at that time. Thank you for letting us know about it!