This Will Change EVERYTHING in Architectural Visualization FOREVER!

  • Published on Sep 16, 2024

Comments • 162

  • @nickp8094 · 1 year ago +5

    I think it's really good and I can see the evolution of it in my head as a visualiser. Feels like one day you will really be able to custom-load pre-written scripts that perform very specific functions, tailored to the experience of working for a client. Basically, visualisation will become a bit like computer programming, not necessarily quicker or easier.

  • @ilaydakaratas1957 · 1 year ago +3

    Such useful tools!! I will definitely try it out! Thank you for the video!! Also, that was an interesting pavilion model.

    • @designinput · 1 year ago

      Hey there, thanks for your support and lovely comment ❤❤ I hope you liked the pavilion :)

  • @sherifamr4160 · 1 year ago +6

    Love the way you explained it: to the point and easy to follow. I do have a question, hopefully you will read my comment: if you already have materials on your pavilion, would that somehow steer the rendering process toward what we want, acting as more parameters? ... I hope I am making sense in my comment. Again, thank you so much; I love that you are sharing your knowledge with us, it shows how amazing you are as a person.

    • @designinput · 1 year ago

      Hey, thanks a lot for your lovely comment! Unfortunately, it is not possible to use materials as a parameter at the moment, but I am sure soon we will be able to have more control over this workflow.
      Thanks a lot for your kind words

  • @peterpanic7019 · 1 year ago +3

    Thanks for your great-quality videos. I just watched the latest ones about AI and image generation; can't wait to try them out. Hope your channel grows :)

    • @designinput · 1 year ago

      Hey, thank you so much for your support ❤ Please let us know what you think after you try it out :)

    • @mockingbird1128 · 1 year ago

      What did you watch? I'm new to this.

    • @armannasr3681 · 1 year ago

      @@mockingbird1128 try stable diffusion + controlnet

  • @B-water · 1 year ago +1

    A gift from heaven...a million thanks 😃😃😃

    • @designinput · 1 year ago

      Thanks for your great comments!

  • @phgduo3256 · 1 year ago +1

    I have become a fan of your work. I will spend the next holidays of this month on these AI series. Thanks!

    • @designinput · 1 year ago

      Hi, thanks a lot for your lovely comment and feedback

  • @reflections191 · 1 year ago +1

    Very well explained, thanks for the great video!

    • @designinput · 1 year ago

      Hey, thanks for your lovely comment!

  • @mkemaladro5942 · 5 months ago

    Very nice work, I'm a student trying to learn this and just stumbled on your video. It's constructive and informative, keep up the good work sir!!!

  • @niirceollae2 · 1 year ago

    Wow... that is insane. I have to try it now.

    • @designinput · 1 year ago

      Hey, thanks for your lovely comment! Please share your experiences after you try it out, and feel free to ask if you have any problems.

  • @pranayyalamuri3127 · 1 year ago +1

    Thanks for the content ❤

    • @designinput · 1 year ago

      Hey, thanks a lot for your great comment and support!

  • @NicoChin · 1 year ago +8

    Tell the client that the last picture is man-made, then tell them the same picture was created by AI. If the client's attitude does not change, then AI will really change the world.

  • @firatgunesbalci2743 · 1 year ago +5

    Great videos 👍🏻👍🏻👍🏻 Can you explain a SketchUp workflow as well?

  • @MDLEUA · 1 year ago

    Great tutorial, followed Ambrosini videos but I like this format more!

    • @designinput · 1 year ago

      Hey, thank you! Glad to hear that you liked it! Did you have a chance to try it?

  • @amazingsound63 · 1 year ago

    Scary for future job opportunities.

  • @HannesGrebin · 1 year ago

    Wizard! Thank you so much for your concise introduction and other videos. Just came here from the Parametric Architecture course of Arturo Tedeschi, who you might know (the Grasshopper guy).

    • @designinput · 1 year ago

      Hi, thanks a lot for your lovely comment and feedback

  • @JJSnel-uh3by · 1 year ago

    I love the setup but the voice is just too funny xD

  • @dkn822 · 1 year ago +1

    Thank you for all this amazing information and resources, I will definitely use this for my projects.
    Subscribed and eager to watch your upcoming videos! Keep it up!

    • @designinput · 1 year ago +1

      Hey, thanks a lot for your lovely comment and support! I am happy to hear that you liked it! Please share your experiences with me once you try it out!

  • @tatianagavrilova2252 · 1 year ago

    It looks like magic! Thanks a lot!

    • @designinput · 1 year ago

      Hi, thanks a lot, glad you liked it! You are very welcome!

  • @Masoud.Ansari · 1 year ago

    Thank you for sharing, this is awesome 👌

    • @designinput · 1 year ago +1

      Hey, thanks a lot! Glad to hear that you liked it :)

    • @Masoud.Ansari · 1 year ago

      @Design Input you're welcome, bro

  • @ilhan1936 · 1 year ago

    That's really great, thanks for the video! Eline sağlık arkadaşım :) (Turkish: well done, my friend)

    • @designinput · 1 year ago

      Hi Ilhan, thanks a lot for your lovely comment :)) ❤❤

  • @emekachime1089 · 1 year ago

    Looking forward to your next video on classical render vs. AI render. 👍

    • @designinput · 1 year ago +1

      Hey, thanks a lot for your lovely comment! It will be out soon :)

  • @borchzhang2211 · 1 year ago +1

    How do you handle parameter settings for indoor views so the output better aligns with the model?

    • @designinput · 1 year ago

      Hey, thanks for your comment! For indoor views, you can try the Depth Model too. Is there any specific parameter you want to ask? Maybe I can help better with that one :)

  • @dianaallaham2801 · 9 months ago

    Since your video there has been an update to the Ambrosinus, and for some reason I cannot get the port to be available. Do you happen to know what inputs should go into the LaunchSD as it has many more inputs now?

  • @zafiriszafiropoulos5346 · 1 year ago +1

    Hi there. I only have Rhino 6, and the Ambrosinus tool is only available for Rhino 7. Is there another way?

  • @william0916 · 1 year ago

    Thank you for sharing this fabulous workflow!! I am about to try it out, and I'm wondering if there are any newer extensions and developments you would suggest we use (since this video is from April, not sure if there's anything new in these 3 months!).
    Thank you in advance and have a nice day :)

    • @designinput · 1 year ago +1

      Hey, thanks a lot for the feedback! Of course, there are lots of new developments happening every day; I am trying to stay updated as much as I can and share what I learn. In terms of this specific workflow, there are new major updates for both Stable Diffusion and the Grasshopper extensions, but both should still work fine!

  • @MertMert-g7c · 6 months ago

    I get a "no data" problem when I connect the 2.24 LaunchSD component to the panel; how can I solve it?

  • @DannoHung · 1 year ago

    Backing the rendered image out to a textured and lit scene is the next step probably, hah!

  • @infographie · 1 year ago

    Excellent

  • @moaazaldahan1175 · 1 year ago

    Thank you very much!

    • @designinput · 1 year ago

      Hey, you are very welcome!

  • @mrezaforoozandeh520 · 9 months ago

    Thanks, but when I click the Start button, webui-user.bat won't run with --api. I edited the .bat file, but after clicking Start it won't run it that way and changes the .bat file back to the original.

  • @user-ae5pa · 1 year ago +1

    soooooo good

    • @designinput · 1 year ago

      Hi, thanks a lot for your great comment! ❤

  • @mukondeleliratshilavhi5634 · 1 year ago

    I think it's a great tool for rapid prototyping with fewer images. It unlocks more possibilities and gives us and the client more variety with less time and energy. The biggest hope is that we arrive at a final image we might not even have thought possible before.
    But for a final image, I think the old method is still king. Who knows, this time next year it might be a different story.
    Will I use it for my next project? Oh yes, but the Blender version; it's always best to get in early with new technology.

    • @designinput · 1 year ago

      Hey, thanks for your comment; I totally agree! Hmm, that's interesting; why do you prefer Blender specifically?

    • @mukondeleliratshilavhi5634 · 1 year ago

      @@designinput There are a few reasons.
      1) Being open source, it was easy to access without restrictions and to invest time and resources in it. I'm a freelancer/business owner, so it is important that I run as lean as possible.
      2) Rapid development: it can do a lot of things and it's ever expanding its reach. I'm able to complete a project in one piece of software without having to hop to another. Yes, it's not as strong as Rhino or Max, but it gives great quality.
      3) The community: they drive the development and education of the software, so it's sort of owned by us. The amount of tutorials, add-ons and stores available.
      There is more, but let me park here.

  • @firatgunesbalci2743 · 1 year ago +1

    When I first saw the teaser, I thought that you used ArkoAi

    • @designinput · 1 year ago +1

      Hey, haha, yes, that's the most "popular" one nowadays, but I feel like you don't have much control over it.
      I will share a video soon to compare different AI Render alternatives. Thanks for your comments!

  • @adel.419 · 1 year ago

    I have followed everything in the video, but when I tried my own model and hit the Generate button, the AleNG-Ioc battery turned red and doesn't generate anything, and the panel connected to the info output says "No data was collected", even though the viewport appears in the LB image viewer.

  • @lawrencenathan351 · 1 year ago

    Quick question: do I just add this on top of SketchUp? Or is there any simple tutorial I can follow on combining AI with SketchUp? Thanks.

    • @designinput · 1 year ago

      Hi, this workflow doesn't work with SketchUp at the moment, but you can try platforms like VerasAI. Thanks for your comment!

  • @azimbekibraev1249 · 5 months ago

    Selam aleykum, Omer! Ambrosinus has been updated and your sample GH file no longer works. Could you please share the updated version, if this workflow is still relevant? Thank you in advance.

  • @Albert_Riseal · 1 year ago

    Awesome! I like it, thanks. Please make a tutorial using Blender, if possible.

  • @shinndin · 1 year ago +1

    Amazing

    • @designinput · 1 year ago

      Hi Dina, thanks a lot for your excellent feedback ❤❤

  • @hopperblue934 · 1 year ago

    great bro💖💖💖

    • @designinput · 1 year ago

      Hi, thanks a lot for the lovely feedback

  • @motivizer5395 · 1 year ago

    Amazing video. Can you make a video about this process for SketchUp as well?

    • @designinput · 1 year ago

      Hi, thanks for your comment and suggestion! I will definitely try it out and share the results!

  • @soitalwaysgoes · 1 year ago +1

    Hello! I checked out your Instagram and I would die for a tutorial on how to do those veil textures you did!

    • @designinput · 1 year ago

      Hi, oh, thank you for your lovely feedback. Happy that you liked them ❤
      I created them with Midjourney v5. Sure, I will do a video about it soon!

  • @Peter-hn9yv · 1 year ago +1

    Does this workflow save the viewport and dimensions of the image?

    • @designinput · 1 year ago

      Hey, yes, it saves the image exactly in the viewport size and uses the same aspect ratio for the new image.
      Thanks for your comment!

  • @darkrider897 · 1 year ago

    Hi sir, I was stuck at 2:28 when you clicked on the administrator window. I tried to do it by right-clicking webui-user.bat, then clicking "Run as administrator". However, it just flashes and nothing happens. How do I solve the problem?

    • @designinput · 1 year ago

      Hey, you don't need to run the webui-user.bat file as administrator; you need to run Rhino as administrator. And make sure to add the --api parameter to the .bat file.
      If you can't start Stable Diffusion inside Grasshopper, you can just run it manually, and if you have the --api argument, it should automatically connect to the Grasshopper plugin.
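      For reference, a minimal webui-user.bat for the AUTOMATIC1111 web UI with the --api flag enabled looks roughly like this (a sketch based on the stock file layout; adjust paths and extra flags to your install):

      ```bat
      @echo off

      set PYTHON=
      set GIT=
      set VENV_DIR=
      rem --api exposes the HTTP API the Grasshopper plugin connects to
      set COMMANDLINE_ARGS=--api

      call webui.bat
      ```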

  • @arv3ryn · 1 year ago

    Great video! Also, what are your computer specs? I have a basic laptop and am wondering whether I can run this.

    • @designinput · 1 year ago +1

      Hey, thanks a lot for your lovely feedback! I am using a laptop with an RTX 3060 (6GB VRAM) and a 12th Gen Intel(R) Core(TM) i7-12700H CPU. Of course, for this process the most important one is the GPU. In a couple of days I will share another workflow showing how you can use Stable Diffusion without any computer.

  • @kedarundale972 · 1 year ago

    Thank you for the wonderful video.
    I had one question: everything in the script works perfectly on my computer, but when I connect a value list to Mode, I get an error. Do you know why this could be?
    Basically, the Mode input doesn't take anything other than 0, which is the T2I Basic. In my Stable Diffusion I do see the other models, but I am not sure what the error is. The same thing is happening with the SAMPLER MODEL; it does not take any input apart from Euler A. Any suggestions would be helpful. Thank you.

    • @designinput · 1 year ago

      Hey, thanks for your comment. I am not sure why you can't see the other modes. There was a new update to the ambrosinus-toolkit plugin since I published the video, maybe you should update it to work. I will check the file and upload an updated version soon. Let me know if you are still having problems with it. Thank you!

  • @Peter-hn9yv · 1 year ago

    I got an error in Grasshopper saying the index was out of range; have you encountered this issue before?

  • @METTI1986LA · 1 year ago

    It's actually good, but I'd rather have control over the textures and put them where I want them; it's really not that hard... Of course it takes a bit more time, but why would you need 1000 renders just to get overwhelmed by the choices you have?

  • @alexanderaggersbjerg5187 · 1 year ago

    Thanks for the great explanation! Got everything up and running:) One quick question, I am having issues working with the depth controlnet. I have downloaded the previous controlnet versions (aside from the new controlnet v1.1 versions) but the depth and canny masks are very bad quality. This is only an issue for me when I use controlnets in grasshopper. Any ideas what the problem may be?

    • @simongobel2709 · 1 year ago

      I have the same problem, unfortunately... any answer yet?

  • @ezzathakimi2201 · 1 year ago

    Please make a video on how to use it with 3ds Max + Corona.

  • @sirousghaffari9556 · 1 year ago

    Hello, thank you very much for your good lessons. In the 3rd minute of the tutorial, you say that you put the Grasshopper code in the description section, but unfortunately I can't find it. Could you guide me?

    • @designinput · 1 year ago +1

      Hi, thanks for the feedback! You can find all the resources mentioned in the video here: designinputstudio.com/this-will-change-everything-in-architectural-visualization-forever/
      And you can download the file here: www.notion.so/designinputs/AI-Render-Engine-Template-File-02d34b595f824ca6a9f1339470fb1387?pvs=4

  • @diegovazquezdesantos4667 · 1 year ago

    Thank you so much for the clear explanation. I tried to follow this video with the new update of Ambrosinus but I was not able to. When I installed v1.1.9, I was able to use your code, although at the output SeeOut (LA_SeeOut) an error occurs: "index was out of range". Any ideas on how to fix this error?

    • @designinput · 1 year ago

      Hey, thanks a lot! I think you just need to generate an image first; after that you will be able to see it and the error will disappear.

  • @youssefdaadoush8755 · 1 year ago

    Thanks a lot for the video, it's really incredible. I just have a question: I did everything exactly the same, and the generated results come out regardless of my base image. What could be the problem? Otherwise it works directly in the Stable Diffusion web window.

    • @designinput · 1 year ago

      Hey Youssef, thanks for your great comment! It looks like there is a problem with the ControlNet. Did you enable it?

  • @firatgunesbalci2743 · 1 year ago +1

    Hi, what is your computer hardware configuration?

    • @designinput · 1 year ago

      Hey Fırat, I am using a laptop with an RTX 3060 (6GB VRAM) and a 12th Gen Intel(R) Core(TM) i7-12700H CPU.

  • @oof1498 · 1 year ago

    Great! How about if I want to use the same material on the same place but in different perspective?

    • @designinput · 1 year ago

      Hey, thanks for your feedback ❤ You can keep the same seed number for the different views to have similar results. But still, it is not so easy to generate precisely the same materials and textures every time. If I figure out something for more consistent results, I will share it :)
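      As a sketch of what "keeping the seed" means against the AUTOMATIC1111 web UI API (the /sdapi/v1/txt2img endpoint and the seed field are real; the helper name, prompts and other values here are illustrative):

      ```python
      def build_txt2img_payload(prompt: str, seed: int,
                                width: int = 768, height: int = 512) -> dict:
          """Request body for AUTOMATIC1111's /sdapi/v1/txt2img endpoint.

          Fixing `seed` (instead of the default -1 = random) makes repeated
          generations reproducible, which helps keep materials and textures
          similar across different camera views of the same model.
          """
          return {
              "prompt": prompt,
              "seed": seed,   # same seed -> same starting noise -> similar look
              "steps": 25,
              "width": width,
              "height": height,
          }

      # Two different camera views, same seed, so the style stays consistent.
      view_a = build_txt2img_payload("timber pavilion, front view", seed=1234)
      view_b = build_txt2img_payload("timber pavilion, side view", seed=1234)
      assert view_a["seed"] == view_b["seed"] == 1234
      # Each payload would then be POSTed to the local web UI, e.g.
      # requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=view_a)
      ```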

  • @user-un3jv4lo3r · 1 year ago

    I need a plugin that can give a million likes to this video👍👍👍

    • @designinput · 1 year ago

      Hey, thanks a lot for your lovely comment!

  • @user-yq7zn7do8p · 1 year ago

    Hi. What's your Rhino version and Ladybug version? Ladybug is not working in my Rhino.

    • @designinput · 1 year ago

      Hey, I was using version 1.6; you can download it here: www.food4rhino.com/en/app/ladybug-tools
      But even if Ladybug doesn't work, you can still use this workflow; you just won't be able to see the images directly inside Grasshopper.

  • @jelisperez7968 · 1 year ago

    Thank you for sharing this amazing tutorial. Is it still working? I am having this issue with the ControlNet updates: "ControlNet warning: Guess Mode is removed since 1.1.136. Please use Control Mode instead." If I choose CN v1.1.X in the Ambrosinus tool, the result image differs completely from the original image. I also changed the directory to point directly to the CNet path.
    Any hint?
    Is there a way to choose the SD model?
    Best

    • @jelisperez7968 · 1 year ago

      I figured out that with the update, the CN Depth modes are working as expected, but not Canny mode. I've posted the bug on Food4Rhino. Many thanks again!

    • @designinput · 1 year ago

      Hey, good to hear that it's working :) For me, it was working without any issues. Thanks for your comment!

  • @sirousghaffari9556 · 1 year ago

    In the 4th minute, when you press the Start button, it renders without any problem, but for me the SeeOut component is red and gives this error: "Solution exception: Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index"
    Can you help?

    • @11Bashar · 1 year ago

      Have you found a solution yet?

    • @sirousghaffari9556 · 1 year ago

      @@11Bashar Unfortunately, I gave up on connecting it to Grasshopper, because I can't make sense of its errors and there is no explanation about them anywhere.

  • @wido.daniel · 1 year ago

    Thank you man, this is SO good! To your knowledge, would it be possible to use this in Revit through Dynamo?

    • @designinput · 1 year ago +1

      Hey, thanks a lot for the feedback ❤ Hmm, I am not super sure, but I believe there is no extension for that yet. But I am experimenting with connecting Revit to this same workflow with Rhino.Inside.Revit. I will share it as soon as it's ready :)

    • @wido.daniel · 1 year ago

      @@designinput that would be awesome!

  • 1 year ago

    Hi, thanks for the video. I checked other videos and got somewhere until I got stuck at the webui part. My webui-user file looks different than yours: there are "--xformers" and "git pull" lines in yours, but I don't have them, and unfortunately just copying yours doesn't work :) Don't know what is missing, but I can say that it is a pretty overwhelming setup for sure.

    • @designinput · 1 year ago

      Hey Cankat,
      Thanks for your comment. "--xformers" is an additional flag that you can use if you have an RTX 30- or 40-series GPU; it will speed up the generation process. And the "git pull" command automatically checks for new updates when you run SD. So you don't have to have them to use it; the only must is "--api", which gives Grasshopper direct access to the running instance.
      Since it is an early experimental workflow, you are right that it is not so user-friendly. But it will surely develop, and I will share the newer versions very soon.
      Thank you!
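      Putting that together, a sketch of the webui-user.bat being described, with the optional lines included (whether you keep "git pull" and "--xformers" is up to you; "--api" is the required part for the Grasshopper connection):

      ```bat
      @echo off

      set PYTHON=
      set GIT=
      set VENV_DIR=
      rem --api is required for the Grasshopper plugin; --xformers is an optional RTX 30/40 speed-up
      set COMMANDLINE_ARGS=--api --xformers

      rem Optional: fetch the latest web UI updates on every launch
      git pull

      call webui.bat
      ```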

  • @韩鹏坤 · 1 year ago

    My Rhino 7 cannot install the Ambrosinus toolkit; which version should I download?

    • @designinput · 1 year ago

      Hey, I am also using Rhino 7 and was able to use the latest version of the Ambrosinus toolkit without any issues. If you are still having problems, you may contact the developer.

  • @danr9277 · 1 year ago

    This is great! How is the speed of the rendering? Seems very fast.

    • @designinput · 1 year ago +1

      Hey, thanks for your comment! It mostly depends on your GPU; I am using an RTX 3060 with 6GB VRAM, and I can generate a 1024x1024 image in 1-2 minutes.

  • @lorenzoguadagnucci-e1q · 1 year ago

    Thank you so much!! I'm just having issues with the resolution of the "depth image" that it creates; it's really low, and because of that I can't use my models. Can I increase it? Thanks anyway, this tool is amazing 👍

    • @lorenzoguadagnucci-e1q · 1 year ago

      Being more precise: I probably have problems with the preprocessor. I can't change it, so it doesn't generate the correct depth image.

    • @designinput · 1 year ago

      Hey, thanks for the comment! If the image resolution from the viewport is low, you can try printing a view from Rhino at a custom resolution and using it in Stable Diffusion directly. It may help, but don't go larger than 1024x1024; it will slow down the process dramatically. Once you like one of the views, you can upscale the image later. Hope I understood your question correctly. Let me know if you have any other issues.

  • @Macora3251 · 1 year ago

    Can you get the same result twice if the client wants the exact same render but with just the column material changed, for example?

    • @designinput · 1 year ago

      Hey, thanks for your comment! Generating exactly the same image twice can be challenging. But if you want to change a part of it, you can use inpainting to edit it.
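      For the inpainting route, the AUTOMATIC1111 web UI exposes it through the /sdapi/v1/img2img endpoint; a rough sketch of the request body (the endpoint and field names are real API fields, while the helper name and values are illustrative):

      ```python
      import base64

      def build_inpaint_payload(image_png: bytes, mask_png: bytes,
                                prompt: str, seed: int) -> dict:
          """Request body for AUTOMATIC1111's /sdapi/v1/img2img inpainting.

          The mask is white where the image should be regenerated (e.g. the
          columns) and black where it must stay untouched.
          """
          return {
              "init_images": [base64.b64encode(image_png).decode("ascii")],
              "mask": base64.b64encode(mask_png).decode("ascii"),
              "prompt": prompt,            # e.g. "marble columns"
              "seed": seed,                # reuse the original render's seed
              "denoising_strength": 0.75,  # how much the masked area may change
              "inpainting_fill": 1,        # 1 = start from the original pixels
          }

      # Placeholder bytes stand in for the render and its mask image.
      payload = build_inpaint_payload(b"\x89PNG...", b"\x89PNG...",
                                      "marble columns", seed=1234)
      ```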

  • @NMPrecedent · 1 year ago

    Can Stable Diffusion further elaborate the model so that at different views you can maintain the same materials and facades?

    • @designinput · 1 year ago

      Hey, thanks for your feedback ❤ You can keep the same seed number for the different views to have similar results. But still, it is not so easy to generate precisely the same materials and textures every time.
      But I am sure we will see some developments on this very soon!

  • @韩鹏坤 · 1 year ago

    2023-07-01 22:55:51,129 - ControlNet - WARNING - Guess Mode is removed since 1.1.136. Please use Control Mode instead.
    What should I do?

    • @designinput · 1 year ago +1

      Hello, I think it should still work, but if it doesn't, update your ControlNet extension and that should solve the issue. Thank you!

  • @cgimadesimple · 1 year ago

    cool :)

  • @user-ee7ko1yb9s · 1 year ago

    Hi, it looks amazing, thank you for that! But I tried it, and also used the same parameters, but unfortunately it generates a different image, not the image of the pavilion; it changes it completely. I don't know what I did wrong. If you could help me, thank you again.

    • @designinput · 1 year ago

      Hey, thanks for your comment! Probably there was a problem with the ControlNet. Do you have the ControlNet models installed locally?

    • @user-ee7ko1yb9s · 1 year ago

      @@designinput Hi, thank you for replying back. Yes, I already downloaded it, but ControlNet doesn't work in Rhino; it only works in the browser, no idea why.

  • @sossiopalmiero3582 · 1 year ago

    Where can I find the Grasshopper file?

    • @designinput · 1 year ago +1

      Hey, you can find all the resources here: designinputstudio.com/this-will-change-everything-in-architectural-visualization-forever/

  • @ABCDEFGH-bi5tk · 1 year ago

    Does this work with 3ds Max as well?

    • @designinput · 1 year ago

      Hey, not with this exact workflow, but it may be possible with an extension. I am not using 3ds Max myself, which is why I haven't experimented with that one. Let me know if you try it :)

  • @riccia888 · 1 year ago

    This is the most confusing software ever.

  • @remyleblanc8778 · 1 year ago

    Nice! Wish it was 1000 times simpler.

    • @designinput · 1 year ago

      Hey, thanks! Haha, I feel you

  • @pedorthicart1201 · 1 year ago

    I feel it is great and will help me with the visualization of orthopedic footwear designed through #Pedorthic Information Modeling! Waiting to have time to explore it! Thank you!

    • @designinput · 1 year ago +1

      Hey, thanks for your comment! I will share a video specifically about product photography and how to use AI.
      Thank you!

    • @pedorthicart1201 · 1 year ago

      @@designinput Waiting for it! Thanks!

  • @mockingbird1128 · 1 year ago

    Would this work with Revit too?

    • @designinput · 1 year ago

      Hey, maybe it could work with Rhino.Inside.Revit, but I haven't tested it. You can always take a screenshot and use SD + ControlNet separately.

  • @borchzhang2211 · 1 year ago

    Success! 成功了 (it worked)

  • @bixp2k3 · 1 year ago

    How much does it cost?

    • @designinput · 1 year ago

      Hey, it doesn't cost anything if you already have Rhino, because Stable Diffusion runs locally on your computer.

  • @abdulmelikyetkin9721 · 1 year ago

    #DesignInput Can you do this with SketchUp?

    • @designinput · 1 year ago

      Hey, thanks for your comment! Technically yes; I had some issues creating this custom workflow in SketchUp, and when I figure it out, I will share it :)
      Meanwhile, you can try extensions like VerasAI and ArkoAI.

  • @sabaahmed1261 · 1 year ago

    Does it work with Revit?

    • @GRUMPNUGS · 1 year ago

      I know Revit currently has one called Veras.

    • @designinput · 1 year ago +1

      Hi, I am currently experimenting with implementing this workflow in Revit. I will share a video about it soon :)
      Thanks for the comment!

  • @iaspace6737 · 1 year ago

    I NEED SD+SU

  • @motassem85 · 1 year ago

    Looks too complicated for me; I still prefer 3ds Max V-Ray or Lumion 😂

    • @designinput · 1 year ago

      Haha, totally understand that :) But we will see much easier user interfaces soon, surely!

  • @shiryu7101 · 1 year ago

    Hi! Could you tell me why it says "Input image doesn't exist or is not supported format" even though I put in a PNG file? Thank you!
