This is amazing, can't wait to try it. I'm not even finished with your video yet (taking it slow in order to understand it well).
Thanks Mary!
Very nice! This was a solid tutorial mate, and nice outcomes too.
Thanks Gavin! Yeah, to me this is an amazing tool. Definitely not perfect, but one of the most applicable tools we can use today, since it reads our existing inputs remarkably well.
Great video, been looking for this for some time
Thanks - very good explanation. I have ControlNet and Stable Diffusion, but getting my interior design workflow perfect is a struggle. I imagine prompting for a design and, with good prompts, ending up with the look and feel I want - that session would be in some random room. Next I would draw the exact room sketch in SketchUp - white background, black lines. My dream is to tell the AI: "this is the exact room - use it 100%, but give the interior the look and feel from the previously generated picture". THAT video would make my day :-)
Keep up the great work Dev! Love your videos!
Cheers man!
Excellent content, thanks!
Thank you Dev for this video. I have a question.
What version of Stable Diffusion do you use?
I used realisticvision model in the video.
@byDevCreates Can you please add a link to download the realisticvision model you used? sd-v1.4 is not so good.
Hey, I need your help - could you please render one image for my college project? I don't have a laptop.
How can I get the model that you're using?
I took a screenshot of your sketch to use as a test, but what I got was not even remotely close to what the sketch looks like. It's another space with different furniture - it even has a heater on a full-height glass window, ceiling to floor. Any idea what could be wrong?
Thank you. how can I contact you privately about it?
I am a first-year college student studying architecture in South Korea. I'm studying how to combine AI and architecture, so can you recommend a site where I can download checkpoint models? Please!
I tried to follow your instructions for RunDiffusion and ran into trouble. If I am using a color image (rendered in Enscape), should I still use scribble? Some of my results aren't too impressive. And at some point I started getting only portions of the image as a result. Thanks.
I wouldn't use scribble on an Enscape image - maybe try canny or lineart? Always look at the preprocessor image to see what's going on.
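The difference between preprocessors is easy to see if you approximate the edge extraction yourself. Below is a minimal NumPy sketch (not the actual ControlNet preprocessor, which is more sophisticated) that roughly imitates what a canny/lineart-style preprocessor hands to the model: a binary edge map, which is exactly the structure ControlNet keys on.

```python
import numpy as np

def edge_preview(rgb: np.ndarray, threshold: float = 0.15) -> np.ndarray:
    """Rough canny-style preview: grayscale -> gradients -> threshold.

    rgb: HxWx3 float array in [0, 1]. Returns a binary HxW edge map
    (1.0 = edge). This only approximates what a real canny/lineart
    preprocessor produces, but it shows why a flat-shaded render and
    a hand sketch give ControlNet very different guidance.
    """
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # luminance
    # Central-difference gradients (no SciPy/OpenCV needed)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.float32)

# Example: a sharp vertical boundary produces a clean line,
# while flat regions produce nothing.
img = np.zeros((64, 64, 3), dtype=np.float32)
img[:, 32:] = 1.0  # white right half
edges = edge_preview(img)
```

Running `edge_preview` on your actual input (loaded with PIL and scaled to [0, 1]) and eyeballing the result is a quick offline way to guess whether a line-based preprocessor will find usable structure in it.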
@byDevCreates Thank you so much - I hadn't tried that, even though you mentioned it in your video. I'm finding RunDiffusion difficult to understand. Sometimes it switches to a partial, barely related image even though all I've done is press "generate." My Model 0 looked fine, and the preprocessor image next to it looked fine, but the result is only a portion of the original image and has very little to do with it. Maybe a problem with the website, or it's just over my head. Thank you so much.
I tried to render my kitchen. I use two-point perspective sketching, and without writing any description it simply produces fruit pictures instead of a kitchen.
Even when I write a description it turns into a kitchen, but one totally irrelevant to the sketch - it looks like it's pulling modern kitchen pictures from Google.
I wonder where I'm making a mistake.
The sketches I make in SketchUp and feed to Stable Diffusion are very far from the result. I'm not sure where I go wrong - it looks nothing like my sketch.
Play with the input modes, make sure to look at the preprocessor image, and make sure Stable Diffusion is reading the input correctly.
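One quick sanity check along those lines (my own heuristic, not Dev's workflow): scribble/lineart modes expect dark lines on a light background, so before generating it's worth confirming the exported sketch actually reads that way. Assuming the sketch is loaded as a grayscale array in [0, 1]:

```python
import numpy as np

def looks_like_line_art(gray: np.ndarray,
                        dark_cutoff: float = 0.25,
                        max_ink: float = 0.20,
                        min_ink: float = 0.005) -> bool:
    """Heuristic check: a usable sketch is mostly white paper with a
    small fraction of dark 'ink' pixels. Too much ink suggests an
    inverted or heavily shaded export that a scribble/lineart
    preprocessor is likely to misread; no ink means a blank page.
    The cutoffs are guesses - tune them for your own exports."""
    ink = float((gray < dark_cutoff).mean())  # fraction of dark pixels
    return min_ink < ink < max_ink

# A synthetic "sketch": white page with a thin black rectangle outline,
# similar to a SketchUp line export.
page = np.ones((100, 100), dtype=np.float32)
page[20, 20:80] = 0.0
page[80, 20:80] = 0.0
page[20:80, 20] = 0.0
page[20:80, 80] = 0.0
```

If this returns False on your export (e.g. because SketchUp exported a dark background or a shaded style), that alone can explain results that look nothing like the sketch.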
Super
Dev, you obviously know what you're doing, but you should look at your transcript - you talk so fast it's nearly impossible to follow. Does what you're doing work on Mac, or PC only?