That's a really well-made tutorial. I did days of testing before understanding what every parameter does. This video will be a miracle for newbies, good job!
Thanks for your support! I'm glad you liked the tutorial!
Good to see another video from you. Hope this becomes a full-time thing for you, if that's what you want. Would love to see a video on making videos with AnimateDiff and the alternatives, some workflow tips and tricks, and/or how to migrate from A1111 to ComfyUI.
Thank you for tuning in! I plan on uploading regularly.
Excellent guide, you made it so easy to understand! Thanks!
You're welcome! I'm glad you liked the video!
This video is a Gem, well done Sir 🔥
Thank you, I'm glad you liked the video!
Really ty again !
How do you use this in Forge? There is an "integrated multidiffusion", but its options are not the same at all. And I can't install the A1111 extension in it, even after removing the integrated multidiffusion. That's really annoying.
I have not tried this in Forge. I read on GitHub that the Forge extension might not be working very well. Link here for reference: github.com/lllyasviel/stable-diffusion-webui-forge/issues/124
Hello! I really hope that you will make a lesson on restoring badly damaged photos.
Great video. What's your opinion on ComfyUI? I am curious why you never make any videos on it.
If you have multiple areas that need to be fixed, can you use latent upscaling, then downscale the image so it's more workable, and then use latent upscaling again?
You're the guy when it comes to A1111 tutorials. Am I missing something, or is this all very similar to Ultimate SD Upscale + ControlNet Tile? What's the point in using this new method rather than Ultimate SD Upscale + Tile?
I'm glad you asked this question. You are right, the Tiled Diffusion extension is very similar to Ultimate SD Upscale + CN Tile, but I think there are a few advantages for Tiled Diffusion. First, you can use this extension directly in txt2img, whereas Ultimate SD Upscale only shows up in img2img. Second, you have the option of using this extension with CN Tile, but it's not a requirement. You can often get better details by using a combination of Noise Inversion and CN Tile. Lastly, this extension comes with a Regional Prompt Control feature, which is such a great feature! I went into detail in one of my videos: th-cam.com/video/3aIEitw5Pt8/w-d-xo.html. I like it a LOT, as you can probably tell, and overall I think this extension is a bit more intuitive to use. I hope this helps you. Cheers!
Does anyone know why I'm getting little bumpy circle artifacts on upscaled images when using Method 1?
EDIT: It's something with FAST DECODER; with it off I get slower gens, but I must
It's great, TY!
Can you upload the 4x-Remacri upscale model to Google Drive for me? The website won't let me download it anymore. Thank you...
I was wondering, if I have LoRAs (sometimes multiple of them), how would I go about balancing them out? I noticed that they tend to oversaturate or change the style too much depending on how they interact. Anyways, thanks for all these workflows; I was wondering how people upscaled with MultiDiffusion!
Not sure if I fully understand what you are asking regarding balancing your LoRAs, but I'll take a stab at it. If all of your LoRAs are meant to modify one object or person in the image, then you would want to adjust the weights and have them add up to 1.0. So say you have LoRAs A, B, and C, and say if you want more of A in your main object, then you can do something like A weighted at 0.6, then B and C both at 0.2. This way, the A LoRA will have more influence over your main object.
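In A1111, those weights are set inside the LoRA tags in the prompt itself, so the A/B/C example above would look something like this (the LoRA names here are made-up placeholders):

```
portrait of a woman, detailed face, <lora:loraA:0.6>, <lora:loraB:0.2>, <lora:loraC:0.2>
```

The number after the second colon is the weight, so raising loraA's weight while lowering the others gives it more influence over the result.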
@@KeyboardAlchemist Oh, I meant something along the lines of: if I used Regional Prompter or Composable LoRA + Latent Couple to generate an image with a specific LoRA background and/or a bunch of different characters, how would I go about tackling it?
I was also wondering, what is the tool you're using to slide and compare images?
@@FlawlessMind-lb3tw With the Tiled Diffusion / Multi diffusion extension it's pretty easy to do this. I am working on the video for this, but here is the short version of it. There is the Regional Prompt Control section within Tiled Diffusion, if you expand it, you will see different regions that you can define, and this will allow you to define prompts specifically for a region of your image. Then you can put LoRAs in your regional prompt. Let's say you want to generate three distinctly different characters in the same image, you can draw out 3 different regions and give each region a different prompt and add the LoRAs that you want.
I also described a different method that can do this with the Adetailer extension, here is the link if you want to check it out: th-cam.com/video/6EraysHdhHE/w-d-xo.html
You can google "before and after slider" to find the slider tool, there are a lot of them out there. Hope this helps you!
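As a rough sketch of the three-character setup described above (the region layout, prompts, and LoRA names are all made-up examples; in the actual UI you draw the regions on the image canvas):

```
Background prompt:  fantasy tavern interior, warm lighting
Region 1 (left):    1girl, red dress, <lora:characterA:0.8>
Region 2 (middle):  1boy, blue armor, <lora:characterB:0.8>
Region 3 (right):   1girl, green cloak, <lora:characterC:0.8>
```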
thank you sir!
You are welcome!
Could you just leave the ControlNet input image empty?
Tiled Diffusion doesn't work for me; it won't stay checked to show it's enabled. I checked my config and there are no extensions that are disabled. I've also uninstalled and reinstalled it, and it still won't work.
This extension doesn't seem to work with SD Forge. It'll work in A1111.
Great tutorial! Are those methods also possible with SDXL?
Yes, you can use this extension to upscale with SDXL models as well.
Why do you choose 'Whole Picture' as the inpaint area?
'Whole Picture' uses the entire image as reference for inpainting the new thing while 'Only Masked' uses the masked area plus the surrounding padding pixels as reference. In this situation, the Whole Picture option worked better.
Thank you for this tutorial and all the others, they are really helpful for beginners like me! 👌
I have a question regarding the end result. I don't really know how to describe it, but when I zoom in a lot, there are little square edges visible, like cracked skin or tile-edge artifacts. Is this normal? Can we do something to reduce this effect?
Thanks, I'm glad you liked my tutorials! The edge artifacts are definitely not normal. Which workflow did you follow to get to your final image? It's hard to tell without some additional information, such as which method you used or what settings.
@@KeyboardAlchemist I don't know exactly if it was linked to my issue, but I changed the tile overlap to 16 and upscaled with more resolution, and now everything seems OK! But I don't know if it could also be linked to the checkpoint, as I often change it.
@@virtualj8561 Both the tile overlap value and the checkpoint could be a source of your issue. Most of the time, if you just change one variable at a time, you can figure out what went wrong. Good to know that you solved the problem! Cheers!
thanks teach
I must be doing something wrong with Method #2; my GPU usage goes to 15 GB, it uses system RAM, and it takes like 10 minutes for a single upscale. Is this normal? I have the same parameters as shown in the video: same starting resolution, same parameters for Tiled Diffusion, Tiled VAE, and ControlNet. What could it be? Also, the image changes a bit; especially the face is very different.
You might want to try halving the encoder and decoder tile sizes in the Tiled VAE section. What kind of GPU do you have and how much VRAM?
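To be concrete, halving would mean something like this in the Tiled VAE section (these numbers are just illustrative; start from whatever your current values are):

```
Encoder Tile Size: 1536 → 768
Decoder Tile Size: 96   → 48
```

Smaller tiles trade some speed for lower peak VRAM, which should keep the process on the GPU instead of spilling into system RAM.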
For such image-critical videos, upload high-bitrate 4K 60 fps video, not 1080p with quality blurred by YouTube compression. It's hard to spot the differences.
Yeah, I just wanted to say the same.
If I want to add a LoRA like "add more details" or "Sharpness tweaker", will it work with this Tiled Diffusion method?
✨👌😎😮😵😮😎👍✨
1:58 Why do you randomly speed up the AI voice? It's distracting. Leave the playback speed at 1x, please.
Trying to do Method 2, but I get botched results. I think this needs an updated guide; that, or my setup requires different parameters.
I learned a lot from your video!!!!
Glad to hear that!
Hello, great video! I've been doing the same to upscale my renders. Is there any chance you could show how to do the same process in ComfyUI?
I have not dived into ComfyUI very much; maybe in the distant future. Thanks for watching and for the sub!
Such an amazing video!
I just started with SD two weeks ago, and this video is what made my images reach a much higher quality level.
Thanks!
I'm glad it was helpful! Thank you for your kind words!
I think this is the best AI image workflow kind of video. I do VFX work, so you can imagine how much I appreciate the different value tests. Subscribed.
I'm glad you liked the video! Thanks for your support.
Is DemoFusion better than this method? Do I use it with other add-on extensions, or by itself?
The Tiled diffusion / Multidiffusion extension works by itself. You don't really need anything else.
Great video just what I wanted to know about good upscaling methods!
I'm glad this was helpful!
Just one thing I discovered: when using SDXL with Tiled VAE, you have to disable Fast Decoder, because it messes up the image, unfortunately.
@@Aresmar That's good to know! Thank you for sharing!
So grateful...!!!
This info is super helpful!!
Thank you!
👋
13:20 this advanced part was very helpful. I always struggled with preventing artifacts.