Thank you very much for such a detail-oriented video on upscaling! Very nice comparisons among all the different methods. I will keep watching this clip until I remember everything! Wonderful video!
Your examples and comparisons were great. Thank you for comparing them so clearly.
thanks so much for your detailed explanations!
Masterclass. Saves me a ton of experimentation.
Supir and Hidiffusion would be good methods to add into your tests now! Thanks for posting this.
Hi Han, thanks for making this video. It's a really good explanation and walkthrough on upscaling. I've been experimenting with upscaling, and have watched a number of videos. Yours is the most coherent and straightforward explainer I've seen. Thanks again, and all the best for your channel!
Thanks for the kind words! :)
This upscale comparison is informative, thank you for the detailed walkthrough.
Thanks this is really insightful for a beginner like me. Can’t wait for you to upload more videos in the future!
Thanks for the kind support!
How do you run from a specific node and not from the beginning?
This is a clear tutorial, but what if I want to use my own LoRA and VAE? What should I do? Still confused.
Can I upscale video with this same workflow? Please reply!
Hi, it probably would not work, as too much noise gets added to each frame, causing inconsistency even with ControlNet. (There will be a lot of stuttering between frames.)
The way I usually do vid2vid upscaling is AnimateDiff -> ControlNet (tile, lineart, depth) -> UltimateSD / latent upscale.
I'm in the process of making an AnimateDiff tutorial and will explain more about it in the video.
Can you make a video on this topic?
@SoniInterio It will be covered in my AnimateDiff video! :)
Where is the video? Give me the link, please.
@SoniInterio It is still in the making!
Absolutely insightful, thanks!
What do you do exactly at 1:59 to get that Normal Upscale window? It seems you just drag, but it can't be. I have been stuck for hours. Did you press any keys? Could you break it down? Thanks!
Ahh my bad, I duplicated the 'Base Image' node, which is a default 'Preview Image' node, using Alt + Click and renamed it to 'Normal Upscale' for easy comparison.
Haha, I knew something was off. Editing gets the best of us. However, I do not have "Base Image"; I have "Save Image" by default. When I duplicate it, it doesn't show the upscaled pic. It's empty. @DataLeveling
I see, so ComfyUI has 'Save Image', which saves images to the '../ComfyUI/output' folder, and 'Preview Image', which only displays the image without saving. The ones I am renaming are the 'Preview Image' nodes; you can search for the node by double-clicking on a blank space or by dragging out from the blue dot on the right side of the 'VAE Decode' node.
When the image is not showing, it is probably because the lines are not connected. At 2:00 I edited out the footage where I join the lines from the 'Upscale Image By' node to the 'Normal Upscale' node. @yasminastitou
Dude, I have installed whatever you said and restarted. But I still don't get anything to select for ControlNet.
Do I have to install some model?
Hi, yes, you have to install the ControlNet models from the links in the description.
If your checkpoint model is SD1.5 then you have to use the SD1.5 ControlNet models, and if it is SDXL then download the SDXL ControlNet models.
Once you download the models, put them into the controlnet folder.
@DataLeveling I have installed openposeXL2.safetensors (through ComfyUI Manager) for SDXL. Is that okay or do you have a better suggestion?
And it will be installed directly into the correct folder, right?
Why does my ComfyUI get this error when running your workflow? Could you please guide me to fix it? I would appreciate that! "Error occurred when executing UltimateSDUpscale:
mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)"
Hello, sure thing :)
This error usually means there is a mismatch between the ControlNet model version and the checkpoint model version. If your checkpoint model is SD1.5 you will have to download the ControlNet 1.5 models from huggingface.co/lllyasviel/ControlNet-v1-1/tree/main, and if it is SDXL they will be from huggingface.co/lllyasviel/sd_control_collection/tree/main.
Hope this helps!
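For anyone curious why mixing versions fails this way: matrix multiplication only works when the inner dimensions match, and SD1.5 and SDXL use different text-conditioning widths (768 vs 2048 per token), so weights from one cannot consume embeddings from the other. A toy check, not ComfyUI code; the shapes are taken from the error message above, and the layer interpretation is illustrative:

```python
def can_matmul(a_shape, b_shape):
    """A (m, k) @ (k2, n) matrix product is only defined when k == k2."""
    return a_shape[1] == b_shape[0]

# Shapes from the error: mat1 is the conditioning, mat2 a model weight.
sdxl_cond = (154, 2048)   # SDXL-style conditioning: 2048 dims per token
sd15_weight = (768, 320)  # a weight expecting 768-dim (SD1.5-style) input

print(can_matmul(sdxl_cond, sd15_weight))   # False: 2048 != 768
print(can_matmul((154, 768), sd15_weight))  # True: matching inner dims
```

So the fix is exactly what the reply says: keep the checkpoint and ControlNet families matched so the inner dimensions line up.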
Thanks for the quick reply! I resolved the issue following your advice. You're an amazing YouTuber; I appreciate your kindness and patience. @DataLeveling
@Emwen1199Cn Thanks for the kind words :)
Maybe for the last image (HiresFix) that didn't look identical... I think you should have lowered the denoise 😊
Hi Han, thanks for the tutorial. I'm in Singapore as well. Let's meet for a drink or two if you are keen.
Is there a reason to use lineart instead of the tile controlnet?
Hihi, I only used this as an example!
The best way is to use a combination of tile + lineart + depth to keep most of the details consistent :)
@DataLeveling How would you do that? Can you please make a tutorial for getting the best results when using USDU?
Is there an image-to-upscale workflow?
I want to convert stock images to a higher resolution.
It's the same workflow as the above, except instead of upscaling a generated image, you use an image loader and feed the image you want to upscale into the 'Upscale Image By' node. You can also add CLIP guidance, in case that's important to you.
Could you make an upscale method that works best on a lower-end GPU? I'm on 4GB, so I can't do 50 steps for upscales. Need something a bit lower if possible.
Hi, I don't think the number of steps affects the VRAM.
It is the image size that's really pushing it.
You could try launching ComfyUI with a lower VRAM setting, but I'm not 100% sure it works. Solution: github.com/comfyanonymous/ComfyUI/discussions/1177
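For reference, ComfyUI ships with memory-related launch flags along those lines (flag names may vary by version; run `python main.py --help` from your ComfyUI folder to confirm what your install supports):

```shell
# From the ComfyUI folder: trade speed for lower VRAM usage
python main.py --lowvram

# More aggressive fallback for very small GPUs
python main.py --novram
```

These change where model weights are kept during sampling, so generations get slower but are less likely to run out of VRAM.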
I prefer HiresFix + ControlNet tile + big denoise for realistic images, Ultimate SD for others.
Great video
Hello, please update your links in the comments.
Hi, thanks for informing me! :) The links will be clickable once YouTube verifies my identification 👍
Upscale by ControlNet method?
Not exactly a method, but I meant using ControlNet as a way of keeping details consistent during the upscaling process when going through samplers.
The best ControlNet to pair with this would be the tile ControlNet.
Brother, literally thanks a lot ❤ Your in-depth explanation is so awesome. Please make one video on how to make an AI model like Aitana Lopen on a 4 GB VRAM device. Also, please share your Discord.
Thanks man ❤
I'm glad it helped you understand better :) I am currently working on a video on achieving consistent AI models and will upload it soon :) I will also add the Discord link to my 'About' page once I have finished creating it :)
The LoRA one looks blurry.
thanks
A 4x upscale model and HiresFix — that's enough.