ComfyUI Tutorial Series: Ep012 - How to Upscale Your AI Images
- Published Sep 23, 2024
- In Episode 12 of the ComfyUI tutorial series, you'll learn how to upscale AI-generated images without losing quality. Using ComfyUI, you can increase the size of your images while enhancing their sharpness and detail.
We'll cover the process of installing the necessary nodes, choosing models like Siax and Anime Sharp for different image styles, and creating workflows that deliver quick, high-quality results. You’ll see how to compare upscaled images and fine-tune settings for the best output, whether you're working with portraits, landscapes, or illustrations.
This tutorial is perfect for anyone looking to improve their AI-generated art with sharper, larger images. Whether you’re using SDXL, Flux, or any other models, you’ll learn how to upscale efficiently.
Download all the workflows from Discord
/ discord
look for the channel pixaroma-workflows
Go to Manager, then Model Manager
Sort by type: Upscale
Install 4x_NMKD-Siax_200k
4x-AnimeSharp
Refresh ComfyUI
Install these custom nodes if you don't have them:
ControlAltAI Nodes
ComfyUI-PixelResolutionCalculator
ComfyUI Easy Use
rgthree's ComfyUI Nodes
Restart ComfyUI
Unlock exclusive perks by joining our channel:
/ @pixaroma
A small token of my appreciation. Thank you for taking so much time to thoroughly test, select the best, and so clearly explain ComfyUI to us. The workflows on your Discord work like a charm 🙏🏽
Thank you so much ☺️ glad it helped
Thank you so much. Your work is amazing and highly appreciated. I usually find tutorials about this topic that don't show the details behind the process or the role of each node.
The explanations are comprehensive and easy to understand. I tried the Flux model on MimicPC, and it works so well! For AI painting newcomers, it is not overly complex, and the outcome is excellent.
Very detailed tutorial. Congrats and thank you for the effort
these upscalers are absolutely amazing, thank you
Great video! Question: where is the setting so you can see the CPU, GPU, etc. on the menu GUI?
It is a custom node; install Crystools from the custom nodes manager.
@@pixaroma thank you! I'll check that out tonight!!
@@pixaroma I would also love to see a proper video on text syntax, tips and tricks for the CLIP Text Encode prompt. Like, what is the proper format? When should I use underscores? How does {(option1|option2|option3):1.2} work in an actual flow? I would love to see a video on this! Great work, keep it up.
wow finally a good upscaler, thank you very much.
Next video please make a tutorial on how to use Flux Controlnet and how to make good images with it. 👍
I will see what I can do ☺️
I am very surprised this works so well. I have done pixel-space upscaling using euler/beta with horrible results, and even with very low denoise (0.20-0.35) the composition changes too much.
Using dpmpp_2m/karras seems to be the trick.
Thank you.
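For reference, the sampler/scheduler combination from the comment above maps onto KSampler fields roughly like this (a hypothetical settings snippet; the field names follow ComfyUI's KSampler node, and the denoise value is only illustrative of the low range mentioned):

```python
# Hypothetical KSampler settings echoing the comment above.
# sampler_name/scheduler are real ComfyUI option strings; denoise is illustrative.
ksampler_settings = {
    "sampler_name": "dpmpp_2m",  # DPM++ 2M
    "scheduler": "karras",       # Karras sigma schedule
    "denoise": 0.30,             # partial denoise, i.e. an img2img-style pass
}
print(ksampler_settings["sampler_name"], ksampler_settings["scheduler"])
```

A lower denoise keeps the composition closer to the input; raising it gives the sampler more freedom to add detail.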
Thank you!
This is simply amazing. Thank you!
Thanks!
Thank you so much for your support😊
@@pixaroma Thank YOU! 💖
Thank You
Very detailed video & great information!
Beautiful video and a clear explanation
Fantastic tutorial, Thank you.
Niice
Thank you... When you add the KSampler at 14:33, is the upscaling now using Flux, and not just the Siax?
Yes, I make the image larger and sharper, then it runs through Flux again, just like image to image. Instead of uploading a new image, I take the image from the previous generation's VAE Decode, make it bigger, and feed it into the KSampler again. So basically it is image to image, but with a bigger image instead of a small one.
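The chain described in that reply can be sketched in miniature like this (a minimal stand-in using Pillow's Lanczos resize in place of the Siax model upscaler; the sizes and names are illustrative, not actual ComfyUI node code):

```python
from PIL import Image

# Stand-in for the image coming out of VAE Decode after the first generation.
decoded = Image.new("RGB", (1024, 1024))

# Pixel-space enlargement before re-entering the KSampler. Here LANCZOS stands
# in for the model upscaler (e.g. 4x_NMKD-Siax_200k) used in the video.
enlarged = decoded.resize((decoded.width * 2, decoded.height * 2), Image.LANCZOS)

# The enlarged image would then be VAE-encoded and run through the KSampler
# again at a partial denoise: a plain image-to-image pass on a bigger image.
print(enlarged.size)
```

The point of the sketch is only the ordering: decode, enlarge in pixel space, then re-sample, rather than uploading a fresh image for the second pass.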
Awesome!
Please make a ComfyUI video on using and installing MimicMotion. I really appreciate your videos; they are very clear compared to other YouTubers. Can MimicMotion be used in ComfyUI or SwarmUI?
I saw there are some nodes for ComfyUI with MimicMotion, so I will check it out, but probably in later episodes. There is still more to cover in static images before I go to motion and video.
Cool video! I'll definitely try out your approach. However, in AI Search's comfyUI tutorial, he says using tile upscaling yields far better results. Have you tried his method to compare?
I tried with tiles and the Ultimate SD upscalers with ControlNet, but for me it took longer and the results weren't as good. Maybe I didn't find the right settings; I played around for a few days and found these settings by accident, and they just worked well enough for me. I wanted something fast. It's not perfect, but it's good enough for what I need. If I find a better way in the future, I will make a new video.
Eeeeeeee boy! Really Thx man!
I have a LoRA that I trained with Flux Dev for my beauty product. Can I incorporate the LoRA node into the t2i upscale workflow and change the diffusion model to Flux Dev?
Yes, you can add it between Load Checkpoint and CLIP Text Encode.
Personally, I prefer to leave the model-upscaler step for last and have the latent img2img upscale as the second step. That way you make good use of your VRAM and speed up the process, and the result ends up the same. TL;DR: from your workflow, I would swap generations 2 and 3.
I didn't get the right settings with the latent img2img; the image had some artifacts with latent compared to the pixel method. Can you share how you did it on Discord? Thanks
Impressive 👍
awesome! is there any way to reduce the grain applied after upscaling?
The sharpness comes from the model. You can try a different upscale model that has different sharpness; I haven't found a solution for that yet. Other upscalers give different results. Instead of Siax I tried RealESRGAN x4, which might work for some illustrations but smooths things too much; 4x_foolhardy_Remacri might work in some cases. I also tried adding the Image Dragan Photography Filter from the WAS Node Suite custom node, which has a field for sharpness; reducing it to 0.7 or 0.5 reduces the grain slightly but makes the image more blurry. I haven't found a permanent solution yet.
@@pixaroma got it, thanks!
2:40 Hello. Can you explain how to get the result image to be exactly the same as the original image? Whenever I use this workflow, the result is always different from the original.
Are you using the same settings I put in the workflow? Just download that workflow from Discord and test it. You can reduce the denoise on the KSampler, but if it's the same scheduler, sampler, and model, the result should be the same.
@@pixaroma I created a workflow with a different sampler but the same structure as yours. I noticed that it's basically an image-to-image process that adds an upscale after the sampler. I want to know which parameter determines whether the result image stays similar to the original but with more details.
You can't always have both similar and more detailed: either it's similar and you don't get more details, or it's less similar, so it's not constrained and can add more creativity and detail. You can add a ControlNet like Depth or Canny to keep things more similar; that way the composition and lines stay the same, so you can change more things between those lines. I used the settings in the video and needed high denoise; with other schedulers it needed less denoise.
The resulting image you created includes additional details but still retains the character's entire face and the composition, without using ControlNet. However, when I run my workflow, the result is a completely different image from the original.
@@AnNguyen-pd2xi Are you using the same workflow? I'm not sure what workflows you have there, but the workflow I use works like in the video. If you changed something, it can work differently, so get the same workflow, try it, and see what you did differently. Download the workflow from Discord and try it.
Hi (:
Can you please tell me what other uses upscaling has besides Photoshop work? I'm making 1280 by 720 images for visual novels. Even if the game isn't at that screen resolution or Full HD, the difference is still almost zero. Thanks 🙂
I use Topaz Gigapixel AI; it's not free, but it does a good job for me when I need something fast.
@pixaroma I meant that I'm a rookie. I've read that upscaling is used mostly by Photoshop users. I make art for VN games, where the resolution is 1280 by 720. So even after a 4x upscale there's still no visible effect for visual novels. Or is it just not useful for my work? 🙂
Do you have any video that helps me install and set up Flux.1 and ComfyUI, like for noobs? I have a 4090 with 24 GB of VRAM.
episode 1, 8 and 10 th-cam.com/play/PL-pohOSaL8P9kLZP8tQ1K1QWdZEgwiBM0.html
I cant join discord , it says invalid invitation or expired link .
Thanks for letting me know; I'm not sure what happened. Here is the new link: discord.gg/gggpkVgBf3
@@pixaroma you are very welcome
Your discord link is unfortunately invalid
I changed it in the channel description yesterday, but in some comments and descriptions it remained unchanged. Try discord.com/invite/gggpkVgBf3
@@pixaroma hiii, still an invalid link :((
@@rezvansheho6430 I just tested it and it works for me: click on it and then click "go to site". discord.com/invite/gggpkVgBf3
@@pixaroma I used a VPN, and now it worked ♥️