A small token of my appreciation. Thank you for taking so much time to thoroughly test, select the best, and so clearly explain comfyUI to us. The workflows on your discord work like a charm 🙏🏽
Thank you so much ☺️ glad it helped
Amazing video. Great job on this and thanks for the workflows. 🙂
Thank you for the support 🙂
Great work! Appreciate your time and effort.
Thank you so much 😊
Thanks!
Thank you so much for your support😊
@@pixaroma Thank YOU! 💖
these upscalers are absolutely amazing, thank you
Bro your tutorials and workflows are super useful, thank you!
wow finally a good upscaler, thank you very much.
Very detailed tutorial. Congrats and thank you for the effort
Thank you so much. Your work is amazing and highly appreciated. I usually find tutorials about this topic that don't show the details behind the process or the role of each node.
Man, I subbed to your channel after giving this a try. This is by far the best upscaling tutorial and workflow I've come across in the past year. I've seen about 15. No joke. A huge thank you!
Thank you so much🙂
Amazing! Thank you so much for your special explanation!🤩🤩🤩
Another banger, I love open source AI so much ❤
very good, precise explanation. Thank you.
Very detailed video & great information!
thank you so much, this is an amazing workflow
Thank you so much, angel. Now tell me, how do you get those performance bars on the right, above your settings? Thank you
Install the Crystools node from the Manager
this is like the only tutorial without attractive woman clickbait thumbnail
Fantastic tutorial, Thank you.
Eeeeeeee boy! Really Thx man!
Awesome!
Great video! (Question: 1.8) Where is the setting so you can see the CPU, GPU, etc. on the menu GUI?
It's a custom node; install Crystools from the custom nodes manager
@@pixaroma thank you! I'll check that out tonight!!
@@pixaroma I would also love to see a proper video on text syntax, tips, and tricks for "CLIP Text Encode (Prompt)". What is the proper format? When should I use underscores? How does {(option1|option2|option3):1.2} work in an actual flow? I would love to see a video on this! Great work, keep it up.
I am very surprised this works so well. I have done pixel-space upscaling using [euler/beta] with horrible results; even with very low denoise (0.20-0.35) the composition changes too much.
Using dpmpp_2m/karras seems to be the trick.
Thank you.
Thank you... When you add the KSampler at @14:33, is the upscaling now using Flux, and not just Siax?
Yes, I make the image larger and sharper, then it runs through Flux again, just like you do with image to image. Instead of uploading a new image, I take the image from the previous generation's VAE Decode, make it bigger, and send it back into the KSampler. So basically it's image to image, just with a bigger image instead of a small one.
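If it helps to see the idea outside ComfyUI, here is a rough sketch of the same upscale-then-resample loop using the diffusers library. This is only an illustration of the concept, not the exact workflow from the video; the model id, file names, and the strength value are just examples.

```python
# Rough sketch of the "make it bigger, then run img2img again" idea,
# written with diffusers instead of ComfyUI nodes. Model id, file names,
# and the strength value are examples only, not the video's settings.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "a detailed portrait, sharp focus"

# The result of the first pass, e.g. saved from a text-to-image run
image = load_image("first_pass.png")

# Enlarge it before the second pass (the "upscale" step)
image = image.resize((image.width * 2, image.height * 2))

# Second pass: img2img over the bigger image. Lower strength stays closer
# to the original; higher strength adds more new detail.
refined = pipe(prompt=prompt, image=image, strength=0.4).images[0]
refined.save("second_pass.png")
```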
Great tutorial, thank you very much
Glad I could help ☺️
Great tutorial. One question: how could the upscaled results look similar to the first one since they go through a different seed in the second KSampler? Thanks.
You can reduce the denoise strength to make it look more similar
@@pixaroma Thanks for the quick response. Could feeding both Ksampler with the same seed also work?
It works, but since it's the same seed on the same image, it gets super sharp or overcooked, like an HDR look, so I avoid using the same seed
Thank You
2:40 Hello. Can you explain how to get the result image to be exactly the same as the original image? Whenever I use this workflow, the result is always different from the original.
Are you using the same settings I put in the workflow? Just download the workflow from Discord and test it. You can reduce the denoise on the KSampler, but if it's the same scheduler, sampler, and model, the result should be the same
@@pixaroma I created a workflow with a different sampler but the same structure as your workflow. I noticed that it's basically an image-to-image process that adds an upscale after the sampler. I want to know which parameter determines whether the result stays similar to the original image while adding more details.
You can't always have both similar and more detail: either it stays similar and you don't get more details, or it's less similar, so it's less constrained and can add more creativity and details. You can add a ControlNet, like Depth or Canny, to keep things more similar; that way the composition and the lines stay the same, so it can change more things between those lines. I used the settings in the video and needed a high denoise; with other schedulers it needed less denoise
The resulting image you created includes additional details but still retains the entire face of the character and the composition without using ControlNet. However, when I run my workflow, the result is a completely different image from the original.
@@AnNguyen-pd2xi Are you using the same workflow? I'm not sure what workflow you have there, but the workflow I use works like in the video; if you changed something, it can work differently. So get the same workflow and see if it works, then see what you did differently. Download the workflow from Discord and try it.
Next video please make a tutorial on how to use Flux Controlnet and how to make good images with it. 👍
I will see what I can do ☺️
Thank you!
Even with my 3080 Ti, I was having a lot of issues with freezing on this one. For some reason I haven't quite figured out yet, Comfy isn't clearing VRAM appropriately. My solution was just to put Clean VRAM nodes after most operations. It added a couple of seconds, but it prevented freezing.
Not sure; you can try swapping in VAE Encode (Tiled) and VAE Decode (Tiled), the versions with "tiled" in the name
@@pixaroma I'll give that a try and let you know
I have a LoRA that I trained with Flux dev for my beauty product. Can I incorporate the LoRA node into the t2i upscale workflow and change the diffusion model to Flux dev?
Yes, you can add it between the Load Checkpoint and CLIP Text Encode nodes
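For anyone more comfortable with code, here is a rough diffusers sketch of the same idea, loading a custom-trained LoRA on top of Flux dev; the LoRA file name and the prompt are only placeholders, and in ComfyUI the equivalent is the LoRA loader node placed right after Load Checkpoint.

```python
# Rough diffusers sketch of applying a custom-trained LoRA to Flux dev,
# analogous to a LoRA loader node placed after the checkpoint/model loader.
# The LoRA file name and the prompt are placeholders.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("my_beauty_product_lora.safetensors")

image = pipe(
    prompt="studio photo of the beauty product on a marble table",
    num_inference_steps=20,
).images[0]
image.save("product.png")
```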
Impressive 👍
awesome! is there any way to reduce the grain applied after upscaling?
The sharpness comes from the upscale model; you can try a different upscale model that has a different sharpness. I haven't found a solution for that yet. Other upscalers give different results: instead of Siax I tried RealESRGAN x4, which might work for some illustrations but smooths things too much, and 4x_foolhardy_Remacri might work in some cases. I also tried adding the Image Dragan Photography Filter from the WAS Node Suite custom node; it has a sharpness field, and reducing it to 0.7 or 0.5 reduces the grain slightly and makes it a bit more blurry, but I haven't found a permanent solution yet
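If you want to experiment with the same kind of softening outside ComfyUI, here is a plain Pillow sketch of roughly what lowering that sharpness field does; the 0.7 value and the file names are just examples to play with.

```python
# Rough Pillow sketch of softening an over-sharp / grainy upscaled image,
# similar in spirit to lowering the sharpness field to 0.7 or 0.5.
from PIL import Image, ImageEnhance, ImageFilter

img = Image.open("upscaled.png")

# Values below 1.0 soften the image, values above 1.0 sharpen it further
softened = ImageEnhance.Sharpness(img).enhance(0.7)

# Optional: a very light blur can also hide upscaler grain
# softened = softened.filter(ImageFilter.GaussianBlur(radius=0.5))

softened.save("upscaled_soft.png")
```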
@@pixaroma got it, thanks!
Why use gguf model instead of fp8 model? I'm curious
The quality of Q8 is similar to fp16, so fp8 is lower quality compared with GGUF Q8. Ranked by quality:
1. fp16, the original Flux dev
2. Q8
3. fp8
Cool. which ai voice are you using?
VoiceAir, and they have the voices from ElevenLabs. The voice is called Burt US
I'm using an RTX 3090, but it runs out of VRAM, so the KSampler can't work
I have included some low-VRAM workflows on Discord for that episode; try those if you don't have enough VRAM
why do you scale down before scaling up? that loses resolution before upscaling.
Because it's too big for the KSampler, and the pixels are replaced anyway when the new image is generated. You can increase it to see, if you have a good video card, but Flux has roughly a 2-megapixel limit; above that it doesn't give such good results
Cool video! I'll definitely try out your approach. However, in AI Search's comfyUI tutorial, he says using tile upscaling yields far better results. Have you tried his method to compare?
I tried with tiles and the Ultimate SD Upscale with ControlNet, but for me it took longer and the results weren't as good; maybe I didn't find the right settings. I mean, I played for a few days and found these settings by accident, and they just worked well enough for me. I wanted something fast. It's not perfect, but for me it's good enough for what I need. If I find a better way in the future I will make a new video
Please make a ComfyUI video on installing and using MimicMotion. I really appreciate your videos; they are very clear compared to other YouTubers. Can MimicMotion be used in ComfyUI or SwarmUI?
I saw there are some ComfyUI nodes for MimicMotion, so I will check it out, but probably in later episodes; there is still more to cover with static images before I go to motion and video
Good old trick with a second sampler works as expected... but how do you deal with those "Flux lines" at the final step?
If the image is under 2 megapixels, so it's not too big, and the width and height are divisible by 64, you can usually get an OK result without lines. You could try different upscalers. I can't use huge images in the KSampler, so I need a normal upscale as the last step; you can drag in a Save Image before the final upscaler and use a different upscaler if you want. If you do 1024px you could get 2048, which is over 2 megapixels, so maybe go smaller: if the initial image is 960, the final image would stay under 2 megapixels. Play with the settings.
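As a plain-Python illustration of that math (the 2-megapixel figure is just the rough limit mentioned above, not an exact spec, and the sizes are only examples):

```python
# Rough illustration of fitting a target size under ~2 megapixels while
# keeping width and height divisible by 64. The 2 MP figure is only the
# rough limit mentioned above, not an exact specification.
def fit_under_megapixels(width, height, max_megapixels=2.0, multiple=64):
    max_pixels = max_megapixels * 1_000_000
    pixels = width * height
    if pixels > max_pixels:
        scale = (max_pixels / pixels) ** 0.5
        width, height = width * scale, height * scale
    # Round both sides down to the nearest multiple of 64
    return int(width) // multiple * multiple, int(height) // multiple * multiple

# Example: a 4x model upscale of 960x540 gives 3840x2160 (about 8.3 MP),
# far over the limit, so it gets scaled back down.
print(fit_under_megapixels(3840, 2160))  # -> (1856, 1024), just under 2 MP
```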
@@pixaroma Thanks for the reply. I use 0.5 megapixels with that node, as in the video, with a 16:9 aspect ratio, then a 4x model upscale scaled by a 0.5 downscale... so 0.5 MP becomes 2 MP (as it upscales in both the x and y dimensions), all as usual, and I still get lines; that is why I'm asking 🙂
@@Dunc4n1d4h0 I only get it on some images; I'm not sure what causes it, but most of the time I don't get any lines. Maybe the prompt influences it somehow, or some settings, but I haven't figured it out. I usually just run a few seeds and pick my favorite :)
I'm getting Bad Request errors when trying to install upscalers. What might I be doing wrong?
Can you post some screenshots on Discord with the workflow, the error you get, and the command window error? Mention pixaroma there
Can you please put links to the downloads on some public site, like GitHub?
Only the workflows are on Discord, but that is free; the rest is public, since all the links to models and other stuff point to public sites. For me, Discord is easier because I have them all in the same channel and I can link them in the discussion channel when people need help, so they don't have to leave Discord and can find all they need there.
Beautiful video and a clear explanation
Hi (:
Can you please tell me what the other uses of upscaling are, besides Photoshop? I am working at 1280 by 720 resolution for visual novels. Even if the game ends up at Full HD rather than this screen resolution, the difference is still almost zero. Thanks 🙂
I use Topaz Gigapixel AI; it's not free, but it does a good job for me when I need something fast
@pixaroma I meant that I'm a rookie. I've read that upscaling is mostly used by Photoshop users. I make art for visual novel games, where the resolution is 1280 by 720. So even after upscaling 4 times, there is still no effect for visual novels. Or is it just useless for my work? 🙂
Niice
Do you have any video that helps me install and set up Flux.1 and ComfyUI, like for noobs? I have a 4090 with 24 GB VRAM
Episodes 1, 8, and 10: th-cam.com/play/PL-pohOSaL8P9kLZP8tQ1K1QWdZEgwiBM0.html
Cannot execute because node UpscaleModelLoader does not exist.: Node ID '#136:6' - hmmm, could you please tell me, do you have any ideas?
Did you download and load the model? Can you post a screenshot with the workflow and the error on Discord, in the ComfyUI channel? discord.com/invite/gggpkVgBf3
Personally, I prefer to leave the model-upscaler step for last and have the latent img2img upscale as the second step; that way you make good use of your VRAM, speed up the process, and the result ends up the same. So, TL;DR: in your workflow, I would swap generations 2 and 3
I didn't get the right settings with the latent img2img; the image had some artifacts with the latent method compared to the pixel method. Can you share how you did it on Discord? Thanks
I have an NVIDIA 2060 Super graphics card; can I try Flux?
I have that one too, on an older PC that also has 64 GB of RAM. I use Flux Schnell there, since Flux Dev takes too much time for me; the Flux GGUF Q4 version
tnx alot :D
I get an error: Install failed: 4x-AnimeSharp Bad Request
Try to download it manually and put it in the ComfyUI_windows_portable\ComfyUI\models\upscale_models folder. In the models manager, if you click on the model name, a page will open from where you can download it
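If you prefer doing that step from the command line, here is a small Python sketch of the same manual download; the URL is only a placeholder, so replace it with the actual download link from the model's page.

```python
# Small sketch of the manual download step. The URL below is only a
# placeholder; replace it with the real download link from the model page.
import urllib.request
from pathlib import Path

model_url = "https://example.com/4x-AnimeSharp.pth"  # placeholder URL
dest_dir = Path(r"ComfyUI_windows_portable\ComfyUI\models\upscale_models")
dest_dir.mkdir(parents=True, exist_ok=True)

dest_file = dest_dir / "4x-AnimeSharp.pth"
urllib.request.urlretrieve(model_url, dest_file)
print("Saved to", dest_file)
```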
I can't join the Discord; it says invalid invitation or expired link.
Thanks for letting me know; not sure what happened. Here is the new link: discord.gg/gggpkVgBf3
@@pixaroma you are very welcome
👏
i think you are awesome
i downloaded and tried out the workflow. You are a saint, an angel from above of workflow heaven. Thank you so much.
also, i modded the workflow a little bit to generate image to image. magnifique.
do you have a workflow on how to change clothing on character models?
I don't have one; there is something online with "try on", but it didn't work for me as expected
@@pixaroma Right, there are many clothing-swap videos out there, but they don't work. OK, we will wait.
I made a 4608x3072 image with this method. My GPU (RTX 3080) and CPU were at their limit and they are not happy with me, but I must say the image is really nice. I think it is way too much, but I found the limit of my PC. From now on I'll make them half size and upscale them without the sampler to get 4K 🤣
You can also try using VAE Decode (Tiled) instead of VAE Decode; maybe that helps with low VRAM
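If it helps to picture why the tiled version needs less memory, here is a rough plain-Pillow sketch of the idea: the image is handled one tile at a time instead of all at once, so peak memory stays low, but without overlap between tiles visible seams (those lines) can appear at the borders. The tile size and the per-tile operation are just stand-ins.

```python
# Rough Pillow sketch of the tiled idea: process one tile at a time instead
# of the whole image at once, so peak memory stays low. Without overlap,
# visible seams can appear at the tile borders.
from PIL import Image, ImageEnhance

def process_tile(tile):
    # Stand-in for the real per-tile work (in ComfyUI, the VAE decode)
    return ImageEnhance.Contrast(tile).enhance(1.05)

def process_in_tiles(img, tile=512):
    out = Image.new(img.mode, img.size)
    for top in range(0, img.height, tile):
        for left in range(0, img.width, tile):
            box = (left, top, min(left + tile, img.width), min(top + tile, img.height))
            # Only one tile is processed at a time (plus the output canvas)
            out.paste(process_tile(img.crop(box)), (left, top))
    return out

process_in_tiles(Image.open("big_image.png")).save("big_image_tiled.png")
```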
With tiles I get some lines in the image, so I was looking for a new solution. I will give it a try. Thanks ✌🏻
👏🏻💯🙏🏻
Your discord link is unfortunately invalid
I changed it in the channel description yesterday, but in some comments and descriptions the old one remained unchanged; try discord.com/invite/gggpkVgBf3
@@pixaroma Hiii, still an invalid link :((
@@rezvansheho6430 I just tested it; it works for me. Click on it and then click on "go to site": discord.com/invite/gggpkVgBf3
@@pixaroma I used a VPN, and now it worked ♥️
Thank you. I can't help financially. I hope the likes and comments bring attention to your channel
Thank you, yes, the likes and comments really help 😊
Your videos are great, but it would help if you slowed down your voice, er... you talk very fast.
Sorry, but the AI voice I use doesn't have a speed option yet; it generates the voice from the text I give it, but I don't have a way to make it talk slower :(
I feel like I followed the steps closely and installed everything correctly, but when I try to queue the image I get the following error message. Can you help me figure out what I am missing? I downloaded your Flux Dev Q8 GGUF IMG2IMG with Upscaler workflow and my screen looks exactly like yours in the YouTube video. Many thanks!
Prompt outputs failed validation
UnetLoaderGGUF:
- Value not in list: unet_name: 'flux1-dev-Q8_0.gguf' not in []
DualCLIPLoaderGGUF:
- Value not in list: clip_name1: 't5-v1_1-xxl-encoder-Q8_0.gguf' not in ['clip_l.safetensors', 't5xxl_fp16.safetensors']
Never mind. I started with this Ep12. I needed to go back to Ep 10 for the proper GGUF installation.
Glad you figured it out; I just woke up. You can always mention me on Discord and include a screenshot ☺️
I'm so impressed with your work and all the effort you have put in here (and on your Discord). It really helps a beginner like me a lot. I appreciate it. And your like count should be OVER 30K. For people who read this message, please give it a LIKE!!!!! It doesn't cost you anything! Thank you, love and respect. ❤❤❤
thank you 🙂
Thanks!
Thank you so much ☺️