Don't be shy and post your best upscales on the discord @ tinyurl.com/URSIUM 🛸🛸🛸
PS: updated the thumb following rather "interesting" comments 😂
Great tutorial, Supir is amazing, and we need all the information we can get, Thank you!
Thank you for explaining the whole process with every little detail. Loved that you got the example of someone that could be any of us and showed why it didn't work and how to fix it. Please continue like this!
Fascinating, I've been experimenting with Supir, but I was just stumbling around, I didn't understand what a lot of the settings did. Nicely explained.
Hello, please update the link to the workflow, because Flowty no longer works. Where is this workflow available now? I can't find it on Discord. Regards
Awesome...thanks for more SUPIR info and the workflow!
Thanks! 🛸
I've been using your SUPIR workflow daily since you released it. This is really helpful to understand how to tweak the settings. Thank you for this!
You're very welcome! 👽
Maybe this is going to be my favorite go-to channel after all
Excellent video, thanks man. SUPIR is magic. I'm short on VRAM, but the v2 with a simple workflow and a Lightning model really dropped my jaw. I mean, I was able to restore or upscale tiny/damaged images that were impossible with everything else I tried.
Yeah, I have collected thousands of photos from my family, and am planning to batch-process them all for restoration. It's really nice!
@@stephantual Yeah, it's my new hobby now :))) I fixed and upscaled around 300 images in the past few days. I can't go higher than 1536x1536 because of time, fan noise, and memory limits, but it's still pretty impressive. A bunch of 400x400 or lower images are now viewable with higher quality and without artifacts. I love it.
another awesome one... thank you/merci!
Fantastic! Thanks for spending a week on us :) Much appreciated and very well presented :D
Wow, that was amazing! I really appreciated the thorough explanations and detailed settings.
I am sad to see one of my favorite channels go mute. But I am sure you are doing great things and being successful. Hope some day you can take a couple of minutes to share some of your great knowledge. All my best wishes amigo!! ❤️🇲🇽❤️
Extremely great tutorial, excellent stuff; really appreciate you sharing your experience at such a detailed level! That's exactly what I need at the moment! A thousand thanks!
21:06 The restored old photo is fantastic! I need to try this.
That's really amazing that you share the workflow for free. God bless you
All workflows should be free IMHO 👽
Explanation like this was needed. Thanks for your efforts.
100% the more time you put in, the better you get, but it doesn't mean you're better than someone else. Well put. Cheers!
I'm impressed! Big thanks for the effort and time you spent on this video. For me, it's a bit too fast and I have to stop and play it again... but it's very helpful!
Hello brother, thanks for the great video. Could you please re-upload the workflow? The link doesn't work any more.
This is amazing, great job on this workflow.
The workflow download address is invalid. Can you update the workflow download address?
Great effort! Thank you for clarifying lots of details for AI people who need high-quality upscale workflows. Can you kindly explain how you set up the "wireless" :) nodes? In ComfyUI they look so handy.
"There's nothing arbitrary about workflows" damn, that's a good line lol
Hi, the SUPIR workflow is no longer available via the link, how can I get it?
Dude you're the best!!!
Excellent. People don't realize you have to play with the settings per image depending on what you want. Most people don't know what they are doing. They don't want to learn; they want to copy. In general in life, if you just copy, then you get what you get. If you learn and then apply what you learned, then you can create something new. Yes, it may be derivative, but it's your work. Great video; the point is about learning, making mistakes, trial and error, and figuring out what works for you and how to make it better from the collective knowledge of the community.
Thanks for this. Love it.
You nailed it! 👽
How do I change the WD14 tagger text that has been generated? In the video at 20:20 you change 1girl to 1boy. The text is dimmed and cannot be changed!
Great video. Thanks for the effort
Hi Stephan, I have used your workflow for some time now and everything has worked flawlessly so far, but since I updated ComfyUI and the nodes today I get an error when it comes to image resizing:
Error occurred when executing ImageResize+:
'bool' object has no attribute 'startswith'
Any idea how to solve this?
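For anyone else hitting this: the message usually means a node received a boolean where it expected a string option (e.g. a widget that serialized differently after the update). A hypothetical minimal reproduction of the error class, not ImageResize+'s actual internals:

```python
def resize_method(method):
    # Nodes like ImageResize+ typically branch on a string option;
    # these option names are illustrative, not the node's real ones.
    if method.startswith("lanczos"):
        return "lanczos"
    return "nearest"

# A stale workflow can feed a bool (e.g. a toggled widget) where a
# string used to be, which raises the reported AttributeError:
try:
    resize_method(True)
except AttributeError as e:
    print(e)  # 'bool' object has no attribute 'startswith'

# Re-selecting the option in the node so it saves as a string again
# avoids the crash:
print(resize_method("lanczos x2"))  # lanczos
```

Deleting the node and re-adding a fresh copy (so its widgets re-serialize) often clears this kind of error after a nodes update.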
The workflow link is dead now since that website has been ditched :( Can you please share the workflow on Civitai? Please :(
The link to your workflow is broken. Can you fix it? thanks :)
Was there any change in the workflow? I don't see the "adding details like Magnific" part in the workflow you uploaded.
Thank you for another excellent video. I always seem to get a lot of great information from both you and Mateo's videos.
Question: How do I get rid of tile lines that show up on some of my upscaled images? I noticed you didn't have any issues with that in your video. But the image that I'm working with has tile lines visible everywhere.
Got you covered on discord my friend :)
Great tutorial, thank you!
You're very welcome! 👽
200% the more time you put in, the better you get, but it doesn't mean you're better than someone else. Well put. Cheers! ^^
❤❤❤ need this for videos
Very magical! Thank you for your explanations, I learned a lot from your last video! At the same time, I also have a question: can this model use LCM-LoRA? Because I want to use LCM-LoRA directly for acceleration. Thank you very much!
But is that a different SUPIR workflow, where you enhance and add details to the cute doll image at the end, from 22:10?
I am a little confused by one thing. The SUPIR Sampler node seems to support only DPM++2M and EDM samplers. Quite a few of the faster models are optimized for DPM++ SDE, Euler A, or another sampler. On some models it makes a huge difference.
Does this matter a lot for the SUPIR model? Should we spend time testing a model we want to use with SUPIR with DPM++2M and/or EDM ahead of time? SUPIR can be quite slow, so I'd like to make sure that the issue is not simply a question of model/sampler compatibility.
Thanks
Hi Stephan! Where have you been? We haven't seen you in 2 months, dude.
Well done!
Hello! I am fascinated with your skills and results. I get this problem: some of my images become absolutely ugly after the node "SUPIR first stage (denoiser)". And then in the final result I see all the same issues in the image, that this SUPIR DENOISER introduced. Could you comment on it please?
The source image is 2048x2048. How do I divide it into tiles? The image generated after tiling is also split into pieces, not one whole image. Which settings are wrong?
So we generate a practice image and see what the steps/CFG are... The part that confuses me: why is there a CFG start and end? If the CFG is 7, should I just set both to 7?
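In case it helps to picture it: a start/end pair usually means the sampler interpolates the CFG across the steps, so setting both to 7 does give a constant CFG of 7. A sketch of the idea, assuming simple linear interpolation (check the SUPIR sampler node for its exact schedule):

```python
def cfg_schedule(cfg_start, cfg_end, steps):
    """Linearly interpolate CFG from cfg_start to cfg_end over `steps`.
    Illustrative only; the real node may use a different curve."""
    if steps == 1:
        return [float(cfg_start)]
    return [cfg_start + (cfg_end - cfg_start) * i / (steps - 1)
            for i in range(steps)]

print(cfg_schedule(7, 7, 4))  # [7.0, 7.0, 7.0, 7.0] -- constant, like a single CFG
print(cfg_schedule(4, 7, 4))  # [4.0, 5.0, 6.0, 7.0] -- ramps guidance up over the run
```

A ramp like the second one lets early steps stay loose (less guidance, more freedom to hallucinate detail) while late steps follow the prompt more strictly.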
Maybe you can help. SUPIR is using RAM rather than the GPU?
superb!
If you could show one example for an architecture image, I would love you forever...!
Can you share the workflow? I need it. Help me.
Can you use supir for video upscaling? Would it stay consistent or not really?
link's dead
Can't these workflows be refined and simplified?
"restore_cfg -1" - disable restoration. So why -1?
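Most likely -1 is a sentinel value: a deliberately out-of-range number the code checks for to mean "off", since a real CFG is never negative. A generic sketch of the pattern (my assumption about the semantics, not SUPIR's actual code):

```python
def effective_restore_cfg(restore_cfg, base_cfg):
    """Sentinel-value pattern: any negative restore_cfg means
    'restoration guidance disabled'. Hypothetical helper, not SUPIR's API."""
    if restore_cfg < 0:
        return base_cfg  # ignore the restore setting entirely
    return restore_cfg

print(effective_restore_cfg(-1, 7.0))   # 7.0 -> restoration disabled, base CFG used
print(effective_restore_cfg(2.5, 7.0))  # 2.5 -> restoration guidance active
```

Using 0 as the "off" value wouldn't work here, because a CFG of 0 is itself a meaningful setting (no guidance), so a negative sentinel keeps the two cases distinct.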
PLZ update workflow link THX
oh no all the links are now down
Is SUPIR better than Magnific AI?
What is the minimum amount of VRAM needed to run SUPIR, please?
It depends on the image you pass to the sampler... 1024x1024 = 8 GB, more or less. 4K = you need a 4090. The idea is to keep the first upscale as small as possible, because what you really want is just enough pixels for the ControlNet to kick in, then Lanczos afterwards.
Personally, I have a 3060 Ti with 8 GB of VRAM and it doesn't seem to be enough even for a 3x upscale on a 1024 input image; the render grinds away for more than half an hour in the denoising pass (even switching to fp8 with 256 tiles). I didn't have the patience to wait for it to finish ^^ I'm sticking with my little Ultimate Upscaler 😅
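The rule of thumb above (keep the diffusion pass small, let Lanczos do the big jump) can be sketched as simple arithmetic. The 1024-pixel cap below is illustrative, taken from the "1024x1024 ≈ 8 GB" estimate, not a SUPIR constant:

```python
def plan_upscale(src_long_edge, target_long_edge, vram_cap=1024):
    """Split a big upscale into a small diffusion pass plus a cheap
    Lanczos resize. vram_cap=1024 px is an illustrative ~8 GB limit."""
    diffusion_size = min(target_long_edge, vram_cap)   # SUPIR/ControlNet pass
    lanczos_scale = target_long_edge / diffusion_size  # cheap CPU resize after
    return diffusion_size, lanczos_scale

print(plan_upscale(512, 4096))  # (1024, 4.0): diffuse to 1024, then Lanczos x4
print(plan_upscale(512, 800))   # (800, 1.0): small target fits in one pass
```

The expensive detail synthesis happens only up to the cap; the remaining enlargement is a plain resampling filter, which costs almost nothing in VRAM.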
What are the minimum requirements to run this beast? Is it possible on 12 GB RAM and 4 GB VRAM?
Are you still alive? Haven't seen anything new from you.
Any tips for grainy photos?
Nuke the grain using a different denoiser than the built-in SUPIR one, so it starts from a polished image. Or use a model from OpenModelDB that specializes in that type of thing (they have categories at the top, it's pretty good). 👽
And the link to access SUPIR?
The blogger was great.
Awesome
*pauses video* there is no way this will work.
bbl
*** wtf, it's actually really good. I am using the Lightning version of the same model and I'm not sure I'm setting it up totally correctly, but with CFG 1 to 2, steps at 8, and the sampler set to EDM or tiled EDM, it's really good. It has problems of course, but it's Lightning and I don't expect it to get everything correct. Testing with the red truck at 1 upsample.
Compared to SD Ultimate Upscale, SUPIR seems about 1,000 times more complex. Which makes me very BIG SAD.
The node graphs = Indian telegraph poles.....
Functional? Maybe.
The one thing I do not like about SUPIR is that it's not for commercial use.
404
This page could not be found.
Love the detail you go into. BUT! YouTube does not charge for time… please have less coffee before recording…. It took me 4 hours to follow along, needing to pause and rewind at every single step… no one can learn effectively at even 1/4 this speed…. Subscribed..
It's strange, you have a French accent...
Too much VRAM
So the video is basically: "Look how cool I am and how I know how to do stuff, look how fast I am. Don't understand what's going on? Your problem. I'm not here to teach, I'm here to boast!"
Well, if it's fast you can always pause and replay. If you don't understand what's going on, you might watch his previous videos; this is a continuation of the SUPIR series. If you don't have the workflow, it's in the description, the first link. If you need a full video starting with the ComfyUI installation, this is just not the right video for you.
The author spent 6 days putting this video together to share with the community for FREE! People try to earn money on Patreon with this kind of stuff. Hence, you are free to go and pay for your custom requests on paid platforms.
@@ooiirraa Thanks, I've ranted, and you've been constructive, appreciate it!
😂
it's an update
th-cam.com/video/2q6Ms9H_cXg/w-d-xo.html
Hi Stephen, I send you my greetings and I thank you for these incredible videos, what you achieve and what you teach is wonderful. I am putting 100% into learning these methods and well, I am writing to you because I have a problem with the "Color Match" node when trying to change from the "RestoreDPMPP2Sampler" Sampler to the "TiledRestoreEDMSampler". I get the following error:
"Error occurred when executing ColorMatch:
stack expects a non-empty TensorList
File "C:\pinokio\api\comfyui.git\app\execution.py", line 317, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, _cb)
File "C:\pinokio\api\comfyui.git\app\execution.py", line 192, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "C:\pinokio\api\comfyui.git\app\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i)
File "C:\pinokio\api\comfyui.git\app\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs))
File "C:\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-KJNodes\nodes\image_nodes.py", line 121, in colormatch out = torch.stack(out, dim=0).to(torch.float32)"
I tried the 6 default methods of the "Color Match" node, but none of them worked.
Could you help me find the solution?
Thanks a million in advance
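For what it's worth, "stack expects a non-empty TensorList" means the tiled sampler handed ColorMatch an empty image batch, so there was nothing to stack. The same failure mode reproduced in NumPy (used here only so the sketch runs without torch; `np.stack` fails on empty input just like `torch.stack`):

```python
import numpy as np

tiles = []  # what ColorMatch effectively receives from the tiled sampler

try:
    np.stack(tiles, axis=0)
except ValueError as e:
    print(e)  # need at least one array to stack

# A defensive version of the node's last line would check first, which is
# why a sampler that always yields at least one image doesn't crash:
if tiles:
    batch = np.stack(tiles, axis=0).astype(np.float32)
```

So the fix is usually upstream of ColorMatch: make sure the TiledRestoreEDMSampler actually produces output (tile size and image size compatible, no earlier node silently failing) rather than trying other color-match methods.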