Updated: th-cam.com/video/WcHlkgVlVPs/w-d-xo.html
Let it be known that this is a workflow for SUPER LOW resolution (sub-512), so it is a great help if you are upscaling an image of a scanned stamp. It will not work with even a basic 832x1216 image, so don't get fooled.
Try tile upscaling. The tile method is great because it divides your image into smaller pieces called "tiles." Each tile is processed separately, which helps keep all the details clear and sharp.
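A minimal sketch of the tile idea described above, in Python with PIL. The `upscale_tile()` call is a hypothetical stand-in for whatever actually enlarges each piece (a diffusion pass, an ESRGAN-style model, etc.); real tile nodes also overlap and blend tiles so the seams don't show, which this naive version skips.

```python
from PIL import Image

def upscale_tiled(img: Image.Image, tile_size: int = 512, scale: int = 2) -> Image.Image:
    """Split the image into tiles, upscale each one, and paste them back together."""
    out = Image.new("RGB", (img.width * scale, img.height * scale))
    for top in range(0, img.height, tile_size):
        for left in range(0, img.width, tile_size):
            box = (left, top, min(left + tile_size, img.width), min(top + tile_size, img.height))
            tile = img.crop(box)
            big = upscale_tile(tile, scale)          # hypothetical per-tile upscaler
            out.paste(big, (left * scale, top * scale))
    return out
```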
Thank you for sharing your workflow! Liked and now subscribed.
When you use tile resample, you should always set the denoise strength to 1 and rely on adjusting the control weight on the ControlNet model instead, so you don't get a weird overlay effect.
Thank you for the advice! I’ll make sure to set the denoise strength to 1 and focus on adjusting the control weight in the ControlNet model to avoid any strange overlay effects. I appreciate your help!
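A minimal sketch of that advice as a ComfyUI API-format prompt fragment: denoise stays at 1.0 on the KSampler, and the blend is tuned through the ControlNet strength. Node IDs, upstream links, and the sampler settings are placeholders, not the exact values from the video.

```python
prompt_fragment = {
    "10": {
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "positive": ["6", 0], "negative": ["7", 0],   # placeholder links
            "control_net": ["11", 0], "image": ["12", 0],
            "strength": 0.6,                              # adjust this weight, not the denoise
            "start_percent": 0.0, "end_percent": 1.0,
        },
    },
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["4", 0], "positive": ["10", 0], "negative": ["10", 1],
            "latent_image": ["5", 0],
            "seed": 0, "steps": 20, "cfg": 1.0,
            "sampler_name": "euler", "scheduler": "simple",
            "denoise": 1.0,                               # keep at 1 per the tile-resample advice
        },
    },
}
```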
Great tutorial! I wonder if it can run on 12GB VRAM. When placing nodes, can they be connected in order? It's hard to follow. Is it possible to keep workflows neat and tidy using groups? Thank you
Yes, it can run on 12GB VRAM.
Hello, where do I download all these models? I went to the same place you show, where it says "resources" on Hugging Face, but there are only models named "diffusion_pytorch_model.safetensors".
huggingface.co/Kijai/flux-fp8/tree/main
huggingface.co/lllyasviel/flux_text_encoders/tree/main
huggingface.co/nerualdreming/flux_vae/tree/main.
Here are the checkpoint, text encoder files, and VAE.
where is the controlnet model??
in description
hm... for some reason it is giving my images a bit of a double vision / blur... it's not as sharp. is this a gguf thing?
No, try a value of 0.6 to 0.7 on the ControlNet, and make the prompt short.
@@ComfyUIworkflows ok, thank you!
@@ComfyUIworkflows thank you! and what do you suggest for a start and end percent?
@@spiritform111 This workflow is best for images that lack detail and are blurry. If the image has good details, try using SUPIR or tile upscaling; tile is one of the best methods for upscaling.
@@ComfyUIworkflows gotcha... thanks!
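The thread above doesn't give a definitive start/end percent, so here is only a rough illustration (my own sketch, not the video author's recommendation) of what those parameters mean: the fraction of the sampling schedule during which the ControlNet influence is applied. The 0.8 end value is an arbitrary example.

```python
steps = 20
start_percent, end_percent = 0.0, 0.8   # example values only

# Steps where the ControlNet conditioning is active, roughly speaking.
active_steps = [s for s in range(steps) if start_percent <= s / steps < end_percent]
print(f"ControlNet active on steps {active_steps[0]}-{active_steps[-1]} of {steps}")
```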
Thanks a ton for this workflow and tutorial, I just need to find the right clip models and this should work for me !! It's unfortunate that I never downloaded any of the clip models used here, so I gotta look for them
what do you mean by low VRAM ? below 8 GB ? below 16 GB ?
Below 12 GB
Your PC specifications, please?
12 GB VRAM, 64 GB RAM, i9 14K
Can we use this ControlNet for Flux Schnell as well?
yes
Hi, please, I have a problem with this workflow (my hardware: AMD EPYC 7252, 64 GB RAM, 2x RTX 4070 with 12 GB VRAM). For the ControlNet model I used Shakker-Labs FLUX.1-dev-ControlNet-Union-Pro.
Any good advice?
"Error occurred when executing ControlNetLoader:
MMDiT.__init__() got an unexpected keyword argument 'image_model'"
Hey, are you using the same ControlNet node that I'm using? Could you share a screenshot of your workflow?
Thanks for the video. I get this error: "KSampler: mat1 and mat2 shapes cannot be multiplied (1x768 and 2816x1280)"
Can you please help, what should I do? my image is 1888x1056px
Yes, it is due to the image size. This workflow is best for blurry images. As you mention, your image is 1888x1056px, so it is already optimized; this workflow won't work at that resolution. You can try SUPIR for that.
@@ComfyUIworkflows Many thanks, I will try 👍
@@ComfyUIworkflows I don't think that is the issue. There is something wrong in this workflow. I tried multiple image sizes from 512x512 down to 66x66 and I get this same error every time. All these images were under 100kb and were low quality. Please do some testing and quality control when putting out your information so you're not wasting people's time.
@@helveticafreezes5010 The workflow is working; it's tested. What error are you getting in the console? Also make sure the DualCLIPLoader type is set to flux.
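A minimal sketch of the DualCLIPLoader settings implied by that reply. A mat1/mat2 shape error at the KSampler usually means the text-encoder output doesn't match what the model expects, so the loader type has to be "flux". The file names below are just the commonly used ones and may differ from yours.

```python
clip_loader = {
    "class_type": "DualCLIPLoader",
    "inputs": {
        "clip_name1": "t5xxl_fp8_e4m3fn.safetensors",  # assumption: your T5 encoder file
        "clip_name2": "clip_l.safetensors",            # assumption: your CLIP-L file
        "type": "flux",                                # must be flux, not sdxl/sd3
    },
}
```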
Is it useful for 3D anime?
Yes, if the image has very little detail.
Is it possible to get the Union-CN's solo? Union is great, but it hogs *a lot* of resources on top of FLUX.
yes
@@ComfyUIworkflows Where? They got "Blur" etc. in there.
just noticed, all the asset names are changed to "diffusion_pytorch_model.safetensors"
you can rename it to FLUX.1-dev-ControlNet-Union-Pro
When I run this, my resulting image looks absolutely awful. It's a bit blurry and totally destroyed the face of the man in the starting image. This ControlNet must be pretty weak.
Hey, make the text prompt shorter and try again. This ControlNet supports 7 control modes: canny (0), tile (1), depth (2), blur (3), pose (4), gray (5), low quality (6).
Check your blog, I left you a comment.
Send me a blurry image; there are so many faces.
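For reference, the control-mode indices listed above as a lookup table. How the mode is actually passed depends on the node pack in use; in stock ComfyUI the Union model's mode is usually chosen with a SetUnionControlNetType-style node rather than a raw index, so treat the integers as documentation, not an API.

```python
UNION_CONTROL_MODES = {
    "canny": 0,
    "tile": 1,
    "depth": 2,
    "blur": 3,
    "pose": 4,
    "gray": 5,
    "low_quality": 6,
}
```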
Thumbnail: the face on the right is a modern person's face; the face on the left is not at all like it. People had different things on their minds then. The photograph on the left looks like a British person to me; the photograph on the right could be from a lot of places. Acculturation = reality.
You should be upfront about how much VRAM this workflow will need. You're not going to be able to run this locally without at least 24GB of VRAM.
@@ZERKA-p4b It will work even if you have 6 GB VRAM; you have to use the GGUF loader.
I am running this on an 8 GB 4060; it takes a while but works just fine!! Besides, he did mention the two variations at the start of the video.
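A minimal sketch of the low-VRAM route mentioned above: swap the regular UNET loader for the GGUF loader from the ComfyUI-GGUF custom node pack. The node class and the quantized file name are assumptions; check the exact names your install exposes.

```python
unet_loader = {
    "class_type": "UnetLoaderGGUF",              # from the ComfyUI-GGUF node pack
    "inputs": {
        "unet_name": "flux1-dev-Q4_K_S.gguf",    # assumption: a quantized Flux dev file
    },
}
```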
Not the same face; it's a different man in the upscale. That's the main problem with all these tools, and it prevents them from being used for any serious restoration. They produce an "eye candy" result as usual (the core script of all this AI), but nothing related to the original.
I understand the concern about the upscale results not matching the original face. It can be frustrating when tools produce 'eye candy' rather than accurate restorations. However, we can address the eye issues. Start by lowering the denoise value. First, use the Supir node to recover the details. Once the details are restored, you can enhance with a very low denoise setting, which should help achieve a more accurate result.
@@ComfyUIworkflows That's a fundamental flaw of all this AI architecture. As I understand it, the problem is that these tools can't really see the input image; they are always guessing based on ImageNet, and always wrongly. Maybe new vision models may help, but they would need some way to correctly describe every pixel of the original, not just a word description like now.
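A rough sketch of the two-stage idea suggested a couple of comments up: restore details first, then refine with a very low denoise so the result stays close to the original. Both `supir_restore` and `diffusion_refine` are hypothetical stand-ins for the SUPIR node and the Flux/ControlNet pass, and the denoise value is only an illustrative low setting.

```python
def restore_face(image):
    """Two-stage restoration: recover detail, then refine gently."""
    detailed = supir_restore(image)                       # stage 1: recover lost detail
    final = diffusion_refine(detailed, denoise=0.15)      # stage 2: very low denoise, stays faithful
    return final
```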
Would be great to add *Florence* to get AUTO PROMPT.
will update