ComfyUI Flux Upscale: Using Flux ControlNet Tile & 4x Upscale Simple Workflow

  • Published Nov 10, 2024

Comments • 52

  • @ComfyUIworkflows
    @ComfyUIworkflows  several months ago

    Updated: th-cam.com/video/WcHlkgVlVPs/w-d-xo.html

  • @FredFraiche
    @FredFraiche several months ago +5

    Let it be known that this is a workflow for SUPER LOW resolution, sub-512, so it is a great help if you are upscaling an image of a scanned stamp. It will not work with even a basic 832x1216, so do not be fooled.

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago

      Try tile upscaling. The tile method is great because it divides your image into smaller pieces, called “tiles.” Each tile is processed separately, which helps keep all the details clear and sharp. (See the sketch below.)
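
A minimal sketch of the tiling idea, assuming Pillow is installed; `upscale_tile` here is a hypothetical stand-in for whatever model pass (diffusion, ESRGAN, etc.) actually does the per-tile work:

```python
# Sketch of tiled upscaling: split, process each tile, reassemble.
from PIL import Image

def upscale_tile(tile: Image.Image, factor: int) -> Image.Image:
    # Placeholder: a real workflow runs a diffusion or ESRGAN pass per tile.
    return tile.resize((tile.width * factor, tile.height * factor), Image.LANCZOS)

def tiled_upscale(img: Image.Image, tile_size: int = 512, factor: int = 4) -> Image.Image:
    # Upscale each tile independently, then paste it into the enlarged canvas.
    out = Image.new("RGB", (img.width * factor, img.height * factor))
    for top in range(0, img.height, tile_size):
        for left in range(0, img.width, tile_size):
            box = (left, top,
                   min(left + tile_size, img.width),
                   min(top + tile_size, img.height))
            out.paste(upscale_tile(img.crop(box), factor), (left * factor, top * factor))
    return out
```

Real tile workflows also overlap adjacent tiles and blend the seams so edges don't show; the ControlNet tile model plays a similar role by keeping each tile consistent with the original content.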

  • @JoeBurnett
    @JoeBurnett several months ago

    Thank you for sharing your workflow! Liked and now subscribed.

  • @zombiehellmonkeygaming1956
    @zombiehellmonkeygaming1956 several months ago

    When you use tile resample, you should always set the denoise strength to 1 and rely on adjusting the control weight on the ControlNet model instead, so you don't get a weird overlay effect.

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago

      Thank you for the advice! I’ll make sure to set the denoise strength to 1 and focus on adjusting the control weight in the ControlNet model to avoid any strange overlay effects. I appreciate your help!
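
As a rough illustration of that advice outside ComfyUI, here is a hedged diffusers sketch: full denoise, with the control weight as the tuning knob. The class names, repo IDs, and parameters are assumptions — verify them against your installed diffusers version before relying on this.

```python
# Sketch: full denoise, tune controlnet_conditioning_scale (the "control weight").
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained(
    "Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

control_image = load_image("low_res_input.png")  # hypothetical input path
image = pipe(
    prompt="a sharp, detailed photo",
    control_image=control_image,
    control_mode=1,                     # 1 = tile on the Union-Pro model
    controlnet_conditioning_scale=0.7,  # the "control weight" tuned instead of denoise
    num_inference_steps=28,
).images[0]
image.save("upscaled.png")
```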

  • @captainpike3490
    @captainpike3490 several months ago

    Great tutorial! I wonder if it can run on 12 GB VRAM. When placing nodes, could they be connected in order? It's hard to follow. Is it possible to keep workflows neat and tidy using groups? Thank you!

  • @usserFunnySpecific
    @usserFunnySpecific several months ago +3

    Hello, where do I download all these models? I go to the same place you show, where it says “resources” on Hugging Face, but there are only models named “diffusion_pytorch_model.safetensors”.

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago

      huggingface.co/Kijai/flux-fp8/tree/main
      huggingface.co/lllyasviel/flux_text_encoders/tree/main
      huggingface.co/nerualdreming/flux_vae/tree/main
      Here are the checkpoint, text encoder, and VAE files.
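
If you prefer fetching these from a script, a minimal huggingface_hub sketch follows. The repo IDs come from the links above, but the filenames are assumptions — check each repo's file list before running.

```python
# Sketch: download the files listed above with huggingface_hub.
from huggingface_hub import hf_hub_download

ckpt = hf_hub_download("Kijai/flux-fp8", "flux1-dev-fp8.safetensors")          # checkpoint (assumed filename)
clip = hf_hub_download("lllyasviel/flux_text_encoders", "clip_l.safetensors")  # text encoder (assumed filename)
vae = hf_hub_download("nerualdreming/flux_vae", "ae.safetensors")              # VAE (assumed filename)
print(ckpt, clip, vae, sep="\n")
```

In ComfyUI these typically go under models/checkpoints, models/clip, and models/vae respectively.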

  • @christofferbersau6929
    @christofferbersau6929 several months ago +1

    Where is the ControlNet model?

  • @spiritform111
    @spiritform111 several months ago +1

    Hm... for some reason it is giving my images a bit of a double vision / blur... it's not as sharp. Is this a GGUF thing?

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago +1

      No. Try a value of 0.6 to 0.7 for the ControlNet strength, and keep the prompt short.

    • @spiritform111
      @spiritform111 several months ago

      @@ComfyUIworkflows ok, thank you!

    • @spiritform111
      @spiritform111 several months ago

      @@ComfyUIworkflows Thank you! And what do you suggest for a start and end percent?

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago +1

      @@spiritform111 This workflow is best for images that lack detail and are blurry. If the image has good details, try using SUPIR or tile upscaling; tile is the best method for upscaling.

    • @spiritform111
      @spiritform111 several months ago

      @@ComfyUIworkflows gotcha... thanks!

  • @lucifer9814
    @lucifer9814 several months ago

    Thanks a ton for this workflow and tutorial. I just need to find the right CLIP models and this should work for me!! It's unfortunate that I never downloaded any of the CLIP models used here, so I have to look for them.

  • @DavidDji_1989
    @DavidDji_1989 several months ago +1

    What do you mean by low VRAM? Below 8 GB? Below 16 GB?

  • @prasundas6088
    @prasundas6088 28 days ago

    Your PC specifications, please?

  • @parthwagh3607
    @parthwagh3607 several months ago

    Can we use these ControlNets for Flux Schnell also?

  • @MAKdotCZ
    @MAKdotCZ several months ago

    Hi, please, I have a problem with this workflow (my hardware: AMD Epyc 7252, 64 GB RAM, 2x RTX 4070 with 12 GB VRAM). For the ControlNet model I used Shakker-Labs FLUX.1-dev-ControlNet-Union-Pro.
    Any good advice?
    "Error occurred when executing ControlNetLoader:
    MMDiT.__init__() got an unexpected keyword argument 'image_model'"

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago

      Hey, are you using the same ControlNet node that I'm using? Could you share a screenshot of your workflow?

  • @graphilia7
    @graphilia7 several months ago +1

    Thanks for the video. I get this error: "KSampler: mat1 and mat2 shapes cannot be multiplied (1x768 and 2816x1280)"
    Can you please help? What should I do? My image is 1888x1056 px.

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago

      Yes, it is due to the image size. This workflow is best for blurry images. As you mention, your image is 1888x1056 px, so it is already optimized; the workflow will not work at this resolution. You can try SUPIR for that.

    • @graphilia7
      @graphilia7 several months ago

      @@ComfyUIworkflows Many thanks, I will try 👍

    • @helveticafreezes5010
      @helveticafreezes5010 several months ago

      @@ComfyUIworkflows I don't think that is the issue. There is something wrong in this workflow. I tried multiple image sizes from 512x512 down to 66x66 and I get this same error every time. All these images were under 100 kB and were low quality. Please do some testing and quality control when putting out your information so you're not wasting people's time.

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago

      @@helveticafreezes5010 The workflow is working; it's been tested. What error are you getting in cmd? And make sure the DualCLIPLoader type is set to flux.

  • @mr.entezaee
    @mr.entezaee several months ago

    Is it useful for 3D anime?

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago +1

      Yes, if the image has very little detail.

  • @paul-r-animation
    @paul-r-animation several months ago

    Is it possible to get the Union CNs solo? Union is great, but it hogs *a lot* of resources on top of FLUX.

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago

      yes

    • @paul-r-animation
      @paul-r-animation several months ago

      @@ComfyUIworkflows Where? They got "Blur" etc. in there.

  • @muthurajradirajjj
    @muthurajradirajjj several months ago

    Just noticed: all the asset names are changed to "diffusion_pytorch_model.safetensors".

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago +1

      You can rename it to FLUX.1-dev-ControlNet-Union-Pro.

  • @97BuckeyeGuy
    @97BuckeyeGuy several months ago +1

    When I run this, my resulting image looks absolutely awful. It's a bit blurry and totally destroyed the face of the man in the starting image. This ControlNet must be pretty weak.

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago

      Hey, make the text prompt short and try again. This ControlNet supports 7 control modes: canny (0), tile (1), depth (2), blur (3), pose (4), gray (5), and low quality (6).
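
For reference, those mode indices as a simple lookup; the names and numbers come straight from the reply above, while how the index is actually fed in depends on your ControlNet node:

```python
# Union-Pro control modes as listed above; the integer is what the union
# ControlNet expects as its control-mode input.
CONTROL_MODES = {
    "canny": 0, "tile": 1, "depth": 2, "blur": 3,
    "pose": 4, "gray": 5, "low_quality": 6,
}
mode = CONTROL_MODES["tile"]  # tile mode (1) is the one used for upscaling here
```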

  • @AInfectados
    @AInfectados several months ago

    Check your blog; I left you a comment.

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago

      Send me a blurry image; there are so many faces.

  • @atomictraveller
    @atomictraveller several months ago

    Thumbnail: the face on the right is a modern person's face; the face on the left is not at all like it. People had different things on their minds then. The photograph on the left looks like a British person to me; the photograph on the right could be from a lot of places. Acculturation = reality.

  • @97BuckeyeGuy
    @97BuckeyeGuy several months ago +3

    You should be upfront about how much VRAM this workflow will need. You're not going to be able to run this locally without at least 24 GB of VRAM.

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago +1

      @@ZERKA-p4b It will work even if you have 6 GB VRAM; you have to use the GGUF loader.

    • @lucifer9814
      @lucifer9814 several months ago +3

      I am running this on an 8 GB 4060; it takes a while, but it works just fine!! Besides, he did mention the two variations at the start of the video.

  • @fontenbleau
    @fontenbleau several months ago

    Not the same face; it's a different man in the upscale. That is the main problem with all these tools, and it prevents using them for any serious restoration. As usual, it makes an "eye candy" result (the core script of all this AI) but nothing related to the original.

    • @ComfyUIworkflows
      @ComfyUIworkflows  several months ago

      I understand the concern about the upscale results not matching the original face. It can be frustrating when tools produce 'eye candy' rather than accurate restorations. However, we can address these issues. Start by lowering the denoise value: first, use the SUPIR node to recover the details; once the details are restored, you can enhance with a very low denoise setting, which should help achieve a more accurate result.
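
SUPIR itself is a ComfyUI node, but the second, low-denoise pass can be sketched in diffusers. This assumes `FluxImg2ImgPipeline` exists in your installed diffusers version, and the file paths are hypothetical:

```python
# Sketch of the low-denoise enhancement pass that follows detail recovery.
import torch
from diffusers import FluxImg2ImgPipeline
from diffusers.utils import load_image

pipe = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

restored = load_image("supir_output.png")  # hypothetical output of the detail-recovery pass
final = pipe(
    prompt="faithful photo restoration, same person",
    image=restored,
    strength=0.2,            # very low denoise: preserve identity, only polish details
    num_inference_steps=28,
).images[0]
final.save("final.png")
```

The `strength` parameter here plays the role of ComfyUI's denoise value: the lower it is, the closer the output stays to the input image.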

    • @fontenbleau
      @fontenbleau several months ago

      @@ComfyUIworkflows That's a fundamental flaw of this whole AI architecture. As I understand it, the problem is that these tools can't really see the input image; they are always guessing from ImageNet, and always wrongly. Maybe new vision models may help, but they would need some way to correctly describe every pixel of the original, not a word description like now.

  • @AInfectados
    @AInfectados several months ago

    It would be great to add *Florence* to get AUTO PROMPT.