Seamless Outpainting with Flux in ComfyUI (Workflow Included)

  • Published Nov 7, 2024

Comments • 80

  • @Lord5oth
    @Lord5oth 15 days ago +2

    Cool! This bricked my Comfy build, thanks!!

  • @cabinator1
    @cabinator1 21 days ago

    Fantastic workflow. Works flawlessly for me.

    • @my-ai-force
      @my-ai-force  19 days ago

      Great to hear!

  • @TailspinMedia
    @TailspinMedia 26 days ago

    Very cool, and I love how organized it is.

    • @my-ai-force
      @my-ai-force  19 days ago

      Thank you!

  • @discotek1198
    @discotek1198 13 days ago

    The best workflow I have seen yet. Perfect, thank you very much!!! It works flawlessly!!! SUB+

    • @my-ai-force
      @my-ai-force  10 hours ago

      Thanks for the sub!

  • @abaj006
    @abaj006 26 days ago

    Brilliant work, really amazing. Thanks for the tutorial and for sharing the workflow. Just tried it, and it works really well.

    • @my-ai-force
      @my-ai-force  19 days ago

      Great to hear!

  • @CasasYLaPistola
    @CasasYLaPistola a month ago +6

    Thanks for the video and the workflow. I've used it, and everything works fine until it reaches the Flux group. The problem is that I don't know which directory I should copy the Flux model into, because if I understood correctly, the Flux models don't go in the same directory as the 1.5 and SDXL checkpoints. In short, when I copy them there, the workflow gives me an error. Can you tell me where to put it? Also, until now I didn't use the usual "Load Checkpoint" node for Flux models; instead I used the "DualCLIPLoader" node.

    • @baheth3elmy16
      @baheth3elmy16 a month ago

      I loaded a Diffusion Model node and used the Flux model from there, but I got an error with the VAE Encoder. The model to use is the 17GB one, not the 11GB diffusion model referred to here.

    • @user-fo9ce3hr5h
      @user-fo9ce3hr5h a month ago

      @@baheth3elmy16 Bro, which CLIP file should I download? I don't have a CLIP file for flux1-dev-fp8.safetensors.

    • @wiwwiw2890
      @wiwwiw2890 a month ago

      I'm also interested in this

  • @gardentv7833
    @gardentv7833 a month ago

    After re-downloading many models, it works. Thank you. It took 2 days to figure out.

    • @ritikagrawal8454
      @ritikagrawal8454 a month ago

      I was able to download all the nodes and models, but my ComfyUI just won't load. Did you face a similar issue? If not, can you still tell me what worked for you?

  • @97BuckeyeGuy
    @97BuckeyeGuy a month ago

    Great workflow! Thank you

    • @my-ai-force
      @my-ai-force  a month ago

      You're so welcome!

  • @Macieks300
    @Macieks300 a month ago +1

    Thanks so much for this workflow.

    • @my-ai-force
      @my-ai-force  a month ago

      Glad it was helpful!

  • @philippeheritier9364
    @philippeheritier9364 a month ago

    It works very, very well. A very big thank you for this brilliant tutorial!

    • @my-ai-force
      @my-ai-force  a month ago

      Glad it helped

  • @dameguy_90
    @dameguy_90 a month ago

    You are a genius. My subscription is worth it.

    • @my-ai-force
      @my-ai-force  a month ago

      Thanks a ton for your support.

  • @mcdigitalargentina
    @mcdigitalargentina a month ago +1

    Great work, friend! Subscribed to your channel. Thanks for sharing your work.

  • @WasamiKirua
    @WasamiKirua a month ago

    Thank you very much, great workflow

    • @my-ai-force
      @my-ai-force  a month ago

      Glad you like it!

  • @wellshotproductions6541
    @wellshotproductions6541 a month ago

    Awesome workflow and great video! Found it over on OpenArt, then made my way here! Keep it up. Subscribed!

    • @my-ai-force
      @my-ai-force  a month ago

      Awesome, thank you!

  • @GrocksterRox
    @GrocksterRox a month ago

    Very well thought out. Kudos!

    • @my-ai-force
      @my-ai-force  a month ago +1

      Thanks for your kind words.

  • @happyme7055
    @happyme7055 a month ago

    Stunning!!!!! First working outpaint ever ;-) GJ! Two things would be useful, I guess... a negative prompt and an optional lineart ControlNet implementation...

    • @kasoleg
      @kasoleg a month ago

      I've also been looking for a long time for something that actually works. I've finalized it and posted my version on Google Drive above. Take it if you want.

  • @johannesmuller7881
    @johannesmuller7881 a month ago

    Thanks a lot for your work, but I've got one general question: does it make sense to use ComfyUI with Flux on my GTX 1070?
    Right now I'm downloading all the stuff and just want to get it up and running, but is it worth it?

    • @my-ai-force
      @my-ai-force  19 days ago +1

      You might want to give the GGUF version of the Flux model a try!

  •  a month ago

    This is wonderfully good work, thank you for sharing! One question, I can only guess where to place the initial image with the x y parameters. Is there a better way to do this? Anyway, great!

    • @kasoleg
      @kasoleg a month ago

      Yes, I had to tinker with the settings to understand how to add it, but in about 30 minutes you'll figure it out by trial and error...)
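
The placement these commenters are working out by trial and error is just padding arithmetic: the source image is pasted onto a larger canvas at an (x, y) offset, and the pad mask marks everything outside it as the region to outpaint. A minimal sketch of that math with NumPy (this is not the video's workflow; the function name and layout are illustrative):

```python
import numpy as np

def pad_for_outpaint(img, new_w, new_h, x, y):
    """Place `img` (H, W, C) on a (new_h, new_w) canvas at offset (x, y),
    returning the padded image and a mask of the region to outpaint."""
    h, w = img.shape[:2]
    assert 0 <= x <= new_w - w and 0 <= y <= new_h - h, "image must fit on canvas"
    canvas = np.zeros((new_h, new_w, img.shape[2]), dtype=img.dtype)
    canvas[y:y + h, x:x + w] = img
    # Mask: 255 where the sampler should generate, 0 over the original pixels.
    mask = np.full((new_h, new_w), 255, dtype=np.uint8)
    mask[y:y + h, x:x + w] = 0
    return canvas, mask

# Example: center a 512x512 image on a 1024x768 canvas.
img = np.ones((512, 512, 3), dtype=np.uint8)
canvas, mask = pad_for_outpaint(img, new_w=1024, new_h=768, x=256, y=128)
```

Raising x pushes the original image right (extending the left side), raising y pushes it down, which matches the trial-and-error behavior described above.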

  • @Ekkivok
    @Ekkivok 29 days ago +1

    This workflow is great, but there is a problem..... The prompt node is set up with Florence, which automates the prompt without any control over it.
    For example, I have a problem with a photo which shows a human, but I want the empty latent side of the image that I want to outpaint to generate the background with no humans...
    And here is the problem: Florence describes the entire image, including... the humans in it, and then outpaints the image with humans (the stuff that I don't want....).
    My question is this: is there a workflow, or can you make one, that restores control of the prompt without Florence?

    • @my-ai-force
      @my-ai-force  19 days ago +1

      I think you're referring to the Flux ControlNet Upscaler workflow! To address your issue, just use a text node to connect to the 'CLIP Text Encoder' instead of the 'Florence2Run.'

  • @baheth3elmy16
    @baheth3elmy16 a month ago +2

    (SOLVED) Hi. Your workflow has a problem with Flux group number 4. The VAE Encoder returns the error "'NoneType' object is not subscriptable". I used both the 17GB Flux and the 11GB Flux. Can you please tell us what the problem might be?
    Edit: Problem solved. The problem was that I disabled the optional groups because I thought I would save VRAM. When I enabled them, the workflow worked.

    • @mohammadbaranteem3487
      @mohammadbaranteem3487 a month ago

      Hello my friend, I am Iranian and I don't have a good enough grasp of this. My problem is that it only works until the Flux stage. You managed to solve the problem, but I don't understand your advice. Can you explain with a photo?

    • @baheth3elmy16
      @baheth3elmy16 a month ago

      @@mohammadbaranteem3487 The workflow is divided into four groups. Groups 2 and 4 are optional, and you can usually disable them. But if you use Group 3, which is the Flux enhancement over the SDXL output, then you must enable Group 2. Otherwise, the Flux group and the entire workflow won't work.

  • @clflover
    @clflover 19 days ago

    Thank you. The Flux section produces a different picture than what is in the Restore Detail section... can you help?

    • @my-ai-force
      @my-ai-force  19 days ago

      Great question! I can see where the confusion might be. The idea behind flux image-to-image repainting is to enhance and diversify the output, so it does make sense to expect some differences compared to what SDXL generates on its own. The goal is to optimize SDXL by leveraging these differences to create even more unique and creative results. If you have any specific examples or ideas in mind, I'd love to discuss them further!

  • @AlexMihaiC
    @AlexMihaiC 8 days ago

    I get this error, and I don't see where to put the CLIP for the Flux model:
    VAEEncode
    'NoneType' object is not subscriptable

    • @AlexMihaiC
      @AlexMihaiC 8 days ago

      It happens if I don't activate the Restore Detail part; now it works.

  • @Henry-xs
    @Henry-xs 6 days ago

    This is really nice, but could you please tell me about the third panel's "Get_pad_mask", "Get_sdxl_img", and "Get_pad_img": can I import these three images directly and replace them?

    • @my-ai-force
      @my-ai-force  10 hours ago

      Yes, we can.

  • @ChrissyAiven
    @ChrissyAiven 9 days ago

    The sizes are not really big. Is it possible to use higher resolutions like 1080x1920 for Reels?

    • @my-ai-force
      @my-ai-force  8 days ago +1

      We can use SUPIR or Topaz for upscaling.

  • @วรายุทธชะชํา
    @วรายุทธชะชํา a month ago +1

    I want to generate multiple sizes in one round. How can I do that, sir?

  • @lukehancockvideo
    @lukehancockvideo a month ago +1

    Where do the images output to? They are not appearing in my ComfyUI Output folder.

    • @my-ai-force
      @my-ai-force  a month ago +1

      You can replace the ‘Preview Image’ node with a ‘Save Image’ node and the image will be saved.

  • @AmateurDrummerBG
    @AmateurDrummerBG a month ago

    Hey, the workflow is very cool, but when I get to the FLUX part, specifically the KSampler, it gets really slow to render. I'm using an RTX 3060 with 12 GB VRAM. Does anyone know how to speed it up?

    • @my-ai-force
      @my-ai-force  19 days ago

      Consider trying out Flux GGUF or Flux Hyper LoRA for your project!

  • @aidanblah9646
    @aidanblah9646 16 days ago

    Flux_Repaint - Load Checkpoint: "CheckpointLoaderSimple ERROR: Could not detect model type of: E:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\checkpoints\FLUX\flux1-dev-fp8.safetensors". All my other workflows can find the dev-fp8 in the unet folder, but I went ahead and copied it and put it in the checkpoints folder, in a FLUX/ folder like you have it. I even selected it in the "Load Checkpoint" node. I still get that error. Please help.

    • @my-ai-force
      @my-ai-force  8 days ago

      Maybe the file has been corrupted. Try redownloading it.

  • @莊惠雯-t5g
    @莊惠雯-t5g 7 days ago

    Thank you.
    I used all of your workflows and models,
    but why does my ComfyUI always show:
    CheckpointLoaderSimple
    ERROR: Could not detect model type of: D:\ComfyUI-aki-v1.3\ComfyUI-aki-v1.3\models\checkpoints\flux-dev\flux1-dev-fp8-e5m2.safetensors

    • @my-ai-force
      @my-ai-force  6 days ago

      Instead of the Checkpoint Loader node, try using Load Diffusion Model to load the Flux model.

  • @deonix95
    @deonix95 a month ago +2

    Error occurred when executing CheckpointLoaderSimple:
    ERROR: Could not detect model type of: D:\Programs\SD\models/Stable-diffusion\flux1-dev-fp8.safetensors
    File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
    File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
    File "D:\Programs\ComfyUI_windows_portable\ComfyUI\nodes.py", line 539, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
    File "D:\Programs\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 527, in load_checkpoint_guess_config
    raise RuntimeError("ERROR: Could not detect model type of: {}".format(ckpt_path))
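
For context on the "Could not detect model type" errors in this thread: `load_checkpoint_guess_config` expects an all-in-one checkpoint (UNet plus CLIP and VAE). The smaller flux1-dev fp8 file is a UNet-only export, so the checkpoint loader finds none of the weights it probes for; as replies above note, such files load through "Load Diffusion Model" (with DualCLIPLoader and a VAE loader) instead. A hedged sketch of the distinction, using toy key lists; the prefixes and node names are illustrative and not ComfyUI's exact detection logic:

```python
def suggest_loader(keys):
    """Guess which ComfyUI loader fits a file from its tensor key prefixes.
    Heuristic sketch only; the real detection inspects the actual tensors."""
    has_unet = any(k.startswith(("model.diffusion_model.", "double_blocks.")) for k in keys)
    has_clip = any("text_model" in k or k.startswith("text_encoders.") for k in keys)
    has_vae = any(k.startswith(("first_stage_model.", "vae.")) for k in keys)
    if has_unet and (has_clip or has_vae):
        return "CheckpointLoaderSimple (models/checkpoints)"
    if has_unet:
        return "Load Diffusion Model (models/unet) + DualCLIPLoader + VAELoader"
    return "unknown"

# A UNet-only Flux export has no CLIP/VAE keys, so the checkpoint loader fails:
unet_only = ["double_blocks.0.img_attn.qkv.weight"]
full_ckpt = ["model.diffusion_model.x", "text_encoders.clip_l.y", "vae.decoder.z"]
print(suggest_loader(unet_only))
print(suggest_loader(full_ckpt))
```

This also explains why moving the file between the checkpoints and unet folders alone does not fix the error: what matters is which loader node reads it, not just where it sits.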

    • @henroc481
      @henroc481 a month ago +3

      same here

    • @Cluster5020
      @Cluster5020 a month ago +1

      @@henroc481 "aderek Flux v2" worked for me

  • @digitalface9055
    @digitalface9055 a month ago +1

    Missing nodes crashed my ComfyUI; it won't start anymore.

  • @manipayami294
    @manipayami294 a month ago

    Can you do it with the Flux GGUF versions?

    • @my-ai-force
      @my-ai-force  a month ago

      In theory, yes.

  • @manipayami294
    @manipayami294 a month ago

    When I use the GGUF loader the app crashes. Does anyone know how I should fix this problem?

  • @DarioToledo
    @DarioToledo a month ago

    I didn't know of that Union repaint controlnet. What does that do?

    • @my-ai-force
      @my-ai-force  a month ago

      It's used for inpainting.

    • @DarioToledo
      @DarioToledo a month ago

      @@my-ai-force And what difference does it make compared to usual inpainting without a ControlNet? I tried to run it, but it gave me errors.

  • @maxmad62tube
    @maxmad62tube 27 days ago

    I'm sorry, but I'm getting the error message "4-bit quantization data type None is not implemented." Can you help me?

    • @my-ai-force
      @my-ai-force  19 days ago

      Thanks for reaching out! To better assist you with this error, could you please share a bit more detail? A screenshot of the terminal or any additional context about the error would be really helpful. This way, I can understand the issue more clearly and provide you with the best support possible!

  • @spelgenoegen7001
    @spelgenoegen7001 a month ago

    Awesome! Everything works perfectly with diffusion_pytorch-model_promax.safetensors. Thanks!

  • @MatthewWaltersHello
    @MatthewWaltersHello a month ago

    I find it makes the eyes look like googly-eyes. How to fix?

  • @baheth3elmy16
    @baheth3elmy16 a month ago +1

    The Flux model in your description is the wrong model. It is the 11GB model, and it won't work in your workflow.

    • @Cluster5020
      @Cluster5020 a month ago

      Will any other flux1-dev (e.g. the bnb one) work as well?

    • @Cluster5020
      @Cluster5020 a month ago

      nevermind, "aderek Flux v2" is working :)

  • @maelstromvideo09
    @maelstromvideo09 a month ago

    Try differential diffusion; it makes inpainting better, without most of this pain.

  • @Thawadioo
    @Thawadioo a month ago

    ComfyUI is giving me dizziness