Cool! This bricked my Comfy build, thanks!!
Fantastic workflow. Works flawlessly for me.
Great to hear!
Very cool, and I love how organized it is.
Thank you!
The best workflow I have seen yet. Perfect, thank you very much!!! It works flawlessly!!! SUB+
Thanks for the sub!
Brilliant work, really amazing. Thanks for the tutorial and sharing the workflow. Just tried it and works really well.
Great to hear!
Thanks for the video and the workflow. I've used it and everything works fine until it reaches the Flux group. The problem is that I don't know which directory I should copy the Flux model into, because if I understood correctly, Flux models don't go in the same directory as the 1.5 and SDXL checkpoints. In short, when I copy them there the workflow gives me an error. Can you tell me where to put it? Also, until now, to use Flux models I didn't use the usual "Load Checkpoint" node; instead I used the "DualCLIPLoader" node.
I loaded a Diffusion Model node and used the Flux model from there, but I got an error with the VAE Encoder. The model to use is the 17 GB one, not the 11 GB diffusion model referred to here.
@@baheth3elmy16 Bro, which CLIP file should I download? I don't have a CLIP file for flux1-dev-fp8.safetensors.
I'm curious about this too.
After re-downloading the models several times, it works. Thank you! It took me two days to figure out.
I was able to download all the nodes and models, but my ComfyUI just won't load. Did you face a similar issue? If not, can you still tell me what worked for you?
Great workflow! Thank you
You're so welcome!
Thanks so much for this workflow.
Glad it was helpful!
It works very, very well. A very big thank you for this brilliant tutorial.
Glad it helped
You are a genius. My subscription is worth it.
Thanks a ton for your support.
Great work, friend! Subscribed to your channel. Thanks for sharing your work.
Thank you very much, great workflow!
Glad you like it!
Awesome workflow and great video! Found it over on OpenArt, then made my way here! Keep it up. Subscribed!
Awesome, thank you!
Very well thought out. Kudos!
Thanks for your kind words.
Stunning!!!!! First working outpaint ever ;-) GJ! Two things would be useful, I guess... a negative prompt and an optional lineart ControlNet implementation...
I've also been looking for a long time for something that actually works. I've finalized it and posted my version on Google Drive above. Take it if you want.
Thanks a lot for your work, but I have one general question: does it make sense to use ComfyUI with Flux on my GTX 1070?
Right now I am downloading everything and just want to get it up and running, but is it worth it?
You might want to give the GGUF version of the Flux model a try!
This is wonderfully good work, thank you for sharing! One question: I can only guess where to place the initial image with the x/y parameters. Is there a better way to do this? Anyway, great!
Yes, I had to tinker with the settings to understand how to add it, but in about 30 minutes you'll figure it out by trial and error. :)
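For anyone else puzzling over the x/y placement: conceptually, outpaint padding just drops the source image onto a larger canvas at an offset and masks the empty region the model should fill. A minimal pure-Python sketch of that idea (plain 2D lists stand in for image tensors; `pad_for_outpaint` and its arguments are illustrative, not actual workflow node inputs):

```python
def pad_for_outpaint(img, canvas_w, canvas_h, x, y, fill=0):
    """Place `img` (a list of pixel rows) at offset (x, y) on a
    canvas_w x canvas_h canvas.

    Returns (canvas, mask): mask is 1 in the padded area that the model
    should generate, and 0 where original pixels were copied.
    """
    h, w = len(img), len(img[0])
    canvas = [[fill] * canvas_w for _ in range(canvas_h)]
    mask = [[1] * canvas_w for _ in range(canvas_h)]
    for row in range(h):
        for col in range(w):
            canvas[y + row][x + col] = img[row][col]
            mask[y + row][x + col] = 0
    return canvas, mask

# Example: a 2x2 image placed at x=1, y=0 on a 4x3 canvas.
canvas, mask = pad_for_outpaint([[9, 9], [9, 9]], 4, 3, x=1, y=0)
```

So the x/y values are simply the top-left corner of the original image inside the final, larger frame.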
This workflow is great, but there is a problem: the prompt node is set up with Florence, which automates the prompt without any control over it.
For example, I have a problem with a photo that shows humans, but I want the empty latent side of the image I'm outpainting to generate a background with no humans.
And here is the problem: Florence describes the entire image, including the humans, and then outpaints the image with humans (the stuff that I don't want).
So my question is this: is there a workflow, or can you make one, that restores control of the prompt without Florence?
I think you're referring to the Flux ControlNet Upscaler workflow! To address your issue, just use a text node to connect to the 'CLIP Text Encoder' instead of the 'Florence2Run.'
(SOLVED) Hi. Your workflow has a problem with Flux group number 4. The VAE Encoder returns the error "'NoneType' object is not subscriptable". I used both the 17 GB Flux and the 11 GB Flux. Can you please tell us what the problem might be?
Edit: Problem solved. I had disabled the optional groups because I thought I would save VRAM. When I enabled them, the workflow worked.
Hello my friend. I'm from Iran and I don't fully understand. My problem is that it only works until the Flux stage. You managed to solve the problem, but I don't understand your advice. Can you explain with a photo?
@@mohammadbaranteem3487 The workflow is divided into four groups. Groups 2 and 4 are optional and usually you can disable them. But if you use Group 3, which is the Flux enhancement over the SDXL output, then you must enable Group 2. Otherwise the Flux group, and the entire workflow, won't work.
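The dependency described above can be written down as a tiny validation table. This is a hypothetical sketch (group numbering follows the comment, not actual workflow metadata; `check_groups` is illustrative):

```python
# Group 3 (the Flux pass) depends on Group 2's output; Groups 2 and 4
# are optional on their own.
REQUIRES = {3: {2}}  # group -> set of groups it needs enabled

def check_groups(enabled):
    """Return human-readable problems for a set of enabled group numbers."""
    problems = []
    for group in sorted(enabled):
        for dep in sorted(REQUIRES.get(group, ())):
            if dep not in enabled:
                problems.append(f"Group {group} needs Group {dep} enabled")
    return problems

# Disabling Group 2 while keeping Group 3 reproduces the
# "'NoneType' object is not subscriptable" situation: a missing upstream output.
```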
Thank you. The Flux section produces a different picture than what is in the Restore Detail section. Can you help?
Great question! I can see where the confusion might be. The idea behind flux image-to-image repainting is to enhance and diversify the output, so it does make sense to expect some differences compared to what SDXL generates on its own. The goal is to optimize SDXL by leveraging these differences to create even more unique and creative results. If you have any specific examples or ideas in mind, I'd love to discuss them further!
I get this error, and I don't see where to put the CLIP for the Flux model:
VAEEncode
'NoneType' object is not subscriptable
It happens if I don't activate the Restore Detail part; now it works.
This is really nice, but could you please tell me if in the third panel “Get_pad_mask”, “Get_sdxl_img”, “Get_pad_img ”, can I import these three images directly and replace them?
Yes, you can.
The sizes are not really big. Is it possible to use higher resolutions like 1080x1920 for Reels?
You can use SUPIR or Topaz for upscaling.
I want to generate multiple sizes in one run. How can I do that, sir?
+1
Where do the images output to? They are not appearing in my ComfyUI Output folder.
You can replace the ‘Preview Image’ node with a ‘Save Image’ node and the image will be saved.
Hey, the workflow is very cool, but when I get to the Flux part, specifically the KSampler, it gets really slow to render. I'm using an RTX 3060 with 12 GB VRAM. Does anyone know how to speed it up?
Consider trying out Flux GGUF or Flux Hyper LoRA for your project!
Flux_Repaint - Load Checkpoint: "CheckpointLoaderSimple ERROR: Could not detect model type of: E:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\checkpoints\FLUX\flux1-dev-fp8.safetensors". All my other workflows can find the dev-fp8 in the unet folder, but I went ahead and copied it into the checkpoints folder, in a FLUX/ subfolder like you have it. I even selected it in the "Load Checkpoint" node. I still get that error. Please help.
Maybe the file has been corrupted. Try redownloading it.
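One way to test the corruption theory before re-downloading blindly is to hash the local file and compare it against the SHA-256 checksum published on the model's download page. A minimal sketch (the expected value is a placeholder you'd copy from the hosting site, and the path is illustrative):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream-hash a large model file in 1 MB chunks without loading it into RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# EXPECTED = "..."  # placeholder: copy the real SHA-256 from the model page
# assert sha256_of(r"models\checkpoints\FLUX\flux1-dev-fp8.safetensors") == EXPECTED
```

If the hashes differ, the download was truncated or corrupted; if they match, the file is fine and the error is about the model type, not the bytes.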
Thank you. I use all of your workflows and models, but why does my ComfyUI always show:
CheckpointLoaderSimple
ERROR: Could not detect model type of: D:\ComfyUI-aki-v1.3\ComfyUI-aki-v1.3\models\checkpoints\flux-dev\flux1-dev-fp8-e5m2.safetensors
Instead of the Checkpoint Loader node, try using Load Diffusion Model to load the Flux model.
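Some background, hedged: "Could not detect model type" typically appears when the file contains only UNet weights, so `load_checkpoint_guess_config` finds no VAE/CLIP tensors to classify. A .safetensors file begins with an 8-byte little-endian header length followed by a JSON index of tensor names, so you can peek at the keys without loading any weights. A sketch (the key prefixes checked are common naming conventions, not guarantees for every file):

```python
import json
import struct

def safetensors_keys(path):
    """Read only the JSON header of a .safetensors file; return tensor names."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))  # 8-byte LE header size
        header = json.loads(f.read(header_len))
    return [k for k in header if k != "__metadata__"]

def looks_like_full_checkpoint(keys):
    """Heuristic: full checkpoints bundle VAE and text-encoder weights;
    UNet-only exports don't."""
    has_vae = any(k.startswith(("first_stage_model.", "vae.")) for k in keys)
    has_clip = any(k.startswith(("cond_stage_model.", "text_encoders.")) for k in keys)
    return has_vae and has_clip
```

If `looks_like_full_checkpoint` returns False for your file, it belongs in models/unet and should be loaded with Load Diffusion Model (with the CLIP and VAE loaded by separate nodes), not in models/checkpoints.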
Error occurred when executing CheckpointLoaderSimple:
ERROR: Could not detect model type of: D:\Programs\SD\models/Stable-diffusion\flux1-dev-fp8.safetensors
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\nodes.py", line 539, in load_checkpoint
out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 527, in load_checkpoint_guess_config
raise RuntimeError("ERROR: Could not detect model type of: {}".format(ckpt_path))
same here
@@henroc481 "aderek Flux v2" worked for me
Missing nodes crashed my ComfyUI; it won't start anymore.
Can you do it with Flux GGUF versions?
In theory, yes.
When I use the GGUF loader the app crashes. Does anyone know how I should fix this problem?
I didn't know of that Union repaint controlnet. What does that do?
It's used for inpainting.
@@my-ai-force And what difference does it make compared to usual inpainting without a ControlNet? I tried to run it but it gave me errors.
I'm sorry but I'm getting an error message “4-bit quantization data type None is not implemented.” Can you help me?
Thanks for reaching out! To better assist you with this error, could you please share a bit more detail? A screenshot of the terminal or any additional context about the error would be really helpful. This way, I can understand the issue more clearly and provide you with the best support possible!
Awesome! Everything works perfectly with diffusion_pytorch-model_promax.safetensors. Thanks!
I find it makes the eyes look like googly-eyes. How to fix?
The Flux model in your description is the wrong model. It is the 11 GB model and it won't work in your workflow.
Will any other flux1-dev (e.g. the bnb version) work as well?
nevermind, "aderek Flux v2" is working :)
Try Differential Diffusion; it makes inpainting better, without most of this pain.
ComfyUI is making me dizzy.