Works like a charm. Good job! In the SDXL Outpaint section, you need to connect the Load Checkpoint VAE to the VAE Encode (for inpainting); otherwise you will get the "...expected input to have 4 channels, but got 16..." error.
Cheers to everyone.
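For anyone wiring this by hand, the rerouted link looks something like this in ComfyUI's API prompt format, written as a Python dict. This is a minimal sketch, not the author's exact workflow; node IDs, the checkpoint name, and the grow_mask_by value are illustrative:

```python
# Fragment of a ComfyUI API-format prompt showing the corrected VAE link.
prompt = {
    "4": {  # Load Checkpoint outputs: MODEL (0), CLIP (1), VAE (2)
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "RealvizXL_V5_BakedVAE.safetensors"},
    },
    "12": {
        "class_type": "VAEEncodeForInpaint",
        "inputs": {
            "pixels": ["10", 0],   # padded image from the blend/pad step
            "mask": ["11", 0],     # outpaint mask
            "vae": ["4", 2],       # <-- must be the SDXL VAE from Load Checkpoint,
                                   #     not the Flux VAE, or KSampler fails with
                                   #     "expected ... 4 channels, but got 16"
            "grow_mask_by": 6,
        },
    },
}
```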
Oh wow. What a video. I know you have already provided the workflow, but out of respect and appreciation for your hard and wonderful work, I have to watch your videos to the end. I actually enjoy watching how you build the workflow; it's good education. Thanks a bunch!!!
You can directly use Alimama's Flux ControlNet inpainting to do outpainting; the results are also very good.
This is a really helpful video. If a pure Flux workflow comes out that removes the SDXL model, please make a tutorial like this for it too. ^^
Thanks, CG Top! I've learned a lot from you!
WOW
Thanks for the video! Been waiting a long time for outpainting with Flux :)
Thanks! It's a tricky method that I wanted to share.
Good video, thx, but please change the music to something more ambient or lofi for better focus on the topic. Peace.
Thanks, bro, for the workflow, you awesome dude!
Surprisingly, the Flux1.Dev.FP8 model gave me an error. You seem to have used the full FP8 checkpoint; I used the FP8 UNET model only and added the dual CLIP and VAE nodes. Also, I had to download the SDXL Union and then the ProMax Union ControlNet to get the workflow to work. I added an image, prompted (A woman in a restaurant), changed the seed, and fixed it. In the end, the image I input in the first group came out as-is in the last output; nothing happened to it, and it wasn't outpainted. The image I used seemed to be too big; I used a 512x512 one and it worked. Thanks again.
It doesn't work for me either. Maybe the secret is in his (Flux dev + VAE + clip) model, but he didn't give us a link to download that model, so it won't work for anyone.
I had to download the 17GB model. It works.
@@gardentv7833 Bro, can you share the controlnet-union-sdxl-1.0 and RealvizXL_V5_BakedVAE.safetensors so I can download them?
@@gardentv7833 Can you share the controlnet-union-sdxl-1.0 and RealvizXL_V5_BakedVAE.safetensors so we can download them? And where should we put them in the ComfyUI folder?
Nice work, thank you! I noticed quality loss in the face and deformed teeth; is there any way to repair this?
I noticed that too, so after I temporarily set the scale in the layer utility node from 0.5 to 1 the quality improved, but then the outpainting area was reduced.
@@bwheldale Bro, can you share the controlnet-union-sdxl-1.0 and RealvizXL_V5_BakedVAE.safetensors so I can download them?
@@bwheldale Can you share the controlnet-union-sdxl-1.0 and RealvizXL_V5_BakedVAE.safetensors so we can download them?
Getting this error message from LayerUtility: ImageBlendAdvance V2: `np.asfarray` was removed in the NumPy 2.0 release. Use `np.asarray` with a proper dtype instead. But I have the newest ComfyUI installed.
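For anyone hitting this: the message comes from NumPy 2.0, which removed the old `asfarray` helper that the node pack still calls, so updating ComfyUI itself doesn't fix it. Either pin NumPy below 2.0 in ComfyUI's Python environment or patch the call in the node pack. A minimal sketch of the one-line replacement; the `mask` variable is just a stand-in for whatever data the node converts:

```python
import numpy as np

mask = [0.0, 0.5, 1.0]  # stand-in for the image/mask data the node converts

# NumPy 1.x call the node uses (removed in NumPy 2.0):
# mask_np = np.asfarray(mask)

# NumPy 2.0 replacement with identical behavior (asfarray defaulted to float64):
mask_np = np.asarray(mask, dtype=np.float64)
print(mask_np.dtype)  # float64
```

Pinning with `pip install "numpy<2"` is the less invasive option until the node pack updates.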
Thanks a lot.
🙏🙏
Can you share the controlnet-union-sdxl-1.0 and RealvizXL_V5_BakedVAE.safetensors so we can download them?
I have a few questions:
1. What's the difference between using ImageBlendAdvance V2 and the built-in Pad Image for Outpainting node?
2. Why do you invert the mask when there's already an invert_mask option on the node? Maybe show us the output to make it clearer what's happening (see the sketch below).
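On question 2: in ComfyUI a mask is a 0-1 float tensor, and inverting it is simply 1 minus the mask, so a separate Invert Mask node and the node's own invert_mask option should give the same result. What matters is which region ends up at 1.0, since that is what VAE Encode (for inpainting) repaints. A minimal torch sketch of the operation, assuming a standard 0-1 float mask:

```python
import torch

# A 0-1 float mask as ComfyUI passes between nodes: 1.0 = area to repaint
mask = torch.tensor([[0.0, 1.0],
                     [1.0, 0.0]])

# Inverting a mask (Invert Mask node / invert_mask option) is just:
inverted = 1.0 - mask
print(inverted)  # the kept and repainted regions swap
```

Either route works; an explicit invert node just makes the flipped mask visible in the graph.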
I got this error on KSampler: unsupported operand type(s) for *: 'float' and 'NoneType'. What should I do now?
Same for me. Maybe it's because I couldn't find Apply ControlNet (Advanced), so I used Apply ControlNet, but I guess there's a problem with the VAE.
At 9:09 you stated that it is possible to select any SDXL model, so why do I get:
## Error Details
- **Node Type:** KSampler
- **Exception Type:** RuntimeError
- **Exception Message:** Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 16, 112, 187] to have 4 channels, but got 16 channels instead
## Stack Trace
??? Can you help me a bit ???
Lol, it didn't take that long and I figured it out myself... the problem was in the VAE. Because I used the UNET Flux, I served the Flux VAE in Anything Everywhere, and the VAE Encode (for inpainting) received that, but it needs the SDXL VAE... so... that's it ;-)
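That diagnosis matches the numbers in the error: the SDXL UNet's input conv has weight [320, 4, 3, 3], meaning it expects 4-channel latents, while the Flux VAE encodes images into 16-channel latents. A minimal torch sketch reproducing the mismatch, with the shapes taken from the error message:

```python
import torch
import torch.nn as nn

# SDXL UNet input conv: weight of size [320, 4, 3, 3], i.e. 4 latent channels in
conv_in = nn.Conv2d(4, 320, kernel_size=3, padding=1)

conv_in(torch.randn(2, 4, 112, 187))  # SDXL-VAE-shaped latents: 4 channels, works

try:
    conv_in(torch.randn(2, 16, 112, 187))  # Flux-VAE-shaped latents: 16 channels
except RuntimeError as e:
    # "... expected input[2, 16, 112, 187] to have 4 channels, but got 16 ..."
    print(e)
```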
This one breaks for me. Main error appears to be "Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 16, 112, 187] to have 4 channels, but got 16 channels instead" in the KSampler SDXL Outpaint Group
I see the problem now: at 5:00 you don't connect the Load Checkpoint VAE output to the vae input on the VAE Encode (for inpainting) node.
At least for me... > The problem was in the VAE. Because I used the UNET Flux, I served the Flux VAE in Anything Everywhere, and the VAE Encode (for inpainting) received that, but it needs the SDXL VAE... so... that's it ;-)
WOW!
🙏
That ComfyUI Layer Style pack breaks my ComfyUI every time: File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 284, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")... and this is after it uninstalls a bunch of stuff, including Torch.
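This usually means the node pack's dependency install pulled in a CPU-only build of PyTorch over the CUDA one (that's the "uninstalls a bunch of stuff including Torch" part). A quick check to confirm, run with the portable build's embedded Python:

```python
import torch

print(torch.__version__)          # CPU-only builds typically end in "+cpu"
print(torch.cuda.is_available())  # False means the CUDA build was replaced
```

If it prints False, reinstalling the CUDA build of torch via python_embeded\python.exe -m pip, using the PyTorch wheel index URL that matches your CUDA version, should restore it.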
How much VRAM is needed?
Minimum 6GB VRAM.
Hi Profe... there is a mistake: KSampler: mat1 and mat2 shapes cannot be multiplied (10528x16 and 64x3072)... why? Thanks in advance for all the projects, they are all the best.
The same thing happens to me.
It works fine now!!
@@gastonboigues4124 Hi, is it generating the expanded image correctly for you now? ... I only get a green background from the improved Flux nodes with the 0.95.
@@KasperskyGroup Yes, it worked fine for me after changing the ControlNet model.
@@gastonboigues4124 Hi Gaston, can you send me the ControlNet link, please?
Bro, can I use Flux with an RTX 4060 Ti 16GB?
I run it on rtx3060 with 12gb. Both Dev and Schnell, 8 and 16 bit. Works fine. Takes maybe 30 seconds to do a regular generate, the more complex version of this workflow took about 2:00 using the Hyper-8 step lora.
I just ran this workflow at 30 steps... 3:00 minutes... it used 50GB of RAM and 11.4GB of VRAM... 16-bit model and clip.
@@erichearduga haha that means the more RAM you have... the more RAM gets eaten xD I have 32GB, maybe I don't need to upgrade after all xDD
Flux has so many models... and the GGUF ones. Which one would you guys suggest? I have an RTX 4070 Ti Super with 8GB VRAM and 16GB of memory. My laptop has an RTX 4060 with 8GB VRAM and 16GB of memory.
Comfy is so unnecessarily complicated to complete a simple task.
What is simpler?
I cannot continue from the "Load Checkpoint" node
ERROR: Could not detect model type of: B:\Data\Models\StableDiffusion\flux1-dev-fp8.safetensors
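That error usually means the file is a UNet-only export, which Load Checkpoint can't identify; UNet-only Flux weights load through the UNet loader with separate CLIP and VAE loaders (or you download the ~17GB all-in-one checkpoint mentioned above, which bundles them). A hedged sketch of the split-loader setup in ComfyUI's API prompt format as a Python dict; the node IDs, weight_dtype value, and file names are assumptions based on common Flux setups:

```python
# Fragment of a ComfyUI API-format prompt: loading a UNet-only Flux model.
prompt = {
    "1": {  # UNet-only weights go in models/unet, not models/checkpoints
        "class_type": "UNETLoader",
        "inputs": {"unet_name": "flux1-dev-fp8.safetensors",
                   "weight_dtype": "fp8_e4m3fn"},   # assumed dtype option
    },
    "2": {  # Flux needs two text encoders (file names are typical examples)
        "class_type": "DualCLIPLoader",
        "inputs": {"clip_name1": "t5xxl_fp8_e4m3fn.safetensors",
                   "clip_name2": "clip_l.safetensors",
                   "type": "flux"},
    },
    "3": {  # and its own VAE, loaded separately
        "class_type": "VAELoader",
        "inputs": {"vae_name": "ae.safetensors"},
    },
}
```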