Thank you, this is what was stopping me from switching from Fooocus to ComfyUI.
Anyone else having issues using the Load Fooocus Inpaint node, where it displays 'pickle data was truncated'?
Is it possible to add LoRAs? And where do I put them?
I had a really hard time replicating the effectiveness of Fooocus outpainting within ComfyUI with an SDXL checkpoint. On backgrounds it does almost decently, with some errors, but every time I try to outpaint a character with an SDXL checkpoint it just creates a huge mess, like multiple body parts or clothes mixed together. I've tried multiple times and I'm not sure what I'm doing wrong, or if it's a problem within ComfyUI, which seems to have a really hard time outpainting SDXL characters. With another outpainting workflow I used a 1.5 model and the outpainting worked very well; then I just changed the checkpoint to an SDXL one and got disastrous results.
Whereas in Fooocus, which I had almost never used, I just loaded my image, selected the side I wanted to expand, and ran the generation without any prompt or changes at all, and got a really good output except for some hands that weren't generated well, but hands are always hard to get right on the first try. I think they must use dark magic to make it work so effectively on the first shot without any adjustments whatsoever.
For some reason, when I apply the LaMa model for object removal nothing happens, even though I have the same setup as you.
Hmm, that's weird, the LaMa model usually works the best for me.
After you apply the LaMa model, the image should already look like the object blends into the background, so a lower denoise (< 1) can then remove the object completely.
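To illustrate the same idea in code, here is a minimal sketch using the diffusers img2img pipeline instead of the video's ComfyUI nodes ('lama_filled.png' is a hypothetical image that LaMa has already painted over); the strength value plays the same role as the KSampler's denoise:

    from PIL import Image
    import torch
    from diffusers import StableDiffusionXLImg2ImgPipeline

    # Any SDXL checkpoint works; the base model is used here only for illustration.
    pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    # Image in which LaMa has already replaced the object with background-like pixels.
    init = Image.open("lama_filled.png").convert("RGB")

    # strength < 1 keeps most of the pre-filled content and only re-harmonizes it,
    # so the removed object does not get hallucinated back in.
    result = pipe(prompt="", image=init, strength=0.5).images[0]
    result.save("object_removed.png")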
very useful, you are my hero!
As others have reported, I'm getting an error on the inpainting mask fill... a size problem. At 7:46 in the video, you also got that error. How did you resolve it?
It is an error that occurs if you do not mask out anything.
At 7:46, my image has no mask, which is why the error occurs.
I then solved it by updating the mask reroute to take the mask from the outpainting node, as the mask is then the padded area around the image.
Yes!!! That fixed the problem. Thank you so much! I am very excited to see where your workflow can take my imagination. Very cool!!@@DataLeveling
Hello, I don't know why, but in the "Load Fooocus Inpaint" node the Fooocus inpaint patch is not detected, only the Fooocus inpaint head.
same issue
Did you find a solution?
I didn't know about SAMDetector! You legend!!
Hi, very well explained. I have a question about the ImageCompositeMasked.
I used it in inpainting, where I masked an area of a picture with the MaskEditor. Unfortunately, the mask edges are more clearly visible with the ImageCompositeMasked node than without it.
My problem is that I work with an image of myself, and I've noticed that while I'm inpainting a corner of the picture the eyes change, even though I haven't masked the eyes at all and they are far away from the masked area.
So, like you said, the unmasked region is affected slightly, and I don't know how to fix this problem.
I need help: I can't find the models (Places_512_FullData.pth) in the Load Fooocus Inpaint node, and no other model shows up in Load Fooocus Inpaint either.
I did download them and placed them in Models_Inpaint.
Why?
Hi hi, all the Fooocus models, LAMA and MAT are in the description :)
The folder has to be named 'inpaint' as that is how the code will find the models. e.g ComfyUI/models/inpaint
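For reference, a quick way to sanity-check the folder from Python (just a sketch; the file names below are examples of the Fooocus/LaMa/MAT downloads and may not match yours exactly):

    import os

    # Expected location: the 'inpaint' subfolder of the ComfyUI models directory.
    inpaint_dir = os.path.join("ComfyUI", "models", "inpaint")

    # Example contents (adjust to whatever you downloaded):
    #   fooocus_inpaint_head.pth
    #   inpaint_v26.fooocus.patch
    #   big-lama.pt
    #   MAT_Places512_G_fp16.safetensors
    print(sorted(os.listdir(inpaint_dir)))  # these names should appear in the Load Fooocus Inpaint dropdown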
@@DataLeveling Thank you very much. Keep it up man, you are a legend, you helped me a lot :))
The patch file is not detected in the node drop-down, please help.
Very interesting, thank you... I have to try it out for better removal of a figure from the background, because I often had the problem you described, that I couldn't remove the figure completely.
Do you have an idea how to realise it the other way round? I'm looking for a workflow in which I change backgrounds until I like one, then keep it and go on sampling new figures into the same background. Best would be a solution in which I don't use a self-painted mask to tell the model where to put my figure, but let the model choose a good place to integrate my figure (usually a girl).
Hi, if a 1-to-1 exact background is required, then the self-painted mask might be the best way.
However, if the background can accept small changes/variations, then you could try using an IPAdapter.
In my IPAdapter video, in the IPAdapter clothes section, replace it with the background image and unlink the attention mask.
Hope that works! th-cam.com/video/oYjEFHb--RA/w-d-xo.html
@@DataLeveling I had tried with IPAdapter some time ago and it really did change the background a lot. Maybe in the future there will be models that can be "moderated" to only change the figure.
Thanks. Can't I use Image Prompt (CPDS, FaceSwap, PyraCanny) and Inpaint at the same time in Fooocus?
Hi, I'm not too sure about that as I seldom use Fooocus, sorry!
Thanks for your video - I was able to get the basic flow to work following the picture on the comfyui-inpaint-nodes GitHub page. I was wondering how to replicate the Fooocus feature to improve a face. I tried reducing the denoise, but this does not seem to have the desired effect. If you know how to do this, maybe it's an idea for another video.
Hi, that's a great idea. I haven't looked at the source code for the Fooocus face detailer. I'm sure there's a way; I'll make a video when I've figured it out :)
Downloaded all the models and nodes, but it gives me this error:
Error occurred when executing INPAINT_LoadFooocusInpaint:
invalid load key, '
Is this better than the Fooocus implementation, or the same?
The custom node uses the same model patch file from the official Fooocus repository, but Fooocus might have additional tweaks for pre/post-processing at the image level.
Would it be possible to have your workflow?
Hi, my workflow is attached in the description. You just have to click download file on the GitHub repository.
How do I get the preprocessing panel?
Hi, that is just a 'group' in ComfyUI for easy bypassing.
The nodes themselves can be found under right click -> Add nodes -> inpaint, if you have installed the custom node correctly :)
@@DataLeveling Thanks!!
Hi, just wondering, do you offer 1-on-1 sessions?
Hi, not at the moment as I still have many other commitments. Maybe in the future when my YouTube channel is more established :)
Good to know. Thank you. Please put me in your list once you are ready:)@@DataLeveling
One more question: why do you use the JuggernautXL model rather than the SDXL base model? What is the benefit of doing that? @@DataLeveling
@@DDBM2023 These models are fine-tuned and improved versions of the SDXL base model.
For example, DreamShaperXL is better at creating better-looking people with less blurry edges.
So if you visit civit.ai, there are a lot of different SDXL models that are made to achieve certain improvements over the base model.
Thank you very much!@@DataLeveling
Using the workflow in the description I get this error "Error occurred when executing KSampler:
Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 5.77 GiB
Requested : 1.22 GiB
Device limit : 8.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB"
Hi, not sure if this helps, but maybe you can try running in low VRAM mode?
SDXL in general already uses quite a lot of VRAM, not to mention we are stacking the IPAdapter, Fooocus patch, etc.
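If it helps, ComfyUI can also be launched in a reduced-VRAM mode (assuming the standard main.py launcher; the exact flag may differ between versions):

    python main.py --lowvram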
In outpainting I get this error: Error occurred when executing INPAINT_MaskedFill:
OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\photo\src\inpaint.cpp:760: error: (-209:Sizes of input arguments do not match) All the input and output images must have the same size in function 'icvInpaint'
File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-inpaint-nodes
odes.py", line 245, in fill
filled_np = cv2.inpaint(
^^^^^^^^^^^^
Hi, when using outpainting, you have to link the mask output of the Outpaint node to the mask reroute.
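For anyone wondering why the node fails without that link: the traceback shows the MaskedFill node calling OpenCV's inpaint, which requires the mask to have exactly the same width and height as the (padded) image. A standalone illustration of that constraint (not the node's actual code):

    import cv2
    import numpy as np

    # Padded (outpainted) image: 1024x1024 original plus 200 px added on the left and right.
    img = np.zeros((1024, 1424, 3), dtype=np.uint8)

    # Correct mask: same size as the padded image, white where the new border must be filled.
    good_mask = np.zeros((1024, 1424), dtype=np.uint8)
    good_mask[:, :200] = 255
    good_mask[:, -200:] = 255
    cv2.inpaint(img, good_mask, 3, cv2.INPAINT_TELEA)  # runs fine

    # Wrong mask: still sized like the original 1024x1024 image, which is what happens when
    # the mask reroute is not updated to the Pad Image for Outpainting output.
    bad_mask = np.zeros((1024, 1024), dtype=np.uint8)
    cv2.inpaint(img, bad_mask, 3, cv2.INPAINT_TELEA)
    # -> cv2.error: (-209) Sizes of input arguments do not match ... in function 'icvInpaint'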
Thank you for your help. It works if in Pad Image for Outpainting you set left, top, right and bottom to 200. If you set, for example, left 200, top 0, right 200 and bottom 0, this error appears:
Error occurred when executing INPAINT_InpaintWithModel:
Argument #6: Padding size should be less than the corresponding input dimension, but got: padding (0, 1104) at dimension 2 of input [1, 3, 1024, 2128]
File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-inpaint-nodes
odes.py", line 347, in inpaint
image, mask, original_size = resize_square(image, mask, required_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-inpaint-nodes\util.py", line 37, in resize_square
image = F.pad(image, (0, pad_w, 0, pad_h), mode="reflect")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@DataLeveling
Hmm, that's weird, I just tried it with left 200 and right 200 and it works. Could you send a link to the workflow where the error occurs? I'm guessing it's either a ComfyUI version difference issue or a wrong matching of nodes. @@eltalismandelafe7531
I have discovered the reason by trial and error. If the width of the photo is less than 1024 and you apply left and right 200, it works. If the height of the photo is less than 1024 and you apply top and bottom 200, it works. If the width and height are 1024 or higher, it gives an error. Is this normal? @@DataLeveling
I don't think that's normal... my image is 1500x1500 and I tried both only left/right and only top/bottom, and both work. Hmm, what SDXL model are you using? @@eltalismandelafe7531
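That 1024 threshold lines up with what the second traceback shows: the InpaintWithModel node's resize_square helper appears to pad the image out to a square with reflect padding, and PyTorch's reflection padding cannot mirror more pixels than the side being padded has. A small reproduction of just that constraint, using the sizes from the traceback rather than the node's full code:

    import torch
    import torch.nn.functional as F

    # Tensor from the traceback: 1024 px high, 2128 px wide after outpainting left/right.
    image = torch.zeros(1, 3, 1024, 2128)

    # Padding the height by 2128 - 1024 = 1104 to make the image square, as resize_square does.
    pad_w, pad_h = 0, image.shape[3] - image.shape[2]
    F.pad(image, (0, pad_w, 0, pad_h), mode="reflect")
    # -> RuntimeError: Padding size should be less than the corresponding input dimension,
    # because 1104 >= 1024. This is why a padded result that is much wider than it is tall
    # (or vice versa) around the 1024 px mark can trip the error, while square-ish images work.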
Gawd, ComfyUI is such an awful UX, more like SUPER UNCOMFYUI, am I right?!?! Feck.