Actually, you can use "VAE Encode (for Inpainting)" to replace the two nodes "VAE Encode" and "Set Latent Noise Mask" altogether. It does essentially the same thing, and you also get an extra "grow mask by" option to tweak the mask further.
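For intuition, here's a minimal sketch of what a "grow mask by" option amounts to: dilating a binary mask outward by N pixels. The function name and the simple 4-connected kernel are my own illustration, not ComfyUI's actual internals.

```python
import numpy as np

def grow_mask(mask: np.ndarray, pixels: int) -> np.ndarray:
    """Dilate a 2D boolean mask outward by `pixels` using a 4-connected kernel."""
    m = mask.astype(bool)
    for _ in range(pixels):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]   # grow downward
        grown[:-1, :] |= m[1:, :]   # grow upward
        grown[:, 1:] |= m[:, :-1]   # grow right
        grown[:, :-1] |= m[:, 1:]   # grow left
        m = grown
    return m

# A single masked pixel grows into a plus-shaped region of 5 pixels
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
print(grow_mask(mask, 1).sum())  # 5
```

Growing the mask a few pixels helps the sampler blend the inpainted area into its surroundings instead of leaving a hard seam.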
When I'm loading your workflow I'm getting this error: "When loading the graph, the following node types were not found:
Text box". In ComfyUI Manager the Derfuu node gives me this: "Conflicted Nodes:
Float [ComfyLiterals], Float [eden_comfy_pipelines], Float [ComfyUI-Logic]". The Filename field is all red but I can't type anything there :( P.S. I'm on Linux. P.S.2: Of course I've changed the file directory to: /home/newfolder
It takes forever to reproduce the results because the files are messy, and there's no way to import the exact same workflow without the pain of figuring out every single missing detail.
So glad I discovered this channel. Thanks for the clearly explained tutorials and attached workflows. ComfyUI is so awesome!
Thanks for your feedback, it's appreciated.
✨👌💪😎😎😎👍👍✨
Thanks I have been looking for a simple inpaint workflow to build off this is perfect.
upscaling switch ? pls explaint how do you on/off this process ?
There's a switch at the top, immediately to the left of the group called Upscaling.
IPAdapter is missing... the Manager does not find it. Does anyone know how to fix this?
IPAdapter underwent a major update on March 23, 2024, which renders it incompatible with previous workflows. You'll have to reinstall IPAdapter and then do some (re)work on existing workflows. See github.com/cubiq/ComfyUI_IPAdapter_plus
PNG Image workflows link is not working
Thanks for letting me know, I removed the link.
Great tutorial. I attempted latent upscaling, but it didn’t work as you mentioned. However, when I removed the latent mask beforehand, it worked.
Nice tutorial. Thanks for not stopping at each point and expounding on why you could do this, or maybe you could do that; I have even heard people go completely off subject, talking about what Auntie did last week. Thanks again. :O)
I try to add the inpainting JSON but nothing happens.
There's a new video on inpainting: th-cam.com/video/q9wQe248lc4/w-d-xo.html
The link to the download folder is in the text. If the JSON does not load, there is something off with your ComfyUI, but I can't tell what from a distance.
Hii..... I have downloaded your Fooocus inpaint workflow, but there is an error: "model not supported". Please make a video on how to use the Fooocus inpaint workflow.
I'm having a look at the Fooocus inpaint custom node right now. Maybe a video will follow. In the meantime, chances are the node that gives you trouble is the Latent Upscale custom node. Just delete it and replace it with the standard Latent Upscale node that is available by default in ComfyUI.
Hey Rudy, great video. Would love it if you could show us how to add IPAdapter to sort of face-swap from a reference face.
Thanks Rudy, you have an excellent way of explaining concepts. Glad I found your tutorials. Keep it up, can't wait to see what you have in store with your teachings.
Thanks for your kind feedback, it's appreciated.
@@rudyshobbychannel Thank you again, and wishing you the best for your channel.
Hi! Thanks for your awesome work! I used your workflow, but the "Img2img turbo" node in your tut is different in my Comfy. Mine says "inpaint turbo" instead, and it's got a red error and doesn't work. How can I fix that?
It may look different from the video, but it should work. Missing nodes can be installed via the ComfyUI Manager. You can also try the non-turbo version, but that contains the same nodes, it just has different parameters.
@@rudyshobbychannel thanks for replying so fast, I just subbed! ❤ I already ran the manager, installed the missing ones, but I still get the red error on that box. I tried searching for a substitute, but none work. Do you know the name of the non-turbo node that can replace it? I'd love to use your awesome workflow but I'm a noob 😔
@@lararogaard5247 Lara, I might be able to help, if only I knew which is the red node you are referring to?
@@rudyshobbychannel sorry, the "Img2img turbo" node in your tut is different in my Comfy. Mine says "inpaint turbo" instead and is red.
@@lararogaard5247 That node actually is the SDXL Aspectratio node, which I always change the title of to the name of the workflow. The node is included in the GDrive. Also if there are any other red nodes, in the Readme there's a list of all the custom nodes I have installed.
Thank you for the tut! And the downloadable content is extraordinary!
You're welcome!
When I hear you voice.... "TogethAA! We will devour the very gods!"... Keeps replaying in my head
How do I apply a specific image to the t-shirt? Is it even possible? Thanks!
IPAdapter would be worth a try. See this video, where the painting ends up on the wall.
th-cam.com/video/WD0EmOE4boc/w-d-xo.html
You might need to use a mask.
I'm at a loss to understand where I'm going astray. I've experimented with various inpainting workflows, but the inpainted section consistently fails to match the rest of the image. Despite trying eight different inpainting models alongside numerous VAEs, I can't seem to get the colors to align properly. Even something as simple as this always ends up with a slightly different color, and a weird black border on the edge of the inpainted area:
Load Checkpoint
Load Image
CLIP Text Encode (Prompt)
VAE Encode (for Inpainting)
Set Latent Noise Mask
KSampler
VAE Decode
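For anyone trying to reproduce that exact chain, here is a rough sketch of it in ComfyUI's API-format JSON, written as a Python dict. The checkpoint name, image filename, prompts, and sampler settings are placeholders of mine, and I've folded "Set Latent Noise Mask" into "VAE Encode (for Inpainting)" since the latter applies the mask itself; double-check node names against a workflow exported with "Save (API Format)".

```python
# Sketch of the inpainting node chain in ComfyUI API format.
# Node ids are arbitrary strings; each connection is ["source_id", output_index].
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "dreamshaperXL_turbo.safetensors"}},
    "2": {"class_type": "LoadImage",                    # outputs: image, mask
          "inputs": {"image": "input.png"}},
    "3": {"class_type": "CLIPTextEncode",               # positive prompt
          "inputs": {"clip": ["1", 1], "text": "purple t-shirt"}},
    "4": {"class_type": "CLIPTextEncode",               # negative prompt
          "inputs": {"clip": ["1", 1], "text": ""}},
    "5": {"class_type": "VAEEncodeForInpaint",          # encodes pixels + mask
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2],
                     "mask": ["2", 1], "grow_mask_by": 6}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["3", 0],
                     "negative": ["4", 0], "latent_image": ["5", 0],
                     "seed": 0, "steps": 8, "cfg": 2.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
}
```

A dict like this can be POSTed to a running ComfyUI instance's `/prompt` endpoint, which makes it easy to diff your own graph against the one from the video.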
What if you use the workflow that comes with this video and follow what is done in the video? I do not use special inpainting checkpoints, just the standard DreamShaper XL Turbo. Also no VAE for inpainting, just the standard one. It should work, because in the video it works, and that is not faked, just executed straightforwardly.
@@rudyshobbychannel Yes, I followed all your instructions. However, I just discovered the root of the issue. It turns out that I had 'xformers' enabled. According to a discussion on Reddit, this was causing the VAE to malfunction.
The black border was still there after fixing the color. Thankfully, after implementing your suggested workflow, those borders disappeared 😀😀😀😀😀😀😀.
@@ultimategolfarchives4746 OK, nice, now have fun!
Awesome!
Can I get a link to the text box you are using in the workflow download? It's the only thing I'm missing, and it's a bit vague to find on Google. Thanks for the video.
If you are referring to the filename node, that is part of the Derfuu mods: github.com/Derfuu/Derfuu_ComfyUI_ModdedNodes
Thanks. Is it possible to apply consistent characters created with IPAdapter and ControlNet in inpainting?
I'm not sure, never tried.
Great video!
I'm trying out your workflow but getting error from the Efficiency node: Efficiency Nodes for ComfyUI Version 2.0+ NOT FOUND
Uninstalled it, installed it again as the doc recommends. No resolution.
How did you make it work? Many thanks!
I have no idea why the Manager fails to install it. You can install it directly from GitHub; that should always work. If it is the Latent Upscale node that gives trouble, you can install this one:
github.com/city96/SD-Latent-Upscaler
@@rudyshobbychannel Yes, it was the Latent Upscale node that eventually stayed red. What I did was remove it and add a "Latent Upscale By" node, which seemed to overcome the issue. Many thanks for the link though!
Instead of writing a prompt to generate a purple t-shirt, is there a way to input the image of a t-shirt so that it will be inpainted?
You could try IPAdapter.
th-cam.com/video/zjkWsGgUExI/w-d-xo.html
Thank you for the video and the tip about zooming and panning on the image when creating the mask. Very helpful.
Also, where is the Face Restore node located? The Manager can't find it. Thanks.
It is an integrated node, more info is in the Readme.txt in the download folder. What you can do is replace it with the Face Detailer from github.com/ltdrdata/ComfyUI-Impact-Pack (which is in fact used inside this integrated node).
@@rudyshobbychannel Thank you for the information.
In the mean time the workflows have been changed into ones that do not use the Integrated Nodes. That should make life easier.
@@rudyshobbychannel Thank you!
Thanks Rudy, You're the best!
Thanks Sam, appreciated.
A French thank-you. First video I found from you, I'll look at the rest.
Enjoy ...
How to do image-to-mask... I mean, make a precise mask in PS or Krita or any other image program and then use that image as a mask.
ComfyUI contains an Image to Mask node which can be used to convert a channel of an uploaded image to a mask.
Or you can use the Load Image (as Mask) node.
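Conceptually, what these mask nodes do amounts to picking one channel of the painted image and normalizing it into a mask. A rough numpy sketch, assuming an 8-bit H×W×4 RGBA array (the function name is mine, not a ComfyUI API):

```python
import numpy as np

CHANNELS = {"red": 0, "green": 1, "blue": 2, "alpha": 3}

def channel_to_mask(image: np.ndarray, channel: str = "red") -> np.ndarray:
    """Extract one channel of an HxWx4 uint8 image as a float mask in [0, 1]."""
    return image[..., CHANNELS[channel]].astype(np.float32) / 255.0

# A white square painted into the red channel becomes a 0/1 mask
img = np.zeros((4, 4, 4), dtype=np.uint8)
img[1:3, 1:3, 0] = 255          # paint a 2x2 square in red
mask = channel_to_mask(img, "red")
print(mask.sum())  # 4.0
```

So a precise black-and-white mask painted in Krita or Photoshop works directly: white areas become 1.0 (inpaint here), black areas 0.0 (keep).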
I tried removing a tie and it would not do it. Then I selected the shirt and tie and prompted "White collared button-up shirt"; it tried, but only half the shirt came out, and then it created a black square on John Wick's belly (I was using that as a test). Why did it not work? Do I need to put "White collared button up shirt_%seed%" in the filename node as well? I tried matching the aspect ratio to the image too. I also ran into issues using Gal Gadot to change the colors of the dress and background: upon upscaling, it changed her face. How do you avoid that?
The filename is just that, the name of the saved images; it has no influence on what image is generated. The %seed% you can leave out, it does not seem to work anymore, the seed number is not added.
Why your inpainting does not want to do what you try to accomplish is hard to tell from a distance. If you mask a shirt and write a prompt that tells what should be generated in that area, most times it does that.
If you use latent upscaling, the image will change some because it is sampled a second time. But since it uses the same prompt, it should still generate an image according to that prompt. If you'd like to upscale the original image without any changes, then connect the Image Upscaler to the output of the first sampler.
@@rudyshobbychannel Why would the face change though? I got the hair and background to change, but in the second part the upscale makes a new person instead of keeping the same person. Any advice would be greatly appreciated. Thanks for the tips prior!
@@Nibot2023 With latent upscale you need a denoise of about 0.5 (for best quality image), which means it will change the original image some ... also the face. If you have a celebrity in your prompt it would still adhere to the prompt and generate this celebrity again, but somewhat different from the first sampler ... there's no way to avoid that. If you really want the first image to stay the same, use Image Upscale only and skip the Latent Upscale.
@@rudyshobbychannel I will try that. I am testing this for compositing and wanted to keep the photo details 1-to-1, with only the mask prompts changing it. I will try skipping the aspects you mentioned. Trying to develop a workflow using this. Thanks for taking the time to respond. I am a newbie with ComfyUI nodes. A bit overwhelming for me right now.
Are you using an Inpainting checkpoint?. That's critical for good results.
Hey There! Thank you for the great tutorials! I'm missing the node "KSampler (Efficient)". Which one do I have to install?
Or maybe I'm using the wrong workflow file? How is the workflow named in your Google Drive?
This link has the default workflows. I use the Pipe Loader and Pipe sampler, so there's no Efficient Sampler in use there anymore.
Link to the workflows from the video:
drive.google.com/drive/u/1/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D
If there's trouble with the Latent Upscale node, try this one:
github.com/city96/SD-Latent-Upscaler
Thank you for getting back. The Default workflow is working for me, but not the Inpaint one. I get all the nodes running, but when the generation reaches the Face Detailer I get: "Error occurred when executing FaceDetailer: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible"
@@rusch_meyer Hm, that is strange. Are there any errors when you use the Img2Img workflow? What if you'd import the image from the Inpainting workflow into Img2Img and use the Face Detailer there?
Which workflow should I download?
The one called 'Inpainting'.