#### Links from the Video ####
Get my Workflow here: www.patreon.com/posts/101662667
github.com/exx8/differential-diffusion
github.com/vladmandic/automatic
👋
I wish it were free like before. We don't have access due to the embargo... Unfortunately, for now, our only way to support you is viewing and liking.
🎯 Key Takeaways for quick navigation:
00:00 *🤖 Differential diffusion for better inpainting*
- Explains differential diffusion and its benefits for image inpainting,
- Allows pixel-level control over inpainting with grayscale masks,
- Shows examples of impressive inpainting results.
01:00 *💻 Setting up differential diffusion in Vlad Diffusion*
- Differential diffusion is available as a script option in the latest Vlad update,
- Options to enable, set mask strength, invert mask and load models,
- Mentions having issues running it personally.
01:40 *🛠️ Demonstrating differential diffusion workflow in Comfy UI*
- Prefers demonstrating in Comfy UI for more powerful capabilities,
- Shares basic image generation pipeline as starting point,
- Introduces mask painting and the Preview Bridge from the Impact Pack.
02:59 *🎨 Comparing differential diffusion to classic inpainting*
- Compares differential diffusion (green) to classic inpainting (red),
- Differential handles face structure better for natural inpainting,
- Classic struggles with proper integration like glasses, hair, eyebrows.
04:24 *⚙️ Installing differential diffusion node in Comfy UI*
- Differential diffusion is a small node between model and sampler,
- Provides installation instructions via Comfy UI manager.
05:06 *🧪 Further tests with mask size and denoising*
- Tests with a larger mask area, but classic inpainting still struggles,
- Tries lower denoising and no mask blur, but differential diffusion stays superior,
- Encourages commenting on experiences with this new method.
Made with HARPA AI
That's impressive, I wish all tutorial makers used it. Really useful on re-watches.
Wonder how well it would go on a really long stream, like one of Purz's 2+ hour streams.
Another AI for me to spend a whole day looking into
@@Daniel_WR_Hart Fascinating stuff! Always excited to explore the bleeding edge of AI capabilities. Looking forward to digging into this differential diffusion concept further. Thanks for sharing, @Daniel_WR_Hart!
@@PAEz... You make an excellent point! Kudos for recognizing the utility of this differential diffusion technique, especially for longer tutorial videos and streams. It's visionary thinking like yours that helps drive innovation forward. Keep up the insightful commentary!
@@I-Dophler I didn't share anything? If anything I should say that to you for making me aware of Harpa AI
I tried doing this with Vlad Diffusion and it worked well, what an awesome feature!
This really is a gamechanger, honestly. Great showcase Olivio!
I don't know about anyone else, but it almost pains me to see people taking their ComfyUI frustration out in the comments of every single video.
I can understand and respect not wanting change; humans are creatures of habit, and doing things differently or learning something that looks overwhelming from the outside can be scary.
The problem with not wanting change is that this is the state of AI right now. It's not a matter of Olivio making videos on Comfy because he secretly hates A1111; it's quite literally because A1111 doesn't get updates for this new tech half the time, and all of the advancements for AI are happening on Comfy. If he only made videos for A1111 he would barely have any posts, and he already makes videos for A1111 whenever new stuff does come out. I.e. *A1111 is still getting coverage.*
AI is a rapidly expanding field. ComfyUI may be obsolete in 6 months, and then the new one that takes its place may be obsolete in another 12. If this unsettles you, there is unfortunately nothing your teachers or content creators can do to fix this. You will just have to make do with the tools you have on otherwise dying software that will no longer get updates.
I think it's just envy. The coolest stuff can be done with a little bit of skill and work.
We've got the most tools, are the first ones to enjoy new ones, and can do that really "automatic"ally after building a workflow once. There's a lot of FOMO going on, having to wait months for something they wanna use right now.
Also, people despise having to acquire skills and put in the work. People not akin to programmer mindsets expect everything to work like a smartphone app, overly simplified and intuitive. The problem is when capability meets skill requirement. People see crazy stuff and want that too, and if they don't get it on the first try with 5 secs of labor, they get frustrated. What they don't see is that unlimited opportunity necessitates higher complexity.
I'm mad at Blender users because I lack the skills in 3D design, though I'd love to have them.
Yet I don't hate on Blender tutorials for a living, for I realize Blender isn't hard because of gatekeepers, but because its near-infinite possibilities require far more options than Paint has.
@@149315Nico Thank you for adding to this! It further explains it all without my original comment being extremely long.
It's truly a shame. And, like the Blender example you gave, it can be found in almost every field, from art and science all the way to physical labor and social skills.
Deep down I know that my comment will achieve nothing to help the masses, but at least I got to blow off some steam!
I use Comfy and it's powerful, but I like Auto1111 much better. Not sure if it's change people are afraid of or the tedious complexity of Comfy. It also has a massive bloat problem, and it breaks every few months. I don't mind, as I have the time and knowledge to fix it when it all of a sudden stops working. For most people, it isn't worth the trouble. I agree that machine learning is a rapidly developing field and the next thing will replace Comfy and Auto1111.
Acting like it's the best thing in the world blindly is just as bad as complaining that it's the worst.
@@PretendBreadBoy I don't use Auto1111 anymore, but I don't like Comfy either, just stuck with it till it evolves! lol, "machine learning", I guess it's true, though it's not actually AI.
Can't find the differential diffusion node in the manager... :(
Add Node - _for_testing - Differential Diffusion (it's built into ComfyUI)
How to use "instruct pix 2 pix" & "SDXS" in comfyui?
The intros won't stop getting better
Is there an equivalent to "inpaint only masked" in Comfy? Like, how would I be able to re-render only a small part of a big image while using Comfy?
Hi, I started AI generation about a week ago and love it so far. I started from A1111 and went into Comfy; your video helped me a lot, with the help of some other YouTubers' videos as well. But there's one thing I'm not sure how to do: is there a node or a setting that tells you which pack each node in a workflow comes from? I don't have that in mine.
I made my own workflow: generating with SAG and Kohya, as well as Auto CFG, into a detailer, then a first upscale using a model that goes into another sampler, and then, as a last step, an upscale with Ultimate Upscale. Everything works fine until it doesn't, lol, when there are mistakes on the body that I would have to inpaint. I then wondered if there were a node like the detailer but to correct the body.
I was thinking of maybe a mask of the whole body, except the head, that could go into a step similar to what happens in the detailer, but I'm not really sure how to achieve that. If you could tell me which node I should use, or whether my idea is correct and should work, that would be wonderful, thx.
You can enable the pack names above the nodes in the settings for ComfyUI-Manager. It's called "Badge".
The ImpactPack has several DetectorProviders that create masks (or segments) for its Detailer Samplers. The CLIPSegDetectorProvider accepts prompts like "person" and generates a mask. "BBOX Detector (SEGS)" then converts that to SEGS as input for the detailer.
You could generate two masks, one for the person and one for the head only, and subtract the head from the person, hopefully resulting in a mask of only the body (sketched below).
Or use ControlNet OpenPose so the body is always in the same place and use a manually drawn mask.
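To illustrate the subtraction idea above: a minimal sketch of the per-pixel math, assuming PyTorch-style mask tensors with values in [0, 1]. In ComfyUI itself this would be wired up with mask nodes; `body_only_mask` is a made-up name for illustration.

```python
# Hypothetical sketch of the body-only mask: ComfyUI handles this with
# mask nodes, but the underlying operation is just a clamped difference.
import torch

def body_only_mask(person_mask: torch.Tensor, head_mask: torch.Tensor) -> torch.Tensor:
    """Subtract the head mask from the person mask, clamped to [0, 1]."""
    return (person_mask - head_mask).clamp(0.0, 1.0)

# Tiny example: a 4x4 "person" mask whose top two rows are the head.
person = torch.ones(4, 4)
head = torch.zeros(4, 4)
head[:2, :] = 1.0
print(body_only_mask(person, head))  # only the bottom half remains masked
```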
@@Toertsch Thank you for answering me, I'll try that and see if I can make it work :)
I always enjoy watching your videos, thank you for the effort
Keep them coming
So, you can't use differential diffusion without ComfyUI?
Can we replace text yet?
My Comfy is not working 😢 Please help, I am using Windows 11 and Comfy is not working 😢😢
That's cool, but I actually can't find out how to make the workflow go on after I apply the mask in the bridge. It doesn't wait until I apply the mask, it just goes on) Also, it sometimes seems like this node gets completely ignored. I tried to replace it with a preview chooser, which pauses the process, but it gives out the wrong mask, I guess, so it doesn't work
I don't understand your Patreon tiers. What's the creative pack?
Looks crazy, when will it be up on Automatic1111?
For which Stable Diffusion versions does it work?
borh
If you get this working in A1111, PLEASE show us this again!
I don't have that "Manager" option. Please help.
You need to install ComfyUI Manager. Please google it, the install is very simple.
Ah, so this is literally just basing the denoising strength on the depth map / other map "height" and correlating the two? Quite an interesting take on it. I wonder how resource-demanding it is, and how it deals with boundaries, etc.
Yep. Arguably something that should've been there from the start. The mask editor needs some work as well, as everything that comes out of it is a full black mask.
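To make that intuition concrete, here is a rough sketch of the idea as I understand it from the differential-diffusion paper, assuming a simple linear threshold schedule; this is not the actual ComfyUI node code, and `diff_step` with its arguments is hypothetical. Each pixel's grayscale mask value determines how long that pixel keeps being copied back from the re-noised source, so brighter regions are free to change for more steps.

```python
# Assumed sketch of differential diffusion (NOT the real ComfyUI node):
# per denoising step, pixels whose mask value is below the current
# threshold are overwritten with the source latent re-noised to the
# current noise level, so dark mask regions stay close to the original.
import torch

def diff_step(denoised: torch.Tensor,
              noised_source: torch.Tensor,  # source latent at this step's noise level
              mask: torch.Tensor,           # grayscale change map in [0, 1]
              step: int,
              num_steps: int) -> torch.Tensor:
    # Threshold falls linearly from 1.0 toward 0.0 over the schedule.
    threshold = 1.0 - step / num_steps
    # 1.0 where the pixel is still frozen to the source, 0.0 where free.
    frozen = (mask < threshold).float()
    return frozen * noised_source + (1.0 - frozen) * denoised
```

On this reading, a mask value of 1.0 frees the pixel from the very first step (maximum change), while 0.0 keeps it pinned to the source until the noise is nearly gone, matching the per-pixel strength control shown in the video.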
I'm a ComfyUI newbie and I have a quick question. All the videos I've seen only show one inpainting result. What node must I use to create multiple inpainting results? I've tried to add an Empty Latent Image node, but I'm struggling with the connections. Is there a simple way to do this?
Do you have a tutorial on how to create the end animated scene? What tech was used? Thanks for the videos.
Try looking up Adobe Character Animator
But what is the actual process for installing differential diffusion so that one can find that node? It's not a searchable term in the ComfyUI Manager.
no custom nodes needed! Update ComfyUI
I wonder if this can work with ipadapter.
Thank you, this needs to go into Krita ASAP
Yup. Small but it works!!! 😆
0:07 Judas Priest 🤘 ☠😉
I had to disable FreeU to get it to work
That’s amazing in comfy! I don’t know why people still use automatic
Works great at putting sunglasses and t-shirts on a character, but when I tried to put a sword in the character's hand it refused to do it. (Scratching my head.)
Actually said no or just failed at it?
I think it's the same method used in Krita's inpainting 😊
So how do we install this thing, is the node exclusive to your patreon members?
Built into the latest ComfyUI by default
@@OtakuDYT yeah i figured that out 5 minutes later, but its odd he made it seem it wasnt and that we needed to install external plugins.
@@greypsyche5255 There are some external plugins in his workflow, probably why he is covering all bases.
we need a compilation of all the intros
Amazing!
You should put in the title that you are using ComfyUI
Hi. Do you know what check layers are in Photoshop? They're layers with filters applied that can check for value, saturation, and color. Any idea how we can achieve something like this in Affinity? Also, how can we import multiple images into the same doc file, like Photoshop's "import images into stack"? I know there is "import stack files" in Affinity, but it merges them somehow...
Great work. But don't sell yourself short :D I'm sure the ladies are perfectly satisfied :P
"That's what the ladies tell me anyways" LOL
Please put ComfyUI in the title. I've clicked off the last video and this one once I learned it's Comfy and not Auto1111; that will mess up your metrics...
You should just learn Comfyui. Automatic1111 is checkers, join us and play some chess
Yeah, I too lost interest in AI after ComfyUI workflows.
Take it as motivation to learn Comfy. The noodles look intimidating, but using A1111 is sooooo clunky once you learn Comfy. Just the model loading speed alone is reason enough to switch, even without the "one click" automation that you'll learn in literally two weeks of daily practice.
No need to do that.
Your complaining is actually so annoying. ComfyUI is a key part of the SDXL workflow. Go somewhere else if you want simple workflows that don't involve ComfyUI.
Let me change the heading for you - Differential Diffusion - Inpainting on Comfy - bletch
InPANTING? Hmm, is that like inpainting but when you're thirsty? :)
KampfUI
lol
Small but it works ! haha :D
Why don't you show A1111 anymore? I stopped watching you because of it
Once more, a super complicated ComfyUI workflow behind a Patreon wall? No thanks.
😂 4:27
🙃
Hahaha.
Fun fact: Forge WebUI has something called soft inpainting, which functions similarly.
I think it's not the same.
AFAIK it's the inpainting method from Fooocus, which is also by lllyasviel.
There's also a Comfy implementation of that.
I'd love to see these two methods compared.
We gotta stop trying to make ComfyUI a thing
It's so much better tho?
@@squirrelhallowino29 How? I'm aware of this style for creating visual effects, but for creating a single image it looks like a lot of unnecessary steps and a mess of spaghetti.
Stability AI disagrees with you; furthermore, Comfy is already a thing.
It's better in pretty much every way. Nodes aren't scary... they can't hurt you.
@@generichuman_ How. How is it better.
comfy 🤮
Comfy 🔥🔥🔥
@@lockos Comfy sux
@@bigglyguy8429 hoes mad xD Whether you like it or not, Comfy is not going anywhere