I have watched 40min videos on inpainting that did not have as much valuable information as your 11min video. You are killing it brother.
thank you so much
Best vid on inpainting I have come across yet.
You are a God. 12 minutes of useful information and clear instruction. Just WOW.
Duuude! I had no idea I had this much power with SD A1111.
THANK YOU for taking the time to share your knowledge 🙏 this helps me with my AI art adventures immeasurably.
This was so useful. Your guides are S tier
I'm super impressed that you had so little halo effect around your inpainted masks; every time I inpaint something I end up with smudged colours. I guess that's just using the wrong type, or the mask blur being too high.
Maybe the new version has something to do with it, since a lot has changed since I made the video; or the model and prompt, not sure.
Keep doing what you're doing ! Nice work👍
Omg this was seriously useful thank you!!!
This is awesome. I just saw that the tiling issue is fixed. I hope you met your goal!
Thank you, I hope so too :D
Outstanding clarity in delivery.
Best inpainting tutorial ever!! Clear, precise, short and very well explained! Subscribed!
Finally something explaining the importance of fill, original, and latent noise in particular.
Fill to remove something, latent noise to add something, original to edit something. Yes, it's oversimplified, but it's a start.
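That rule of thumb maps to how each "masked content" mode initializes the masked region before denoising starts. A rough numpy sketch of the idea (simplified to pixels instead of latents; the mode names are A1111's, but the function itself is hypothetical, not the actual implementation):

```python
import numpy as np

def init_masked_region(image, mask, mode, rng=None):
    """Conceptual sketch of A1111's 'masked content' modes.

    image: float array (H, W, C) in [0, 1]
    mask:  bool array (H, W), True where we inpaint
    """
    rng = rng or np.random.default_rng(0)
    out = image.copy()
    if mode == "original":
        pass  # keep the pixels: good for editing what's already there
    elif mode == "fill":
        # flood the hole with the average of the unmasked pixels,
        # so the model starts from "nothing": good for removal
        out[mask] = image[~mask].mean(axis=0)
    elif mode == "latent noise":
        # start from pure noise: good for adding a brand-new object
        out[mask] = rng.random((mask.sum(), image.shape[-1]))
    else:
        raise ValueError(mode)
    return out
```

So "fill" erases, "latent noise" gives the model a blank-but-noisy canvas to invent on, and "original" preserves structure for small edits.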
Very useful, straight to the point, no fluff or bs. Thank you very much! I'd buy a course from you in that style!
Best video on inpainting i've seen. Great stuff man.
Thank you for the very useful guide! I'm always getting amazing pictures with just a few small details that are ugly; until now I struggled to patch these up, but this guide helped a lot! Will share with my friends~
Did you just make the best video about inpainting in Stable Diffusion so far?
:) Actually it's an old video 😁 but I did my best at the time.
Great to find such good tutorials for Forge. Would love to see a deep dive into the Integrated ControlNet tabs some time
I am using SDXL a lot, and the ControlNet models are not as good as the v1.5 ones, so I didn't use them much besides Canny, which I use all the time. I was hoping they would improve them, or SD3 would come out.
@@pixaroma I noticed the same... Wasn't sure if it was just me lol
Discovered your channel yesterday! Already a big fan, Thank you!!!
Thank you ☺️
Excellent tutorial! Keep it up.
Sometimes people forget that Inpaint Sketch exists, but yeah, you could use that for tweaking colors or removing stuff as well. It's much better for introducing a new subject into the image, or removing a subject from it.
12 mins of awesomeness, thanks a bunch!
Great video. Can you create a video on how to blend a certain object (from an image of that object) into the current working file, on a specific area of the picture? For example on the floor, in a corner of the pic, or on a hand.
It's harder to control an actual object or image. What I do is use Photoshop to place it, and then make it blend better with img2img; you can look at the video with the AI mockups.
Very helpful video!
Very informative video, Thanks👍🏻
Thank you, good to watch something step by step & with varied examples.
Now I am wondering how you'd approach the bunny at 9:18 not getting a _proper_ shadow, though :)
I haven't used it for a while; I am recreating the workflow in ComfyUI to see if I can find a way to get better control. The denoise setting influences the results, prompts do too, and so does the image it starts from; you just need to find a balance between settings, I guess.
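The "balance between settings" is mostly about denoising strength: it controls both how much noise is mixed into the starting image and how many sampler steps actually run. A toy sketch of that tradeoff (a simplified DDPM-style blend, not Forge's exact code; the function name is made up for illustration):

```python
import numpy as np

def img2img_start(latent, strength, num_steps, rng=None):
    """Sketch of why denoising strength matters in img2img:
    it picks how far along the noise schedule we start.
    At 0 nothing changes; at 1 the image is replaced by pure noise."""
    rng = rng or np.random.default_rng(0)
    steps_to_run = int(round(strength * num_steps))  # low strength = few steps
    alpha = 1.0 - strength                           # how much of the image survives
    noise = rng.standard_normal(latent.shape)
    noisy = np.sqrt(alpha) * latent + np.sqrt(1.0 - alpha) * noise
    return noisy, steps_to_run
```

So at denoise 0.3 the model only lightly reshuffles what's there, while at 0.9 it mostly re-imagines the region from noise, guided by the prompt.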
Thank you so much that was excellent.
You're the best on YouTube for Forge! Thanks a lot! Do you know any way to get two character LoRAs working together properly in Forge? I struggle with the known methods from A1111, and Forge Couple doesn't seem to work either. Inpainting works, but not direct generation.
I didn't find a good way. I usually just use inpaint, or I make a selection of the image in Photoshop and use that in img2img to get the right character, playing with denoise so it stays somewhat similar; then I blend it back in Photoshop with masking so it fits.
@@pixaroma thanks for answering so fast! Yesterday I tried the "Inpaint not masked" option you talked about, and it's way faster and more reliable than just using the masked area for couple pics.
Very detailed tutorial, thanks for your work!
What if I'd like to remove a tattoo from someone's body, without distortion? Any idea?
It's more difficult; it depends on how big the tattoo is and how much tattoo-free skin there is around it. You can do a combination: masked content "fill" first to fill it with a color, then masked content "original". But it's probably quicker with the Remove tool in Photoshop.
@@pixaromaMany thanks for your reply, I've tried several times, but the result so far is not so good.
@@johnyoung4409 you can also try to paint over the tattoo with a soft brush in Photoshop, picking the skin color with the eyedropper, and then run it through img2img or inpaint at different denoise strengths to reconstruct that skin. But so far the Remove tool in Photoshop, or Generative Fill with the word "remove", did a better job.
@@pixaroma Thanks for your info! Yes, I watched a video on YouTube that uses Generative Fill in Photoshop to remove the tattoo: th-cam.com/video/YPEBymT_lz0/w-d-xo.html which is very impressive. He just uses the words "remove tattoo" and Photoshop automatically does a great job. I'm curious why SD cannot do the same thing?
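The eyedropper trick can be approximated in code: sample the skin color from a thin ring around the masked tattoo, fill the mask with it, then send the result through img2img/inpaint at low denoise to rebuild the texture. A hypothetical numpy helper (`dilate` and `fill_with_surrounding_color` are made up for illustration; they are not part of any SD UI):

```python
import numpy as np

def dilate(mask, iterations=1):
    # grow the boolean mask by one pixel per iteration (4-neighborhood)
    m = mask.copy()
    for _ in range(iterations):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]
        grown[:-1, :] |= m[1:, :]
        grown[:, 1:] |= m[:, :-1]
        grown[:, :-1] |= m[:, 1:]
        m = grown
    return m

def fill_with_surrounding_color(image, mask, ring_width=2):
    """Fill the masked (tattoo) area with the median color of a thin
    ring of surrounding skin, mimicking the manual eyedropper step."""
    ring = dilate(mask, ring_width) & ~mask
    color = np.median(image[ring], axis=0)
    out = image.copy()
    out[mask] = color
    return out
```

The flat patch this produces is exactly the kind of "almost right" starting point where a low denoise pass can add pores and shading back without inventing a new tattoo.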
Man, you're the best!
Very useful tutorial
Thanks a lot
Hey! thanks for the incredible video. Do you know how could I embed the stable diffusion model into my app?
I don't, but you can maybe talk with the creator of the model; on Civitai there must be a way to contact them.
Can you please do inpainting with comfyui? Specifically, keeping one's face on a latent image while generating a new image. Thanks
I have been trying for a while to get good inpainting. I still need to do more research on it, but it is planned for one of the future episodes.
Best video :)
I remember in the 1.5 days DDIM was the recommended sampler for inpainting; has that changed for SDXL?
You can try what works best; I usually just use the recommended settings for that model.
To change a colour and keep the original item, use more denoising strength and ControlNet (Canny or Depth).
Yeah, I usually work with Canny :) even if it doesn't keep things perfectly, at least it keeps contours and composition.
This tutorial was super helpful, thank you so much. Question... normally I use the 'Hires. fix' option to upscale images I like and then pass them to Inpaint. In your example, how do you go about upscaling your edited image after Inpainting it? Which tab do you send it to, or do you just use the 'Resize' slider on the Img2Img tab?
I send it to Extras and upscale it there, or I use Topaz Gigapixel AI.
@@pixaroma Thank you for the speedy response!
Thanks a lot, very useful.
Wow, this is fast! Is it a 4090? I only get 3.2 it/s with a 4070.
I sped it up so you don't have to wait. It's fast, but not that fast; I think 4-5 seconds per image. Yes, it's a 4090.
Thanks
I don't see the soft inpainting option in my Forge.
Do you know why this is and how to fix it?
Probably you are using an updated version where it doesn't work, not the one they do beta testing on. Try this maybe: th-cam.com/video/RZJJ_ZrHOc0/w-d-xo.htmlsi=ihjv_AH66VT1BM-C
You get my sub, bro.
Hi, great video but... help!!! please lol. I am trying to remove an item and replace it with the wall behind it. I use "only masked" area and tried both original and latent noise. With low denoising nothing changes; with high denoising the whole image changes, like it's not seeing the masked area. Any ideas? Thanks in advance.
If you have Photoshop you can remove that with the Content-Aware tool; some things are just not so simple to do with AI. You can also paint over it in any editor with a color similar to your background, and then try inpaint or maybe img2img to generate that missing part. Even if I do something that works for one image, it might not work for your image, because the AI tries to guess what is there, and sometimes it guesses something else :)
@@pixaroma thank you for the reply. For context, I wanted an image of a cat cooking, but the AI gave me a cat cooking a smaller cat which was on fire. AI can be pretty dark... haha
For some reason my Inpaint doesn't look like yours. I don't have the little (i), and my inpaint looks different; do you have an extension added? Plus my mask is a semi-see-through checkerboard circle, not a white circle. I know the video is 6 months old and that might be the issue. I don't get the same results as you with the same settings, and stuff outside the mask also gets changed. I'm on version f2.1v1.10.1, Python 3.10.6, torch 2.3.1+cu121; any help would be great.
They changed the interface a lot in the last month, so many things are completely different.
@@pixaroma Thanks for the reply, I'm checking out your videos pertaining to ComfyUI, might be moving to that soon.
Thanks!
When you created the "latent noise" did you have to send the new image with the latent noise to Inpaint? Or can you just continue to generate without clicking "send to"?
In the video I just increased the denoise strength so the noise is not visible; I showed it so you can see how it looks. It's useful when you don't have anything in the image and you want to add something there: it adds that noise so it can create anything in that spot.
@@pixaroma Thanks for answering. I was a bit confused because I was adding the latent noise and then sending it to the working section, but I didn't see you use the "send to" function, so I thought I had missed a step. It's nice to know that you don't have to do that! You have the best inpainting video on YouTube, btw. :)
In fact, your whole channel is underrated. You have a lot of great tutorials. I'm going to go through them!
Nice intro
I don't get why mine does not work. I paint the area where I want a change, but it just keeps giving me the same image over and over again. I am using a Pony model.
Maybe it needs an inpainting model. Did you try with the Juggernaut XL model, to check whether that works and the model isn't the cause?
Nice, bro.. but can you load LoRAs?
Not sure if it works with all LoRAs, but I tested it on some and it worked for me.
How do I get DPM++ 2M Karras? I don't have it :/
You should have DPM++ 2M under Samplers, and then Karras under Schedulers.
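The "Karras" part of the old combined name is just the noise schedule from Karras et al. (2022): sigmas are interpolated in rho-th-root space, which concentrates steps at the low-noise end where fine detail forms. A small sketch of that schedule (the default sigma range here is a typical SD 1.x assumption, not necessarily what Forge uses):

```python
import numpy as np

def karras_sigmas(n, sigma_min=0.0292, sigma_max=14.6146, rho=7.0):
    """Karras noise schedule: interpolate between sigma_max and
    sigma_min in rho-th-root space, then raise back to the rho power.
    Higher rho packs more of the n steps near the low-noise end."""
    ramp = np.linspace(0.0, 1.0, n)
    max_inv = sigma_max ** (1.0 / rho)
    min_inv = sigma_min ** (1.0 / rho)
    return (max_inv + ramp * (min_inv - max_inv)) ** rho
```

That is why splitting the dropdowns makes sense: the sampler (DPM++ 2M) decides how each step is taken, while the scheduler (Karras) decides which noise levels the steps land on.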
I don't see how to expand that bounding box; help me?
There isn't one; sorry for the misunderstanding. That is just an overlay added in video editing, meant to visualize what area it can see and how it would look. I should have mentioned it in the video, sorry about that.
Is that SD 1.5?
No, it is SDXL.
The new interface is pretty different :/ I don't know how to make the bounding box.
You don't have a bounding box; I used it in the video to show that if you put a small point in those corners, that's how big the bounding box will be, so you can visualize it. But it doesn't appear in the interface, sorry for the confusion.
@@pixaroma Thank you for the clarification. So far I have used Fooocus, which is actually the same, although you made me like Forge UI :) I am afraid SD will very soon be just a platform to run Flux.
It's weird, I don't have the "eye" nor the ability to make those "small dots". Using Forge UI, btw.
The interface in the video is from 8 months ago; they changed a lot of things.
@pixaroma oh ok
how do you expand the bounding box?
You don't actually have a bounding box; I put it there to give an idea of the area. You just add a tiny dot so the area seen is bigger, and since the dot is tiny it will not affect the outcome, but the model will see more of the image, so it understands better how to inpaint.
@@pixaroma Actually you are doing something at 4:16, manually editing the bounding box for the only-masked area. But how? What is the shortcut for that :D
Sorry for misleading you; I should have been clearer. That is just a square I added in post-processing on the video, to draw attention to how much area it sees before, and, if I put dots, how much area it will see after. There is no such square in Stable Diffusion; it's just a video overlay in CapCut :) animated to resize to show before and after.
@@pixaroma oh, no problem. Actually that would be a good extension for only-masked mode.
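The tiny-dot trick works because "inpaint only masked" crops to the bounding box of everything you painted (plus padding), so a dot far from the main mask stretches that box and the model sees more context. A simplified sketch of the idea (not Forge's actual code):

```python
import numpy as np

def context_bbox(mask, padding=0):
    """Bounding box of all painted pixels, plus padding, clamped to
    the image: roughly the region 'inpaint only masked' works on.
    Returns (top, left, bottom, right)."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    return (max(ys.min() - padding, 0), max(xs.min() - padding, 0),
            min(ys.max() + padding, h - 1), min(xs.max() + padding, w - 1))
```

Because a one-pixel dot barely contributes to the denoised result, it effectively only enlarges this crop, much like raising the "only masked padding" setting.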
Oh lol, I watched the whole video hoping to learn how soft inpainting works... Seriously...
At the moment of recording, which was a few months back, I included what I knew about inpainting; sorry I was not able to cover everything.
@@pixaroma No big deal, I just can't find a video specifically about this. Only a Korean one with a weird translation. :(
Thanks anyway.
At least this AI-generated voice sounds better than all the others. I can still tell, though; it doesn't quite feel right and there are some little oddities.
Yes, it is getting better and better over time; I hope soon it will be realistic enough.
ai photo movie
th-cam.com/video/uP04emczDi8/w-d-xo.html
Cute, but it didn't say how it was made.