These look nice; however, they're not the same as the originals. Is there any way to make them exact? Both dresses were originally long and the red one had sleeves, but both came back as mini dresses and the red one came back strapless.
He seems not to grasp inpainting. With the green dress on the girl in front of the white building, he needed to paint over her knees and also paint in the straps, even if that means painting over the girl's arms and legs. When inpainting, what you are saying to the model is: "Here is a dress, see how it looks; now here is a space in this image, and you should fit this dress into it." So yes, you would end up occluding other parts of the image if it's a long dress. Note you are not inpainting the girl in the picture but the overall image, and the patterns you are painting are sourced from the image prompt upload, so you need to imagine the space the new dress would take up. Instead this guy painted over the existing dress only... he needs to learn how basic diffusion works.
Your inpaint mask needs to match the image prompt input, not the dress the model is currently wearing, so you imagine the length of the dress and where the straps might be and paint there, even if it's over her arms or legs. Hence the term 'inpaint': you are painting the dress INTO the image... not into the previous dress, but the image as a whole. This is a case of PEBCAK.
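The point above can be sketched in code: the inpaint mask should cover the full silhouette the *new* dress will occupy (down past the knees, up over where the straps go), not just the pixels of the current dress. A minimal Pillow sketch with made-up coordinates, not taken from any real photo:

```python
from PIL import Image, ImageDraw

def dress_mask(size, bodice_box, skirt_box):
    """Build a white-on-black inpaint mask covering the FULL area the
    target dress will occupy -- including over arms/legs if needed.
    Boxes are (left, top, right, bottom) pixel rectangles; the values
    passed below are illustrative only."""
    mask = Image.new("L", size, 0)          # black = keep original pixels
    draw = ImageDraw.Draw(mask)
    draw.rectangle(bodice_box, fill=255)    # straps/bodice region
    draw.rectangle(skirt_box, fill=255)     # skirt, extended past the knees
    return mask

# e.g. a 512x768 portrait: mask from the shoulders down to mid-calf
mask = dress_mask((512, 768), (150, 180, 360, 400), (130, 400, 380, 680))
```

A mask like this would then be passed to an inpainting model alongside the photo and the image prompt of the dress; anything white gets repainted, so the long dress has room to exist.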
Thank you. I didn't know this about inpainting. I guess this must mean the model has to be the same size as the dress for this to work, since it won't match otherwise.
@@fictionaddiction4706 Well, yes and no... if the model sees a miniskirt and your prompt tells it to make a long dress, it will have a go at it, provided your inpainting mask goes down that far, over the knees. I'm not saying it will do a good job, just that it will have a go!
@@fictionaddiction4706 Also, what I have found is that if you take an image you badly put together in Photoshop (let's say a crappy faceswap you did using manual techniques, like we did way back in the old days... 12 months ago... lol), load that into the model, and do a variation of it, then for obvious reasons the model will try to remove any errors in skin tone and make it look like a true image. How well it does that comes down to settings, prompting, and of course the model you are using.
@@mickelodiansurname9578 That is awesome. I'm still doing the old face swap because the new tools sometimes look weird and I like the control. So I'm glad I can make the images look less weird using AI.
Say you're making a clothing store and you want to try your image of a small dress on different-sized models, one thin, the other fat. Would inpainting work to change the dress size as well?
@@fictionaddiction4706 I think, although I'm not entirely sure, I've seen an app or GitHub repo that does that start to finish... you upload your inventory images of dresses, I would guess, and it puts them on models. Can't remember for the life of me what it was called, and it was months ago.
But yeah, you could automate this rather easily: you have the clothes directory with the snaps of the clothing, you have the model endpoint (a lot of front ends will create an endpoint for you, or just pay for one, I suppose), and then a script that pulls each image out, prompts the model based on the associated description of that product, and outputs each finished image.
Now you'd need something that catches the image, with vision, and looks at it to make sure there aren't five fingers sticking out of the model's head... if so, in it goes again.
Inpainting can work to make a model fat, and you can control that through the endpoint. Then again, putting the image of the model in as an image prompt with text saying "this woman should be much fatter, 18 stone, she's a truck!" would do that, right?
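The batch workflow described above (clothes directory → generation endpoint → vision check → retry) can be sketched like this. The two placeholder functions are hypothetical stand-ins for a real image-generation endpoint and a real vision-model check; they are not part of any specific tool:

```python
import os

def generate_try_on(dress_path: str, description: str) -> bytes:
    """Placeholder for a call to an image-generation endpoint
    (e.g. whatever API your front end exposes). Hypothetical."""
    return f"render of {description}".encode()

def passes_vision_check(image: bytes) -> bool:
    """Placeholder for a vision-model sanity check ('no extra fingers').
    Always passes here; wire up a real VLM call in practice."""
    return True

def run_batch(clothes_dir: str, descriptions: dict, out_dir: str,
              max_retries: int = 3) -> None:
    """Pull each product photo, generate a try-on image from its
    description, and retry until the vision check accepts it."""
    os.makedirs(out_dir, exist_ok=True)
    for fname in sorted(os.listdir(clothes_dir)):
        desc = descriptions.get(fname, "product photo of this garment on a model")
        for _attempt in range(max_retries):
            image = generate_try_on(os.path.join(clothes_dir, fname), desc)
            if passes_vision_check(image):
                with open(os.path.join(out_dir, fname + ".out"), "wb") as f:
                    f.write(image)
                break  # accepted; move on to the next product
```

The retry loop is the "in it goes again" step: a failed vision check just triggers another generation for the same product.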
Thank you for your video.
Anyway, is there any tool to add accessories?
Thanks for this... it loaded a few times but isn't working now. Any suggestions? It said it can't find gradio.
How can I set it up to work in Extreme Speed mode for additional stability? At the beginning of the Colab, the following message appears:
Attention! When working in the interface with the FaceSwap and CPDS ControlNets, crashes are possible; it is also recommended to work in Extreme Speed mode for additional stability. When working with the ImagePrompt and PyraCanny controls, 85% of the work will be stable.
Why doesn't it work at all?
Thanks for the video. How can I get the same dress with the same pattern without using the mask?

I initially had Fooocus installed locally. When I click your Colab link and then click the URL as you mention in your video, it opens Fooocus and not DeFooocus. Any idea why?
Absolute game changer! Thanks man! 😍🤩
Why didn't you choose a floral-pattern dress? It never captures the exact floral patterns. Plain is very easy to pick.
Just inpaint around the clothes
Sixty seconds of Photopea plus image-to-image does the same thing, and with a lot more flexibility.
Hey, could you do a super quick dot-point tutorial on how to do this (put a dress on your own model) with Photopea, or point me to a tutorial?
Can I do the same in video?
So I can't see the link to follow the tutorial.
colab.research.google.com/github/ehristoforu/DeFooocus/blob/main/DeFooocus_colab.ipynb
bro link 😿 plz
colab.research.google.com/github/ehristoforu/DeFooocus/blob/main/DeFooocus_colab.ipynb
Thanks
link plzzzzzz
colab.research.google.com/github/ehristoforu/DeFooocus/blob/main/DeFooocus_colab.ipynb
Super