Thank you for posting this video and showing tutorials for ForgeUI.
So nice to get a Stable Diffusion tutorial that just gets on with it, doesn't introduce too many concepts and distractions all at once, and concentrates on a specific task! And another particular YouTube bugbear of mine: no tinkling pianos or overloud music in the background. You are an excellent educator!
I can't tell you how much I have benefited from your channel. It has opened many doors for me that were closed. You have simplified many concepts I thought were complicated. Thank you, keep up your impressive work.
What a time to be alive. These kind of tools are incredible.
High quality tutorials dude, not too fast and easy to understand. Some tutorial channels make me roll my eyes. Subbed.
Thanks
Thank you so much ☺️
These settings under "Soft inpainting" helped me to achieve much better transitions than the defaults:
Schedule bias: 4.8
Preservation strength: 5.85
Transition contrast boost: 2
Mask influence: 0
Difference threshold: 0.25
Difference contrast: 4
This was for a medium-detail landscape pic which I wanted to extend again and again. If you're having trouble with smudges and artifacts at the transition, fool around with these numbers. Under these numbers, in the web UI, is a description of what each one does.
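If you drive Forge through its A1111-style HTTP API instead of the web UI, values like these can be carried in the request payload. This is only a sketch: the `/sdapi/v1/img2img` endpoint and the `alwayson_scripts` mechanism are standard, but the `"soft inpainting"` script key and the argument order used here are assumptions you should verify against your own Forge build.

```python
import base64

def build_img2img_payload(image_bytes: bytes, prompt: str) -> dict:
    """Build an img2img request carrying soft-inpainting overrides."""
    return {
        "prompt": prompt,
        "init_images": [base64.b64encode(image_bytes).decode("ascii")],
        "denoising_strength": 0.8,
        "alwayson_scripts": {
            # Assumed argument order: enabled, schedule bias, preservation
            # strength, transition contrast boost, mask influence,
            # difference threshold, difference contrast -- matching the
            # values quoted in the comment above.
            "soft inpainting": {"args": [True, 4.8, 5.85, 2, 0, 0.25, 4]},
        },
    }
```

The payload would then be POSTed to `http://127.0.0.1:7860/sdapi/v1/img2img` on a Forge instance started with `--api`.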
I recently tried the latest version of Krita with an SD extension and got very similar results to Photoshop outpainting... for free and open source.
Worth a try imho.
Great video btw, I didn't know the Mosaic extension!
This is great, Your channel always brings something new.
Ah, i'd wondered where outpaint was hiding! Thanks for this!
very useful!
Great content, thank you.
Dude, you're awesome. Thanks for the tutorials.
Brilliant ! Thank you so much !!
Nice way of doing graphics these days. Thanks for the video
Thanks a lot! It was very useful.
Glad it was helpful :)
Please make more videos like how to use Control Net for poses and cool stuff. Thanks in advance
Thank you!!
Thank you very much!
You are the best
Really appreciate it, mate! It's a new thing for many of us. If I may ask, can you share a link for the SDXL styles you've been using? I realized that there are many style extensions out there. Which would you recommend?
Check this video; in the comments on that video is a download link for the styles I made: th-cam.com/video/UyBnkojQdtU/w-d-xo.html
Thanks!
Great tutorial! Could you make a tutorial for using InstantID in Forge UI?
I haven't used that so far, but everything I use I will do a tutorial on at some point, so it depends on what I'm using at the moment.
Thanks a lot
even though I know the voice of this channel is AI I still love the channel as the tutorials are thorough and easy to follow 😀
I appreciate that :) Even though I use AI for the voice, it still takes a lot of work to do the tutorial, and it's even harder than with a real voice because I have to synchronize it with the video. I can only do about one sentence at a time: watching the video, coming up with the text, converting it, then adding it to the video and making it match.
Thank you for this. However, for the image that I'm trying to fix, half the female's head is cropped, so I need to outpaint a new head, which is never in line with the rest of the neck and clothes. Is there any way to align it, or do I just keep generating?
I haven't used Forge for a while. Maybe combine it with a ControlNet somehow, not sure; it's always random with AI.
Is there any guide for prompting with Juggernaut XL v9 models? I can't get it to work for some reason. By the way, super high quality tutorial! Love the style.
What part doesn't work? You just prompt like for any Stable Diffusion model; I don't do anything different. On the model page on Civitai you can see example images; each has an info panel where you can see what prompts and settings were used, so you should get the same results. You can also use the SDXL styles extension for art styles; it's named StyleSelectorXL.
@pixaroma
Also, is there a way to use SD Forge to transform images from 2048 x 2048 and higher resolution without crashing or distorting the image? I may need to find an efficient upscaling method compatible with SD Forge. I tried to follow some Automatic1111 tutorials, but SD Forge seems to work a little differently in some areas. I'm contemplating continuing to save up and using Gigapixel, which is expensive, but at the same time it upscales; it seems to correct line art and fix it, which is a BIG PLUS for me. Perhaps I'm just giving you another idea for your next tutorials. =) hehe
Take care and all the best... 😊🙏
Sending the image to Extras and using an upscaler doesn't distort it. As for crashing, that's where your video card's VRAM comes in; if it doesn't have enough VRAM, it will crash when you do intensive work. I also still use Topaz Gigapixel. And about tutorials: even if I do one that works for me, it probably won't work for many people. I have an RTX 4090, so most things work :)
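The VRAM point in the reply above is why most upscaling pipelines process large images in tiles: each tile is upscaled independently, so memory use stays bounded by the tile size rather than the full image. A minimal pure-Python sketch of the idea, where a nearest-neighbor upscale on a list-of-rows "image" stands in for the real upscale model (a production pipeline would also overlap tiles slightly to hide seams):

```python
def upscale_tile(tile, scale):
    # Nearest-neighbor: repeat each pixel `scale` times in both directions.
    # This is a stand-in for running a model upscaler on one tile.
    return [[px for px in row for _ in range(scale)]
            for row in tile for _ in range(scale)]

def tiled_upscale(image, scale=2, tile_size=512):
    """Upscale `image` (a list of rows of pixels) tile by tile."""
    h, w = len(image), len(image[0])
    out = [[None] * (w * scale) for _ in range(h * scale)]
    for ty in range(0, h, tile_size):
        for tx in range(0, w, tile_size):
            # Crop one tile, upscale it, and paste it into the output canvas.
            tile = [row[tx:tx + tile_size] for row in image[ty:ty + tile_size]]
            up = upscale_tile(tile, scale)
            for dy, row in enumerate(up):
                oy = ty * scale + dy
                out[oy][tx * scale:tx * scale + len(row)] = row
    return out
```

The stitched result is identical to upscaling the whole image at once, but only one tile ever needs to be in (GPU) memory at a time.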
Do you need a specific model to do inpainting? My current model, BRA7, doesn't seem to work.
I think not all models support inpainting. I know the one I used supports it, the Juggernaut one.
@@pixaroma my outpainting result looks like a separate image from the source, any idea why??
@@pixaroma ill try juggernaut
Not sure, it's hard to tell. All I know I already put in that video.
@@pixaroma I tried SD v1.5 Inpainting and it worked better
How did you use SDXL in inpaint?
I can't use it, dunno why. I have a 4090.
I have an RTX 4090 and didn't have a problem. Make sure the model supports inpainting; if you use the same Juggernaut model, it should work.
Doesn't using denoising at 1 skip the mosaic?
I mean, it's still a useful tool to get dimensions and a mask, but I'm pretty sure with denoising at 1 it skips the mosaic, or am I missing something?
It's an old video; I'm not sure what I did there, it was a long time ago. Yeah, usually 1 skips it.
xd
I can't get this to work at all. I copy all the parameters from the video, but the image generation ignores the context of the original image and makes a completely different image in the mask.
It depends on the image, the prompt, and the denoise value. Try different denoise values and see if you can find one that works. It also depends on what you're trying to create; maybe it doesn't know how to do that, or the selected area is too small.
@@pixaroma I got slightly better results by adjusting the soft inpainting values. Can you share the internal values you use in the soft inpainting parameters? Maybe a video explaining these parameters would be very, very useful. In any case, thanks for the videos about Forge, very good content.
@Rafaelufpr08 I don't do videos for it anymore since they stopped the official updates. There are some versions, but nothing stable, so I switched to ComfyUI and am trying to recreate it there.
@@pixaroma SD is a tough area, even more so for those who arrived late, like me. Every time I start to understand something, I discover that it is already dated and everyone is using something different. Well, thanks anyway.
@Rafaelufpr08 Yeah, it's moving so fast that even though I'm trying to stay updated with everything, it still moves too fast, and when I learn something, then something better appears, so I use that, and so on 😂 I just hope all the things I learn will be useful in the future somehow.
I have a photo of my character just up to the waist, I have tried a lot of results but not one of them actually generates legs down, would you or anyone have any tips?
Did you mention feet in the prompt? Maybe just expand the image in Photoshop or something to the size you want, draw some basic shapes where the feet are, and use inpaint afterwards to fix it.
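The "expand the canvas first" step from the reply above can be sketched without Photoshop: pad a waist-up portrait downward with a flat fill so the outpaint pass has room to draw the legs. Images are modeled here as lists of pixel rows, and the fill color is a placeholder assumption; in practice you'd sample it from the image background.

```python
def pad_below(image, extra_rows, fill_color=(128, 128, 128)):
    """Extend a list-of-rows image downward with `extra_rows` of flat fill.

    The padded region is what you would then mask and outpaint/inpaint,
    with the missing body parts mentioned in the prompt.
    """
    width = len(image[0])
    padding = [[fill_color] * width for _ in range(extra_rows)]
    return image + padding
```

Rough shapes sketched into the padded region (as the reply suggests) give the model a strong hint of where the legs should go before the inpaint pass cleans them up.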
What if I already have an image (not created by AI)? Then what prompt should I input, or should I just leave it blank?
Also, I don't see the soft inpainting option.
You describe what you want to see in that area. The video is a few months old; the interfaces have changed many times in the last months, and they are still adding and removing stuff.
@@pixaroma It's a square image of my wife standing in the middle; the background is a beige-colored wall. Now I want to expand the sides so that it makes a 16:9 image.
I could load the extension and mask, but I failed with the settings and prompt to make it extend as I wish.
@@QuangNguyen-md8ky Outpainting didn't always work for me, so I just use Photoshop. It has that generative fill from crop; it's fast and works well for me.
@@pixaroma Yeah, I've tried Photoshop and it works well. Since I don't have a license now and rarely use it, I want to make use of SD.
Attention: LoRAs and/or wildcards interrupt this process if used as additional prompts. And sometimes too high a denoising value creates a whole other picture in the masked area. I got good results with 0.8.
Thanks for mentioning this. Forge has changed a lot since I did the video, and I'm not sure how many things still work with the new version.
Why not an inpaint model?
For me, JuggernautXL worked fine for inpainting :) so I am using that.
Very helpful, but I think denoising of 1.0 during the first regeneration messed it up for me. I reduced it to 0.75, but then it changed the image too much, and using 0.5 leaves the mosaic in (blurry).
They keep changing the interfaces; I'm not sure how they changed the settings or what else they did. In the last month there have been new updates with new changes every week; some work, some don't. It's hard to keep track.
@@pixaroma Surely the functionality should be the same? I am experimenting constantly, but can't get it to work. All this is still pretty new to me, so it's probably some setting somewhere that's messing it up.
@@TheHYPERION9 It could be the program or it could be the settings. I haven't used Forge for a while, so I'm not sure what the cause might be.
Your voice sounds like AI bro lol. I think you have spent too much time in the metaverse. Amazing tutorials!
It is AI voice :)
@@pixaroma Some of the videos have very human-like cadence, other times it sounds generated. Thanks for confirming. Never heard this voice before. Do you mind telling me how you got those SDXL styles you selected? The extension.
I found a new method since then that lets me save unlimited styles, and it's easier to search the list: th-cam.com/video/UyBnkojQdtU/w-d-xo.html
@@pixaroma Hey bro, can you make an in-depth guide on your practices for upscaling in Forge, possibly with some explanation of the tools and how they can best be used? There are a bunch of guides for upscaling out there, but they're very poorly done and hard to follow, and everyone has different, out-of-date, conflicting information.
@@pixaroma Ok, thanks. I thought it was something different. I grabbed your styles, they're pretty cool. Incredible of you to share them with your community.