I personally love the vary subtle function as it helps me reconfigure some poses by brushing and erasing parts then using it. Haven’t really dug in deeper than the default settings but I’ll check now with your tutorial. Thanks!
When I watched this yesterday, it was an absolute revelation. Having not previously explored that 'Developers debug mode' I had no idea Fooocus allowed for such control. When I went today to apply everything this video taught me, they'd updated the UI and now everything's different. Fortunately, I see Kleebz Tech has already dropped a new video today explaining the new features, which is really impressive. Rodney here seems to be the absolute master of Fooocus!
Video on the new features will probably be tomorrow. That video is just for anyone having issues updating. But most of the settings should be the same, since I am working on that stuff right now.
Great lesson! Thank you very much for your hard work!
You are welcome!
I often use the vary function by adding wildcards for hair and eye color; that way I get the same image but can get different hairstyles and hair colours. Great video.
Great tip!
I used this method to turn comic characters into real photos, and it's very effective: vary subtle + CPDS. Using a good tensor model can produce amazing results.
GREAT tutorial! The volumetric lighting tip was really helpful.
There are so many ways it can be used. I just keep having fun finding new ideas.
Thank you for this video, it was really comprehensive. The "dark cave exiting into the jungle" technique was my favourite - it really stirred up the imagination! 🧙
You can do some really cool stuff. I had a hard time with that sort of image before.
Great info, love it. The issue I've had is the "upscale" does upscale the image, but also completely changes the face.
The 1.5x and 2x upscales use Stable Diffusion to add details when upscaling, so they will change the face. You could do something similar to what's in this video and adjust the denoise value with the Forced Overwrite of Denoising Strength of "Upscale" setting. You can also use FaceSwap to help keep the face when upscaling; you do need to enable the option to use image prompts with upscaling and vary. The fast upscale does not really change the image.
This tutorial is very useful and I've benefited a lot from it, thank you for sharing.
You're very welcome!
Fooocus 2.5.0 is out with some advanced features. ☺ Waiting for your next video on the advanced features!
A couple of videos are in the works. The update video will probably be tomorrow. He caught me a little off guard, as I didn't think he would push that out for a few more days.
Awesome video! Lots of fantastic techniques! Thank for the shout out also and the link to the lora 🙌
You are so welcome!
Thanks for this. I ❤ Foocus.
I do as well. 👍
Another brilliant video as always, thanks! I always do a double take ("huh?") when copying your moves, though, when you say to check "off" a box you are actually checking on :)
😂
Great video with some very helpful tips. Thanks. More 'advanced' tips like this on Inpaint would be good too, when you have time :)
I do have several inpainting videos that cover many advanced features. th-cam.com/video/kpD5_Bs9Qeo/w-d-xo.htmlsi=gUHT4zoV2Nv569oA is one of them and the ones like generating tattoos and text on shirts also cover some more advanced inpainting stuff.
@@KLEEBZTECH Thanks - much appreciated.
One small part that is missing: you can also use Fooocus to make the gradient. Prompt: "simple black and white gradient template, top of the picture is white, the bottom side is black". You don't get a perfect gradient, but some cool gradients to use. :-)
Very true, but depending on what you need it for it will probably be easier to do outside Fooocus. But for simple ones it would work.
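For anyone who does want a pixel-perfect gradient made outside Fooocus, it only takes a few lines. A minimal sketch using nothing but the Python standard library; it writes a plain ASCII PGM image, and the 512x512 size and `gradient.pgm` filename are just example choices:

```python
# Exact vertical white-to-black gradient, written as an ASCII PGM file.
# No external libraries needed; most image editors can open PGM.
width, height = 512, 512  # example size; use whatever your workflow needs

# One grayscale value per row: 255 (white) at the top, 0 (black) at the bottom.
rows = [round(255 * (1 - y / (height - 1))) for y in range(height)]

with open("gradient.pgm", "w") as f:
    f.write(f"P2\n{width} {height}\n255\n")  # PGM header: format, size, max value
    for value in rows:
        f.write(" ".join([str(value)] * width) + "\n")
```

Unlike a prompted gradient, this one is mathematically smooth, which matters if you use it as a mask or a lighting guide.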
wow the amount of doors you opened, thank you for sharing!
Thank you so much!
You're welcome!
Thanks for another great video! Would you please consider one for the new "Enhance" features? Thanks!
In the works. I plan on covering the basics in the update video that I'm working on right now. Then I'll have another one that goes further into it.
@@KLEEBZTECH Great! Thanks!
Great tips, thank you
You are so welcome!
Many thanks, very interesting.
Very welcome!
Another excellent tip.
Thanks for all the great tips. I was wondering how to get better results from vary subtle, specifically while changing the model.
Glad it was helpful!
@@KLEEBZTECH unrelated but do you have any thoughts on getting consistent color values across gens? like if i say blue it could give me a pretty wide range of blues
@Skunkmail uh.... not off the top of my head. But might be something to think about.
That's a very useful video. Thank you for sharing!
You're very welcome!
Great video, Fooocus is a very versatile tool. I also find that canny works better if you replace it with the new model from xinsir, which is more accurate than the 128 LoRA. Do you plan on making a video on the new enhance feature in the mashb1t version?
Yes. I have been testing it out for a bit and have a video in the works. Probably next video after a quick video on the latest update. He pushed out the update a little quicker than I expected.
God Bless you good man
Really cool stuff, thanks! Can you make a video explaining the upscaling method used, whether we can change the upscaler, whether we can generate at a higher resolution, and the upscaling denoising strength?
I have considered doing an updated short video on upscaling. I do have one from a while back, but I have a better understanding of it now. I will mention that the 1.5x and 2x upscales use SD to improve the details, and that is when the denoising comes into play, like with the vary option. The fast upscale is just a normal upscale. There is no easy way to change the upscale models without changing the code, I believe.
Yes, and Fooocus uses the SD upscale for the quality option too.
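To make the distinction concrete: a "fast" upscale is a plain resampling filter that invents no new detail, unlike the SD-based 1.5x/2x options. A sketch of the simplest such filter, nearest-neighbor; this is illustration code under that assumption, not Fooocus's actual implementation, and real resamplers use fancier kernels like bilinear or Lanczos:

```python
def upscale_nearest_2x(pixels):
    """Double a grayscale image (list of rows) by repeating every pixel.

    No detail is added or changed, which is why a plain upscale,
    unlike the SD-based ones, never alters a face.
    """
    out = []
    for row in pixels:
        doubled = [p for p in row for _ in range(2)]  # repeat each pixel horizontally
        out.append(doubled)
        out.append(list(doubled))                     # repeat the whole row vertically
    return out

tiny = [[0, 255],
        [128, 64]]
big = upscale_nearest_2x(tiny)  # 4x4 image, every value copied from tiny
```

Every output pixel is a copy of an input pixel, so the image gets bigger but never different, which matches the behavior described above.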
I am working on an A.I. model for social networks focused on my city in Peru, so I need photos of her in specific places in my city. I have collected photos of landscapes and people as a base. I do not use a LoRA of my character, but rather keep her consistent with a specific prompt. I use Vary Strong or PyraCanny to generate my model with the background included, then I crop her out in Photoshop and add her to the original background. The result is decent, but I need the lighting to make it perfect and realistic. I had been testing with Vary to preserve the lighting and was about to give up until your video enlightened me. I would like you to see the results and tell me what you think and in what aspects I can improve.
Best way would be email which is listed on my channel page.
That Cheyenne model really is rather "freaky". I have yet to give it a try. Glad I'm not the only one who doubles up the same image in PyraCanny and image prompt 😂
I love Cheyenne! One of the best models for non-realistic content. Depending on what I am doing, yes, I will double them up or even triple them up. I could have done FaceSwap as well, but that is harder with the lighting trick, although it can work. Getting it to work when doing video is the challenge.
Thank you for your interesting and useful videos. I designed a necklace and would love to see it on a model. While I understand how to apply designs to clothing, I'm struggling to do the same with jewelry. Could you please teach me how to do this, or provide any tips? I would greatly appreciate your help.
Amazing
Thank you! Cheers!
As always, I enjoy and learn from your videos; great quality. I was wondering if it is possible to have an open-source AI for video generation on a local PC. Even picture-to-video would be great.
There are some, but you do need a really decent GPU, and I have not really messed with many of them yet. I find most are gimmicks at the moment, but they are getting better. I will probably be diving into some in the next couple of months, since I have been watching what is out there.
@@KLEEBZTECH thanks, I think you should look into "Live Portrait" (I hope I got the name right lol), you can animate a picture with a video.
Yes seen that all over the place.
@@KLEEBZTECH haha yeah, I like your content because you work on it and give a complete tutorial. BTW, I have seen "Open Sora" too; I don't know what that is.
Thank you for this guide. Do you have any plans to make a LoRA? Another question, your default output format is always png. Is there a specific reason for that? It takes up more space than jpeg :)
I do have plans for a LoRA at some point. As for the PNG format, it is the better-quality format since it is lossless, while JPEG throws away detail to save space.
Thank's a lot
Very welcome.
If you do not want auto describe to change styles, you can simply remove the argument and disable the feature.
Auto describe also applies the styles of the option currently selected in the Describe tab under Input Image.
Would it be beneficial to auto describe and not adjust styles?
I do like the auto describe feature. But I am not a fan of it changing styles since I rarely use the default ones. But that is just my opinion. Not sure how other people feel.
Thank you!
Is there no option to use Vary AND Inpaint together? For example, I might be happy with the generated person, but what if I would like a variety of wristwatches, necklaces, rings, or whatever, to see what works best? I was hoping I could mix vary and inpaint, just paint out the watch/ring etc., and generate a big batch of variations.
In the advanced options for inpainting in the debug menu you can adjust the denoise strength.
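For intuition about what that denoise strength controls, here is a conceptual sketch, not Fooocus's actual code: in img2img/inpaint, the strength decides how far into the noise schedule the input image is pushed before being denoised back, so low values keep most of the original and high values regenerate it:

```python
def steps_rerun(denoise_strength, total_steps=30):
    """How many of the sampler's steps re-run on top of the input image.

    Conceptual illustration only: 0.0 leaves the image untouched,
    1.0 starts from pure noise and regenerates everything.
    """
    if not 0.0 <= denoise_strength <= 1.0:
        raise ValueError("denoise strength must be between 0 and 1")
    return round(denoise_strength * total_steps)
```

So a setting around 0.3 only re-runs the last ~9 of 30 steps, which is why a low value can swap a watch or smooth a seam without redrawing the whole person.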
--enable-uov-describe-image has been changed to --enable-auto-describe-image. Check out my Patreon: www.patreon.com/KleebzTech
thx man
In the latest Fooocus update, that was changed to --enable-auto-describe-image
Awesome video! Please, how do I change Fooocus to a dark theme?
There are several ways but I recommend changing your browser to dark mode so all things are dark. Not near a computer right now to give you the other ways.
I want to try other models downloaded from Hugging Face, but why do I always get a pause message in cmd? Only JuggernautXL works. I put them in the checkpoints folder and the format is .safetensors.
Without more information it can be hard to determine the issue. I will mention that the GitHub discussion page for Fooocus is very active and friendly.
github.com/lllyasviel/Fooocus/discussions
With the new update the command changed; now it's "--enable-auto-describe-image"
Yup.
Do you know any AI to extend images? I have tried using Fooocus but the results are too bad.
You didn't have luck with the outpainting? Did you try only one side at a time, or did you try multiple? For best results when outpainting, only do one side at a time.
When I extend the image with outpaint, it creates a visible line between the original and the extended image. The image I created is an anime.
Are you doing just one edge at a time? I find you will usually get lines with some, but not all, attempts. You can also use inpainting with a low denoise strength to help clean up any lines afterward.
Oh.. Thanks ❤. I will try
--enable-uov-describe-image makes run.bat crash at start, and it doesn't even open! How do I fix this?
Now that tells me you have not watched my latest video on the 2.5.0 update. lol. jk. It has been changed to --enable-auto-describe-image
@@KLEEBZTECH I just found your channel today! Thanks so much!
Welcome to the channel! I do have a few videos on getting the most out of Fooocus. I hope you find them helpful. And don't hesitate to ask any questions. If I can answer in the comments easily I will try. If not I am pretty active on the Github page as well but lately Mashb1t has been super active and answering almost every question. Not sure how he does it with all the work he does on Fooocus.
@@KLEEBZTECH Thanks! I am gonna watch your entire user guide playlist. Believe you me.
Do we have this on Forge?
I am sure you can do similar in Forge but I don't really use Forge much so couldn't guide you in the right direction.
And a little advice from me: first use Extreme Speed with JPEG output, it's so fast. Then, once you see what you want, you can generate a quality PNG version of it.
The problem is that the image will usually be much different, and you can't use things like the negative prompt on Extreme Speed. It can be useful to get a basic idea, and of course on a slower machine it's not a bad idea.
Any thoughts on how to remove the background of the living room? Please make a video.
✨👌😎🙂😎👍✨
If Fooocus doesn't recognize --enable-describe-uov-image try --enable-auto-describe-image instead
They updated Fooocus and broke it >,< I have no idea how to fix it.
Working on a video now, but if you go to the GitHub page there are instructions on how to fix it, and you can also get help there.