I'd like you to make a video on what the different models included in the dropdown menu of Fooocus are for. I'm afraid to try because they're 6 GB each (plus it seems to download loads of other stuff too). I made the mistake of trying SAI; it was downloading forever, and the results were so inconsequential and poor that I didn't even know what it did, what it is used for, or whether I even used it for its intended purpose. I can't spare the time or SSD space to blow 6+ GB just to see what the other models do, or are for.
That dropdown is the presets and each one uses different models but you also can get models elsewhere to use. SAI for example is the original SDXL model and is not that great since there have been big improvements with newer models. I do have other videos that actually cover some of them. For example I have a video on the Playground preset th-cam.com/video/BJGTlwuLRDI/w-d-xo.html and also a general one on checkpoints th-cam.com/video/GgEEb2K0j7s/w-d-xo.html . The anime one uses a specific model meant for anime. Pony will use the base Pony model which I do have a video on as well th-cam.com/video/FebP-lpbZ8E/w-d-xo.html
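For anyone curious what is actually behind that dropdown: each entry is just a JSON preset file in Fooocus's presets folder that points at a checkpoint plus some defaults. A trimmed, illustrative sketch only (the key names are from memory of the preset format, and the model filename and URL are placeholders; compare against a bundled file such as presets/anime.json for the real schema):

```json
{
  "default_model": "someRealisticModel_v1.safetensors",
  "default_cfg_scale": 4.0,
  "default_styles": ["Fooocus V2", "Fooocus Enhance"],
  "checkpoint_downloads": {
    "someRealisticModel_v1.safetensors": "https://example.com/someRealisticModel_v1.safetensors"
  }
}
```

This is also why picking a preset can trigger a large download: the checkpoint_downloads section tells Fooocus to fetch any model the preset needs that you don't already have.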
I have a question. I notice that most generations only generate half the person's body, or 3/4 at most. Can I have this program generate a full-body view?
Aspect ratio can help, so using a more vertical one will help, as will mentioning details such as the type of pants or shoes. But I do cover that and more in that video.
@@KLEEBZTECH Cool, thank you for the reply. I am new to SD. The reason I want this on my computer so badly is the freedom. I'm tired of dealing with Bing and Midjourney's oversensitive content filters.
Welcome to offline image generation! I do have lots of videos on using Fooocus which I still prefer for my SDXL content. I am also doing videos on SwarmUI as well but would suggest sticking with Fooocus for now. And many things are similar between SD and the online ones when it comes to prompting. But you have more control options with Fooocus than those.
I've personally liked using the dpmpp_3m_sde_gpu sampler; it's slightly faster with slightly better results. Also, uni_pc_bh2 is one of the newest on the list, and I've gotten some good results from that as well, although it doesn't work as well with outpainting.
Hello, the outputs folder that should be in Fooocus is created on the desktop of the main user on my computer. How can I fix this? In short, how can I set the path of the outputs folder?
You can try using things like different ethnicities. Also the checkpoint used can be a big factor. I know for example most of the realistic pony models do not have much variety.
@@KLEEBZTECH Super appreciate the tip. I was able to do that before I saw this. Don’t know why I didn’t think of that sooner 😂 Watching the Olympics helped spark that idea.
It will attempt to update, but you may run into issues when you do. I have a video covering one of the more common issues and how to solve it. You can also go to the GitHub page for help with any issues. th-cam.com/video/cxjnzTpV4cg/w-d-xo.html
Great video as usual. But I have a question: does using the enhance option correct a deformed hand? EDIT: I typed the question before I watched your video fully, but you answered it throughout the video: the hand doesn't get correctly formed if it is deformed, and all it does is improve the details. Thank you!
You may have missed my comment on that. It will probably not do much for hands since if the hand is deformed you will just get a better looking deformed hand or depending on how you set it up just another deformed hand.
@@KLEEBZTECH Yes when I typed the Q I was still around the 15th minute of the video, then I discovered later on that you mentioned that, therefore I edited my comment before your answer 😁. Thank you!
Information on updating here: github.com/lllyasviel/Fooocus/discussions/3293 and I do have my last video which covers one of the issues you may encounter.
Hey Rodney! I'm trying this new enhance feature, and I'm using Upscale or Variation with the three steps #1, #2, #3, but at some point Fooocus gets an error in the browser interface. In the cmd window it is still running, so I'm able to get the results. Do you know what is going on? Did it happen to you? Thanks for the video!!
I have not run into that. I am doing some more testing tomorrow for a more detailed video on that feature and will see if I can recreate the issue. Do you know at what step it is getting the error? It sounds similar to what can happen if you delete an image during the generation process that was just generated.
@@KLEEBZTECH I did, but for some reason it was not working: it only updated to V400 and would not update to V500. When I looked at the JSON file it showed V400, so I changed all the references from V400 to V500, and when I ran it, it updated. I had V300 or V310.
I have video here. It is a little old but nothing has really changed when it comes to downloading and installing. th-cam.com/video/j1WuQndmgFE/w-d-xo.html
Upscaling with 1.5x or 2x uses SD to add details, which can also change details. That's why I usually upscale before doing other things. And you can use things like FaceSwap when upscaling as well if you enable that in the debug menu.
Years ago there was a superb word processor called WordPerfect. It was brilliant, but the developers kept adding improvements until in the end it was just a mess. I am not loving this version. Thanks for posting though.
Why not? There really is nothing that makes it a mess, since not much is added to the interface unless you enable the enhance feature. As for the stuff in the inpainting area, I find it much improved, since some things no longer require digging into the debug menus. One thing to keep in mind is that if Fooocus does not get improvements, people will move on to other tools and Fooocus will be left behind.
I also encourage people to head over to the GitHub page and give feedback on new features. For almost everything, Mashb1t asks for everyone's opinion and input well in advance.
@@KLEEBZTECH Thanks for replying. I got into this just to make some images for my great-granddaughter, and she loved them. Now there are pages of styles and presets: good for professionals to have choice, but not so much fun for casual users. A good video would be how to edit the files so you do not have all these models and styles. Currently downloading Pony V6, which I did not want. Good luck.
@@paulmorris5166 You can hide all the advanced features. To use Pony you just use score_9 tags and it's fine. It's not that complicated. Watch a video on it.
I would point out that it is just one person working on all of this for free, and it is an open-source project, so expecting it to work like a normal paid software program might be expecting a little too much.
I can't imagine it would have any impact on using a LoRA. I have been using them a ton lately and have not noticed any difference. And I can't think of anything that was changed that would impact that.
@@KLEEBZTECH results are bad since the update (at least with my LoRA), and it was pretty good before so… and, of course, I use the same checkpoint model and settings 😔
Have you tried using the exact same seed to see if you can regenerate the same images? If you use the metadata and paste it into the prompt, you can try recreating the same image to see whether you get different results. That is what I often do to test when I think something has changed. As far as I know, nothing was changed that should impact that.
Looks like Fooocus is DEAD. I still use it for inpainting; Swarm just doesn't do what Fooocus can do right now. I would always add a 1920x1080 preset in Fooocus, and every few weeks it would disappear because a new version had been made, but now that preset has been there for so many months. Sad to see it gone. Also sad to see you are not posting like you used to (more often); I was really hoping for more Swarm tutorials.
I do agree about Swarm not being as good for inpainting. And I do apologize for the time away recently. Some real-world stuff has really impacted my ability to put content out, but fingers crossed that should be out of the way soon so I can get back to putting some videos out. I do have a couple in the works. Sadly, I have yet to find a good way of getting Swarm to inpaint very well.
@@KLEEBZTECH Hi Rodney, thanks for the reply, I appreciate it. I understand what you mean: YouTube is a hobby, nobody can put 100% effort into it, and there is that burn-out factor as well. Take your time, everyone will appreciate you when you're back. I do love watching your videos, you are a great teacher! Thank you.
Thumbnail made using this LoRA: civitai.com/models/585533/wow-bubbles-sdxl-style-lora?modelVersionId=653409
👋
Hi, you are part of a very small group of non-German-speaking tutorial people that I subscribe to, and this is for two reasons: you have a really good explanatory voice (especially understandable for me since I speak bad English). And secondly, I find your videos to be really well structured. I couldn't find a German tutorial channel that masters both of these aspects. I'm not saying they are bad, but learning is a very personal experience and they just didn't work for me. You do it and for that I owe you a big thank you from Germany, so here it is: Thank you very much 💙!
Thanks for the feedback! I also do try my best to get good subtitles with proper punctuation so it translates better if people use that feature. Although I don't know how much that makes a difference but I hope so.
A great Fooocus update and another great video. Thanks. Agreed. Having extensively tried A1111, Comfy and Invoke, I predominantly use Fooocus now.
Fooocus is the best SD for ease of use and usability. The AI understands simple terms.
Thanks for the info and explanation for the new version. None of the instructions for updating worked for me; I had the original Fooocus. However, I installed the Mashb1t fork of Fooocus you recommended a while back, and that worked. I redirected the config file to where I keep all the checkpoints, LoRAs, etc. So, all works well now. Thanks again. 🙂
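For anyone wanting to do the same redirection: Fooocus reads folder locations from a config.txt next to the launcher. A minimal sketch (the path_* key names are from the Fooocus README's config documentation; the drive paths here are placeholders you would replace with your own):

```json
{
  "path_checkpoints": "D:\\AI\\models\\checkpoints",
  "path_loras": "D:\\AI\\models\\loras",
  "path_outputs": "D:\\AI\\outputs"
}
```

This lets several UIs share one model folder instead of each keeping multi-gigabyte copies of the same checkpoints.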
Thanks, good informative video, playing around with the enhance function today
It was very informative, thank you!
You're very welcome!
Introduction and Update Overview - 00:00:00
New Default Models and Download Instructions - 00:00:34
Python Dependencies and Code Improvements - 00:01:06
Masterpiece Focus Changes - 00:01:43
Developer Debug Mode and Restart Sampler - 00:02:25
VAE Preset Specification - 00:03:05
Flag Changes: Enable Auto Describe Image - 00:03:47
Playground Preset Adjustments - 00:04:24
Pony Preset Overview - 00:05:00
Pony V6 Model and Preset Details - 00:05:36
Inpainting and Playground Preset Changes - 00:07:00
Default Main Engine Version in Presets - 00:08:24
Enhanced Checkbox and Inpainting Changes - 00:08:56
Segment Anything and Auto-Masking Features - 00:10:01
Detailed Use of Segment Anything Model - 00:11:17
Enhanced Tab and Usage - 00:14:36
Enhanced Feature: Automated Enhancements - 00:15:12
Detection and Inpainting Improvements - 00:18:25
Summary and Final Thoughts - 00:26:13
Wow...these look like great enhancements! Looking forward to trying them out.
Kleebz so glad to have you back. I thought somebody stole your channel.
Glad to be back! It was a really terrible morning to wake up to it. I guess in a way someone did steal it: YouTube. This is now the second time I have lost this channel and had to recover it, and I really wish they would change how they handle these things.
@@KLEEBZTECH It's crazy. I've been watching your videos and there's nothing there that would make anyone say, wow, that's really going to create problems. It's an educational channel about how to use an AI art program. Anyway, glad you're back, and hopefully YouTube will leave you alone.
Thank you, Great video as always
Amazing tutorial on 2.5.0. Thanks so much!
You're welcome!
Thank you for an excellent overview! These changes have been in high demand and long awaited, but I admit that receiving them all at once is a bit overwhelming. Thank you for bringing clarity to this revitalized interface.
I am excited by all the possibilities and conveniences in this new Fooocus. If Fooocus remained stagnant it would die a slow and lingering death. 😏
I am still learning them all and have already figured out more when it comes to the enhance feature, which I plan to cover in a more detailed video soon. Now to see if I can find some interesting ways of using some of these features.
All good on my side, thanks for your help. 🙂
Keeping `Inpaint Respective Field` at 0 will produce more details, but the output will blend in poorly with the rest of the image in terms of style, lighting, etc. which is evident in your examples. I usually get better results with values around 0.3 to 0.5 (your mileage may vary though).
@@jay_13875 Yeah, I should have gone into more detail on that and plan to when I do my full video on it. All the general rules for inpainting pretty much apply to enhance. I do wonder if it would make more sense for the default to be something closer to 0.3, because I can't imagine using the regular inpainting default, at least for fixing things like the face and eyes.
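A rough way to picture the Inpaint Respective Field tradeoff discussed above: at 0 the inpaint crop hugs the mask, so the model spends all its resolution on detail but sees almost no surrounding context; at 1 it sees the whole image. This is only an illustrative sketch of the concept, not Fooocus's actual implementation:

```python
def expand_crop(mask_bbox, image_size, respective_field):
    """Interpolate an inpaint crop between the tight mask bounding box
    (respective_field=0) and the full image (respective_field=1)."""
    x0, y0, x1, y1 = mask_bbox
    width, height = image_size
    # Each edge moves from the mask edge toward the image border as
    # respective_field grows, giving the model more surrounding context
    # (better blending of style and lighting) at the cost of the
    # resolution devoted to the masked detail.
    return (
        round(x0 * (1 - respective_field)),
        round(y0 * (1 - respective_field)),
        round(x1 + (width - x1) * respective_field),
        round(y1 + (height - y1) * respective_field),
    )

# A mid-range value like 0.3-0.5 keeps some zoom while still
# including context around the mask:
print(expand_crop((40, 40, 60, 60), (100, 100), 0.5))  # (20, 20, 80, 80)
```

That is why 0 gives the sharpest detail but the poorest blend, and values around 0.3 to 0.5 are a reasonable compromise.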
One big challenge I have had is trying to deal with multiple faces. I have figured out if you have two people and one is a man and one is a woman then you can specify that in the detection settings and you can do each one individually.
Very informative, many thanks
Very welcome
As always, great explanation and work 👍. Have you tried the enhance settings with FaceSwap and other ControlNets? When you get the final image from a faceswap, the quality gets worse and it comes out way smoother, so if we enhance it with the new feature, will it really improve it?
I will be having a video on the enhance feature very soon.
Thanks for the great video. I have tested it as well and find that with JuggernautXL it doesn't do a good job with the enhance option. Have you experienced the same problems? Especially if the prompts are longer, the face doesn't fit the character.
If using Improved Detail, that is potentially a possibility. I have some suggestions, but I'm not near a computer and about to finally get some sleep, so I'll try to remember to give you more tomorrow. Don't hesitate to check back in if I don't answer; I currently have an extremely busy schedule. I find enhance works well for certain things but not others. But I also don't generate images of people often, so I'm less likely to notice any issues when it comes to that.
@@KLEEBZTECH Thanks for replying. A few weeks ago I also noticed that Juggernaut struggles when using FaceSwap and Improved Detail inpainting. I fixed it then by generating the base images for my faceswap with a different realistic model. I just left it and moved on; now that this happens again with the enhance function, I get the feeling that Juggernaut struggles whenever it has to reproduce/repaint images that it produced itself. Maybe it's a VAE problem. Maybe I will send the Juggernaut team an email. But it would be great to find out if other people have the same problems. Anyway, thanks for your help and enjoy the weekend, mate 😊
@@lenny_Videos try adjusting the inpaint respective field to a value closer to 0.2 and see if it helps. That will allow it to use a little bit more of the surrounding area to get a better idea of what to generate. But it should still zoom in a decent amount.
@@KLEEBZTECH thanks mate, will try that
Thank you for all of your help. Now I have a question: is there an equivalent to the run.bat file in the Stability Matrix version of Fooocus that you know of?
Not exactly, but you can set the options in the settings for Fooocus in SM. I think it is a little gear icon, but I'm not near a computer to verify. That way you can use launch options.
Add the command line to the package itself. As @Kleebztech mentioned, there's a gear icon right before you click "Launch": you'll see a puzzle piece and also a gear icon. Click the gear and add the command for the auto prompt.
I have Fooocus installed and set up... how do I update?
How can I make inpainting faster? It's too slow in this update.
This update hasn't changed anything in terms of inpainting speed; it's the same as before. You can make inpainting faster using fewer steps or another performance setting, but keep in mind to set the inpaint engine to None when not using Quality or Speed, as results will be worse.
Great, great video. Thanks a lot!
Do you have any news on Fooocus? I hear Stability is now on 3.5 and was wondering if Fooocus would see an update...
Nothing I have heard.
Can you do a video where you go through the steps you would take to produce an image, as well as how you would improve it? :)
Will think about a good way of doing that. I have considered trying to do live streaming and that might be something I could do. But I know when I do that then the AI is going to make my life difficult like when I record and does not do what it should. lol.
Perfect informative video, thank you! I really liked the new masking interface and the 'enhance' feature. I used to think of Fooocus as a simplified, minimalist derivative of Stable Diffusion. However, as new features are added along with the interface improvements, my opinion is beginning to change. I have no complaints, but I am genuinely curious to see where this development will lead. My biggest problem is that Fooocus runs slowly on my Mac Studio M2 Max with a 30-core GPU and 64GB RAM. Stable Diffusion, on the other hand, offers a faster generation time compared to Fooocus. I know that both programs are optimised for NVIDIA GPUs, but it would be nice to see improvements on the Apple Mac side as well. :)
You say Stable Diffusion runs faster than Fooocus but just to be clear, Fooocus is also Stable Diffusion. These tools are just the interfaces to run it. I am curious which user interface you are using that is faster since I have not run into any that are that noticeably faster and often most are slower. But that also could have to do with it being a Mac but I don't know much about how they run on those so I am curious.
@@KLEEBZTECH The interface is the same for Mac and PC. There is a significant speed difference in rendering time (generate) when using Stable Diffusion compared to Fooocus. I open both in the Chrome browser. I still prefer Fooocus for its ease of use. The other advantage is that your Fooocus content is very educational.
@@kzmtsk My confusion is you mentioning Stable Diffusion as if it is different from Fooocus. Fooocus is Stable Diffusion; ComfyUI is also Stable Diffusion, as are A1111 and Swarm. I was wondering which one you are referring to when you say Stable Diffusion, because I am curious: I use most of them off and on and have not found any to be much different in terms of speed. Of course, I am using a PC with Nvidia hardware, so I will have a different experience. Also, have you looked over on the GitHub page to see if there are ways of getting faster generation on the Mac? I know it is not "officially supported", but there might be some tips that others have found that will help.
I am just looking to get more info for when others might run into issues since I do not have a Mac to test on.
Hi friend. When upscaling images, the face comes out very different. Is it possible to fix this with face detection in the 2.5 update?
Did this update break your Fooocus preset generator?
@@RamonGuthrie I don't believe so.
I just checked it over and I am pretty sure it should work without issue and it does have the newer options for presets like default vae. But if you run into any issues please let me know.
Love the videos and appreciate your time spent to educate!
I have a question maybe you would have an answer to point me in the right direction…
I use Windows (AMD) and I can't run the run.bat for anime or realistic, but the standard run.bat works fine. Is this a limitation of my AMD drivers, or do I need to do something specific to access those other .bat files?
I assume you followed the instructions for AMD on the GitHub page, since the standard run.bat is working. You would need to edit the other .bat files as well; see the instructions here: github.com/lllyasviel/Fooocus?tab=readme-ov-file#linux-amd-gpus
Of course you can always just switch to those presets after launching as well.
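For reference, the README's Windows AMD instructions have run.bat launch Fooocus with a --directml flag, and the preset launchers only differ by a --preset argument, so run_realistic.bat and run_anime.bat would need the same edit. A sketch of what the edited launch lines might look like (flag names taken from the Fooocus README; verify against your own install before relying on it):

```
.\python_embeded\python.exe -s Fooocus\entry_point.py --directml --preset realistic
pause
```

For run_anime.bat, the same line with --preset anime should apply.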
Hi @kleebztech, first of all, thanks a lot for these kinds of videos; they help a lot in understanding Fooocus much better. Just one question, and maybe I'm wrong, but within the new version I'm not able to find the checkbox to translate prompts. Could you help me?
Mashb1t's fork has that feature.
How do I deploy Fooocus on a Hugging Face Space?
Do you know any checkpoint or LoRA to create images of superheroes?
I don't know of any specifically but I would suggest searching on CivitAI since there are a ton of models there. Like this LoRA civitai.com/models/361809/saturday-morning-retro-superheroes
How do I train a model that I can use in Pony / Fooocus?
If looking to train a model you might want to consider someplace like CivitAI since they make it easy and you can easily earn buzz to pay for the training.
still waiting for more control net models, like open pose, for better consistent character generation
These sorts of things are not out of the question but I do recommend people be active on the Github discussion page since Mashb1t is often open to things if he can do it and is worth doing. github.com/lllyasviel/Fooocus/discussions/3264
Thank you
You're welcome
How do you stop the enhance feature from changing the person in the photo into a completely new person? Say you were just trying to enhance an old photo of a family member; obviously it's no good if it turns the enhanced photo into a new person. Is there a way to prevent that? Kind of like the free web version of Krea.AI's enhance feature does, with a simple slider somewhere?
Well, no matter what, it is going to change the face, but you can adjust the inpaint denoising strength in the inpaint settings and set it to a low value. A value of 1 completely regenerates that part of the image; lower values change it less. So maybe start at 0.2 or so and go from there.
I wonder if it is possible to upload a picture from my phone to the Fooocus image reference and apply styles to change the image. For example, I have a picture of my dog. Can I upload it to Fooocus to make a watercolor of it?
There are several different ways but this video might give you some ideas. Be warned it is a little more advanced so if you are not really familiar with Fooocus you may also want to watch some of my earlier videos. th-cam.com/video/9yDwJe5ddfM/w-d-xo.html
I'd like you to make a video on what the different models included in the dropdown menu of Fooocus are for. I'm afraid to try because they're 6 GB each (plus it seems to download loads of other stuff too). I made the mistake of trying SAI; it was downloading forever, and the results were so inconsequential and poor that I didn't even know what it did, what it is used for, or whether I even used it for its intended purpose.
I can't spare the time or SSD space to blow 6+ GB just to see what the other models do or are for.
That dropdown is the presets, and each one uses different models, but you can also get models elsewhere to use. SAI, for example, is the original SDXL model and is not that great, since there have been big improvements with newer models. I do have other videos that cover some of them. For example, I have a video on the Playground preset th-cam.com/video/BJGTlwuLRDI/w-d-xo.html and also a general one on checkpoints th-cam.com/video/GgEEb2K0j7s/w-d-xo.html . The anime one uses a specific model meant for anime. Pony will use the base Pony model, which I have a video on as well: th-cam.com/video/FebP-lpbZ8E/w-d-xo.html
How can you run Fooocus AI online and create nsfw content for AI models? Also, is there any way to create image to video?
What about the seed? Should I allow random when auto masking and enhancing?
Usually there is no need to worry about that.
I have a question. I notice that most generations only show half the person's body, or 3/4 at most. Can I have this program generate a full-body view?
I have a video that covers that sort of thing: th-cam.com/video/YODdFMlZz2Q/w-d-xo.html
Aspect ratio can help, so using a more vertical one will help, as can mentioning details such as types of pants or shoes. But I cover that and more in that video.
@@KLEEBZTECH Cool, thank you for the reply. I am new to SD. The reason I want this on my computer so badly is the freedom. I'm tired of dealing with Bing and Midjourney's oversensitive content filters.
Welcome to offline image generation! I do have lots of videos on using Fooocus which I still prefer for my SDXL content. I am also doing videos on SwarmUI as well but would suggest sticking with Fooocus for now. And many things are similar between SD and the online ones when it comes to prompting. But you have more control options with Fooocus than those.
I've personally liked using the dpmpp_3m_sde_gpu sampler; it's slightly faster with slightly better results. Also, as uni_pc_bh2 is one of the newest on the list, I've gotten some good results from that as well, although it doesn't work as well with outpainting.
Hello, the outputs folder that should be inside Fooocus is being created on the desktop of the main user on my computer. How can I fix this? In short, how can I set the path of the outputs folder?
There is a config.txt file that you can edit in the Fooocus folder, along with an example txt file with instructions.
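As a minimal sketch (assuming the standard config.txt keys; the drive and folder here are placeholders), redirecting just the outputs folder looks like this:

```json
{
    "path_outputs": "D:\\AI\\Fooocus_outputs"
}
```

Any key you leave out of config.txt keeps its default, so you only need to list the paths you want to change.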
@@KLEEBZTECH thanks
Hey there! Is there any reliable prompt or setting that can get me a broader variety of face generations for my human images?
You can try using things like different ethnicities. Also the checkpoint used can be a big factor. I know for example most of the realistic pony models do not have much variety.
@@KLEEBZTECH Super appreciate the tip. I was able to do that before I saw this. Don’t know why I didn’t think of that sooner 😂 Watching the Olympics helped spark that idea.
@@sat-success also you could use wild cards to change a few things. Even just giving a name can often change things.
Will the app update when you run it?
It will attempt to update but you may run into issues when you do. I have a video covering one of the more common issues and how to solve it. Also can go to the Github page for help if any issues. th-cam.com/video/cxjnzTpV4cg/w-d-xo.html
Thank you so much, this video worked!
You're welcome!
Great video as usual.
But I have a question though: does using the enhance option correct deformed hands?
EDIT:
I typed the question before I watched your video fully, but you answered it throughout the video: the hand doesn't get correctly formed if it is deformed, and all it does is improve the details. Thank you!
You may have missed my comment on that. It will probably not do much for hands, since if the hand is deformed you will just get a better-looking deformed hand, or, depending on how you set it up, just another deformed hand.
@@KLEEBZTECH Yes when I typed the Q I was still around the 15th minute of the video, then I discovered later on that you mentioned that, therefore I edited my comment before your answer 😁.
Thank you!
@@TomiTom1234 Yup I see the edit now. lol.
Hello friends. How can I update? I can't find any information.
Information on updating here: github.com/lllyasviel/Fooocus/discussions/3293 and I do have my last video which covers one of the issues you may encounter.
Hey Rodney! I'm trying this new enhance feature, using Upscale or Variation with the three steps #1, #2, #3, but at some point Fooocus shows an error in the browser interface. In the cmd window it is still running, so I'm able to get the results. Do you know what is going on? Did it happen to you? Thanks for the video!!
I have not run into that. I am doing some more testing tomorrow for a more detailed video on that feature and will see if I can recreate the issue. Do you know at what step it is getting the error? It sounds similar to what can happen if you delete an image during the generation process that was just generated.
I changed the anime bat to download the update, and what was downloaded is animaPencilXL_v310. Should it not have downloaded V500?
Rewatch 1:00 for instructions on getting the latest model.
Had to manually change the anime.json file so I could get it to update to V500
That will work as well but if you add what I show in the video that will tell it to always get the latest model.
@@KLEEBZTECH I did, but for some reason it was not working, as it only updated to V400 and would not update to V500. When I looked at the json file it was showing V400, so I changed all the references from V400 to V500, and when I ran it, it updated. I had V300 or V310 before.
Odd. Well at least you are good to go.
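For anyone else who hits this: the preset files are plain JSON in the presets folder, and the model a preset loads appears in more than one place, which is why every reference needs updating. A rough sketch of the relevant anime.json entries (the filenames are the ones mentioned in this thread, and the download URL is shortened to a placeholder):

```json
{
    "default_model": "animaPencilXL_v500.safetensors",
    "checkpoint_downloads": {
        "animaPencilXL_v500.safetensors": "https://huggingface.co/..."
    }
}
```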
If I don't have Fooocus on my PC, how do I directly download the latest version (2.5.0) for Windows?
I have a video here. It is a little old, but nothing has really changed when it comes to downloading and installing. th-cam.com/video/j1WuQndmgFE/w-d-xo.html
I look forward to this, because every time I upscale, faces lose likeness and skin loses almost all texture.
Upscaling with 1.5x or 2x uses SD to add details, which can also change details. That's why I usually upscale before doing other things. And you can use things like FaceSwap while upscaling as well, if you enable that in the debug menu.
Years ago there was a superb word processor package called Word Perfect, it was brilliant but the developers kept adding improvements until in the end it was just a mess. I am not loving this version. Thanks for posting though.
Why not? There really is nothing that makes it a mess, since not much was added to the interface unless you enable the enhance feature. As for the stuff in the inpainting area, I find it much improved, since some things no longer require digging into the debug menus. One thing to keep in mind is that if Fooocus does not get improvements, then people will move on to other tools and Fooocus will be left behind.
I also encourage people to head over to the Github page and give feedback on new features. For almost everything, Mashb1t asks for everyone's opinion and input well in advance.
@@KLEEBZTECH Thanks for replying. I got into this just to make some images for my great-granddaughter, and she loved them. Now there are pages of styles and presets: good for professionals to have a choice, but not so much fun for casual users. A good video would be how to edit the files so you do not have all these models and styles. Currently downloading Pony 6, which I did not want. Good luck.
@@paulmorris5166 You can hide all the advanced features. To use Pony you just use score_9 tags, and it's fine. It's not that complicated. Watch a video on it.
Fooocus is moving very quickly towards the 80:20 formula
Thanks a lot, everything works fine!!! 👍👍👍 You helped me a lot!!! 👍👍👍
4:25 "Do do"
Hi great friend, could you kindly make a video on face swap? It has completely changed. Thanks in advance.
Had to do a brute force reinstall from scratch to get it to install properly, but otherwise this version looks more stable than the last. 👍
ChatGPT provided two commands and I solved it in two seconds.
@@gamersgabangest3179 Good for you but proper software updates shouldn't require manual tweaking from the average end user.
I would point out it is just one person working on all of this for free and is an open source project so expecting it to work like a normal paid software program might be expecting a little too much.
How can I mask the background only? HEHE
You could mask the subject in the image and then invert the mask.
@@KLEEBZTECH It's not working for me. Do I have to click invert mask before generating, or after?
Before you generate you select invert mask. Then generate.
There is an Enhance function where, after you click on Enhance, you can load an image... but that does absolutely nothing. What is that for?
That is for doing the enhance feature with an image already generated.
This update destroyed my LoRA… 27 hours of training gone (yeah, I have a low-end GPU)… Now I get terrible results when I use it 😢
I can't imagine it would have any impact on using a LoRA. I have been using them a ton lately and have not noticed any difference. And I can't think of anything that was changed that would impact that.
@@KLEEBZTECH The results have been bad since the update (at least with my LoRA), and it was pretty good before, so… and, of course, I use the same checkpoint model and settings 😔
Have you tried using the exact same seed to see if you can regenerate the same images? If you take the metadata and paste it into the prompt, you can try recreating the same image to see whether you get different results. That is what I often do to test when I think something has changed. As far as I know, nothing was changed that should impact that.
@@KLEEBZTECH I’ll try 👍
Looks like Fooocus is DEAD. I still use it for inpainting; Swarm just doesn't do what Fooocus can do right now. I would always add a 1920x1080 preset in Fooocus, and every few weeks it would disappear because a new version had come out, but now that preset has been there for months. Sad to see development gone. Also sad to see you are not posting like you used to (more often); I was really hoping for more Swarm tutorials.
I do agree about Swarm not being as good for inpainting. And I do apologize for the time away recently. I have had some real-world stuff really impact my ability to put stuff out, but fingers crossed that should be out of the way soon, so I can get back to putting out some videos. I do have a couple in the works. Sadly, I have yet to find a good way of getting Swarm to inpaint very well.
@@KLEEBZTECH Hi Rodney, thanks for the reply, I appreciate it. I understand what you mean, TH-cam is a hobby and nobody can put 100% effort into it, and there is that burn-out factor as well. Take your time, everyone will appreciate you when you're back. I do love watching your videos, you are a great teacher ! Thank you.
First.
First to first, second to comment
This version is too slow at inpainting.
You said doodoo. I'll see myself out.
I know this is unrelated but give the Quran a read