Check out the first video here: th-cam.com/video/828wYIp2HAM/w-d-xo.html
What’s taken me days of experimentation to figure out, you explained in this video - plus more. Greatly appreciated.
Glad it made sense! 😁
explaining the offset lora example is great information as well @ 6:54 I had no idea what that thing was about until your video thank you
Thank you so much! I'm really impressed with FOOOCUS. I was a big fan of Mid, and I can see your tutorials, have had a major impact on my skills with stable diffusion.
Thanks man. Very nice tutorial. Comprehensive and to the point. I will be coming back for more tutorials on Fooocus.
🎯 Key Takeaways for quick navigation:
00:00 📝 *Introduction and Overview*
- Provides a detailed explanation and additional information about Fooocus.
- Emphasizes the need for deeper understanding beyond basic tutorials to avoid frustration.
- Encourages users to catch up on the basics of Fooocus from the previous video.
00:41 🎨 *Image Generation Settings*
- Explains the process of image generation by typing prompts in Fooocus.
- Focuses on settings like speeds, aspect ratios, and image numbers.
- Highlights the absence of certain ratios, since optimal resolutions work with multiples of 64.
01:50 🎭 *Styles in Fooocus*
- Introduces various styles available in Fooocus, including the unique "Fooocus V2" style.
- Discusses the styles as positive and negative prompts and provides external resources for visual examples.
- Encourages experimentation with mixing and matching styles for creative combinations.
03:00 🧠 *Understanding Models in Fooocus*
- Explores base models (checkpoints) like "Juggernaut" and "Dream Shaper" for image generation.
- Introduces SDXL Turbo, a newer, faster version for quicker image generation.
- Shows how to add new models and explains the significance of base models in Stable Diffusion.
04:37 🖼️ *Adding SDXL Models in Fooocus*
- Demonstrates the process of adding an SDXL model (Dynavision XL) to Fooocus.
- Emphasizes the importance of checking details and version compatibility.
- Discusses the historical use of refiners and how they became less essential over time.
06:17 🎨 *Using Loras in Fooocus*
- Introduces "Loras" as patches influencing the outcome of images in Fooocus.
- Discusses Loras that control detail levels and those designed around specific celebrities or characters.
- Explains applying Loras to modify image characteristics, with visual demonstrations.
08:12 🎚️ *Advanced Tab Settings in Fooocus*
- Explores advanced settings in the "Advanced" tab, including guidance scale and sharpness.
- Demonstrates how the guidance scale impacts color depth and how sharpness affects image details.
- Shows subtle changes in images based on adjustments to the advanced settings.
09:58 🔍 *Debugging and SDXL Turbo in Fooocus*
- Briefly discusses the "Debug" section and its role in Fooocus (full debugging details omitted).
- Introduces SDXL Turbo as a faster image generation option, providing settings for efficient use.
- Guides users on changing settings for SDXL Turbo to enhance performance.
11:12 📸 *Input Image Section in Fooocus*
- Acknowledges the "Input Image" section in Fooocus and hints at a future in-depth video.
- Provides an overview, leaving details for a more comprehensive explanation in future content.
- Wraps up the video, expressing hope that viewers found the information valuable.
Made with HARPA AI
Great explanation of what Loras and checkpoints are and how to use them! I was having trouble understanding these concepts, but now I'm able to generate better images.
3:40 is good info about why people still tend to use 1.5, beyond hardware requirements and simply getting different results.
The best SD tutorial I've seen
Thanks for clarity❤
great tutorial! thank you.
Glad you liked it! Hope it helped.
I am using Colab for image generation and i want to use Realistic Vision v6.B1 model as refiner can you tell me how to download that model directly in colab?
Wondering this too, thanks!
As well as the Lora models. Would really appreciate an answer! Thanks!
I use it locally, but you can try the answer in the discussion here: github.com/lllyasviel/Fooocus/discussions/1228
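For anyone wanting a concrete starting point, here is a minimal sketch of a Colab cell that pulls a checkpoint straight into the folder Fooocus reads. The helper name, the filename, and the `/content/Fooocus` path are my own assumptions based on the default Colab layout; copy the real download link from the model's page, since I can't supply it here.

```python
# Hedged sketch for a Colab cell: download a checkpoint into the folder
# Fooocus scans. The function name and filename are illustrative, not part
# of Fooocus itself.
import os
import urllib.request

def download_checkpoint(url, checkpoints_dir, filename):
    """Fetch a model file into the Fooocus checkpoints folder and return its path."""
    os.makedirs(checkpoints_dir, exist_ok=True)
    dest = os.path.join(checkpoints_dir, filename)
    urllib.request.urlretrieve(url, dest)
    return dest

# In the default Colab install, checkpoints live at
# /content/Fooocus/models/checkpoints. Paste the model's direct download
# link in place of MODEL_URL before running, e.g.:
# download_checkpoint(MODEL_URL, "/content/Fooocus/models/checkpoints",
#                     "realisticVisionV60B1.safetensors")
```

After the download finishes, press the refresh button next to the model dropdown (or restart Fooocus) so the new file is picked up.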
@@JumpIntoAI thank you it works fine but when I use it along with input image it runs out of memory.
@@JayPatel-hm6vb If you are using the free Colab on the Fooocus site, I think that's the limitation of that one. It has limited memory, and image prompt features use more.
Thank you great explanations!
Thank you!
You're welcome! And thanks!
Very informative Thanks ❤
I think the refiner in Fooocus doesn't add more time to generate the image because it's not creating the full image and then refining it in a second full pass. What it does is swap the model in the middle of the generation (in this case at 80%, as set on the slider), so the total number of steps is the same regardless of whether you use the refiner or not.
That said, it's probably better to use a model that doesn't need a refiner and save 6GB of disk space.
That option is useful even when not using a refiner, though: you can use it to flip to a different model mid-generation, creating the structure with one and then polishing the details with another... and you can use 1.5 models there! It's a way of using 1.5 models if you switch sooner (as explained in the text with the option), so that the main generation will be with the second 1.5 model.
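The step split described above can be sketched in a few lines. This is my own illustration of the arithmetic, not Fooocus code; the variable names and the 30-step total are assumptions for the example.

```python
# Illustration of the refiner switch: with the slider at 0.8 and 30 total
# steps, the base model runs the first 80% of the steps and the refiner
# takes over for the rest. The total step count (and so the generation
# time) is unchanged.
total_steps = 30
refiner_switch = 0.8  # the refiner switch slider value

base_steps = int(total_steps * refiner_switch)
refiner_steps = total_steps - base_steps

print(base_steps, refiner_steps)  # 24 steps on the base model, 6 on the refiner
```

Lowering the slider (e.g. to 0.4, as suggested for an SD 1.5 refiner) simply hands more of those same steps to the second model.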
So the LoRas for SD 1.5 won't work in Fooocus, even if I use SD 1.5 models as a refiner, correct? Should I only search for and use SDXL 1.0 based LoRas? Should I even bother with refiners? Sometimes I get weird results for eyes and sometimes hands and I haven't yet figured out, if it's my prompting or refiner and LoRa's fault...
No, an SD 1.5 Lora isn't compatible with an SDXL base model, even with an SD 1.5 refiner. It will still affect the image, but you will get poor results.
Refiners can be useful, just not necessary. A good example to try is the SD 1.5 Realistic Vision v5.1 model as a refiner (0.4 setting) with Juggernaut; it can give excellent results if you like the look of that model.
@@JumpIntoAI I have a Realistic Vision v6.0, should I download the previous version? I also use Juggernaut v8 as a base, should I use a v6?
@artyneon If Realistic Vision 6.0 works for you, keep using it. I downgraded because I was having issues/bad images. Keep using Juggernaut v8; they just updated Fooocus so that is the standard now. 👍
My Fooocus won't open 😢 I can only use it on Google Colab. How can I install the models?
Hello, how do I add an existing image to the refiner tool?
thank you for your patience explaining everything!
I find it difficult to upscale a base image without changing details. Is there a way to preserve 100% of the original details?
Check out Upscayl: www.upscayl.org/#download It's a free, open-source upscaler that's very easy to use.
Where is the updated Fooocus version? I'm trying to find a video that explains the ControlNet tab that should be next to the other tabs in your video.
If you are talking about Fooocus MRE that version is no longer updated, if you are talking about Ruined Fooocus, I haven't yet used that one.
really useful, thanks
I downloaded the same models and placed them in the checkpoints folder, but they still do not appear in the selection list.
Hard to say without knowing more. Are they in your fooocus\models\checkpoints folder? If so, make sure you didn't put them in another folder inside that one; sometimes it doesn't detect subfolders. Also, did you make any changes to the config.txt?
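A quick way to see what the folder actually contains is to list only the top-level model files, since files buried in subfolders are the usual culprit. This is a hypothetical helper of my own, not part of Fooocus; the folder path in the comment is the default install layout.

```python
# Hypothetical sanity-check helper: list the checkpoint files sitting
# directly in the folder (not in subfolders), since models placed in
# subfolders are sometimes not detected.
from pathlib import Path

def visible_checkpoints(checkpoints_dir):
    root = Path(checkpoints_dir)
    # Only files directly in the folder, with the usual model extensions.
    return sorted(p.name for p in root.iterdir()
                  if p.is_file() and p.suffix in {".safetensors", ".ckpt"})

# Example: point this at your own install, typically
# Fooocus/models/checkpoints, and compare against the dropdown:
# print(visible_checkpoints(r"Fooocus/models/checkpoints"))
```

If a model you downloaded is missing from this list, it is in the wrong place (or has an unexpected extension) rather than being ignored by the UI.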
Thanks for the 4-part tutorial on using Fooocus! One thing I haven't found yet is how to control the steps. Mine is always set to 60 steps; is it possible to reduce that?
Never mind, I found it at 10:52. Thank you!
Thank you.
Thank you very much, your videos are very good, and I have subscribed to your channel. Where do you run Fooocus? What hardware do you have? How long does it take to generate an image? I ask because I have an Nvidia M4000 GPU and it is very, very slow: 24 minutes to generate a high-quality 1408x704 image. Is that time normal?
I run Fooocus on my machine; I have a 4070 Ti and it takes about 12 seconds on average per image. The M4000, even with 8GB, is on the very low end because of its older architecture.
Thanks
I love your videos, man. They are very helpful to a newbie in the world of AI digital image creation, like myself. I think I understood everything you talk about in this video, but I don't understand the "Random" command and those "seeds." Would you care to elaborate?
The seed number is the starting point of the math. With it locked (Random unchecked), you will generate the same picture if you don't change any settings or your text prompt. With Random checked, you get a new seed number, and therefore a new image, every generation, even without changing any settings.
Sticking to a single seed number makes it easier to alter settings and get smaller changes in the image, such as trying to get the subject to smile without the whole image changing.
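A toy illustration of why a fixed seed reproduces results, with plain Python's `random` module standing in for the sampler's noise source (this is an analogy I'm adding, not Fooocus internals):

```python
# Toy analogy: a seeded generator always produces the same sequence, just
# as a fixed seed gives the diffusion sampler the same starting noise and
# hence the same image for identical settings.
import random

def noise(seed):
    rng = random.Random(seed)          # generator initialized from the seed
    return [rng.random() for _ in range(3)]

# Same seed -> identical "noise", so identical image with the same settings.
assert noise(12345) == noise(12345)
# Different seeds -> different starting noise, hence a different image.
assert noise(12345) != noise(54321)
```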
@@JumpIntoAI thank you very much for the explanation!
Awesome video!
Just to clarify, is it not possible to use SD 1.5 with Fooocus?
If not, I would really appreciate a tutorial on customizing Fooocus to make it possible to use SD 1.5 models.
Fooocus is built from the ground up for SDXL. As far as I can tell, the developer doesn't have plans for a 1.5 version. The only way to use 1.5 models is as a refiner, which can work quite well.
Please make a video on how to switch Fooocus to DeFooocus (the latest Fooocus version).
Intel Laptop Not Working Bro 😔
Sorry to hear that... This is a great entry-level program, but Stable Diffusion even at its minimum needs some power. If the laptop is using integrated graphics, then it's a no-go. Even with an Nvidia GPU, it needs at least 4GB of VRAM, but 6GB+ is best.
Super👌
Is your voice AI?
It's my voice 😊. But the older the video, the more the audio has been processed to clean up poor quality and background noise.