Do you use inpainting often? It's a great way to add details to your images and works very well with your own photos.
I feel at the moment with AI there are a few types: the cherry pickers (prompt, pick, done), the AI meme-ers (the more messed up the result, the happier and prouder they are), and then the silent ones who do inpaint and put a lot of time into the work but don't say it's AI for fear of being attacked.
One thing is for sure: this stuff is here to stay and only getting better.
And boy, do I love the open source communities for the interfaces, etc.
I do wonder about the backends of the UIs, the secret sauces: Easy Diffusion, A1111, Fooocus, Comfy, same prompt, same seed, same everything, and each gives different results.
So I find myself using all of them; some are better at one thing than another.
Often inpainting is a huge time waste; it's much faster to generate a batch of img2img while you do something else, use a manual traditional workflow with masking, and then give it one pass with low denoise to smooth out the manual masking. Otherwise, it's better to use ComfyUI with a detailer and segmentation.
But Fooocus is a very good tool compared to traditional inpainting.
Inpainting is the best way to make realistic AI-enhanced photos by modifying a real image or scene and making it yours.
Thanks for this tutorial. I hadn't tried inpainting yet; it looks simple and powerful. Fooocus is fantastic.
Yes very simple and powerful! You're welcome!
More Fooocus content please!!! I'm new to this and I definitely see Fooocus as a perfect beginner tool for this type of stuff! Love the videos!
Glad to hear it and yes of course more to come soon! It's the perfect platform for people just starting out or even experienced ones. 👍
Zoom = cursor over the image and press "S".
Good video, liked it. I want to see more videos from you on other sections of Fooocus.
You got it! Just posted one yesterday on consistent characters th-cam.com/video/weDcFFwSzIA/w-d-xo.htmlsi=1zEQhLefAUqFIRmR
I love the calming way you explain, +1 sub
I appreciate that! 😊 thanks for the support.
Great tutorial - Thanks from Germany!
Hello, I have a question: how can I replace, for example, a necklace not with a randomly generated necklace but with a photo of my own necklace that I want to use? Or I want to add "my" sunglasses taken from a product photo or similar?
Just an FYI: the best way to zoom in is to keep the mouse over the image, press "Shift" and use the scroll wheel. Also, keeping the mouse over the image and pressing "s" toggles a fit-to-current-window mode, and you can zoom within this too. Hope that helps :)
Ugh of course it works, silly me!!! hahaha! I'm not sure why when I was trying it, it wasn't working 😆 Thanks for verifying!
Although using InvokeAI and the various tools from Pinokio is really what's making my channel take off with my AI creations, I always love to have these options like what you're presenting. Thank you as always!!
You're welcome, my friend! Are you enjoying InvokeAI? I've been using Pinokio as my main launcher and I'm really impressed with it.
Thank you for your videos. Clear and detailed.
Amazing videos! So much value in a simple way :) What are your favourite checkpoints to use with SDXL?
Thanks! Hard to say really. I do like the latest Juggernaut v7 and Realism Engine, NightVisionXL has been nice to use, and for a more 2.5D-3D look Starlight and Real Cartoon XL are great... again, really hard to say as I'm often testing a lot and swapping between models. I also still like using SD1.5 models, especially for photorealism. How about you?
We have learned the features on this subject wonderfully. Thank you very much. Please continue the detailed training.
Glad it was helpful! Yes more to come soon!
I think another good trick is to first upscale the image and then do the detail inpainting; you will get a very nice high-resolution image with a lot of detail that way.
Really good tutorial...this helped me fix my images😀
Glad to hear it, my friend! Typically from half body to close-up you should get decent results, but for full body the AI still has trouble generating details in smaller areas like the face. Inpainting is a great way to fix those details. 👍
@@MonzonMedia thanks a lot!!! I'm looking forward to more exciting creation techniques.
@@MonzonMedia I’m really surprised by this feature and how well it performs compared to generative fill in photoshop
It's way better than Generative Fill at the moment; there isn't much you can control in PS, but I suspect they will catch up eventually. But since Fooocus and Stable Diffusion are open source, it's a no-brainer for those of us who can run it locally. 👍
@@MonzonMedia Adobe will start to charge double for the subscription if they get better than Midjourney 😅
Fantastic demo👍 Beats Adobe Firefly hands down!
I may be biased but I'd agree. 😁 SD is just much more versatile.
Thanks for the great tutorials! One thing: I can't seem to find the denoise settings under Advanced; I only get guidance and image sharpness settings?
You're welcome! You have to select developer debug mode to get the options. My bad, I had a clip that showed that part and I just realized now that it's missing from that part of the video....ugh! hahaha! Sorry about that.
So easy to follow, and fantastic results. 💯
Appreciate it bro! Thanks for stopping by 🙌🏼
Great video. In fact, it's the first that explains so clearly how to use inpaint. I would like to ask for a suggestion because I am very new to Fooocus. I created a simple scene of a cottage next to a lake. Fooocus generated what appears to be a person standing on the deck, and I tried to erase it, but another odd shape was created in its place. Perhaps I overlooked something obvious in your video, but I just want to delete/remove that shape from the scene. What keyword(s) should I use to erase the object? Thank you.
Glad it was helpful! You can put "person" in the negative prompt field and even remove the main prompt as well. Just use the standard inpaint mode and mask the area to remove. It might take a few generations, but that should work.
@@MonzonMedia Thank you for replying. It was interesting when you wrote that it may take a few generations to erase the object. Thank you for bringing that up. I had no idea this would be part of the solution.
@mysticgatekeeper It would be nice if there was a dedicated object eraser! But yeah, that's the nature of txt2img.
Would you mind telling me what prompt you used for this girl at 3:19?
Thank you so much for all the Fooocus tutorials, they're super straightforward and understandable for a total beginner like myself! In some of your other videos I've seen you giving names to the AI models (like lalisa tisson, katy dobrev) to have consistency in their looks. How do you do that? Sorry if you already explained it in another video and I missed it :D
You're welcome! Yes, if you give a name to your character, they will retain the same face. I use 1-2 random names. You can just use a random name generator or make up your own. I will do a video on this very soon.
Great, looking forward to it! Thank you@@MonzonMedia
how can I get a different face in inpaint?❤
Hello, thank you! How do you tell the AI that you want to improve the face toward a specific face?
I think far too many people see "AI Art" as "type in some text and it makes a picture", which is a crying shame because it is so much more powerful than that. I use it often in combination with my own photography to enhance, add, or even change things, the type of work that artists have been doing for quite some time in a much slower and more painful way with digital tools like Photoshop. Being able to just change my model's hair color or outfit in seconds is insanely powerful compared to TRADITIONAL art.
I'm glad you mentioned that, John! I do the same as well since I do freelance photography. I have a few videos to edit that will focus on this exact topic! You're 💯 right!
Is there a way to change the inpainter? I've downloaded Juggernaut v8 inpaint, but is there any way to use it in Fooocus?
I installed Fooocus on a Mac M1. For some reason I don't see the Inpaint Engine version, Denoising Strength, and the respective fields. How can I see those options?
I can't drag the image I generated because it just zooms in right away instead, DERP. Also, I don't see any "additional inpaint prompt" field in my version (which I just downloaded tonight).
Hey, great videos. One thing I'm not always quite sure of is which box to use for the text prompt. For example, when you added the turtle on the rock, could you have added it in the top text prompt?
Thank you for the great tutorial! But I have a problem I do not know how to solve.
When I try to inpaint in a real photo, let's say a family photo, and I want to inpaint Santa or Gandhi, it works fine, but the inpainted Gandhi does not have the same quality. Gandhi is much sharper and has better colors than Uncle Bob. That makes it look very unreal. How can I make the inpainting look like the rest of the picture?
Another thing is size. Sometimes the size of the inpainted person is off. It can be too big or too small. Is there a way to tell Fooocus the size I want for the person or object I inpaint?
That is exactly my problem too. Please give an in-depth explanation of how to inpaint smoothly into an existing photo. That would be great!
Great video, thank you! Amazing tool. I wonder if I need to have something extra installed (Stable Diffusion), or does this work as a standalone program?
This is a standalone stable diffusion platform. 👍🏼
Thank you so much for your video tutorials, especially this one. But I'm still struggling when I have to inpaint a character that partially overlaps another character and takes action. For instance, when I have a warrior in the picture and I just want to paint another warrior who is injuring the first one. Even if I put more details in the prompt, it seems it doesn't understand. Any advice please? Really appreciated... thank you very, very much.
Thanks. A bit of a different question: is there a way to naturally synthesize the character files I want into an existing image background file rather than from a text prompt?
thanks very much, very good tutorial 😊😉👍
You’re welcome! More to come! 👍🏼
Just press "s" to make the inpaint image full screen. Left top exclamation mark shows more hotkeys
Thanks! Yeah I figured it out after, for some reason it wasn’t working for me at first. Appreciate it👍🏼
Can you make a video on how to make images or drawings look realistic in Fooocus?
Sure I'll see what I can do.
I've been using Automatic1111 for a while now; what's the main difference with Fooocus? I feel like I could do this exact same thing with A1111 inpainting, so I'm curious: is this more precise/better? Or is Fooocus basically a more beginner-friendly version that does the same stuff?
Fooocus is definitely more beginner-friendly and very simple to use, whereas A1111 is more flexible and versatile and typically has access to more extensions than any other SD platform. It's great to learn if you are willing to take on the learning curve.
I've read that some people use A1111 to generate the base images then do the inpainting exclusively on Fooocus so they don't have to bother with various Controlnet models or the right checkpoints. Are they right? Is that a good workflow to adopt?
I notice pixels beyond the inpaint area get changed a bit, so the brush edge is soft? Any way to make it inpaint strictly on the masked area?
In the Advanced tab check "developer debug mode"; within it there is a tab for inpaint where you'll see an option for "mask erode or dilate". You can adjust this to get a finer mask. Do some experiments and only change the value one number at a time as it's really sensitive.
How about making illusion images like hidden text and logos? Could you please see if it's possible in Fooocus?
oooohhhh that would be cool to cover! Thanks for the suggestion!
How can I generate a scene where the person from an input image (a selfie, so close to the camera) is generated farther away, like... standing in a field?
If I just put the photo into Input Image and prompt something explicitly saying the input image should be used... it is not used in any respect at all. Isn't this what Input Image is there for?
Great recap! What would you say is the biggest difference between inpainting with the main prompt input vs. inpainting and "modifying" or adding content, when the main prompt can do the same, correct?
I use inpainting as a touch-up or detailing tool. Sometimes there are specific areas you want to work on that the AI wouldn't focus on. It also comes in handy for fixing details, as shown in the video.
🙏 Sir, thank you for the awesome videos. I request you create a video on creating consistent characters with different poses in Fooocus. 🙏🙏
Yes, this is coming soon, but I will tell you 100% consistency isn't possible for Stable Diffusion in general. There are workarounds to get you close; then you would need to do some post-production.
Of course. I will be waiting for it eagerly. Thank you.@@MonzonMedia
What is the difference between "stop at" and "weight"?
Thanks for the great tutorial. I have been playing a lot with inpainting and I struggle with removing stuff, like for example removing the turtle from the rock. Got any tips for removing things?
You're very welcome! Great question. If we use that specific example, I would just prompt for "rock in the water", mask the turtle and a part of the rock, and select "modify content" for the inpaint method. For good measure I'd put "turtle" in the negative prompt as well, but it's not really needed. Basically, you still want to describe the scene without the object you are removing.
I used inpaint around the subject but I'm getting a white circle around it. How do I fix that?
Great video. Does a direction instruction work? I've never had luck with that. I wish AI had a layer to understand "man left, woman right" etc., but so far it's hit and miss.
Yeah, it works, but it tends to move the head as well; a simple prompt would be "front view, man looking left".
To change the background: how do you inpaint when the background is completely white or transparent?
very helpful videos, keep it up!
Glad the vids are helpful! More to come soon!
That's amazing!
Have fun and enjoy!
Thanks for this awesome video. How do you remove an object from an image, please?
You're welcome! Usually inpaint can work: just remove the prompt, inpaint, and go. It might take a couple of tries though. th-cam.com/video/Ky6B8oRStMU/w-d-xo.htmlsi=I_DpHpZV1SB5huzL
@@MonzonMedia Thank you sir.
I've found it hard to improve body segments in full-body pictures. For example, an AI model in various yoga poses often results in ugly or mangled-looking arms, legs, and feet. A PyraCanny reference helps to some degree, but it's harder to hide if the model is not wearing gloves or footwear to cover up my noobness. Any tips when trying to draw better anatomy of hands and feet?
Yeah, those can be tricky. You can typically fix those things with inpainting or, as you mentioned, using some ControlNet functions. There is another forked version of Fooocus with more ControlNet controls that I'll be introducing soon that can help as well. You can also upscale the image and bring it back in to do some touch-ups. Lots of workarounds can be done.
@@MonzonMedia wait for more control )
Is it possible to use Fooocus on the web without installation or Colab? It would be much easier to directly navigate to a web version instead of launching a Colab notebook or a local installation.
Not possible; it's a local platform, or it has to be run on a notebook or cloud service.
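For reference, here is a minimal sketch of the kind of Colab notebook cell that can launch Fooocus with a shareable Gradio link, loosely based on the notebook linked from the Fooocus GitHub README (the repo URL, the pygit2 pin, and the --share flag are assumptions taken from that README and may change over time):

```python
# Minimal Colab-style cell (a sketch, not an official notebook): clone Fooocus and launch it
# with a public Gradio link. Assumes the default repo URL and the --share flag from the README.
!pip install pygit2==1.12.2
%cd /content
!git clone https://github.com/lllyasviel/Fooocus.git
%cd /content/Fooocus
!python entry_with_update.py --share
```

When the cell finishes starting up, the Gradio share URL printed in the output is what you open in the browser; the session still lives on the Colab runtime, so it ends when the runtime disconnects.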
When I inpaint the face, it always makes the face look at the camera, even putting "look at camera" or "look at viewer" in the negative prompts doesn't work. Is there a way to fix this?
You can prompt "looking to the side" or "looking away". You can also decrease the denoising strength as shown at 2:21; this will keep it more like the original.
Maybe a stupid question, but if the algorithm can fix the always-wonky faces and eyes, why doesn't the program always do that automatically? So, after generating an image, check and fix the legs, fingers, and face, and especially the eyes?
I wish they'd add the ability to install different models and LoRAs just from a link, at least in the Colab.
Yeah, I think they could make it easier for people. I don't use the Colab version though.
Anyone know a prompt for getting a photo of a character's back and full body shot?
Thanks a lot ❤
You're welcome 😊
The annoying thing about inpainting to me was that when you wanted to use 'highly detailed face' to add skin textures etc., and you were using a source image of a recognizable person (e.g. yourself), it would always subtly alter the features enough to make you look like a different person. Your suggestion of altering the 'Denoising strength' makes that much less noticeable. Thanks.
Thanks
Hey mate, great tutorials! Do you know how to use inpaint so it maintains the same face and eye color, just fixing the "inconsistencies"?
So for the face, you want to name your character with 1-2 fictitious names; for eye color, you can put it in your prompt. You don't need inpaint to do that. Character consistency is a big topic and unfortunately there isn't a way to do it 100% of the time at the moment; you can get close with LoRA training, but not everyone can do that. I will be doing a video soon on this topic that can get you 70-80% of the way there, then you would have to do some post-production.
@@MonzonMedia I need that video on naming my character; it has been impossible for me to change their facial expression with inpaint without changing their face completely.
If you hold SHIFT and scroll the mouse wheel in the image box, that will zoom too.
How do I change the necklace or earrings?
You could just use inpainting.
@@MonzonMedia I tried, but I want to use predefined items and I'm not able to get the best results.
Do you have any way to unblur the background?
Put "blurry background, bokeh, shallow depth of field" in the negative prompt, and in the main prompt it's often good to reference something in the background. "Wide" or "deep depth of field" can work too. As an example: "portrait photo of (subject or person), nature, mountain background, deep depth of field". Negative prompt: "blurry background, bokeh, shallow depth of field".
thank you sir
Most welcome!
How can I add a logo to my image? For example, adding a logo to a shirt?
You're probably better off using a photo editor like Photoshop, Photopea, etc. to add your logo. Technically you can do it with AI, but the nature of Stable Diffusion is random and tricky to control for things like that. Doing it in post with an editor would be much easier.
Dude's playing extra safe with those negative prompts.
😬 Don't want to get demonetized by YT! 😂
How do I use unlimited time in Fooocus?
Can you access your previously generated images in this software? If so, please help.
In the main Fooocus folder there is an "outputs" folder where all your images are stored.
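If it helps, here's a tiny sketch for browsing your most recent generations from a script; the path and the .png extension are assumptions based on a default install, so adjust them to your own setup:

```python
# Quick sketch to list recent Fooocus generations from the default "outputs" folder.
# The folder location and the .png extension are assumed from a default install.
from pathlib import Path

outputs = Path("Fooocus/outputs")  # adjust to wherever your Fooocus folder lives
images = sorted(outputs.rglob("*.png"), key=lambda p: p.stat().st_mtime, reverse=True)
for img in images[:10]:  # ten newest images, newest first
    print(img)
```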
Not available for the iMac
It's available for M1/M2 Macs only at the moment, which I mention in the installation video. This is very common for most Stable Diffusion platforms.
thanks
You're welcome!
I'm facing errors in face swapping 🙂
What’s the error?
@@MonzonMedia The runtime disconnects every time while using face swap 🐱
I'd like to see results as good as these at Playground... 😉
Fair enough 👍 We're already working on improvements; we wanted to ship the new model first since the previous one wasn't that good.
🙏
It's a pity that it doesn't work on AMD :(
It does, but it's currently in beta, and from what I hear performance isn't that great and installation is a pain.
Is it free?
Yes, if you meet the system requirements, since it runs locally. The link is in the description.
The topic of inpainting has not been covered at all, especially when it comes to smaller details - the perspective of the generated objects is broken!
Why does a camera sometimes appear out of nowhere? 😂
Yeah, if you use camera specs in your prompt it can happen from time to time with certain models. 😊
I don't think I ever use camera specs in my prompt, but thanks for the answer. I speak Spanish. Or did you mean the Fooocus Photograph style? (I left that checked, along with Fooocus V2 and Fooocus Negative, in my last requests) @@MonzonMedia
oh interesting....hard to know without seeing your settings, prompts etc.
I'm beginning to believe this AI does not know what cyberpunk is, or bionic eyes, cyberware, contact lenses, iris! It's giving me nothing! Nada! :( Oh well.
It could be the model you are using. Sometimes you have to be creative with the prompt. Do you have an example of what you're trying to do?
Yeah, it might be the model then, because if I input "beautiful woman, cyberpunk style" I get what I want; it's just that the macro detail fails, which you show in this video. @@MonzonMedia
It's nice, but it's really a resource hog lol
Indeed but that's true for any SD platform. GPU VRAM and system RAM hungry.