#### Links from the Video ####
huggingface.co/yesyeahvh/bad-hands-5/tree/main
huggingface.co/datasets/Nerfgun3/bad_prompt/tree/main
huggingface.co/nick-x-hacker/bad-artist/tree/main
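If you would rather script the downloads than click through the pages above, here is a minimal sketch using the huggingface_hub library. The exact filenames inside each repo and the embeddings folder path are assumptions; check the "Files" tab of each link and adjust before running.

```python
# Hypothetical download helper: pulls the linked embeddings into A1111's
# embeddings folder. Filenames and the folder path are assumptions; verify them
# against the "Files" tab of each Hugging Face page before running.
from huggingface_hub import hf_hub_download

EMBEDDINGS_DIR = "stable-diffusion-webui/embeddings"  # adjust to your install

downloads = [
    # (repo_id, repo_type, assumed filename)
    ("yesyeahvh/bad-hands-5", "model", "bad-hands-5.pt"),
    ("Nerfgun3/bad_prompt", "dataset", "bad_prompt_version2.pt"),
    ("nick-x-hacker/bad-artist", "model", "bad-artist.pt"),
]

for repo_id, repo_type, filename in downloads:
    path = hf_hub_download(
        repo_id=repo_id,
        repo_type=repo_type,
        filename=filename,
        local_dir=EMBEDDINGS_DIR,
    )
    print("saved", path)
```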
Thanks as always for the URLs :D
Hey there @olivio Sarikas, I wanted to know: is that an extension you use to get stuff from your clipboard onto your img2img canvas at 4:20?
@@Mandraw2012 that's an Opera GX thing
Are there any embeddings for bad eyes? I know there's the face restoration option, but that usually makes the images photorealistic and sometimes it doesn't work very well for artsy stuff. I don't want to be inpainting eyes, considering I'm working with batch img2img
👋
Quick little tip: Instead of copying and pasting or memorizing the names of the negative embeddings, just click the "Show/hide extra networks" button in the middle under the Generate button. There you can see all of your embeddings. Just click once in the negative prompt field and then simply select which negative embedding you would like to use.
Thanks for the tip!
That's nice, but it does not add the pointy brackets. So I wonder: does it need the brackets if it doesn't add them itself?
@@S4SA93 I think it is all taken care of - here is the output when I did a run:
"Textual inversion embeddings loaded(4): bad-artist-anime, bad-ar..." No braces, just a comma between each; I had other negative prompt text in there with it.
@@nickkatsivelos6613 Yeah, it seems to work without the brackets, but then I'm wondering why he adds them.
What fork are you running? Because that's not on auto1111… I see it on the vladmandic fork.
Very nice Photoshop process. I realized that working artistically with Photoshop can save a lot of trouble; for example, just brush out an extra finger instead of inpainting 20x and hoping. But the sharpening trick is really a game changer!
You can also use the edge detection filter in Photoshop > invert the resulting image (Ctrl+I) > and use that image as a mask on the sharpened image to avoid oversharpening artifacts like the ones shown in this video.
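For anyone who would rather script that trick than do it in Photoshop, here is a rough Pillow sketch of the same idea; file names and filter settings are placeholders, not the video's exact values.

```python
# Rough sketch of the edge-masked sharpening trick from the comment above (Pillow).
# File names and filter settings are placeholders.
from PIL import Image, ImageFilter, ImageOps

orig = Image.open("upscaled.png").convert("RGB")
sharp = orig.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=0))

# Edge map of the original, inverted: flat areas come out bright (take the
# sharpened pixel) while hard edges come out dark (keep the original pixel),
# which suppresses halos along strong edges.
mask = ImageOps.invert(orig.convert("L").filter(ImageFilter.FIND_EDGES))

Image.composite(sharp, orig, mask).save("sharpened_masked.png")
```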
Heya, I used your cocktail (minus the anime one) and it's great! However, I also tested adding the popular "easynegative" embed to see what would happen... after comparing dozens of outputs with/without it, I found that using it at 0.5 weight improved images even further. Note that I was testing on realistic images and omitted the anime neg embed you showed.
There are other embeddings like ng_deepnegative_v1_75t, bad-image-v2-39000, bad-picture-chill-75v, verybadimagenegative_v1.3, and Unspeakable-Horrors-64v that work with many models too!
One of my favourite tricks is to use LoRAs with negative weights. You can get some fun effects with the right LoRA
That's an excellent idea, I will try that soon... =]
Delightful result! 👍 After sharpening the skin appears better and there is more detail throughout the image.
I noticed that the GFPGAN visibility / CodeFormer visibility settings help a lot when generating any persona. In the end, it all depends on the models. Thanks for the link to the text hints.
Nice tip. It actually makes sense that a sharper image would produce finer details when re-upscaling. For the opposite reason, I would be careful with upscaling after those blurring touch-ups in the editor and leave it as a last step. Any manual blurring or smearing in my experience has a high chance of being interpreted as part of the background, unless a higher denoise is set, but that mangles everything at that point. Go back and forth long enough and the color-shifting monster will get ya. I have not found a real solution for that issue: the colors slowly shift, a really dark blue will slowly drift to purple and the blacks go up in gamma. I tried the option in the settings and the cutoff plugin, but nothing has really worked so far. It would be so cool to just paint something in manually or smear off an extra finger in Photoshop, send it to img2img for a beauty pass, go back to Photoshop, work some more... but the colors move around too fast for that workflow. Is there a ControlNet just for the tones and hue? That would be massive!
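One possible stopgap for the drift (not from the video, just a sketch of a standard colour-transfer trick) is to snap each img2img result back to the previous version's per-channel colour statistics so the shift never accumulates:

```python
# Sketch of a simple colour-drift fix: match each round-trip result back to the
# reference image's per-channel mean and standard deviation (Reinhard-style
# transfer in RGB). Not from the video; file names are placeholders.
import numpy as np
from PIL import Image

def match_color_stats(result_path: str, reference_path: str, out_path: str) -> None:
    res = np.asarray(Image.open(result_path).convert("RGB"), dtype=np.float32)
    ref = np.asarray(Image.open(reference_path).convert("RGB"), dtype=np.float32)

    # Per-channel statistics over the whole image.
    res_mean, res_std = res.mean(axis=(0, 1)), res.std(axis=(0, 1)) + 1e-6
    ref_mean, ref_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))

    matched = (res - res_mean) / res_std * ref_std + ref_mean
    Image.fromarray(np.clip(matched, 0, 255).astype(np.uint8)).save(out_path)

match_color_stats("img2img_pass.png", "previous_version.png", "img2img_pass_fixed.png")
```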
uber cool info as always. highly appreciated, Olivio!
Be careful with embeddings: they are normally trained on specific models, and when those models are updated but the embeddings are not, you will get a bit of distortion. As the models progress while the embedding stays the same, that distortion becomes more and more prevalent. To further complicate things, the embeddings will affect your other networks like LoRA and LyCORIS, which, if trained on some other model, can drastically alter the results in a negative way. Not to mention things like Clip Skip and CFG, which will also greatly alter the results of the embeddings.
Great info as always. Also, you have a really nice looking virtual home. 😉
In Stable Diffusion it is very helpful to use negative prompts. Interacting with the AI, I was amazed at how similar it is to human thinking, and come to think of it, we were programmed the same way, including with negative prompts such as the Ten Commandments from the Bible. 😀
Thank you Olivio for being so Consistent 🙏🏽🙏🏽👑💪🏾
I appreciate the video but I feel like something is missing after 4:40.
After sharpening the upscaled image and bringing it back into img2img, what did you do with it? Did you upscale again at an even higher res (2048x3072) for more details? Did you run Generate at the same resolution just hoping more details would be added? Or are you just suggesting this workflow before going into inpainting to tweak specific areas?
No, I rendered it with the same settings again, but with the sharpened input image
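For anyone scripting this step outside the A1111 UI, the same "re-render the sharpened image" idea looks roughly like this with the diffusers library; the model id, prompts, strength and embedding path are illustrative assumptions, not the video's exact settings.

```python
# Illustrative diffusers version of re-rendering the sharpened image in img2img.
# Model id, prompts, strength and embedding path are assumptions, not the
# settings used in the video.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a negative textual-inversion embedding so its token can be used below.
pipe.load_textual_inversion("embeddings/bad-hands-5.pt", token="bad-hands-5")

init = Image.open("upscaled_sharpened.png").convert("RGB")
result = pipe(
    prompt="portrait of a woman in a hooded cloak, highly detailed",
    negative_prompt="bad-hands-5, blurry, lowres",
    image=init,
    strength=0.35,        # low denoising keeps the composition, only adds detail
    guidance_scale=7.0,
).images[0]
result.save("rerendered.png")
```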
@@OlivioSarikas Gotcha. Thank you and thanks for all your videos. They are very helpful.
Thanks So Much I Learn So Much From Your Videos! :)
Thanks a lot Olivio.
Something I thought was strange when testing out this process of negative prompts: if you have TI embeds like "<embed>", having a comma in between each negative drastically changes the output, i.e. "<embed>, <embed>" as opposed to "<embed><embed>". Have you ever dealt with this? Do you know why it is happening? Also, changing the position of the negatives was affecting the output: using only <> around each negative TI and no commas in between, but changing the order of, say, 5 neg TIs.
I would really like to see a video on this type of testing; what is the rhyme + reason?
Embeds use the same syntax as the normal prompt, not the LoRA syntax
Sorry if it was asked already, but what is the plugin or whatever that enables choosing the VAE / clip skip at the top of the main page in the UI?
It's in the Automatic1111 settings: Settings > User Interface > Quicksettings list. Change it to: sd_model_checkpoint, sd_vae, CLIP_stop_at_last_layers
@@treblor oh, thank you very much
When I add the negative embeddings from the extra networks button, it doesn't use the angle brackets, but for LoRA it does. Do you need the angle brackets for the negative embeddings?
no you don't need them
How is it better than SD Upscale? SD Upscale seems simpler and faster.
SD Upscale just upscales the image. Img2img renders a new image with a lot more detail than the original had
@@OlivioSarikas SD Upscale also applies a few negative prompts during its img2img passes to fix a bunch of things that would otherwise cause the upscaler to make the bigger image uglier instead of more enhanced.
Negative Embeddings are just regular Embeddings but trained on the worst results instead of the best quality.
Are the < > mandatory in the neg prompt?
Not for embeddings; those brackets are for LoRA. Using just the name of the embedding works fine
Really? I didn't know that. Thank you
@@OlivioSarikas Also, you can use standard parentheses for those negative embeddings, i.e. (bad-artist:0.8). Don't even need to put "by bad-artist" or anything, just the negative embed is fine. :)
You can use Ctrl+C -> Ctrl+V to copy-paste into A1111 from anywhere :)
Aren't the brackets and weight only for Lora and LoCon?
How does your upscale method compare to Topaz gigapixel?
Why don't I have an embeddings folder? :(
Thanks for making this tutorial! I’ve been trying to figure out how to train and get this idea working. So it’s basically just training on images you don’t want and putting that training into a negative embedding? Do these people usually train on class images that generate messed-up faces, like “person”, “woman”, etc.? And then use a different class for the negative training afterwards?
I always have these 4 and most of the time they give amazing results. However, is there a reason behind the use of pointy brackets instead of parentheses? 🤔
I was wondering the same, so I simply tested both ways. I got consistently better outputs with the pointy brackets as shown in the vid
you are a hero i love u
Why do images of human characters created by AI give results with two heads and more fingers than there should be? And sometimes those fingers look like an alien creature's tentacles/hands. Is the neural technology built upon aliens embedded into a human interface?
Useful, but for img2img upscaling I first need more VRAM. With Extras I can go to an insane resolution, but maybe that works there too? 🤔 Need to test that. And I need to test that negative prompt stuff more.
@Olivio, how could you use an actual photograph and render it using AI with whatever prompt while maintaining the face? I.e. turning an avatar or image of your face into so many different renders. How could that be done?
Check my video on Lora Training: th-cam.com/video/9MT1n97ITaE/w-d-xo.html
Very nice, but I would recommend trying a different sharpening method than unsharp masking. Haven't tried it yet, but I would bet that using a high-pass filter would not give you the artifacts along the rim of the cloak.
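High-pass sharpening is easy to test outside Photoshop as well; here is a rough numpy/Pillow sketch of the idea (radius and strength are just starting values, and this only approximates Photoshop's High Pass + Overlay/Linear Light layer workflow).

```python
# Rough high-pass sharpening sketch: subtracting a blurred copy isolates the fine
# detail, and adding that detail back boosts edges, roughly approximating
# Photoshop's High Pass filter combined with an Overlay/Linear Light blend.
# Radius and strength are just starting values.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("upscaled.png").convert("RGB")
blurred = img.filter(ImageFilter.GaussianBlur(radius=2))

a = np.asarray(img, dtype=np.float32)
b = np.asarray(blurred, dtype=np.float32)

strength = 1.0
sharpened = np.clip(a + strength * (a - b), 0, 255).astype(np.uint8)
Image.fromarray(sharpened).save("highpass_sharpened.png")
```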
Hello colleague, how do you keep the characteristics of the character's face and just change the clothes, among other things?
Whenever I use negative embeddings, this error always shows up:
"runtimeerror: expected scalar type half but found float"
is it not better to just use highres fix from the get go?
So how do you actually know what negs are in the neg embedding? is there a way to see what negs are actually used?
The embeddings folder does not exist. Should I create one, or did I install something wrong?
Unsharp Mask with 1 1 0 does nothing to my picture in Photoshop, what am I missing?
Why don't I have a Restore Faces button?
Yeah, but all these models seem to be focused on faces and people. How do you get Midjourney-like doodles/cartoons/food etc.?
thx
Alright, but how would one go about training their own negative embeddings?
Basically like a normal embedding, but with the stuff you don't want to have
For things like EasyNegative, you can just type that in and be able to improve your images right away. So are they only tagging their training images with EasyNegative? Are they tagging everything like usual?
Usually when someone trains something, like a character, if they didn't want their hair style to change, they would only tag the things in the image they want changed. Like if their eyes change color, they'd tag the eye color, but they wouldn't tag the hair.
So, for a basic example, if you wanted to make a neg embed to make eyes with too many eye highlights never happen again, you would only tag the eyes, right? Or is this incorrect? That's all that holds me back.
@@OlivioSarikas
Should I download these embeddings from Hugging Face (bad-artist etc.), or do they also work if I just use them in the negative prompt without downloading?
You need to download the embeddings because when you type them in the negative prompt they will be replaced by their values. You do not know their values so you must download them.
(Sorry if my English is bad, I am learning. Hope you understand my comment ^^)
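To see what "their values" means in practice: the downloaded .pt file literally contains the learned token vectors that stand in for the trigger word. A small inspection sketch follows; the filename is just an example, and key layout varies between trainers, so it simply prints whatever tensors it finds.

```python
# Peek inside a textual-inversion embedding file: it holds the learned vectors
# that replace the trigger word in the prompt. Key layout differs between
# trainers, so this just walks the file and prints any tensor shapes it finds.
import torch

data = torch.load("embeddings/bad_prompt_version2.pt", map_location="cpu")

def print_tensors(obj, prefix=""):
    if torch.is_tensor(obj):
        print(prefix or "<root>", tuple(obj.shape))
    elif isinstance(obj, dict):
        for key, value in obj.items():
            print_tensors(value, f"{prefix}.{key}" if prefix else str(key))

print_tensors(data)
```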
Why < > (the greater-than and less-than signs)?
I don't see any improvement in the negative embeddings example. The two-neg-embeddings one had 7 fingers, and the all-negatives one has some extra leaves, but that's it.
I have an RTX 3080 Ti and can't go bigger than 1024 without getting a CUDA out of memory error. What card do people use to go up to 1500? I help myself with Ultimate Upscaler, but most of the time I see the checkerboard pattern. Is there a trick?
Try changing the line in your webui-user.bat to this: set COMMANDLINE_ARGS=--precision full --no-half --medvram
can also try: set COMMANDLINE_ARGS= --medvram --upcast-sampling
I have a 3080 10gb and I can go higher than 1024.
@@treblor thanks!
So it's basically a high-pass filter with an overlay
What is the system requirements for running stable diffusion?
At minimum an RTX 3060 12GB and above
More VRAM = more stable, and more features to work with
I run it locally on a 1060 6GB. It's slow, but in theory any card with 4GB of VRAM can do it. So the minimum is lower than that, haha.
I could run it locally with a GTX 980, but I recently upgraded to a 3060 Ti, which is much faster. The 980 worked though!
I'm using A1111 with an RTX 2080S, 8GB, and it's running very well (with NVIDIA CUDA & the --xformers option)
Successfully using it on a GTX 1650 4GB card. It can generate up to 1024px, but slowly (1 to 3 minutes per image). "Extras" upscaling takes around the same time, but img2img upscaling to 8K can take an hour with all the steps involved.
i fell in love with her
Another one that is used a lot is EasyNegative
do all stable diffusion users have a habit of counting fingers on ANY images, or is it just me?
The less work you have to put into creating an image, the more you tend to be a perfectionist. :D
I really want to learn photography. I have a phone; I don't have any other device, no laptop or computer. So how can I use AI tools? The free ones.
Nice videos but maybe you need to add more steps to gain more details.
With anime girls I always get weird eyes, no matter what I write in the negative prompt
How do you use it in the negative prompt? Should we use it like <embedding-name>?
What is A1111? How do I install it?
Automatic1111. Look at his older videos or Google it
automatic1111 >>> go google
search for how to install automatic1111 stable diffusion
It's AI software that generates photos from text
@@Max-sq4li
Stable Diffusion is an AI.
A1111 is an interface to interact with Stable Diffusion.
Hello Olivio! Can I have a one on one consultation with you? Do you have an email where I can contact you? Thanks.
FIRST 🎉
You made a mistake including Photoshop, which is irrelevant.
Instead of AI, I see it more as programming work, which doesn't improve the user's artistic skills, yet can help them become a programmer.
Manual work will always be the true art. AI will be a disaster for mankind, created and improved by mankind.
Time will tell.
You don't seem to know much about art, then. Creating art is not just using a pen or a pencil; it's the entire process, which includes the idea, the composition and the execution. Many people are just copying and pasting prompts and getting a nice picture, but the ones that are doing great things are using AI as one more tool, combined with Photoshop and other tools, and some insane art will be created in the near future that wouldn't ever be possible to create by human hand alone.
@@hectord.7107 This is what no one understands. People jump from video to video, bashing everyone in the comments for using AI whenever a new convenient tool gets released. Funnily enough, I even found someone commenting on 3Blue1Brown's recent video that he will stop watching 3B1B because it used AI images (the video contains images *transformed* by another artist aided by Midjourney).
People don't seem to realize that it's not just about _typing words into boxes_ and spamming pretty images over the internet, making artists mad. This argument is getting really annoying and is already obsolete. People already create *insane* images, completely *transforming* the base txt2img result, which immediately throws the copyright argument straight into the trash can. Thanks to the inpainting tool in Stable Diffusion, we can make amazing high-resolution transformations from a simple Photoshop sketch, still putting a *massive* amount of tedious, manual work into the result image, creating it piece by piece, utilizing creativity to the max, and still keeping the sketched composition, which is *your work*. Using artists' names is practically useless nowadays, because it has very little impact on this process, just like using a random word in the prompt.
People, rather than starting nonsense and useless dramas all over the internet, use this to your advantage and stop being a baby :)
Thanks for the info, but I'm not paying for Photoshop
Like he said any image software that has sharpening features will work. There are also free open source alternatives.
@@blizado3675 which ones
My images look bad when I go above 512 with 1.5-based models. What's the issue?
same problem in 2.1 models up to 768...
nick-x-hacker/bad-artist is a little off, a very sus nick choice. On Hugging Face it shows no pickles detected, but you can never be 100% sure.