Negative Embeddings - ULTRA QUALITY Trick for A1111

  • Published Apr 13, 2023
  • Negative embeddings can help a lot to improve your image quality. Here is how to use them in A1111. I also show you my unsharpen trick to get much better results when upscaling.
    #### Links from the Video ####
    huggingface.co/yesyeahvh/bad-...
    huggingface.co/datasets/Nerfg...
    huggingface.co/nick-x-hacker/...
    huggingface.co/datasets/Nerfg...
    Support my Channel:
    / @oliviosarikas
    Subscribe to my Newsletter for FREE: My Newsletter: oliviotutorials.podia.com/new...
    How to get started with Midjourney: • Midjourney AI - FIRST ...
    Midjourney Settings explained: • Midjourney Settings Ex...
    Best Midjourney Resources: • 😍 Midjourney BEST Reso...
    Make better Midjourney Prompts: • Make BETTER Prompts - ...
    My Affinity Photo Creative Packs: gumroad.com/sarikasat
    My Patreon Page: / sarikas
    All my Social Media Accounts: linktr.ee/oliviotutorials
  • Howto & Style

Comments • 120

  • @OlivioSarikas
    @OlivioSarikas  1 year ago +18

    #### Links from the Video ####
    huggingface.co/yesyeahvh/bad-hands-5/tree/main
    huggingface.co/datasets/Nerfgun3/bad_prompt/tree/main
    huggingface.co/nick-x-hacker/bad-artist/tree/main
    huggingface.co/datasets/Nerfgun3/bad_prompt/tree/main

    • @havemoney
      @havemoney 1 year ago +2

      Thanks always for the url :D

    • @Mandraw2012
      @Mandraw2012 1 year ago +3

      Hey there @OlivioSarikas, I wanted to know: is that an extension you use to get stuff from your clipboard to your img2img canvas at 4:20?

    • @medmen04
      @medmen04 1 year ago +2

      @@Mandraw2012 That's an OperaGX thing.

    • @precursor4263
      @precursor4263 1 year ago +1

      Are there any embeddings for bad eyes? I know there's the face restoration option, but that usually makes the images photorealistic and sometimes it doesn't work very well for artsy stuff. I don't want to be inpainting eyes, considering I'm working with batch img2img.

    • @LouisGedo
      @LouisGedo 1 year ago +1

      👋

  • @fenrir20678
    @fenrir20678 1 year ago +95

    Quick little tip: instead of copying and pasting or memorizing the names of the negative embeddings, just click the "Show/hide extra networks" button in the middle under the Generate button. There you can see all of your embeddings. Just click once in the negative prompt and then simply select which negative embedding you would like to use.

    • @polystormstudio
      @polystormstudio 1 year ago +1

      Thanks for the tip!

    • @S4SA93
      @S4SA93 1 year ago +1

      That's nice, but it does not add the pointy brackets. So I wonder: does it need the brackets if it doesn't add them itself?

    • @nickkatsivelos6613
      @nickkatsivelos6613 1 year ago

      @@S4SA93 I think it is all taken care of - Here is the output when I did a run
      "Textual inversion embeddings loaded(4): bad-artist-anime, bad-ar..." no braces, just comma between each - had other negative prompt text in there with it.

    • @S4SA93
      @S4SA93 1 year ago

      @@nickkatsivelos6613 Yeah, it seems to work without the brackets, but then I'm wondering why he adds them.

    • @SantoValentino
      @SantoValentino 1 year ago

      What fork are you running? Because that's not in auto1111… I see it in the vladmandic fork.

  • @benjamininkorea7016
    @benjamininkorea7016 1 year ago +22

    Very nice Photoshop process. I realized that working artistically with Photoshop can save a lot of trouble; for example, just brush out an extra finger instead of inpainting 20x and hoping. But the sharpening trick is really a game changer!

  • @nalisten
    @nalisten 1 year ago

    Thank you Olivio for being so Consistent 🙏🏽🙏🏽👑💪🏾

  • @optimoos
    @optimoos 1 year ago

    uber cool info as always. highly appreciated, Olivio!

  • @justspartak
    @justspartak 1 year ago

    Delightful result! 👍 After sharpening, the skin appears better and there is more detail throughout the image.

  • @coda514
    @coda514 1 year ago +1

    Great info as always. Also, you have a really nice looking virtual home. 😉

  • @AltoidDealer
    @AltoidDealer 1 year ago +7

    Heya, I used your cocktail (minus the anime one) and it's great! However, I also tested adding the popular "easynegative" embed to see what would happen... after comparing dozens of outputs with/without it, I determined that at 0.5 weight it improves images even further. Note that I was testing on realistic images and omitted the anime negative embed you showed.

  • @TheElement2k7
    @TheElement2k7 1 year ago

    Thanks for the tips, something I will check out 😊

  • @Vitaliy_zl
    @Vitaliy_zl 1 year ago +8

    You can also use an edge-detection filter in Photoshop, invert the resulting image (Ctrl+I), and use that image as a mask on the sharpened image to avoid the oversharpened artifacts shown in this video.
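
    In code, that masked-sharpening recipe might look like this minimal PIL sketch (not from the video; the file names and filter settings are assumptions):

    ```python
    from PIL import Image, ImageFilter, ImageOps

    img = Image.open("upscaled.png").convert("RGB")  # placeholder file name

    # Sharpened layer (stand-in for Photoshop's Unsharp Mask).
    sharpened = img.filter(ImageFilter.UnsharpMask(radius=1, percent=100, threshold=0))

    # Edge-detection layer, inverted (Ctrl+I), as a grayscale mask: strong
    # edges go dark, so they keep the original pixels rather than the
    # sharpened ones, suppressing halos along hard edges.
    edges = img.convert("L").filter(ImageFilter.FIND_EDGES)
    mask = ImageOps.invert(edges).filter(ImageFilter.GaussianBlur(2))  # soften the mask

    # Composite: sharpened where the mask is bright, original where dark.
    Image.composite(sharpened, img, mask).save("sharpened_masked.png")
    ```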

  • @AI_EmeraldApple
    @AI_EmeraldApple 1 year ago +13

    There are other embeddings like ng_deepnegative_v1_75t, bad-image-v2-39000, bad-picture-chill-75v, verybadimagenegative_v1.3, and Unspeakable-Horrors-64v that work with many models too!

  • @AIAddict-88
    @AIAddict-88 1 year ago

    Thanks So Much I Learn So Much From Your Videos! :)

  • @12MANY
    @12MANY 1 year ago

    Thanks a lot Olivio.

  • @Ureroll
    @Ureroll 1 year ago +4

    Nice tip. It actually makes sense that a sharper image would produce finer details when re-upscaling. For the opposite reason, I would be careful with upscaling after those blurring touch-ups in the editor and leave it as a last step: any manual blurring or smearing in my experience has a high chance of being interpreted as part of the background, unless a higher denoise is set, but that mangles everything at that point. Go back and forth long enough and the color-shifting monster will get you. I have not found a real solution for that issue: the colors slowly shift, a really dark blue drifts toward purple and the blacks go up in gamma. I tried the option in the settings and the cutoff plugin; nothing really works so far. It would be so cool to just paint something in manually or smear off an extra finger in Photoshop, send it to img2img for a beauty pass, go back to Photoshop, work some more... but the colors move around too fast for that workflow. Is there a ControlNet just for the tones and hue? That would be massive!
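
    One possible mitigation for that drift (not something shown in the video) is to re-anchor each pass to the original's color distribution with histogram matching; a minimal sketch, assuming scikit-image is installed and using placeholder file names:

    ```python
    import numpy as np
    from PIL import Image
    from skimage.exposure import match_histograms

    reference = np.asarray(Image.open("original.png").convert("RGB"))
    drifted = np.asarray(Image.open("after_img2img.png").convert("RGB"))

    # Match per-channel histograms to the reference, pulling shifted blues
    # and lifted blacks back toward the original tones before the next pass.
    corrected = match_histograms(drifted, reference, channel_axis=-1)
    Image.fromarray(corrected.astype(np.uint8)).save("corrected.png")
    ```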

  • @michail_777
    @michail_777 1 year ago +2

    I noticed that the GFPGAN visibility and CodeFormer settings help a lot when generating any persona. In the end, it all depends on the models. Thanks for the links to the textual inversions.

  • @rproctor83
    @rproctor83 1 year ago +12

    Be careful with embeddings: they are normally trained on specific models, and when those models are updated but the embeddings are not, you will get a bit of distortion. As the models progress while the embedding stays the same, that distortion becomes more and more prevalent. To further complicate things, the embeddings will affect your other networks like LoRA and LyCORIS, which, if trained on some other model, can drastically alter the results in a negative way. Not to mention things like Clip Skip and CFG; they will also greatly alter the results of the embeddings.

  • @eugeniusro
    @eugeniusro 1 year ago +1

    In Stable Diffusion it is very helpful to use negative prompts. Interacting with the AI, I was amazed at how similar it is to human thinking; come to think of it, we were programmed the same way, including with negative prompts such as the 10 commandments from the Bible. 😀

  • @nio804
    @nio804 1 year ago +10

    One of my favourite tricks is to use LoRAs with negative weights. You can get some fun effects with the right LoRA.

    • @moon47usaco
      @moon47usaco 1 year ago +1

      That's an excellent idea, I will try that soon... =]

  • @HAJJ101
    @HAJJ101 1 year ago +1

    Thanks for making this tutorial! I've been trying to figure out how to train and get this idea working. So it's basically just training on images you don't want and putting that training in the negative embedding? Do these people usually train on class images that generate messed-up faces, like "person", "woman", etc.? And then use a different class for the negative training afterwards?

  • @Hakaan911
    @Hakaan911 1 year ago +6

    Embeds use the same syntax as a normal prompt, not the LoRA syntax.

  • @Simsonlover222
    @Simsonlover222 1 year ago

    you are a hero i love u

  • @Rjacket
    @Rjacket 1 year ago +1

    Something I thought was strange when testing out this process of negative prompts: if you have TI embeds like "<embed>", having a comma in between each negative drastically changes the output, i.e. "<a>, <b>" as opposed to "<a><b>". Have you ever dealt with this? Do you know why it is happening? Also, changing the position of a negative was affecting the output: using only brackets around each negative TI and no commas in between, but changing the order of, say, 5 negative TIs.
    I would really like to see a video on this type of testing; what is the rhyme and reason?
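
    For reference, the variants being compared look roughly like this (hypothetical prompt strings; the embedding names are the ones linked above):

    ```python
    # Hypothetical A/B negative prompts for the comparison described above.
    neg_with_commas = "<bad-artist>, <bad_prompt>, <bad-hands-5>"
    neg_no_commas   = "<bad-artist><bad_prompt><bad-hands-5>"
    neg_reordered   = "<bad-hands-5><bad_prompt><bad-artist>"
    # Commas are tokens of their own and token position influences weight,
    # which is one plausible reason the variants produce different outputs.
    ```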

  • @darcasvisual
    @darcasvisual 1 year ago

    Hello colleague, how do you keep the characteristics of the character's face and just change the clothes, among other things?

  • @blizado3675
    @blizado3675 1 year ago

    Useful, but for img2img upscaling I first need more VRAM. With Extras I can go to an insane resolution, but maybe that works there too? 🤔 Need to test that. And I need to test that negative prompt stuff more.

  • @cobraeconomics4881
    @cobraeconomics4881 1 year ago +2

    How does your upscale method compare to Topaz Gigapixel?

  • @JDRos
    @JDRos 10 months ago

    Aren't the brackets and weight only for LoRA and LoCon?

  • @snatvb
    @snatvb 1 year ago

    You can use Ctrl+C -> Ctrl+V to copy-paste into A1111 from anywhere :)

  • @Rasukix
    @Rasukix 1 year ago

    Is it not better to just use highres fix from the get-go?

  • @dinogators8323
    @dinogators8323 2 months ago

    thx

  • @BlackJade_OFM
    @BlackJade_OFM 1 year ago

    So how do you actually know which negatives are in the negative embedding? Is there a way to see which negatives are actually used?

  • @hplovecraftmacncheese
    @hplovecraftmacncheese 11 months ago +1

    When I add the negative embeddings from the extra networks button, it doesn't use the angle brackets, but for LoRA it does. Do you need the angle brackets for the negative embeddings?

    • @OlivioSarikas
      @OlivioSarikas  11 months ago +1

      No, you don't need them.
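
      To make the distinction concrete, a small illustration of the A1111 conventions (the LoRA name here is a made-up example):

      ```python
      # Embeddings trigger by plain file name; no brackets needed:
      negative_prompt = "bad-hands-5, bad_prompt, bad-artist"
      # LoRA uses the angle-bracket extra-network syntax with a weight:
      prompt = "masterpiece, portrait <lora:example_lora:0.8>"
      ```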

  • @xzypergods9867
    @xzypergods9867 2 months ago

    Whenever I use negative embeddings, this error always shows up:
    "RuntimeError: expected scalar type Half but found Float"

  • @arielm9847
    @arielm9847 1 year ago +5

    I appreciate the video, but I feel like something is missing after 4:40.
    After sharpening the upscaled image and bringing it back into img2img, what did you do with it? Did you upscale again at an even higher res (2048x3072) for more details? Did you run Generate at the same resolution, just hoping more details would be added? Or are you just suggesting this workflow before going into inpainting to tweak specific areas?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +4

      No, I rendered it with the same settings again, but with the sharpened input image.

    • @arielm9847
      @arielm9847 1 year ago +2

      @@OlivioSarikas Gotcha. Thank you and thanks for all your videos. They are very helpful.
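
      The sharpen-and-rerender loop described in this thread might look like the following minimal PIL sketch (mapping the video's 1 / 1 / 0 Photoshop settings to radius=1, percent=100, threshold=0 is an assumption; file names are placeholders):

      ```python
      from PIL import Image, ImageFilter

      # Sharpen the img2img result before feeding it back in for another pass.
      img = Image.open("img2img_result.png")
      sharpened = img.filter(ImageFilter.UnsharpMask(radius=1, percent=100, threshold=0))
      sharpened.save("img2img_input_sharpened.png")
      # Re-run img2img on this file with the same prompt, seed and settings;
      # the sharper input gives the model finer detail to build on.
      ```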

  • @Nottiex
    @Nottiex 1 year ago +4

    Sorry if it was asked already, but what is the plugin or whatever that enables choosing the VAE / clip skip at the top of the main page in the UI?

    • @treblor
      @treblor 1 year ago +3

      It's in the Automatic1111 settings: Settings / User Interface / Quicksettings list. Change it to: sd_model_checkpoint, sd_vae, CLIP_stop_at_last_layers

    • @Nottiex
      @Nottiex 1 year ago +1

      @@treblor oh, thank you very much

  • @koguister
    @koguister 1 year ago

    The embeddings folder does not exist. Should I create one, or did I install something wrong?

  • @ocoro174
    @ocoro174 1 year ago

    Yeah, but all these models seem to be focused on faces and people. How do you get Midjourney-like doodles/cartoons/food etc.?

  • @hishamzireeni8932
    @hishamzireeni8932 1 year ago

    @Olivio, how could you use an actual photograph and render it with AI for whatever prompt while maintaining the face? I.e. turning an image of your face into many different renders, like an avatar. How could that be done?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Check my video on Lora Training: th-cam.com/video/9MT1n97ITaE/w-d-xo.html

  • @Shingo_AI_Art
    @Shingo_AI_Art 1 year ago +1

    I always have these 4 in; most of the time they give amazing results. However, is there a reason behind the use of pointy brackets instead of parentheses? 🤔

    • @AltoidDealer
      @AltoidDealer 1 year ago +1

      I was wondering the same, so I simply tested both ways. I got consistently better outputs with the pointy brackets as shown in the vid

  • @terrence369
    @terrence369 1 year ago +2

    Why do images of human characters created by AI come out with two heads and more fingers than there should be? And sometimes those fingers look like an alien creature's tentacles/hands. Is the neural technology built upon aliens embedded into a human interface?

  • @globalnucleartrue
    @globalnucleartrue 1 year ago +7

    How is it better than SD Upscale? SD Upscale seems simpler and faster.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +4

      SD Upscale just upscales the image. Img2img renders a new image with a lot more detail that the original didn't have.

    • @kuromiLayfe
      @kuromiLayfe 1 year ago +1

      @@OlivioSarikas SD Upscale also applies a few negative-prompt img2img passes to fix things that would otherwise make the bigger image uglier instead of more enhanced.
      Negative embeddings are just regular embeddings, but trained on the worst results instead of the best quality.
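
      As a rough stand-in for what the img2img pass does here, a diffusers sketch (model ID, prompt and strength are placeholder assumptions, not the video's settings):

      ```python
      import torch
      from diffusers import StableDiffusionImg2ImgPipeline
      from PIL import Image

      pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      init = Image.open("upscaled.png").convert("RGB")
      # Low strength keeps the composition and mostly re-renders fine detail.
      out = pipe(prompt="portrait, highly detailed", image=init,
                 strength=0.35, guidance_scale=7.0).images[0]
      out.save("img2img_detail.png")
      ```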

  • @S4SA93
    @S4SA93 1 year ago

    Unsharp Mask with 1 / 1 / 0 does nothing to my picture in Photoshop, what am I missing?

  • @Charkel
    @Charkel 1 year ago +1

    Why don't I have an embeddings folder? :(

  • @metanulski
    @metanulski 1 year ago

    I don't see any improvement in the negative embeddings example. The two-negative-embeddings one had 7 fingers, and the all-negatives one has some extra leaves, but that's it.

  • @sophytes1430
    @sophytes1430 1 year ago

    Why < >, the greater-than and less-than signs?

  • @sneirox
    @sneirox 1 year ago

    i fell in love with her

  • @AlexSmith-qw5qg
    @AlexSmith-qw5qg 1 year ago

    Should I download these embeddings from Hugging Face (bad-artist etc.), or do they work if I just use them in the negative prompt without downloading?

    • @Jordan-my5gq
      @Jordan-my5gq 1 year ago

      You need to download the embeddings, because when you type them in the negative prompt they are replaced by their values. You do not know their values, so you must download them.
      (Sorry if my English is bad, I am learning. Hope you understand my comment ^^)
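
      Why the download matters can be seen by opening the file; a hedged sketch, assuming the common A1111-style textual-inversion layout (key names vary between tools, and the path is a placeholder):

      ```python
      import torch

      data = torch.load("embeddings/bad_prompt.pt", map_location="cpu")
      vectors = data["string_to_param"]["*"]  # learned vectors, e.g. shape (n_vectors, 768) for SD 1.x
      print(vectors.shape)  # these values replace the embedding's name at prompt time
      ```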

  • @norko7422
    @norko7422 1 year ago

    My images look bad when I go above 512 in 1.5-based models. What's the issue?

    • @norko7422
      @norko7422 1 year ago

      Same problem in 2.1 models above 768...

  • @CaptainFutureman
    @CaptainFutureman 1 year ago +3

    Very nice, but I would recommend trying a different sharpening method than unsharp masking. I haven't tried it yet, but I would bet using a high-pass filter would not give you the artifacts along the rim of the cloak.

  • @shadowdemonaer
    @shadowdemonaer 1 year ago +1

    Alright, but how would one go about training their own negative embeddings?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +4

      Basically like a normal embedding, but with the stuff you don't want to have.

    • @shadowdemonaer
      @shadowdemonaer 7 months ago +1

      @@OlivioSarikas For things like EasyNegative, you can just type that in and improve your images right away. So are they only tagging their training images with "EasyNegative"? Or are they tagging everything like usual?
      Usually when someone trains something, like a character, if they didn't want the hair style to change, they would only tag the things in the image they want changed; e.g. if the eyes change color, they'd tag the eye color, but they wouldn't tag the hair.
      So, for a basic example, if you wanted to make a negative embed so that eyes with too many highlights never happen again, you would only tag the eyes, right? Or is this incorrect? That's all that holds me back.

  • @EmilioNorrmann
    @EmilioNorrmann 1 year ago +5

    Are the < > brackets mandatory in the neg prompt?

    • @wkdpaul
      @wkdpaul 1 year ago +9

      Not for embeddings; those brackets are for LoRA. Using just the name of the embedding works fine.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +4

      Really? I didn't know that. Thank you

    • @PizzaTimeGamingChannel
      @PizzaTimeGamingChannel 1 year ago +3

      @@OlivioSarikas Also, you can use standard parentheses for those negative embeddings, i.e. (bad-artist:0.8). Don't even need to put "by bad-artist" or anything, just the negative embed is fine. :)

  • @MarcioSilva-vf5wk
    @MarcioSilva-vf5wk 1 year ago

    So, it's basically a high-pass filter with an overlay.

  • @TheRealBlackNet
    @TheRealBlackNet 1 year ago

    I have an RTX 3080 Ti and can't go bigger than 1024 without getting a CUDA out-of-memory error. What card do people use to go up to 1500? I help myself with Ultimate Upscaler, but most times I see the checkerboard. Is there a trick?

    • @Tigermania
      @Tigermania 1 year ago +3

      Try changing the line in your webui-user.bat to this: set COMMANDLINE_ARGS=--precision full --no-half --medvram

    • @treblor
      @treblor 1 year ago +2

      You can also try: set COMMANDLINE_ARGS= --medvram --upcast-sampling

    • @snoweh1
      @snoweh1 1 year ago +1

      I have a 3080 10 GB and I can go higher than 1024.

    • @TheRealBlackNet
      @TheRealBlackNet 1 year ago

      ​@@treblor thanks!

  • @Arty-vy6zs
    @Arty-vy6zs 1 year ago

    Another one that is used a lot is EasyNegative.

  • @babamaheshvrrajrajeshvre9963
    @babamaheshvrrajrajeshvre9963 1 year ago

    I really want to learn photography. I only have a phone, no other device, no laptop or computer. So how can I use AI tools? The free ones.

  • @bryan98pa
    @bryan98pa 1 year ago

    Nice videos, but maybe you need to add more steps to gain more details.

  • @peace.n.blessings5579
    @peace.n.blessings5579 1 year ago

    What are the system requirements for running Stable Diffusion?

    • @Max-sq4li
      @Max-sq4li 1 year ago

      At a minimum an RTX 3060 12 GB, and above.
      More VRAM = more stable, and more features to work with.

    • @TrentSterling
      @TrentSterling 1 year ago +2

      I run it locally on a 1060 6 GB. It's slow, but in theory any card with 4 GB of VRAM can do it. So the minimum is smaller than that, haha.

    • @AIAddict-88
      @AIAddict-88 1 year ago

      I could run it locally with a GTX 980, but I recently upgraded to a 3060 Ti, which is much faster. The 980 worked, though!

    • @dlep9221
      @dlep9221 1 year ago +1

      I'm using A1111 with an RTX 2080S, 8 GB; it's running very well (with NVIDIA CUDA & the --xformers option).

    • @mr_frank9016
      @mr_frank9016 1 year ago

      Successfully using it on a GTX 1650 4 GB card. It can generate up to 1024 px, but slowly (1 to 3 minutes per image). The "Extras" upscale takes around the same time, but img2img upscaling to 8K can take an hour with all the steps involved.

  • @Vitaliy_zl
    @Vitaliy_zl 1 year ago

    Do all Stable Diffusion users have a habit of counting fingers in ANY image, or is it just me?

    • @blizado3675
      @blizado3675 1 year ago

      The less work you have to put into creating an image, the more you tend to be a perfectionist. :D

  • @skyevent8356
    @skyevent8356 1 year ago

    With anime girls I always get weird eyes, no matter what I write in the negative prompt.

  • @user-gu9vf3cc4u
    @user-gu9vf3cc4u 1 year ago

    How do we use it in the negative prompt? Should we use it like <embedding-name>?

  • @isycoolro
    @isycoolro 1 year ago

    Hello Olivio! Can I have a one on one consultation with you? Do you have an email where I can contact you? Thanks.

  • @support8804
    @support8804 1 year ago +2

    What is A1111? How do I install it?

    • @Steamrick
      @Steamrick 1 year ago

      It's Automatic1111; look at his older videos or google it.

    • @havemoney
      @havemoney 1 year ago

      automatic1111 >>> go google

    • @Tigermania
      @Tigermania 1 year ago +5

      Search for "how to install automatic1111 stable diffusion".

    • @Max-sq4li
      @Max-sq4li 1 year ago +1

      It's AI software that generates photos from text.

    • @Jordan-my5gq
      @Jordan-my5gq 1 year ago

      ​@@Max-sq4li
      Stable Diffusion is an AI.
      A1111 is an interface to interact with Stable Diffusion.

  • @AniCho-go-Obzorov-Net
    @AniCho-go-Obzorov-Net 11 months ago

    These feel like crutches; why such contortions with the upscaling? =="

  • @MarkDemarest
    @MarkDemarest 1 year ago

    FIRST 🎉

  • @Akami-hz8xz
    @Akami-hz8xz 11 months ago

    You made a mistake including Photoshop, which is irrelevant.

  • @NiteshSaini1
    @NiteshSaini1 1 year ago +1

    Instead of AI art, I see this more as programming work, which doesn't improve the user's artistic skills but can help them become a programmer.
    Manual work will always be the true art. AI will be a disaster for mankind, created and improved by mankind.

    • @13RedCorpse
      @13RedCorpse 1 year ago +1

      Time will tell.

    • @hectord.7107
      @hectord.7107 1 year ago +2

      You don't seem to know much about art, then. Creating art is not just using a pen or a pencil; it's the entire process, including the idea, the composition and the execution. Many people are just copying and pasting prompts to get a nice picture, but the ones doing great things are using AI as one more tool, combined with Photoshop and other tools, and some insane art will be created in the near future that would never have been possible to create by human hand alone.

    • @DarkStoorM_
      @DarkStoorM_ 1 year ago +1

      @@hectord.7107 This is what no one understands. People jump from video to video, bashing everyone in the comments for using AI whenever a new convenient tool is released. Funnily enough, I even found someone commenting on 3Blue1Brown's recent video that they will stop watching 3B1B because he used AI images (the video contains images *transformed* by another artist aided by Midjourney).
      People don't seem to realize that it's not just about _typing words into boxes_ and spamming pretty images over the internet, making artists mad. This argument is getting really annoying and is already obsolete. People already create *insane* images, completely *transforming* the base txt2img result, which immediately throws the copyright argument straight into the trash can. Thanks to the inpainting tool in Stable Diffusion, we can make amazing high-resolution transformations from a simple Photoshop sketch, still putting *massive* amounts of tedious, manual work into the resulting image, creating it piece by piece, utilizing creativity to the max, while keeping the sketched composition, which is *your work*. Using artists' names is practically useless nowadays, because it has very little impact on this process, just like any random word in the prompt.
      People, rather than starting nonsense and useless dramas all over the internet, use this to your advantage and stop being a baby :)

  • @yoteslaya7296
    @yoteslaya7296 1 year ago

    Thanks for the info, but I'm not paying for Photoshop.

    • @blizado3675
      @blizado3675 1 year ago

      Like he said, any image software that has sharpening features will work. There are also free open-source alternatives.

    • @yoteslaya7296
      @yoteslaya7296 1 year ago

      @@blizado3675 Which ones?

  • @clumsy_en
    @clumsy_en 1 year ago

    nick-x-hacker/bad-artist is a little off; a very sus nick choice. On Hugging Face it shows "no pickles detected", but you can never be 100% sure.