5 EASY WAYS TO MAKE BETTER AI ART | Stable Diffusion 2023

  • Published Jan 20, 2025

Comments • 50

  • @randomvideosoninternet8954
    @randomvideosoninternet8954 1 year ago

    this video is gold, and in years to come it will be one of the best videos out there and become a classic.

  • @jahonky5573
    @jahonky5573 2 years ago +2

    I appreciate the continuous uploads!

    • @binks_live
      @binks_live  2 years ago

      thanks so much jihad (you're clearly very good at Valorant, I can just tell from this comment)

  • @morphman86
    @morphman86 1 year ago

    For the iteration process, I've found that the revamped loopback script is quite handy.
    Instead of batching, you can run iterative prompt changes, generate a few dozen (or a few hundred) images, and pick your favourite from those to iterate again.
    I find this gives much faster results than batching 2-4 images, changing the prompt, scale, or denoising, and running again 10-20 times.
    Just don't forget to tick the box to save the prompt. You don't want to lose a prompt set you really like just because you tinkered with one or two values for the next iteration.
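
A minimal sketch of this loopback-style iteration using the diffusers img2img pipeline. The model ID, prompt, and parameter values are illustrative assumptions, not the commenter's exact setup:

```python
# Loopback sketch: feed each output back in as the next input,
# saving every round so you can pick a favourite to iterate on.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("start.png").convert("RGB").resize((512, 512))
prompt = "portrait of a queen wearing an ornate crown, highly detailed"  # assumed prompt

for i in range(20):  # one "round" per loop; raise this for dozens/hundreds of images
    image = pipe(
        prompt=prompt,
        image=image,
        strength=0.4,        # denoising strength; lower stays closer to the input
        guidance_scale=7.5,
    ).images[0]
    image.save(f"loopback_{i:02d}.png")
```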

  • @chaerazard
    @chaerazard 1 year ago

    Thank you for the tips; I'm very inspired by your work.

  • @thebrokenglasskids5196
    @thebrokenglasskids5196 1 year ago

    You can save a lot of those roulette rounds in img2img by using inpainting at that stage instead. Just mask the crown, set inpainting to "masked only" with "original" fill, clear out the prompt and replace it with a prompt specifically for what you want changed in the masked area, then run batches of 4 until you get what you want. That way all you're altering is the crown, and you keep everything you liked in the original instead of fixing one problem but creating others. I recommend creating a separate custom inpainting model based on whatever model you're using to render. There are tutorials around the internet on how to do that, so I won't get into it here, but it's not difficult and can be an invaluable tool for getting exactly what you want into your image, giving you near-total control of your subject matter instead of relying on Russian-roulette-style trial-and-error batch runs.
    Also, you can increase the quality a lot by sending the result back to img2img at the end and refining the details further with multiple low-denoise rounds.
    Start with a batch of 4 at a denoising strength around 0.25 and raise the sampling steps to 60. Pick the best result and repeat the process, lowering the denoise further while increasing the steps to 100 or more. Keep repeating this and choosing the best result until you're at 0.1 denoise and maxed out at 150 steps.
    Refining the subtle details like this makes a huge difference in getting that perfect end result before upscaling.
    I wouldn't leave upscaling as-is either, to be honest. Dialing in a good mix between two upscalers per model can yield better results in my experience, especially if you're going for lifelike realism. You should also turn CodeFormer visibility up to 1 and set its strength to match what you have under Face Restoration in the main settings, so the upscaled face looks consistent with your base renders of the image. You can then tweak it from there in subsequent passes to get it right where you want it.
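
A minimal sketch of the masked inpaint plus progressive low-denoise refinement described above, using the diffusers inpainting and img2img pipelines rather than the WebUI itself. The model IDs, file names, prompts, and the exact denoise/step schedule are assumptions:

```python
# 1) Inpaint only the masked region (the crown) with its own focused prompt,
#    then 2) refine details with several progressively lower-denoise passes.
import torch
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline
from PIL import Image

device = "cuda"

inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to(device)
image = Image.open("render.png").convert("RGB").resize((512, 512))
mask = Image.open("crown_mask.png").convert("RGB").resize((512, 512))  # white = repaint

image = inpaint(
    prompt="ornate golden crown with rubies",  # describes only the masked area
    image=image,
    mask_image=mask,
).images[0]

img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)

# Denoise goes down while steps go up, as in the comment (0.25/60 -> 0.1/150).
# Note: diffusers img2img only runs roughly steps * strength of the requested steps.
for strength, steps in [(0.25, 60), (0.18, 100), (0.10, 150)]:
    image = img2img(
        prompt="portrait of a queen, ornate crown, highly detailed",
        image=image,
        strength=strength,
        num_inference_steps=steps,
    ).images[0]
image.save("refined.png")
```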

  • @swannschilling474
    @swannschilling474 2 years ago +2

    Raising the sampling steps also always helps when doing img2img, and playing with embeddings and hypernetworks is a great way to get different styles!

    • @binks_live
      @binks_live  2 years ago +1

      I have a video in the works on hypernetworks, if you have any tips let me know!

    • @swannschilling474
      @swannschilling474 2 years ago

      @@binks_live I realized that the right hypernetwork helps a lot to get your faces right, and it's a lot faster than the restore faces option... 😊

  • @sketchionic6356
    @sketchionic6356 1 year ago +1

    Are you also going to make a tutorial about installing that UI you have? Please and thank you!

  • @etp7393
    @etp7393 2 years ago +1

    The quality of the video is great, definitely trying AI art now!!!

  • @maurisnake15
    @maurisnake15 1 year ago

    Which prompts were you using? Amazing results

  • @nackedgrils9302
    @nackedgrils9302 2 years ago +2

    Wow, I didn't know there was an upscaler in Extras. I've always used the img2img SD upscale script with underwhelming results most of the time, so I have to re-roll with different settings, which is extremely time-consuming on my setup (20 min. to upscale 2x). I also couldn't figure out how to run batches of images in txt2img; I thought the setting for it was the "Batch Size" slider, which my setup wouldn't run with any value other than 1. Now I'll be able to prompt, go do something else, and choose which image to work with when I get back!
    It's such a pain to be using this on a laptop, but SD has now convinced me to save up to build a proper PC!
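
For anyone tripped up by the same sliders: in the WebUI, "Batch count" runs generations one after another (cheap on VRAM), while "Batch size" generates several images in a single pass (VRAM-heavy, which is why it can be stuck at 1 on a laptop). A small diffusers sketch of the same distinction; the model ID and prompt are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
prompt = "a castle on a cliff at sunset"

# "Batch count": N sequential runs of 1 image each -- slow but low VRAM.
for i in range(8):
    pipe(prompt).images[0].save(f"batch_count_{i}.png")

# "Batch size": one run producing N images at once -- fast but VRAM-hungry.
for i, im in enumerate(pipe(prompt, num_images_per_prompt=4).images):
    im.save(f"batch_size_{i}.png")
```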

  • @AI_Generated_21
    @AI_Generated_21 2 years ago +2

    Very useful video! Great job! Subscribed!

    • @binks_live
      @binks_live  2 years ago

      Glad it was helpful! Thanks so much for the sub!

  • @addisonavery_
    @addisonavery_ 2 years ago +9

    In this sea of hype around the new era of AI, it's refreshing to find a channel with no nonsense and clear instructions. Thank you! I'm working on a graphic novel, and sometimes I add a general prompt and it repeatedly generates a very similar character even with the seed set to -1. This isn't necessarily bad, as I'd like to use the same character throughout, but I'd also like to explore my options before it "locks" it in. Is this a bug with the SD WebUI? Also, I've noticed that once I open around 4-5 instances of the WebUI (not running simultaneously), I begin to see a degradation of quality. Specifically, there seems to be a faint orange-peel-like texture over my images. Is this normal? Sorry for the long message.

    • @binks_live
      @binks_live  2 years ago +1

      Hey Addison! Sorry I missed this comment, but hopefully this can still be useful. The 'seed' variable has the most impact from generation to generation, but some models can be predisposed to produce similar-looking faces. Unfortunately, this can only be avoided by using a different model with more training data or by training the model further yourself. As for the degradation of quality: if you're running multiple instances at a time, they could be using up a lot of your available VRAM, or you could be right about it being a bug within the WebUI. I'm looking into a better solution and considering developing my own if I can't find one I'm a big fan of. If you have any questions, feel free to join my Discord! discord.gg/JvcYXZr86q

    • @nackedgrils9302
      @nackedgrils9302 2 years ago +1

      I've also found that using SD a lot sometimes gives me poorer and poorer results, and I've also encountered the xformers bug that makes every output image look scrambled. The solution I've found is to re-install it (I've already re-installed it twice in a single week). I've also noticed that using img2img and re-running the output again messes up the colours if I'm using a model that doesn't have a .vae file.

  • @arbrian683
    @arbrian683 1 year ago

    Thank you so much for the information

  • @hypnotic852
    @hypnotic852 2 years ago +1

    I just stumbled across your videos and I have to say they're extremely helpful. Do you plan on making a video on how to master prompt crafting? I've been on an endless journey trying to find the answer but always come up short.

    • @binks_live
      @binks_live  2 years ago +1

      I certainly can start working on one, check back in a few days! Can you tell me a bit more about what you’re looking for?

    • @hypnotic852
      @hypnotic852 2 years ago

      I've been using Stable Diffusion to create characters, and when I describe the clothing or their physical characteristics in detail, a lot of it gets lost. I've even tried adding weights to the parts the AI wasn't picking up, but when it does pick them up, other parts are lost, and it's just a massive headache.

  • @izmi2938
    @izmi2938 1 year ago

    I searched for the negative prompt guide you mentioned but found nothing. Help?

  • @DecoTunes28
    @DecoTunes28 1 year ago

    Where can I download this software and is it still free?

  • @bazingatnt
    @bazingatnt 1 year ago +1

    I checked their site but there is nothing like what you use, just a simple site with really bad results. How can we access the same panel as yours?

    • @Which-Way-Out
      @Which-Way-Out 1 year ago

      He's using the Automatic1111 web UI

  • @JohnVanderbeck
    @JohnVanderbeck 1 year ago +9

    I've been turning Restore Faces OFF a lot lately. I find the option just ruins the faces: it smooths them out, makes them blurrier, and saps detail, so they look very photoshopped. With it off I get much more detailed and real-feeling faces, and any issues like screwed-up eyes or teeth I can just fix later.

    • @thebrokenglasskids5196
      @thebrokenglasskids5196 1 year ago

      The effect of Restore Faces depends on the model being used and how it was trained when it was created.
      For some models it helps; for others it hurts.
      It also depends on the prompts being used, especially the negative ones.

  • @BlastGorilla5253
    @BlastGorilla5253 1 year ago

    Very respectfully, I am asking!!!!
    I don't know how to get that software or how to install it. Please help me.
    I am very eager to learn and generate AI art.
    Humble request 🙏🏻

    • @carlosruiz6179
      @carlosruiz6179 1 year ago

      Search for how to install "stable diffusion", then learn with these videos.

  • @creatiffshik
    @creatiffshik 1 year ago +1

    Early-step generations in Stable Diffusion look somewhat impressionistic, and that's cool! Moving further into refinement, they tend to move towards something more... mainstream and obvious, but at first impression these images are... somewhat more emotional and give a dopamine shot. I think there's a way to keep these early generations (like the first two steps) and do some work around them, making them finer in a handmade style. They tend to look like my friend Alish's paintings, made from a photo but keeping the overall moiré feeling of low-quality blooming film and lens, which gives the feeling of an easy shot, made to catch a short-lasting moment of life. I also tend to feel like these early generations look like a mediocre (in a good, life-like sense) beauty at its best.

  • @StygianStyle
    @StygianStyle 1 year ago

    Does the sampling method make a big difference? A lot of people use Euler a. I'm new to stable diffusion, so I'm just referring to the tutorials I've seen.

    • @pixelpuppy
      @pixelpuppy 1 year ago

      Some sampling methods change drastically with the number of steps. You can use the same seed and try different samplers to see the difference. Most of them just use different ways to resample the diffusion, trading speed for quality. If you turn up the live preview frequency, you can see how these samplers work: Euler a starts with a sort of blurry painting that it refines over time, while DDIM does all these wacky colors to define edges.
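
A quick sketch of the same-seed sampler comparison suggested above, in diffusers terms (schedulers are diffusers' equivalent of WebUI samplers); the model ID, prompt, and scheduler list are assumptions:

```python
import torch
from diffusers import (
    DDIMScheduler,
    DPMSolverMultistepScheduler,
    EulerAncestralDiscreteScheduler,
    StableDiffusionPipeline,
)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
prompt = "a lighthouse in a storm, oil painting"

for name, cls in [
    ("euler_a", EulerAncestralDiscreteScheduler),
    ("ddim", DDIMScheduler),
    ("dpm_multistep", DPMSolverMultistepScheduler),
]:
    # Swap the scheduler but reuse its config, then re-seed identically
    # so only the sampler differs between the saved images.
    pipe.scheduler = cls.from_config(pipe.scheduler.config)
    generator = torch.Generator("cuda").manual_seed(1234)
    pipe(prompt, generator=generator, num_inference_steps=30).images[0].save(
        f"sampler_{name}.png"
    )
```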

    • @StygianStyle
      @StygianStyle 1 year ago

      @@pixelpuppy I haven't experimented much because there are millions of different ways to use prompts and tweak settings. I've just been trying to find what experienced people seem to recommend.

  • @thekleroterion
    @thekleroterion 2 years ago +1

    Hi, can this be done from Google Drive? Colab?

    • @binks_live
      @binks_live  2 years ago

      Not as far as I know. Some Hugging Face models have online versions that are VERY slow but do work. Let me know if you have any more questions!

    • @thekleroterion
      @thekleroterion 2 years ago

      @@binks_live I've got like 50 models and a script that loads them on Colab, but I don't know which ones are better, or the name and the model file link.

  • @azaharrahat2512
    @azaharrahat2512 1 year ago

    What is the site name???

    • @Which-Way-Out
      @Which-Way-Out 1 year ago

      He's using a locally installed version of Automatic1111

  • @vintorpraiseandworship
    @vintorpraiseandworship 1 year ago

    Amazing tips, can you reply to me with that negative prompt?

  • @풀문-u3p
    @풀문-u3p 2 years ago

    Liked it 😄~ Subscribed ~♡

  • @SirSalter
    @SirSalter 1 year ago +1

    Take a sip of your drink every time he says "go ahead".

    • @binks_live
      @binks_live  1 year ago

      You got me laughing uncontrollably in the airport, thank you! 🤣

  • @cloudofzero
    @cloudofzero 1 year ago

    Anyone else love reading tiny words? Still a good video.

  • @ArielTavori
    @ArielTavori 1 year ago +1

    Dude, you have absolutely got to lock the seed in order to compare what Restore Faces does and does not do. The example you show at the beginning suggests it made the whole image better and changed the composition, etc., which is absolutely not the case.
    If you lock the seed and regenerate the exact image again, you will see the only changes it makes are to the face/hair region; and with a solid model doing close-ups of a face, it frequently actually RUINS the face, making it much lower resolution.
    It also makes a huge difference which algorithm you have selected in the settings. GFPGAN is excellent at protecting the identity of a specific person without changing them, but it has limited usefulness for making a 'good' face 'better'. CodeFormer, on the other hand, can make a beautiful face out of a complete mess, but it will not protect the original identity and may even change the race. (A locked-seed comparison sketch follows after this thread.)

    • @ArielTavori
      @ArielTavori 1 year ago +1

      FYI, there's also an option in the settings to "save a copy before performing restore faces" so you can keep both files and choose the best for each individual image.
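
A sketch of the locked-seed comparison described above: generate one image from a fixed seed, keep the raw copy, then run face restoration on a duplicate so only the face region differs. This uses the standalone gfpgan package rather than the WebUI's built-in option; the model IDs, checkpoint path, and prompt are assumptions:

```python
import numpy as np
import torch
from diffusers import StableDiffusionPipeline
from gfpgan import GFPGANer  # pip install gfpgan
from PIL import Image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator("cuda").manual_seed(42)  # lock the seed
image = pipe("portrait photo of a woman, 85mm lens", generator=generator).images[0]
image.save("raw.png")  # keep the unrestored version for comparison

restorer = GFPGANer(model_path="GFPGANv1.4.pth", upscale=1,
                    arch="clean", channel_multiplier=2)
bgr = np.array(image)[:, :, ::-1].copy()  # GFPGAN expects BGR (OpenCV order)
_, _, restored_bgr = restorer.enhance(
    bgr, has_aligned=False, only_center_face=False, paste_back=True
)
Image.fromarray(restored_bgr[:, :, ::-1].copy()).save("restored.png")
```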

  • @damarh
    @damarh 2 years ago

    This is like mining for crypto, except instead of losing your life savings, you get an e-waifu.

  • @witness1013
    @witness1013 1 year ago

    Most of these explanations are wrong