How to change ANYTHING you want in an image with the INPAINT ANYTHING A1111 Extension [Tutorial Part 1]

  • Published Jun 28, 2024
  • #aiart, #stablediffusiontutorial, #automatic1111
    This tutorial walks you through how to change anything you want in an image with the powerful Inpaint Anything extension. We will install the extension, then show you a few methods to inpaint and change anything in your image. The results are AMAZING!
    Chapters:
    00:00 Intro
    01:12 Overview of Inpaint Anything Extension
    01:43 Install Inpaint Anything
    02:37 How to use Inpaint Anything
    03:18 Comparing different SAMs
    04:43 Changing the cloth
    10:05 Changing the background - method 1
    11:55 Changing the background - method 2
    13:27 Continue to change the image
    15:30 Changing hair color
    16:35 Bonus: Latent Upscaling to fix minor issues
    18:49 Final result
    Useful links
    Inpaint Anything github:
    github.com/Uminosachi/sd-webu...
    Segment Anything github:
    github.com/facebookresearch/s...
    Comparison of the different Segment Anything Models (SAMs):
    docs.google.com/spreadsheets/...
    **If you enjoy my videos, consider supporting me on Ko-fi**
    ko-fi.com/keyboardalchemist

Comments • 184

  • @cyberspider78910 • 6 days ago

    This video is the gold standard for anyone starting with Automatic1111.

  • @rexs2185 • 10 months ago +1

    Once again, KA brings the great tutorials! Thank you for the detailed explanation!

    • @KeyboardAlchemist • 10 months ago

      You're welcome! I'm glad you liked the video!

  • @undoriel • 8 months ago +1

    Your tutorials on SD are the easiest to follow and very informative. Please keep them coming! You've got yourself a subscriber :)

    • @KeyboardAlchemist • 8 months ago

      I'm glad you liked my videos! Thank you for supporting my channel!

  • @whalhard • 10 months ago +16

    This is one of the clearest videos on a Stable Diffusion subject that I have seen, without feeling rushed. Well done.
    Keep making them and I will keep watching them. 👍

    • @KeyboardAlchemist • 10 months ago +2

      Thank you very much! I'll keep them coming! =)

    • @sairampv1 • 10 months ago

      @@KeyboardAlchemist Can you change the pose of generated images, as in make them perform actions, e.g., take a portrait picture and make the subject run, fight, climb, etc.?

  • @brynbulloch • 10 months ago +12

    You are a REALLY great teacher! Everyone has their own learning style, and since getting started in AI art, I have watched countless different channels hoping to find someone whose pace felt natural to me. I finally found you!!! I gave up on SD and A1111 several months ago out of frustration, but I have missed the control over details that MJ lacks. Watching your tutorial made me eager to give it another go. Can't wait to watch the rest of your videos. I have LIKED and SUBSCRIBED and I will definitely SHARE your content. Thank you for the time and attention to detail in this video. I especially appreciated that you put important details in text in sync with where you were speaking about them. Very helpful to hear and read the important points at the same time. I wish you the BEST of luck with your channel and can't wait to watch your subscriber numbers SOAR soon! Sorry this was so long. But gotta go now and watch some more of your vids!

    • @KeyboardAlchemist • 10 months ago

      Thank you very much for your kind feedback and your support! I hope you enjoy my other videos and all future videos as well. Cheers!

  • @allenraysales • 10 months ago +1

    Thank you, just what I needed! Keep up the great tutorials! Time saver!

    • @KeyboardAlchemist • 10 months ago

      Thank you very much! I'm glad this was helpful for you. Stay tuned for Part 2 of this Inpaint Anything video.

  • @FullStackFalcon • 9 months ago +2

    Amazing tutorials; your content got me hooked on SD and AI editing. You are a great teacher. Liked and subbed 🚀

    • @KeyboardAlchemist • 9 months ago

      I'm glad you liked the tutorial! And thanks for your like and sub!

  • @cbccbd • 9 months ago +1

    Great video. You deserve A LOT more views!

    • @KeyboardAlchemist • 9 months ago

      Thank you, I appreciate your kind words!

  • @SteveWarner • 10 months ago +50

    Really great tutorial. Just a heads up. There's a much faster and easier way to do this that uses less resources. It's the Photopea extension. It will add Photopea, which is akin to an online version of Photoshop, into your A1111 install. You send the T2I image to Photopea, then use the standard masking features that you would in a program like Photoshop to mask out the area you want. The full range of tools is there and you can make extremely complex masks in seconds. When done, use the Send to Inpaint button in Photopea to send the image and mask back to the I2I section of A1111. Now you don't have to download unnecessary Segmentation models that eat up your hard drive space. This works like a charm and makes all inpainting tasks so much easier.

    • @KeyboardAlchemist • 10 months ago +5

      Thank you for the tip! I've heard good things about Photopea. Will definitely give it a try!

    • @deama15 • 10 months ago +3

      @@KeyboardAlchemist Another video with Photopea?

    • @DannySi • 10 months ago

      Not sure if it's because I'm using an A1111 fork called SDNext, but the Photopea extension doesn't seem to work properly. It doesn't let me send anything to Photopea or back.

    • @j_shelby_damnwird • 8 months ago +2

      Great suggestion, man, thank you very much!

    • @Elfyja • 8 months ago

      This made me giggle. It's understandable, but I'm imagining a person who only knows how to open their email and surf the internet finding this comment and being like ????

  • @sebastianmueller1740 • 10 months ago +1

    Great and detailed tutorial, thank you!

    • @KeyboardAlchemist • 10 months ago

      You're welcome! I'm glad you enjoyed the video!

  • @winonaiverdoberman2496 • 10 months ago +1

    OMG!! This is a SUPER helpful and detailed tutorial!! I have been dying to learn how to do these things. Finally, a dream come true!!! Thank you sooo much!! Defo SUBSCRIBED and LIKED IT!!!!

    • @KeyboardAlchemist • 10 months ago

      I'm glad this tutorial was helpful for you! Thank you very much for the support! I have part 2 of this inpainting tutorial coming soon. Stay tuned.

  • @barcob5558 • 10 months ago +1

    Excellent! Thanks for sharing.

  • @TapticDigital • 4 months ago +3

    You can indeed zoom in on the inpaint canvas in A1111; just hover over the (i) button for a list of controls: Alt+wheel to zoom, Ctrl+wheel to adjust the brush size, etc.

    • @HorseyWorsey • 3 months ago

      Based, but what (i) button? I don't see it in the Inpaint section or anywhere, really.

  • @jettro8523 • 9 months ago +1

    Great video, covered many questions I had!

  • @daishum000 • 5 months ago

    It's so helpful and detailed! Thanks!

  • @Bj0rn666 • 2 months ago +1

    This video just earned you a new follower. I'm using SD Forge, but this is still good information. Thanks!

    • @KeyboardAlchemist • 2 months ago

      Thanks for the sub! I appreciate your support!

  • @76abbath • 10 months ago +1

    I didn't know this extension; thanks a lot for this video!!! ❤

    • @KeyboardAlchemist • 10 months ago

      You're welcome! Glad it helped!

  • @chapicer • 7 months ago +1

    Your channel is so great, please continue making videos!!!

  • @alec-gy8ey • 9 months ago +5

    You can zoom in by holding Alt + mouse scroll

  • @Rasukix • 9 months ago +1

    incredible tutorial!

    • @KeyboardAlchemist • 9 months ago

      I'm glad you liked it! Thanks for the sub!

  • @yeezythabest • 10 months ago +1

    Subscribed and activated the bell! Great video.

  • @Vadim666I • 10 months ago +1

    Great tutorial. I'll try this tomorrow :)

    • @KeyboardAlchemist • 10 months ago

      Thank you and have fun! This is a great extension.

  • @lenny_Videos • 8 months ago +1

    Thanks for the great tutorial 🙂

  • @ignat3802 • 3 months ago +1

    Thanks, my guy. Great guide; even my stunted brain could understand it!

    • @KeyboardAlchemist • 3 months ago

      You're welcome! Thanks for watching!

  • @CEAG23 • 28 days ago +1

    Thank you!!!!!

  • @sb6934 • 9 months ago +1

    Thanks!

  • @just_logi • 9 months ago +1

    Really good, thank you!

    • @KeyboardAlchemist • 9 months ago

      You're welcome! I'm glad you liked the video!

  • @wakeup2.369 • 9 months ago +1

    You can enlarge the image by pressing the Alt key and using the mouse wheel!

  • @_inspirasiislam • 10 months ago +1

    Thanks

  • @japaoyagami3273 • 6 months ago

    Thank you very much for teaching us

  • @angloland4539 • 10 months ago +1

  • @waterwater5931 • 8 months ago

    Thank you for this impressive video! I would like to know: is it possible to apply a target piece of clothing from another image to the mask, instead of using a prompt to generate random clothing?

    • @KeyboardAlchemist • 8 months ago

      Yes, you can. Watch Part 2 of my Inpaint Anything video (th-cam.com/video/k8FfCicu5G8/w-d-xo.html), where I provide some suggestions using the ControlNet Reference Only preprocessor.

  • @DrAmro • 10 months ago +3

    Hey Alchemist, I'll nominate you "man of the year" for the Nobel Prize; you're a living guide, bro...
    BTW, can you make a guide about the secret capitalized keywords like BREAK, AND, and the others we don't know anything about, plus advanced extensions with their detailed uses?
    I think it'll be a magical series. ❤👍

    • @KeyboardAlchemist • 10 months ago +3

      Thank you very much for your suggestions! It's funny that you mention keywords like BREAK, AND, etc. I'm working on a tutorial about prompting basics, which will include these keywords, and I can explore their uses and effects a bit more in that video. And of course, I will have more tutorials about detailed usage of A1111 extensions. Stay tuned for more! Cheers!

  • @anup-kaushal • 7 months ago +1

    Really detailed and to the point, loved it

    • @KeyboardAlchemist • 7 months ago

      Thank you! I'm glad you liked the video.

    • @anup-kaushal • 7 months ago

      You're welcome @@KeyboardAlchemist

  • @proyectorealidad9904 • 10 months ago +3

    You can zoom with Alt + mouse wheel.

    • @KeyboardAlchemist • 10 months ago

      TIL, thank you for this tip! I never knew about this keyboard shortcut.

  • @FullStackFalcon • 9 months ago +1

    Please do a video on prompts.

  • @DrivenTrigger • 5 months ago

    What GPU are you using, out of curiosity? For Inpaint Anything I only get about 1.5 it/s on a 3080.
    Great tutorial also; liked and subscribed 👍

  • @tomarco7998 • 10 months ago +2

    Is it also possible to inpaint image2image here? I've got a photo of a shirt that I want to replace on a model generated with Midjourney.

  • @vanarunedottir • 9 months ago

    Does this work with all versions of SD? In particular, does it work with the latest SDXL, or only 1.5?

  • @lugotorix6911 • 8 months ago +1

    Thanks for the tutorial. Is there a way to change just the pose while keeping the face and clothes the same? I'm looking forward to a tutorial if it can be done somehow.

    • @KeyboardAlchemist • 8 months ago +1

      Thanks for watching! Yes, there are a number of ways that you can go about it. Most of it will involve using ControlNet models. I may make a video about it down the line, but it might be a while since I have quite a few videos in the queue. Stay tuned.

  • @blender_wiki • 9 months ago +1

    It's so refreshing to hear a real person talking with a real voice, instead of those cartoonish YouTubers who are really hard to follow in a tutorial with their funky voices.

    • @KeyboardAlchemist • 9 months ago

      Thank you, I'm glad you liked the video!

  • @celiocarvalho64 • 9 months ago +2

    At 3:17 it shows the message "Segment Anything failed".
    How do I solve it?

  • @diablokatakuri • 6 months ago +1

    Hey @KeyboardAlchemist, great tutorial. I just want to ask: how do you get the custom inpainting model at 7:28, and how did you install it? Can you put it in safetensors/pickletensor format? Uminosachi said the models are diffusers models in this folder: C:\Users\username\.cache\huggingface\hub

    • @KeyboardAlchemist • 6 months ago

      Hi, I'm glad you liked the video. This is correct; the models are located in this directory ('C:\Users\username\.cache\huggingface\hub'). I did not have to manually put the models in there, though. After I installed the Inpaint Anything extension, they were auto-populated. You can try adding a subfolder in the 'hub' folder with this name: 'models--Uminosachi--realisticVisionV51_v51VAE-inpainting' to see if it will pull the files for you. If it does successfully pull the files from Hugging Face, you should get a 'snapshots' subfolder with the model in there. If this doesn't work, then you might have to reinstall the Inpaint Anything extension. I hope this helps you.
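
      For reference, a minimal sketch of pre-fetching that same repo into the Hugging Face cache yourself, assuming the huggingface_hub Python package is installed (the repo id is inferred from the folder name above):

        # Downloads Uminosachi/realisticVisionV51_v51VAE-inpainting into the HF cache
        # (~/.cache/huggingface/hub, i.e. C:\Users\username\.cache\huggingface\hub on Windows).
        from huggingface_hub import snapshot_download

        path = snapshot_download(repo_id="Uminosachi/realisticVisionV51_v51VAE-inpainting")
        print(path)  # resolves to the 'snapshots' subfolder the extension reads from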

  • @i01binary • 9 months ago

    Try pressing S to get a full-screen canvas.

  • @rezahasny9036 • 5 months ago

    Dude, can you make a tutorial on generating with Inpaint Anything without changing the pose, like when we use the OpenPose ControlNet?

  • @MobileJeremie • 5 months ago

    Great video! I loaded it, created a mask, and selected the model/sampler, but I am getting an error when I run inpainting... Non-programmer here; any ideas why?

  • @chrisrosch4731 • 8 months ago +2

    Really enjoyed this tutorial. Do you think there is a way to add specific items to an image using inpainting? Let's say I want to add specific lamps, plants, or paintings to an image; how would I go about this? Does it make sense to train my own LoRA for each item and then just use the LoRA on the mask to add the specified object to the image? I can't quite wrap my head around how that could be achieved. Liked and subscribed! :)

    • @KeyboardAlchemist • 8 months ago

      Hello, thanks for the sub! Yes, one technique you can use is: put your image into Photoshop or GIMP, overlay or draw the object you want onto the image, then bring that edited image into A1111 and inpaint over that object. Another way is to use Inpaint Sketch to draw the object you want directly onto the image within A1111; you can use different colors to give some context to the AI. Both of these methods will give you more consistent results. Hope this helps!
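
      For reference, a minimal sketch of that first technique done programmatically with Pillow instead of Photoshop/GIMP (the file names and paste coordinates are hypothetical):

        # Paste a cut-out object onto the base image, then take the result into
        # A1111 and inpaint just that region so Stable Diffusion blends it in.
        from PIL import Image

        base = Image.open("room.png").convert("RGBA")
        lamp = Image.open("lamp_cutout.png").convert("RGBA")  # transparent background

        base.paste(lamp, (420, 180), mask=lamp)  # the alpha channel acts as the paste mask
        base.convert("RGB").save("room_with_lamp.png")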

    • @chrisrosch4731 • 8 months ago +1

      Is there a way to use the second option and get consistent results, i.e., the same model of lamp placed in different rooms? The problem when I just use Inpaint Sketch is that it will place any generic lamp, no? Does it make sense to train my own LoRA (or maybe use DreamBooth?) to get more consistent results without using Photoshop? Thanks for your help! @@KeyboardAlchemist

    • @KeyboardAlchemist • 8 months ago

      @@chrisrosch4731 Yes, training your own LoRA of the object and then inpainting using that LoRA will definitely do the trick. But if you have limitations regarding training your own LoRAs, then you can also try a different method (not involving a LoRA), which will work but may involve a bit of trial and error. If you have watched Part 2 of my inpainting video (link here for reference: th-cam.com/video/k8FfCicu5G8/w-d-xo.html), I described a method using Inpainting + the ControlNet Reference preprocessor, which will probably get you close to what you want (you can use the same method in Img2Img as well, not just within the Inpaint Anything extension). Be sure to do the following things to increase your chances of success: (1) make sure your reference image and input image are the same size; you will have a much easier time with it, (2) don't put any positive prompts in when you are doing inpainting; you never know which keyword is going to mess with your reference image's style (you can always add keywords back later), (3) make sure your inpaint denoising strength is very high (0.9 - 1.0), (4) make sure your Control Weight is very high (greater than 1.5), (5) set Control Mode = 'ControlNet is more important', and (6) you may need to try a few different models/checkpoints, because the impact of the model on this process is very high. Finally, you will probably need to generate several images with random seeds and hopefully get one that you like. Hope this helps!
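
      For reference, those settings can also be scripted against A1111's built-in web API (launch with --api). A hedged sketch: the /sdapi/v1/img2img endpoint and its core fields are standard, but the ControlNet 'alwayson_scripts' argument names vary between extension versions, so treat that part as an assumption:

        import base64, requests

        def b64(path):
            with open(path, "rb") as f:
                return base64.b64encode(f.read()).decode()

        payload = {
            "init_images": [b64("input.png")],
            "mask": b64("mask.png"),           # white = region to repaint
            "prompt": "",                      # (2) no positive prompt
            "denoising_strength": 0.95,        # (3) very high
            "alwayson_scripts": {              # (4)-(5) ControlNet Reference; arg names
                "controlnet": {"args": [{      # depend on the extension version
                    "module": "reference_only",
                    "image": b64("reference.png"),
                    "weight": 1.6,
                    "control_mode": "ControlNet is more important",
                }]}
            },
        }
        r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
        r.raise_for_status()  # response JSON carries the base64-encoded result images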

    • @chrisrosch4731 • 7 months ago

      Hey Keyboard Alchemist. First off, thank you so much for your detailed answer. Honestly, it took me quite a while to reply because this is new to me and I first had to dig a little deeper to understand your reply.
      Now, if I understood you correctly, training my own LoRAs on specific models of furniture or species of plants should yield the most consistently good results. I would love to go for the option that produces the best results without a lot of manual refining. My goal is to have my clients use this, so they can upload images of their own apartments and get good results where the furniture looks realistic, both in terms of the actual model of furniture (e.g., a specific Ikea lamp) and in terms of the furniture looking realistic in the image.
      I watched your Part 2 Inpaint Anything video twice but was not able to get the Inpaint Anything tab to show. Did they change the appearance? Is it now integrated into the tab below, where you have to click the checkbox to enable Unit 0, Unit 1, etc.? Maybe I have to uninstall everything to make it show again? Or maybe that is not needed anymore and the ControlNet tab below yields similar results?
      Really grateful for the information you provide, and if anything comes to mind that could work for my experiment, please let me know. I don't know if that is an option for you, but if we could hop on a quick 5-minute Discord call and talk about possibilities, I would be so happy. Also willing to pay you for your time, of course (also beforehand if you wish).
      Cheers,
      Chris
      @@KeyboardAlchemist

  • @joeskis • 4 months ago

    Do you know what to do if we're getting an error during Run Segment Anything: "cannot set version_counter for inference tensor"?

  • @dulay28 • 17 days ago

    Can it use a reference image of the clothes that I want?

  • @xunbaoxinwen • 9 months ago

    I tried installing the "Inpaint Anything" extension on Colab, but it doesn't show on the main page. Can anyone help?

  • @u.google • 9 months ago +1

    How do you send the masked photo from the Inpaint Anything tab to img2img? There is no 'Only masked padding, pixels' setting in it, so I want to move it to img2img inpaint. Is there a way to do that? Please help.

    • @KeyboardAlchemist • 9 months ago +1

      Yes, there is a way to do this, if I'm understanding your question correctly. After you create your mask, on the left-hand side there is a 'Mask Only' tab. In that tab, you can click the 'Get Mask' button, then click 'Send to Img2Img Inpaint', which will bring the mask to the 'Inpaint Upload' tab within Img2Img. I hope this helps you. Cheers!

  • @Gh0sty.14 • 9 months ago

    For some reason it's not adding any of the inpainting models I already have.

  • @MrFreeagent505 • 9 months ago

    Hi, I haven't been able to run Inpaint Anything. I get "ImportError: cannot import name 'YOLO' from 'ultralytics' (unknown location)". I've spent a good bit of time looking but can't find a solution to what I'm doing wrong. Thank you if anyone can help.

  • @JeanDeLaCroix_ • 9 months ago +1

    When I use the standard version of inpaint in img2img, I get results that are heavily influenced by the masked area. For example, if I want to change the clothes and the character is wearing white, it's hard for me to replace it with red without going through Photoshop. Does this method help to ignore what's under the mask a bit more?

    • @KeyboardAlchemist • 8 months ago

      I'm just guessing here, but it sounds like you might be using the Masked Content = 'original' setting. If you want to change something in the regular inpaint interface, you should be using Masked Content = 'fill'. If you use Inpaint Anything, there is no Masked Content selection, so in a sense this extension makes it a bit easier for you by taking away some of the options that could cause you problems. I hope this helps you. Cheers!
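
      For reference, when scripting inpainting through A1111's web API, the Masked Content choice maps to the integer 'inpainting_fill' field of the img2img payload; a minimal sketch of that mapping as I understand the UI order:

        # Masked Content dropdown -> 'inpainting_fill' value for /sdapi/v1/img2img
        MASKED_CONTENT = {
            "fill": 0,            # pre-fill the masked area (best when changing things)
            "original": 1,        # start from the original pixels under the mask
            "latent noise": 2,
            "latent nothing": 3,
        }
        payload_fragment = {"inpainting_fill": MASKED_CONTENT["fill"]}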

    • @JeanDeLaCroix_ • 8 months ago +1

      @@KeyboardAlchemist Thanks! I'll test that :)

  • @WaseemOnlines • 4 months ago

    I get a black image when I press Run Segment Anything; any idea why?

  • @philliphartman2381 • 9 months ago

    Why does processing take so much longer with this app? Isn't there a way to control resolution?

  • @relaxation_ambience • 10 months ago +1

    Hi, from your examples I see that you inpaint already-existing things. But if, for example, I want a parrot on her shoulder, will it inpaint that?

    • @KeyboardAlchemist • 10 months ago +1

      The short answer is yes, you can inpaint an area in the image and prompt for something that doesn't already exist there, but the results you get will be inconsistent. You might get lucky and get the result that you want, or you might re-roll a bunch of times and still not get it. A technique that you can use is: put your image into Photoshop or GIMP, overlay or draw a parrot on her shoulder, then bring that edited image into A1111 and inpaint the parrot portion of the image. You will have a much easier time. Hope this helps.

  • @TransCanadaPhil • 6 months ago

    All I get is a black background; not sure what I'm doing wrong. I'm putting in a prompt, but whatever I mask out and generate always comes back as just a black background.

  • @RSV9 • 10 months ago +3

    It is a good tool for complex masks and the results are very good, but on my computer it is extremely slow. With A1111's normal inpainting it's much faster and it also gives good results, so I don't know why this extension is so slow. I only have an NVIDIA GeForce RTX 3050 Ti 4GB; maybe Google Colab could be faster.
    Good job, thanks.

    • @KeyboardAlchemist • 10 months ago

      Thank you!

    • @kartikashri • 9 months ago

      You can reduce the sampling steps to 30 or 20 to generate faster, but note it might reduce quality.

  • @GES1985 • 1 month ago +1

    Is there a way to take an item or jewelry from one picture and put it into another? Or is that just something to do in Photoshop?

    • @KeyboardAlchemist • 29 days ago

      I made a video previously about this, check it out here: th-cam.com/video/akzu3R7lDZ4/w-d-xo.html. I hope this helps.

  • @jonorgames6596 • 9 months ago

    I'm on an AMD GPU. It gives me errors: "... Cannot set version_counter for inference tensor..."

  • @rudeoff • 6 months ago

    Did you change the nationality of your AI voiceover halfway through this video?

  • @bingbang9643 • 9 months ago +1

    I've been using Midjourney, but the wide range of options in Stable Diffusion makes me feel I'm missing out. Can you guys comment with all the reasons why Stable Diffusion is better? Thanks... I have an RTX 3060 and a 1060 Ti, but I've heard those GPUs are not good enough, so I didn't even bother to try installing Stable Diffusion.

    • @KeyboardAlchemist • 9 months ago +1

      An RTX 3060 (assuming the 8GB card) is more than enough for Stable Diffusion. There are some applications where more VRAM is better, but overall, with 8GB you can do a lot with Stable Diffusion.
      Personally, I just don't want to pay for Midjourney, and I don't feel like doing my image generation online. So I run Stable Diffusion locally, and free, on my PC. But there are many different reasons why someone might want to use Stable Diffusion over Midjourney or vice versa, and you can find plenty of those opinions in YouTube videos.

    • @PawFromTheBroons • 6 months ago

      I do everything I want, with very advanced usage, sporting a 2060.
      So you should be fine...

  • @user-gq2bq3zf1f • 7 months ago

    When I run Inpaint Anything in the Stable Diffusion UI, especially when I run inpainting, I keep getting the error "Unexpected end of JSON input". I ran it through Google Labs; what should I do?

  • @gothix114 • 8 months ago

    Just out of curiosity, do you usually have 2 people talking/alternating in your videos?

  • @OptimusGPrime • 10 months ago +2

    So I got this to change the colour of my image's hair, but it keeps changing the hairstyle. How do you get it to keep the hairstyle but only change the colour?

    • @KeyboardAlchemist • 10 months ago

      With this type of simple inpainting, unfortunately, we are at the mercy of RNG for the most part. You can try a couple of things to increase your odds a little: (1) specify the hairstyle you want in the positive prompt (i.e., instead of just saying "pink hair", you can say "long pink hair with broad curls"), (2) similarly, if the model is constantly giving you hairstyles that you don't want, you can specify those styles in your negative prompt, (3) make sure that you don't expand the mask area too much; I would say 2 to 3 clicks of 'expand mask region' should be enough. I hope this helps as a quick fix.
      In future videos, I'll introduce ways to use ControlNet to keep your composition exactly the same as the reference image. So stay tuned for more content later on! Cheers!

  • @novysingh713 • 4 months ago

    Why does only Inpaint Anything use all of my GPU when I upload any image, and then give an "out of CUDA memory" error?

  • @xyzxyz324 • 7 months ago +1

    Why do the models have "inpaint" in their names? Are there different versions of the models to use for inpainting, i.e., realisticvision vs. realisticvision-inpaint?

    • @KeyboardAlchemist • 7 months ago

      Yes, some models have an inpaint version; not all models do, though.

  • @unoreverseyourmom6119 • 9 months ago

    Great tutorial. Any tips on how to generate naked full-body portraits of myself in different poses? I need really cool pics for my Tinder.

  • @lilillllii246 • 7 months ago

    I use Stable Diffusion locally, and when I press Run Segment Anything in Inpaint Anything, it doesn't generate a mask image. What should I do?

  • @chea9986 • 7 months ago

    I followed all the steps, but the Run Inpainting step gives an "error" message. How do I fix it?

  • @the17bman • 6 months ago

    Need help... I downloaded the models, but when I hit the "Run Segment Anything" button, it just fails almost instantly, saying something about tensor sizes not matching. How am I supposed to fix that?

  • @Yoshenesis • 9 months ago

    Hello, I have original clothing designs. I usually do deformations in Photoshop to fit them to a model, but it's a lot of work. I see that you can change clothes and even people's faces, but I don't know if I can use my own clothes without having to train a model. Is there a method to transfer clothes from one image to another? Greetings.

    • @KeyboardAlchemist • 9 months ago +1

      Hello, thanks for watching! The short answer is, you can do it, but it will take you some trial and error and time. Here is the long answer:
      I have not seen a perfect workflow that will essentially copy a piece of clothing from a reference image to an input image, but the workflow that I showed in this video (th-cam.com/video/k8FfCicu5G8/w-d-xo.html) with Inpainting + the ControlNet Reference preprocessor will get you close (you can do this in Img2Img too). Be sure to do the following things to increase your chances of success: (1) make sure your reference image and input image are the same size; you will have a much easier time with it, (2) don't put any positive prompts in when you are doing inpainting; you never know which keyword is going to mess with your reference clothing's style (you can always add keywords back later), (3) make sure your inpaint denoising strength is very high (0.9 - 1.0), (4) make sure your Control Weight is very high (greater than 1.5), (5) set Control Mode = 'ControlNet is more important', and (6) you may need to try a few different models/checkpoints, because the impact of the model on this process is very high. Finally, you will probably need to generate a bunch of images with random seeds and hopefully get one that you like.
      I hope this helps you. Cheers!

    • @Yoshenesis • 9 months ago +1

      @@KeyboardAlchemist Thanks for such a complete answer; I really appreciate it. I'll take your advice; it's really helpful.

  • @datngo27 • 8 months ago

    I got the error "Segment Anything failed". Anyone know how to fix it? Many thanks.

  • @AntonioDal. • 3 months ago

    Where do you get the original positive and negative prompts? 17:15

    • @KeyboardAlchemist • 3 months ago +1

      Got it from the CivitAI model download page for majicMix v5.

  • @jetson35 • 10 months ago +1

    :O

  • @guillermosepulvedaf • 10 months ago +1

    Hello, I'm trying to find "realisticVisionV30_v30VAE-inpainting", but on CivitAI the file is "realisticVisionV51_v30VAE-inpainting.safetensors"... is it the same version??

    • @KeyboardAlchemist • 10 months ago +1

      Yes, that's perfectly fine. It's just the latest version of the realisticVision model. I have this version too.

    • @guillermosepulvedaf • 10 months ago

      @@KeyboardAlchemist Thanks!!

  • @bazadam6635 • 10 months ago +1

    I see that it's downloading something when I run the inpainting, and it's taking forever to show the results. Downloading something like PyTorch... any help?

    • @KeyboardAlchemist • 10 months ago

      I'm assuming this happened after you clicked 'Run Inpainting'? The first time you ever run this extension, it will download some things in the background, including the inpainting model that you have selected (those files are around 2GB or more), so it will take a few minutes. But after the download is complete, you should be able to see results. I hope it worked for you.

    • @bazadam6635 • 10 months ago

      @@KeyboardAlchemist Figured it out; it was downloading the inpainting model.

  • @michaelbuzbee5123 • 9 months ago

    This is a very good tutorial, but I have come across the "not enough GPU memory" error any time I try to use it, even on something I generated at 512x512. Anyone know of a workaround for this, or do I just have to wait?

    • @KeyboardAlchemist • 9 months ago

      How many GBs of VRAM are you working with? If it's 4GB or lower, you might want to try putting '--lowvram' in your command line arguments. This will enable low-VRAM usage, but it will make your generation slower. Also, if you are not using '--xformers', I would highly recommend adding it to your command line arguments (it makes image generation faster).
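
      For reference, a minimal sketch of choosing those launch flags from the dedicated VRAM that PyTorch reports (assumes an NVIDIA GPU; the thresholds are rough rules of thumb, not official guidance):

        import torch

        if torch.cuda.is_available():
            vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
            if vram_gb <= 4:
                flags = "--lowvram --xformers"
            elif vram_gb <= 8:
                flags = "--medvram --xformers"
            else:
                flags = "--xformers"
            print(f"{vram_gb:.1f} GB VRAM -> suggested COMMANDLINE_ARGS: {flags}")
        else:
            print("No CUDA device found (these flags are NVIDIA-oriented)")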

    • @michaelbuzbee5123 • 9 months ago

      @@KeyboardAlchemist I have 8 on my 5700. I have figured out the workarounds for everything else, so I have included --medvram already. I also hit it when trying to use the SDXL checkpoint.

    • @KeyboardAlchemist • 9 months ago

      @@michaelbuzbee5123 Oh, you have a Radeon card. Unfortunately, I won't be much help with using Radeon cards with Stable Diffusion. I found this Reddit post of someone saying they have success with NMKD's ONNX implementation, which I know nothing about, but the link is here if you want to check it out: www.reddit.com/r/StableDiffusion/comments/106i83w/onnx_only_512x512px_on_amd_card_more_than_that/. I hope you can figure out some workaround.

  • @duskairable • 9 months ago

    At 17:35, I'm curious why you needed to upscale the image to 720x1080 in order to change/fix/add detail. Is that even necessary?
    Why not just keep the same image resolution, change the denoising strength, and optionally upscale the image later?
    In my experiments, changing only the denoising strength is enough to change/fix/add detail (no need to upscale).
    I've tried it this way, and there is no difference in detail between the upscaled image and the non-upscaled image at the same denoising strength;
    the only difference is the image resolution, of course, but the detail on the subject is the same.

  • @arifkuyucu • 4 months ago

    I get the error "Segment Anything failed": return torch.empty_strided(
    TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.

  • @felixmontanez4090 • 1 month ago +1

    What model did you use to make the base image?

    • @KeyboardAlchemist • 29 days ago

      The model is called 'majicMIX realistic', you can find it on CivitAI.

    • @felixmontanez4090 • 29 days ago

      @@KeyboardAlchemist What prompt did you use?

    • @KeyboardAlchemist • 28 days ago

      @@felixmontanez4090 17:10 of the video has all the prompt info that you will need. Cheers!

  • @vpst00 • 9 months ago +1

    Can I use a MacBook Pro with these tools?

    • @KeyboardAlchemist • 9 months ago

      As long as you can successfully install and run Automatic1111 on your Mac, then installing these extensions would be possible too. Best of luck!

  • @panzerkampfwagen1944 • 7 months ago +1

    Alt + mouse wheel = zoom

  • @nihilitys • 8 months ago

    Inpaint Anything doesn't work on AMD GPUs :(

  • @magnos_decimus • 7 months ago

    The Inpaint Anything tool didn't work for me. All I get is a black screen.

  • @awais6044 • 7 months ago

    Make a video where users upload their own image and change clothes, hair, and fashion items using a prompt.

  • @Esendor • 3 months ago +1

    16:10 How are your generations so fast? When I start Run Inpainting, it goes for 10 minutes! Impossible to use.

    • @KeyboardAlchemist • 3 months ago

      When generating your image, take a look at the Performance tab in Task Manager and see whether all of your dedicated GPU memory is maxed out. If it is maxed out and spilling into shared memory, that's when image generation gets very slow. Not sure if this is the case, but it's worth looking into.

    • @Esendor • 3 months ago

      @@KeyboardAlchemist I changed the CUDA settings: disabled shared memory for python.exe in the Nvidia panel. SD then stopped generating with hires fix (not enough GPU memory), which had been working fine before, so I went back to the old settings. I don't understand these things at all. RTX 3060 12 GB.

  • @zerokelvinmedia9955 • 10 months ago +1

    So... basically... it's photoshopping 😊 without Photoshop...

  • @thekotfather • 7 months ago +1

    canvas-zoom, man

  • @uzairansari9222 • 10 months ago

    Tried this. The inpainting procedure has been going on for 20 minutes now. I don't think it's supposed to take this long.

  • @dailyrum2203 • 6 months ago

    Your voice changed partway through.

  • @eminence_ • 9 months ago

    You should add a note that this does not work on AMD GPUs.

  • @stormmage • 9 months ago

    9:40 I would disagree that the Stable Diffusion 2 models are of lower quality than the Realistic Vision V3 models. They all look equally bad. The Realistic Vision V3 models both have the problem that the head is too big for the body, with shoulders and arms that are too small. This distorts the neck, making it look thick and giraffe-like. Without inpainting, the Realistic Vision V3 models would look better, because they have more detail on the skin. The SD models look like they're using a soft mesh instead of skin, and there are errors on the clothes (both are missing necessary support seams / lines). Used as an inpainting model, the RVv3 model did not work: all four images did a terrible job of matching skin tone at the inpainting line, and you can see where her neck is a warmer color than her upper chest.

  • @kallamamran • 10 months ago +3

    OMG, the piano overlay did NOTHING for this video... Great video otherwise ;)

    • @KeyboardAlchemist • 10 months ago

      Thank you for your honest feedback!

  • @12Jerbs • 9 months ago

    Not sure if anything has changed, but Inpaint Anything is pretty useless for me. I can make a mask, set a prompt, and use the RealisticVision inpaint model, and nothing really changes. I tried to change a white top to black = it turns grey; white top to red = pink; etc. My experience is nowhere near what you are showing in the video.

  • @AmoGlobine • 9 months ago

    Don't say it's great without testing it, just because the video seems cool...

  • @-flanders-8975 • 7 months ago

    Ooof, no more background music, please.

  • @heckensteiner4713 • 10 months ago +1

    Try hands next time!

  • @Silverstreamable • 10 months ago +1

    Who's the girl? I want to date her.
    When are we getting Turing-passing robotics x AI?