Guide to Change Image Style and Clothes using IP Adapter in A1111

  • Premiered Jun 30, 2024
  • #a1111 #stablediffusion #fashion #ipadapter #clothing #controlnet #afterdetailer #aiimagegeneration #tutorial #guide
    This video mainly covers uses of IP Adapter in Automatic1111 for applying a style or an outfit based on a reference image.
    00:00:00 Introduction and sample results
    See examples of what we will learn and generate in this video, such as changing an image's style based on a reference image, mixing two images, and changing a person's clothes manually and automatically
    00:00:46 Downloading IP Adapter models
    00:02:06 How to use IP Adapter in A1111 alone
    Understand how IP Adapter works in the background, with an example of generating a robot from a golden sphere, changing settings, and what to expect
    00:04:20 Using IP Adapter with other ControlNets: examples
    00:04:55 Statue face with glasses and hat example
    00:07:07 IP Adapter with OpenPose ControlNet using LCM LoRA: examples
    Generate images faster using LCM combined with IP Adapter and ControlNet
    00:08:12 Mixing two images using IP Adapter and a depth map, or using two IP Adapters
    00:08:50 img2img usage with IP Adapter
    00:09:21 Inpainting example: an older woman's face on the body of a younger woman
    00:12:55 Changing clothes based on a reference dress image: examples
    Using IP Adapter to change clothes manually based on a reference dress
    00:16:19 Automatically detecting and changing clothes using After Detailer
    See the settings needed to make the fashion model in After Detailer work properly, changing clothes automatically based on a prompt without masking them yourself
    Here we see some examples of what we will do in this video.
    GitHub page of IP Adapter
    github.com/tencent-ailab/IP-A...
    Download IP Adapter ControlNet models from
    huggingface.co/h94/IP-Adapter...
    General ControlNet model collection
    huggingface.co/lllyasviel/sd_...
    Download the Realistic Vision model from
    civitai.com/models/4201/reali...
    The DeepFashion model can be downloaded from (along with other After Detailer models)
    huggingface.co/Bingsu/adetail...
    Watch the Stable Diffusion and A1111 guide if you are new to AI image generation in SD
    • Beginners Guide for St...
    Watch the complete guide on ControlNet usage
    • Complete Controlnet Gu...
    Watch the After Detailer guide
    • After Detailer for aut...
    Watch the guide on developing a LoRA model for clothes
    • LoRA Clothes and multi...
    Thanks to all creators from Pexels.com and Freepik.com for the images they provided
    www.pexels.com/
    www.freepik.com/
    Prompts used (all displayed in the video):
    For IP Adapter, mostly simple prompts such as "a robot"; for more complicated prompts I used the Realistic Vision prompt recommendation, which is:
    Prompt:
    RAW photo, SUBJECT, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
    Negative prompt:
    (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation
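As a small illustration, the SUBJECT placeholder in the template above can be filled in programmatically (the `build_prompt` helper is hypothetical, not something from the video):

```python
# Realistic Vision prompt template from the description; SUBJECT is the
# only part that changes per generation.
POSITIVE_TEMPLATE = (
    "RAW photo, {subject}, 8k uhd, dslr, soft lighting, "
    "high quality, film grain, Fujifilm XT3"
)
NEGATIVE_PROMPT = (
    "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, "
    "sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), "
    "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, "
    "wrong anatomy, extra limb, missing limb, floating limbs, "
    "disconnected limbs, mutation, mutated, ugly, disgusting, amputation"
)

def build_prompt(subject: str) -> str:
    """Drop a subject into the recommended positive-prompt template."""
    return POSITIVE_TEMPLATE.format(subject=subject)

print(build_prompt("a robot"))
# -> RAW photo, a robot, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
```

The same pair of strings would be pasted into A1111's positive and negative prompt boxes.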
    Computer Specs:
    Laptop: Legion 5 Pro
    Processor: AMD Ryzen 7 5800H, 3201 MHz
    System RAM: 16.0 GB
    Graphics GPU: NVIDIA GeForce RTX 3070 Laptop GPU 8GB
  • Science & Technology

Comments • 75

  • @dflfd
    @dflfd 7 months ago +1

    thank you, this is a great tutorial! 🤩 it's really amazing how fast these tools have developed over the last year. IP adapter is incredible, can't wait to install it.

    • @AI-HowTo
      @AI-HowTo  7 months ago +2

      Indeed, it's too fast; sometimes it feels scary how fast things are progressing. Hopefully they end up affecting our lives positively in the long run.

  • @BabylonBaller
    @BabylonBaller 5 months ago

    Appreciate the tutorials buddy. I can always count on you!

    • @AI-HowTo
      @AI-HowTo  5 months ago

      Glad to hear it!

  • @BabylonBaller
    @BabylonBaller 7 months ago +1

    Brilliant my friend.. IP Adapter is quite powerful I see

  • @allenraysales
    @allenraysales 6 months ago

    Thank you so much I leveled up thanks to you and your videos. This is an amazing tutorial.

    • @AI-HowTo
      @AI-HowTo  6 months ago

      Great to hear that it was useful, wish you the best in your learning journey.

  • @b4ngo540
    @b4ngo540 6 months ago

    I really appreciate the amount of information and effort you put into each of your videos.
    I enjoy watching these videos even though I'm a ComfyUI user; they are still really useful for how you explain everything in a simple way, and I'm still able to use these tips and pieces of advice in ComfyUI.
    It would be really interesting to see this quality of tutorials as a new series about ComfyUI,
    and I'm down to join your journey going from installing it all the way to the super complicated workflows.
    You don't need to edit the videos so much by adding text notes; just hit record screen and talk into the mic. Everything you say is clear even without these extra notes, though every effort you put in there is appreciated (but it would be better if less editing produced more videos :D )

    • @AI-HowTo
      @AI-HowTo  6 months ago

      Thank you, I will minimize these texts in future videos. I enjoy making these videos too; they are fun, and I am learning while doing them as well. I was hoping to start ComfyUI videos too, but I am short on time; hopefully next month I will do some. Thanks for the encouragement and the notes.

  • @szw7729
    @szw7729 6 months ago

    Thank you!

    • @AI-HowTo
      @AI-HowTo  6 months ago

      You're welcome!

  • @thewebstylist
    @thewebstylist 7 months ago

    Oh wow, wish it wasn't so complex to set up

  • @omgkhazix9442
    @omgkhazix9442 6 months ago

    Thank you, it's been nothing but a pleasure watching and learning from your videos. Where could I possibly contact you for some additional questions regarding A1111? I would love for you to be my mentor.

    • @AI-HowTo
      @AI-HowTo  6 months ago

      Thank you, it's nice to read that some people find these videos useful, and hopefully they contain some useful info here and there. Unfortunately I cannot, at the time being, allocate more time for the channel or do any private or business contacts beyond these comments. I wish I could allocate more time for the channel, and hopefully I will in the future, because making these videos is really fun, and I love this topic too.

  • @carloosmartz
    @carloosmartz 4 months ago

    Hello my friend, I want to create a LoRA for clothes. I want it so that if I use LoRA weights between -1 and 1, incrementing the weight in the prompt makes the model fatter. I mean, do you know how to use numbers in the dataset .txt files to teach the AI that -1 is skinny, 0 is fit, and 1 is fat when I use the prompt?

  • @GabryBSK
    @GabryBSK 5 months ago

    Thank you! I have a beginner question for you: what is the best way today to get consistent characters in terms of face and body physique?

    • @AI-HowTo
      @AI-HowTo  5 months ago +1

      No idea... I think it is a composition of multiple methods and techniques though. IP Adapter models are good, along with ControlNets/segmentation/composition, etc., but for me, I find using a LoRA model of a face/body with After Detailer to give the most accurate and lively results, compared to all other methods... but it takes a lot of time to train a model that is really good for a certain character.

  • @vishalchouhan07
    @vishalchouhan07 6 months ago +1

    Hi.. can you create a tutorial for transferring interior design style from one image to another using IP Adapter? I saw a video where multiple design styles were transferred onto a single source image (for example a drawing room picture) to create various design options for the same room (as per the reference design style images)

    • @AI-HowTo
      @AI-HowTo  6 months ago +1

      The same principle applies for architectural styles or any other style; the reference image style will impact the generated image even in architecture... I might be doing something in ComfyUI soon, but I might not have enough time to do so, not sure yet, but I'll try.

    • @vishalchouhan07
      @vishalchouhan07 6 months ago

      @@AI-HowTo Thanks in advance ☺.. I will surely wait for the tutorial.

  • @lilillllii246
    @lilillllii246 4 months ago

    Thanks. A bit of a different question: is there a way to naturally composite the character files I want into an existing background image file rather than using a text prompt?

    • @AI-HowTo
      @AI-HowTo  3 months ago

      I don't fully understand the question, but for more complicated image synthesis, ComfyUI and segmentation could be the way to go, as it gives us more workflows and tools to work with for complex scenes. You can google IP Adapter for ComfyUI for more details, or segmentation of a picture; hopefully that guides you somewhere useful.

  • @awais6044
    @awais6044 6 months ago

    At the production level we don't do a manual inpainting process. What do we do? The user uploads an image and enters text to change only the outfit.
    Or we use a face-swap method.
    Thanks

    • @AI-HowTo
      @AI-HowTo  6 months ago +1

      I agree that manual inpainting is not practical for real world apps, it is only fine on a personal level.

  • @nicolaseraso162
    @nicolaseraso162 a month ago

    Hey bro, do you know how to install insightface in Automatic1111 (I use PaperSpace) in order to use the option of Face ID in IP Adapter?

    • @AI-HowTo
      @AI-HowTo  a month ago

      Not sure; for me it worked without any problems. I just downloaded the IP Adapter Face ID models into the ControlNet models folder and the Face ID LoRAs into the LoRA folder, and made sure ControlNet was up to date, and it automatically downloaded the necessary extra models related to insightface, such as buffalo_l. Not sure why some have trouble with this while others don't.

  • @Valentina-zx1pi
    @Valentina-zx1pi 3 months ago

    Thank you! I have a question.. It worked, but the dress looks blurry; how could I solve this? What VAE do I need to download? Does the dress that I put as input change the quality?
    Also, if you could please help me: it takes 30 minutes to create an image, while for you it takes seconds! Is there anything I can do to solve this?
    Thank you in advance!

    • @AI-HowTo
      @AI-HowTo  3 months ago

      You are welcome.
      1- A VAE is only required if it is not baked into the model... this is mentioned on the model page; when you download it, for example from Civitai, it tells you which VAE you need, if any.
      2- Blurriness might happen if the inpaint area is small (for example, when you try to inpaint a large image using a small inpaint area, which results in resizing up and losing quality)... it may also sometimes result from using low denoising levels; changing the model may help. The quality of the image used in the IP Adapter doesn't change the output much, because the image is just analyzed.
      3- Generation time depends on your graphics card; I used an RTX 3080 8GB Laptop GPU. If you have a low-end graphics card then it becomes slow... but 30 minutes means there is something wrong. If your graphics card is low-end, then it's best to use free online tools for ComfyUI... or install ComfyUI on your computer; it will probably give you better performance, and you will find videos on YouTube about ComfyUI usage.

  • @user-gq2bq3zf1f
    @user-gq2bq3zf1f 7 months ago

    In the video, the outfits are slightly different from the reference; is it possible to make them exactly the same?

    • @AI-HowTo
      @AI-HowTo  7 months ago +1

      Currently, no, it is impossible in Stable Diffusion using ControlNets or simple methods, as far as I know; Stable Diffusion will always introduce a certain chaos element into the generation, this is part of how it is designed. The best match of clothing could be achieved by training a LoRA or Dreambooth as shown in this video th-cam.com/video/wJX4bBtDr9Y/w-d-xo.htmlsi=FCafRmzr8675RBuZ , but this process takes a long time and may not work on the first attempt, and even then, it may not give a 100% match... 3D tools such as Blender are the only way to get a 100% match of clothes so far.

  • @quotesspace1713
    @quotesspace1713 3 months ago

    ComfyUI please please 🙏🙏

  • @tonyibarra1523
    @tonyibarra1523 7 months ago +1

    Hello, thank you for such great videos!
    I watched your videos in order, from oldest to latest, and nothing was working for me: I had been trying to follow along with what you were doing for several months, and I just assumed that back in August you were using SD1.5 because SDXL was not production ready. So I was simply replacing "1.5" with "XL".
    Now I notice that even your latest videos are still on SD1.5, and that might be the reason nothing works for me :(
    Can you please explain why you're using 1.5 instead of XL?
    And yes, maybe it could be an important subject that no one has covered so far: why ControlNet works so well in 1.5 and nothing seems to work on XL, except for Lineart and a couple more. But even Canny/OpenPose are not working, at least for me.
    Thanks in advance, it's been frustrating to follow and not get the same results!

    • @AI-HowTo
      @AI-HowTo  7 months ago

      Sorry to hear that; it is not always easy to get good results from Stable Diffusion. Lots of testing is required to get the hang of it, and in the videos I try to explain the concepts and the overall settings, which might require some adjusting depending on the prompt/subject/video you are using.
      I still use SD 1.5 because it is lightweight and fast compared to SDXL; if my PC were more powerful, I would not use SD 1.5. I expect results would be better in SDXL for the same videos though.
      You should consider downloading ControlNets for SDXL, which have been updated in recent months and are better suited for SDXL than the older ControlNets huggingface.co/lllyasviel/sd_control_collection/tree/main
      I also suggest that you turn on Preview mode when using ControlNet, to see the output of the preprocessor and check whether it is detecting things correctly... notice, for example, in my first usage in this video, I didn't get the robot in the first test of the ControlNet, and had to lower the ControlNet weight down to 0.5... in some cases, you might also need to start ControlNet from 0.25 (Starting Control Step).
      Image sizes in SDXL should be 1024x1024 to get good images, unlike SD 1.5 which uses 512x512, 512x768, or 768x1024; these too play a great role... this could have contributed to the problem too, not sure.
      Also consider reinstalling A1111 anew, in case some conflicts are happening, possibly causing things to get stuck somewhere and producing illogical results.

    • @tonyibarra1523
      @tonyibarra1523 7 months ago

      @@AI-HowTo Thank you so much for your answer. I will address each part:
      - Your videos are great and you explain very well. My frustration is about not being able to replicate what you seem to do so easily :)
      - I have a very old computer, not powerful at all. I do SD on Runpod, where I pay about $0.40 per hour. It's cheaper than getting a new PC/GPU !
      - I have downloaded and installed all those SDXL Controlnet models, I've been working on this for 3 weeks and I have made matrices in Excel with what works and what doesn't. Even the most basic OpenPose is not working :(
      - I use preview mode for everything, so I know the Pre-processor is doing its job. I will try tweaking the weight some more and see if I get anything useful.
      But even following your first ControlNet video in SDXL doesn't work. Yesterday I tried 1.5 and it works. I switch the same to SDXL (model and CNs) and nothing. A/B testing shows the issue seems to be in the CN models, and I have tried them all: several Canny, OpenPose, Depth...
      - I understand the difference in resolution, and I'm trying the same images in CN, 512x512 when in 1.5 and 1024x1024 when in XL, just to make 100% sure.
      - A1111 is installed in Runpod template and it works, it's updated and I run the updates everyday before starting.
      I'd love to share my findings and comparisons with you if that would make any sense. Even give you access to my Runpod account so you can A/B like I do.
      Do you have a Discord or some other way we could talk about this, send you some screen captures and shots?

    • @AI-HowTo
      @AI-HowTo  7 months ago

      I see; hopefully things work out for you. Other channels may have more info about ControlNet in SDXL that may help... unfortunately I don't have Discord, nor can I allocate more time than I do now for this channel. It is really fun to make these videos and answer questions sometimes, and I was hoping to make more and allocate more time for the channel, but I'm unlikely to be able to for another year at least, given how my life is running now :). Wish you the best of luck; if I find a question here and I know its answer, I will reply here.

  • @novysingh713
    @novysingh713 4 months ago

    How can I replace the same dress on the model? Why do the dress style and design change whenever you try to replace the dress on the model?

    • @AI-HowTo
      @AI-HowTo  4 months ago

      When you replace the dress, each time you often get a new dress style, unless you used a LoRA of a specific dress inside the detailer prompt, which will render the new dress using the same style, as Stable Diffusion will always create new random elements with each new seed (if I understood you correctly).

  • @moulichand9852
    @moulichand9852 2 months ago

    Is there any script available without using the web UI?

    • @AI-HowTo
      @AI-HowTo  2 months ago

      The web UI is built on top of Python scripts, so everything in Stable Diffusion image generation or training is based on scripts and can be automated, but I have not used that, unfortunately, so I don't have enough expertise to guide you on it.
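      For scripting outside the web UI, one common route is the diffusers library rather than A1111's internals. A minimal sketch under that assumption (the model IDs, the "cuda" device, and the file paths are placeholders; the video itself uses A1111, not diffusers):

```python
def generate_with_ip_adapter(reference_path, prompt, scale=0.6):
    """Generate an image guided by a reference image via IP Adapter."""
    # Heavy imports are kept inside the function so the sketch can be
    # read (and imported) without torch/diffusers installed.
    import torch
    from diffusers import AutoPipelineForText2Image
    from diffusers.utils import load_image

    # Any SD 1.5 checkpoint works here; this one is a placeholder.
    pipe = AutoPipelineForText2Image.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    # Same weights file the A1111 ControlNet extension uses for SD 1.5.
    pipe.load_ip_adapter(
        "h94/IP-Adapter", subfolder="models",
        weight_name="ip-adapter_sd15.bin",
    )
    pipe.set_ip_adapter_scale(scale)  # roughly analogous to ControlNet weight
    reference = load_image(reference_path)
    return pipe(prompt=prompt, ip_adapter_image=reference).images[0]
```

      Usage would look like `generate_with_ip_adapter("golden_sphere.png", "a robot").save("robot.png")`, mirroring the robot-from-a-sphere example in the video.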

  • @froilen13
    @froilen13 3 months ago

    Does this work for Forge?

    • @AI-HowTo
      @AI-HowTo  3 months ago

      Yes, all ControlNets work in Forge too.

  • @AiNomadArt
    @AiNomadArt 3 months ago

    Wondering if it can fix deformed hands by giving it a hand photo

    • @AI-HowTo
      @AI-HowTo  3 months ago +1

      :) I hope so, but of course it doesn't... After Detailer (hand model), ControlNets, etc. are the way to go with hands.

    • @AiNomadArt
      @AiNomadArt 3 months ago

      @@AI-HowTo I have tried the ADetailer hand model with the depth hand refiner, but I keep getting similarly bad hands even when I set a high denoise; it nearly gave me a heart attack....

  • @damned7583
    @damned7583 2 months ago

    where do I download the ip_adapter_clip_sd15 processor?

    • @AI-HowTo
      @AI-HowTo  2 months ago +1

      I think it is (ip-adapter_sd15.bin) ... all 1.5 models are in huggingface.co/h94/IP-Adapter/tree/main/models

    • @damned7583
      @damned7583 2 months ago

      @@AI-HowTo I work with Google Colab, could you tell me which folder to place this file in?

    • @AI-HowTo
      @AI-HowTo  2 months ago

      I think it should be the same as the local installation folder, which is the ControlNet models folder; on my local installation that is stable-diffusion-webui\extensions\sd-webui-controlnet\models ... but I think A1111 also looks inside the stable-diffusion-webui\models\ControlNet folder as well.
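      A small sketch of the two folder locations described above (paths assumed from a default local install; on Colab or Runpod, `webui_dir` would point at wherever the web UI was cloned):

```python
from pathlib import Path

# Assumed default install location; adjust for Colab/Runpod setups.
webui_dir = Path.home() / "stable-diffusion-webui"

# Primary location scanned by the sd-webui-controlnet extension:
extension_models = webui_dir / "extensions" / "sd-webui-controlnet" / "models"
# A1111 also scans its shared ControlNet models folder:
shared_models = webui_dir / "models" / "ControlNet"

# Either folder is a valid destination for ip-adapter_sd15.bin:
for folder in (extension_models, shared_models):
    print(folder / "ip-adapter_sd15.bin")
```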

  • @HeinleinShinobu
    @HeinleinShinobu 6 months ago

    where to download deepfashion2 model for adetailer?

    • @AI-HowTo
      @AI-HowTo  6 months ago +1

      huggingface.co/Bingsu/adetailer/tree/main

    • @AI-HowTo
      @AI-HowTo  6 months ago +1

      huggingface.co/Bingsu/adetailer/resolve/main/deepfashion2_yolov8s-seg.pt?download=true

    • @HeinleinShinobu
      @HeinleinShinobu 6 months ago

      @@AI-HowTo thanks!

  • @odev6764
    @odev6764 6 months ago

    Is it possible to do this in ComfyUI?

    • @AI-HowTo
      @AI-HowTo  6 months ago +1

      Yes, it's best done in ComfyUI; it gives you more options, and you can achieve better workflows there.

    • @odev6764
      @odev6764 6 months ago

      ​@@AI-HowTo I've tried many ControlNets with segmentation, IP Adapter, and OpenPose, to input a picture and pass any clothes as reference, but it always changes the clothes a little bit, and the face is not so good. Do you know any way to take an image of a person and change just their clothes, keeping all the details on the clothes?

    • @AI-HowTo
      @AI-HowTo  6 months ago +1

      I don't think keeping all details is possible with SD right now... currently there is great research on this subject without using SD, but it is not open source yet, unfortunately, like this one humanaigc.github.io/outfit-anyone/ ; hopefully soon we see something like this available as open source. For now SD cannot do that with a high level of accuracy, as far as I know.

    • @odev6764
      @odev6764 6 months ago

      @@AI-HowTo I saw this project but they haven't made it open source. The only way I believe it could be done is fine-tuning an SD model to do it, but that requires a huge dataset.

    • @AI-HowTo
      @AI-HowTo  6 months ago

      Yes, I think with proper training good results can indeed be achieved, but it requires lots of resources and trial and error; hopefully soon we get something that works out of the box or requires less training and fewer resources.

  • @ibrahimismaeeldawood
    @ibrahimismaeeldawood 5 months ago

    I did the same, so why was my image not converted? I added an image of a KIA car and in return got a young girl.

    • @AI-HowTo
      @AI-HowTo  5 months ago

      You might have used a light model; try using the plus model. I suggest watching the video carefully: IP Adapter basically just describes the image and affects the output based on an accurate image description and the injection of that description into the generation... try other examples to detect where the root cause of the problem is.

  • @kallamamran
    @kallamamran 7 months ago

    LCM LORA is useless for final results. Quality is NOT good enough

    • @AI-HowTo
      @AI-HowTo  7 months ago +2

      With Euler a it gives good quality, but not as good as normal generation indeed. I guess this is a compromise between speed and quality, so it will have its own use cases for videos or preparing content quickly, but for quality results the slower normal generation seems inescapable.

  • @erenliify
    @erenliify 3 months ago

    IP Adapter plus face / inpaint is not working for me. I'm trying to inpaint the face and set everything the same as you, but the result is the dumbest robotic face. Not even a face...

    • @AI-HowTo
      @AI-HowTo  3 months ago

      Not sure; make sure you are selecting (Whole picture) for the Inpaint area option instead of (Only masked), this worked better for me.... usually when I inpaint, I select only masked, but with IP Adapter, results look different.

    • @erenliify
      @erenliify 3 months ago

      @@AI-HowTo All settings are the same as yours but it's not working. Then I changed the model and reduced denoising strength to 0.4; now it's better but still not working properly like yours :(

    • @AI-HowTo
      @AI-HowTo  3 months ago

      But 0.4 denoising strength will not significantly change the style of the face; it will change it, but not enough to make it very different... in general, these models don't work as we always hope; we need to keep trying to figure out something that works for us.

  • @googleyoutubechannel8554
    @googleyoutubechannel8554 5 months ago +1

    I could only follow this because I've used A1111 quite a bit and know how models are set up. You skipped over an enormous amount of critical context and basic information on folder structures; someone trying to follow this without expertise in SD would be completely lost, FYI. You're too close to the subject and are not showing basic empathy.

    • @AI-HowTo
      @AI-HowTo  5 months ago +1

      Thanks for your input; correct, in newer videos I often try to reduce the amount of information that I may have mentioned in previous videos, such as in the ControlNet guide video or others. I still need to learn a lot in order to make videos engaging and reduce repetition while still being useful to the majority of people at the same time.

  • @godpunisher
    @godpunisher 6 months ago

    We are living in a very interesting time. We have seen the birth and rise of AI. I do not wish it, but this may be the last era of humans.

    • @AI-HowTo
      @AI-HowTo  6 months ago

      If AI is not properly regulated and oriented to help all people, it will definitely be a big problem in the near future, especially what is now known as Artificial General Intelligence, which ChatGPT seems to be close to achieving.

  • @masterzed1
    @masterzed1 2 months ago

    You edited too much......