LoRA Clothes and multiple subjects training for Stable diffusion in Kohya ss | Fashion clothes

  • Published Jul 4, 2024
  • #stablediffusion #A1111 #AI #Lora #koyass #sd #clothes #cloth #dress #fashion #clothtraining
    This video discusses a possible method for training clothes in Stable Diffusion using Kohya ss LoRA models, which can be useful for fashion businesses, clothing stores, or other users.
    The goal here is to train objects such as clothes: take existing clothes, photographed on plastic mannequins or on their own, and dress them on a new real person (or AI-generated person) in different poses/backgrounds/settings.
    It is possible to train clothes from the clothing pictures alone, without mannequins or existing models, but the results are not as good. With mannequins the results are better, and we can automatically remove the mannequin from the generated images with the help of captioning and proper dataset preparation.
    The video explains multiple-subject training and the object training method in general, and compares results from regularized and non-regularized sets.
    Notes:
    1- We can use 1 to 20 images for training; the more the better.
    2- Clothes should ideally be seen from different angles and as they are worn on a plastic mannequin or a real person.
    3- The face of the mannequin or person must not repeat across most images, to avoid learning it too.
    4- We can change the clothing colors afterwards using prompts in most cases.
    5- Regularization can make the model more flexible but reduces resemblance slightly.
    6- Training multiple subjects is possible, but it can be awkward and less flexible than single-subject training because each subject may require a different number of training steps.
    7- Use After Detailer to get good faces in full-body shots of your training targets.
    8- It is possible to generate more than one sample prompt in Kohya ss by putting one prompt on each line of the sample image generation box (see the example below).
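    For example, putting two lines like these in the sample box (the prompts and the trigger word mydress_tw are placeholders) would make Kohya generate two test images at every sampling point:

        photo of a woman wearing mydress_tw dress, full body, standing, simple background
        close-up photo of mydress_tw dress on a mannequin, studio lighting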
    Kohya ss repository for training
    github.com/bmaltais/kohya_ss
    Birme for resizing/trimming
    www.birme.net/?target_height=768
    Sorry, I didn't keep the exact set of images I used for classification in this video, but I took a subset of those: huggingface.co/datasets/AIHow... As explained in the video, class images should ideally be clothes, but I wanted to reduce the effect of the mannequin (and disrupt the 1girl token), so I used women with clothes on. This works too, since the basic role of class images is to disrupt training and reduce overfitting. Usually, though, the class must match the training image class (you can also not use class images at all).
  • Science & Technology

Comments • 210

  • @activemotionpictures
    @activemotionpictures 10 months ago +1

    Wow, this is the most insightful video about LoRA training. Thank you for sharing!

    • @AI-HowTo
      @AI-HowTo 10 months ago

      Thank you for your words, hopefully it will be useful to some.

  • @daffatahta9059
    @daffatahta9059 11 months ago +1

    Nice work!

  • @ariftagunawan
    @ariftagunawan 10 months ago +1

    This is the guide I needed. Thank you, master...

    • @AI-HowTo
      @AI-HowTo 10 months ago

      You are welcome, I hope it will be useful to you or give you some clues on how to proceed with similar topics.

  • @JavierPortillo1
    @JavierPortillo1 11 months ago

    Thank you! This is very helpful!

    • @AI-HowTo
      @AI-HowTo 11 months ago

      Glad it was helpful!

  • @CBikeLondon
    @CBikeLondon 11 months ago +1

    Excellent

  • @rbscli
    @rbscli 11 months ago +3

    Awesome video, it's what I'm trying to train at the moment. You can use two lines in the samples so you can see two or more examples each epoch in Kohya training.

    • @AI-HowTo
      @AI-HowTo 11 months ago +1

      Great tip! Thank you.

  • @TheCndra
    @TheCndra 8 months ago

    Very nice tutorial, thanks.

    • @AI-HowTo
      @AI-HowTo 8 months ago

      You are welcome!

  • @rolgnav
    @rolgnav 11 months ago

    Awesome video! Thank you for taking the time to make it for us! Some questions: I'm trying to make some cultural clothing LoRAs and was using pictures of clothing on a mannequin (no head or arms). I only have a front and a back picture for a specific color, but there are about 10 different colors with pretty much the same design (about 20 pictures total: 10 front and 10 back, same design, just different colors). So by not describing the colors, will it just randomize the colors? And everything I want to take out, like the plain white backgrounds, I just caption it? Do I also caption no arms, no head, and mannequin? Basically, in my previous attempt before seeing your video, some of my generated pictures ended up having no arms lol. Thank you in advance!

    • @AI-HowTo
      @AI-HowTo 11 months ago +1

      You are welcome. It is a purely experimental process, but training clothes alone, without seeing how they fit on a mannequin or a person, will not produce equally good results; anything that repeats will have the strongest effect. If you have different colors, you should caption that, and yes, the white background must be captioned. When the head is out of frame, it must be captioned too, such as when cropped, and no arms must be captioned as well. Regarding the output, I really cannot tell about colors, because SD is somewhat stochastic, but in general, if I mix different objects of the same class, it might get some average of them and randomize colors. Having different colors of the same object might be useful for capturing the shape of the object, I believe. Just continue experimenting; it is really mostly trial and error, then repeat.
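      As an illustration only (the trigger word and tags below are hypothetical), a caption file for one training image might read:

          myculturaldress, red dress, floral pattern, on a mannequin, no head, no arms, plain white background, front view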

    • @rolgnav
      @rolgnav 11 months ago +1

      @@AI-HowTo Thank you!

  • @Wozner
    @Wozner 11 months ago

    I love you! Thank you so much

    • @AI-HowTo
      @AI-HowTo 11 months ago

      You're welcome!

  • @migueld8970
    @migueld8970 10 months ago

    I was looking for how to do this. Thanks!

    • @AI-HowTo
      @AI-HowTo 10 months ago

      You are welcome, hopefully it is useful.

  • @carloosmartz
    @carloosmartz 4 months ago

    How do I add parameters in the dataset? In the captions maybe? I want a LoRA that makes fit people when I set 0 in the prompt and makes fat people when I increment this number. I'm doing the training with 3 different folders: 1 skinny, 1 normal, and 1 fat... should I just add a number in the captions?

    • @AI-HowTo
      @AI-HowTo 4 months ago +1

      As far as I know, captions are just used for describing the object; we don't add numbers or parameters there, just keywords. So we should have a Skinny_Shape_Triggerword, for instance, in every text caption in the skinny folder, another trigger word in the second folder's captions, and so on, using a similar file structure to the one I described in this video. The training, however, should ideally follow a style training guide (as in th-cam.com/video/RT2jj-5t8x8/w-d-xo.html) mixed with this video, so you train a style, not an object. Then start experimenting; this could require you to build tens of LoRA tests to achieve the results you want, or it could work out from the first test too. The resulting LoRA, for instance, will only affect body shape; then use After Detailer to automatically inpaint the face with the target face you want, or ReActor if you want to use faceswap methods, as a LoRA can only hold limited information and we must combine multiple techniques to produce the final output.
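      As a rough sketch of the folder layout that idea implies (folder and trigger names are hypothetical; in Kohya ss the leading number in each image-folder name is the per-image repeat count):

          img/
              10_skinny_shape_tw woman/   <- images + captions containing skinny_shape_tw
              10_normal_shape_tw woman/   <- images + captions containing normal_shape_tw
              10_fat_shape_tw woman/      <- images + captions containing fat_shape_tw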

    • @carloosmartz
      @carloosmartz 4 months ago

      @@AI-HowTo Thank you, nice videos btw!

  • @mexa31416
    @mexa31416 11 months ago

    Thank you for doing this great video!!! ...but I can't find a good tutorial to install the Kohya LoRA webui 👉🏻👈🏻

  • @stanislavchernichkin1954
    @stanislavchernichkin1954 7 months ago +2

    Have you tried the Regional Prompter? As far as I can see, this is a good way to separate faces from clothes: you can specify a face in one region and clothes in another.

    • @AI-HowTo
      @AI-HowTo 7 months ago

      Thanks for the tip. I have not tried it before, but I will check it out; based on what I read about it on GitHub, it looks like a very interesting extension with many uses, thanks.

    • @valorantacemiyimben
      @valorantacemiyimben 7 months ago

      Hello. How can we make modeling photographs that contain only dresses appear on the model?

  • @spasovan
    @spasovan 8 months ago

    Hi and thanks! It is very difficult to find the information you provide presented in such a concise way!
    I am working on a case where I am trying to produce images containing a relatively simple symbol which is defined by its geometry (like a swastika, cross, or peace symbol). The geometry does not need to be perfectly matched (e.g. a cross could have different proportions or variations, but it should always be recognized as a cross). The model should be able to generate the symbol in different locations and different appearances - e.g. it could be sprayed on a wall in any color, it could be a tattoo on skin, or a symbol on a flag. Ideally, I should be able to produce images of this object with different perspective distortions (e.g. if you observe a wall with a symbol from an angle, it looks distorted).
    Could you give me some advice? Like what images to include in the training? Should I crop to the symbol in order to get rid of the rest (e.g. if I have an image of a wall with this graffiti and other graffiti, crop only to the symbol of interest)? How should a prompt look for this image (an example image could be a wall with graffiti, some trees on the side, and another house in the background)? For the training set: should I try to include images of the most common scenarios (e.g. graffiti with the symbol, and flags)? Or would it be enough to have images from 3D renders with variations (such as different perspectives, colors, zoom levels, etc.)? Or both? And would the task I describe require a large training dataset?
    Thank you very much!

    • @AI-HowTo
      @AI-HowTo 8 months ago

      You are welcome.
      I think it is possible. Since you are training a symbol, the class name would be "symbol". If you want to create a regularized model (which gives more flexibility in my opinion), then the class images should be various symbols generated by the checkpoint used, for instance. The dataset doesn't need to be large, since a symbol is not a complex concept; what matters is that the symbol appears in all the images, at different scales, with different or simple backgrounds. The symbol in the training dataset must be large enough to be recognized across all images, and the backgrounds should ideally be different or simple, so that the LoRA captures the symbol as the only object that repeats. Both methods work (3D renders or the scenarios you describe), though 3D renders with simple backgrounds can produce smoother results and make training easier. With LoRA, testing is very important to see what works best for your dataset and checkpoint, because sometimes even the same dataset with the same parameters may work better on one checkpoint than on another.

    • @lotzewing
      @lotzewing 7 months ago

      @spasovan I am also figuring out how to do the same for a symbol on products like cups, plates, bottles, etc. Did you have any success or insights on this? I am new to this, so I would love to connect and discuss with you.

    • @AI-HowTo
      @AI-HowTo 7 months ago

      Sorry, I am not available for contact at the time being. I have not worked on symbols before, but one can first investigate the use of ControlNets, inpainting methods, and other approaches simpler than training, depending on one's requirements, as training can take some time to complete. You can check other videos and try to gain some insight into the best and fastest approach for your requirements.

  • @spawndli1410
    @spawndli1410 10 months ago

    I have always wondered what would happen if one tags the classification images appropriately. In theory, just from an information standpoint, this could allow better classification without worrying too much about whether you have found the correct class. For example, a few pictures with a man in pants would not be detrimental, as the classification tags for those images have no tags for woman or dress at all. So it enforces the dress, and the class dress, against a broader reference frame, i.e. within person. Of course I don't know if this is technically implemented, but from an information perspective it sounds feasible.

    • @AI-HowTo
      @AI-HowTo 10 months ago

      Yes, it is possible to automatically caption the class images too, but I think I tried it before and it didn't turn out to be better; more experimentation is needed in this area, and certain tests might possibly produce better results.

  • @lilillllii246
    @lilillllii246 7 months ago

    Is there a way to apply the same outfit to different models at once?

  • @Trumf888
    @Trumf888 11 months ago

    You are the best! I have been looking for material on this topic for a long time and could not find it. Thank you! But there are two questions:
    1) How do you train shoes and then put them on a person? Moreover, the accuracy of the shoes is important, and there are only photos of the shoes, without legs - that is, the shoes are not worn by a person.
    2) If LyCORIS transferred the model successfully enough, how do you call up the shoes on the person now? For example, a person walking around the city wearing these shoes?

    • @AI-HowTo
      @AI-HowTo 11 months ago

      Any wearable or object can be trained using similar principles, but not with 100% accuracy. SD 1.5 is not that good; you may get lots of deformities, but possibly some good samples. Shoes/clothes could be trained alone without the person, but the results may be of lower quality. In any case, don't expect 100% accuracy with finer details such as logos/text; hopefully SDXL is better in this regard.

    • @AI-HowTo
      @AI-HowTo 11 months ago

      SDXL works better for training, but it requires more GPU, especially for anatomy, text, details, etc. The training principles are the same; just turn the SDXL option on when selecting the model.
      LyCORIS in general produces slightly better results than standard LoRA, so if LoRA didn't work out, try the LyCORIS settings. If your GPU is good, go with SDXL immediately.

  • @_aicons_
    @_aicons_ 10 months ago

    Hi, can I ask where I can find such great pictures or wallpapers of the women in leggings you have on your PC? Are they yours, or did you download them from a website? Thanks bro, and amazing tut!! 😎

    • @AI-HowTo
      @AI-HowTo 10 months ago +1

      Thanks. There are some great free resources for clothes/people/scenes, such as www.pexels.com/search/pexels/, and also www.freepik.com/, but Freepik only allows 10 free downloads per day. These sites can give high-quality images.

  • @FarisTV_Pishang
    @FarisTV_Pishang 9 months ago

    Hi, may I know if it is possible to change a character's outfit for a video? For example, I train a blue hoodie and I want the character to wear the hoodie for the entire video. I've been thinking of turning the video into a PNG sequence and generating it in Stable Diffusion using the trained hoodie, or do you have any other steps?

    • @AI-HowTo
      @AI-HowTo 9 months ago

      Currently, this is very difficult. Using a LoRA or a checkpoint trained on this hoodie does let you change the person's outfit with img2img if you increase the denoising strength above 0.5, but even with ControlNet you will still get flickering and several bad frames here and there; it is impossible to produce a smooth video, especially if the subject in the video is wearing completely different clothing. img2img will also change the whole picture, not only the outfit. You may have checked this video before, th-cam.com/video/PDlmnhtkgMQ/w-d-xo.html, or be interested in watching it. But currently, given SD's capabilities, it is basically not possible to produce a flicker-free video, and flickering and malformation will increase the more you try to change the character's dress, in my opinion and based on my experiments.

  • @Valentina-zx1pi
    @Valentina-zx1pi 10 months ago

    Hi! Thank you for the amazing video. Can I dress a model that I made in Midjourney using LoRA (I want to dress her in an exact, specific dress from a brand)? But I DON'T want to generate the model in Stable Diffusion; I want her to be exactly the model I made in Midjourney wearing a specific dress. Thank you in advance! I'd really appreciate your help.

    • @AI-HowTo
      @AI-HowTo 10 months ago

      The img2img tab can be used to manipulate an existing image, so you bring your image from Midjourney into the img2img tab and apply manipulations there. If the brand has been trained well in SD, you can use the inpaint feature to change its dress, for example, which requires manual masking or using extensions for masking the clothes. However, dressing an exact brand with extreme detail is almost impossible in SD, especially without a well-trained LoRA for that piece of clothing. If you don't want to train a model, then your only option would be to use ControlNet: put your image into ControlNet (in img2img) and try different options (such as Reference + others) to get what you want. But still, I doubt you will get exact results per your description.

  • @user-ox7du2lp8c
    @user-ox7du2lp8c 8 months ago

    Hi! Can I keep each model's instance prompt when merging models? For example, I merge the models for clothes and pants, but when I use this merged model for generating, I only want to generate pants, not clothes. How can I do this? Thanks!

    • @AI-HowTo
      @AI-HowTo 8 months ago

      Usually yes, if the captions have trigger words, but merging doesn't (always) produce the best results compared to using a single LoRA for the object. As far as I saw, merging can mix some features and sometimes average features from the two models, which is desirable sometimes and not that great other times. So if you used the pants trigger word with appropriate prompts and it didn't give you the result you want, then avoid merging, in my opinion. It is a trial-and-error process and doesn't produce the same effect in all cases.

  • @faikpasa8653
    @faikpasa8653 11 months ago +1

    GOD BLESS THOSE WHO PREPARED THIS VIDEO

    • @AI-HowTo
      @AI-HowTo 11 months ago

      Thank you, hope it is useful.

  • @user-si2xm3jn6r
    @user-si2xm3jn6r 6 months ago

    Hi!! Awesome video. I was wondering whether you took the pictures of the clothes on the mannequins with a camera, or whether they were designed on some AI website, etc. If so, I would like to know which app.

    • @AI-HowTo
      @AI-HowTo 6 months ago +1

      You can take them with a camera in a store too... I used some random pictures just to explain the video.

    • @user-si2xm3jn6r
      @user-si2xm3jn6r 6 months ago

      @@AI-HowTo Great, man!!! Love the video.

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 3 months ago

    That is so useful to know. I wonder if there is a way to have a chat with you over on Discord or something?

    • @AI-HowTo
      @AI-HowTo 3 months ago

      Sorry, not at this time.

  • @Avenger222
    @Avenger222 7 months ago

    It's "repeats", not steps, at 8:40, but overall really good and informative, thank you!

    • @AI-HowTo
      @AI-HowTo 7 months ago +1

      True, repeats seems more accurate, thanks.

  • @marcosf2445
    @marcosf2445 6 months ago

    Do you use regularization images in your LoRA training when you are training a person? I already tried with regularization images, but the resemblance is lower than when I didn't use them. I don't know if I'm doing something stupid.

    • @AI-HowTo
      @AI-HowTo 6 months ago

      It's best to try both, with reg and without. I often obtained better results with regularization images included; the model becomes richer and more flexible, and training will learn some features from the reg image set too. However, reg images are not a must, and many people get better results without them. It is basically trial and error more than an exact science.

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 3 months ago

    At 20:52 you say a larger rank will cause the model to overfit faster, so only use a high rank like 128 with larger datasets of hundreds of images; otherwise stay at 32:4. I would love a source for this assertion. I've been looking for reliable information regarding dimension, and this sounds really interesting, but I would love to get the source, or roughly where to find it. Thank you!

    • @AI-HowTo
      @AI-HowTo 3 months ago

      What I said was mostly based on trial-and-error tests over tens or hundreds of LoRAs. A higher rank allows absorbing more data, but I found it to overfit faster. The Kohya wiki has some useful info you can read: github.com/bmaltais/kohya_ss/wiki/LoRA-training-parameters. The wiki is based on hoshikat.hatenablog.com/entry/2023/05/26/223229, which is written in Japanese, but you can use automatic page translation. The Japanese reference was the best I saw online; it contains many extra links on training subjects and Stable Diffusion that I found better explained than any English counterpart.

  • @gerrydeleau7623
    @gerrydeleau7623 10 months ago

    Could it be possible to inject the clothes via img2img, not via LoRA? Meaning: training the models in a LoRA, then dressing them through img2img? Any workflow ideas?
    Thx for the video, nice approach!

    • @AI-HowTo
      @AI-HowTo 10 months ago

      You are welcome. One can dress a character using ControlNet canny or normal maps to match a certain dress, using inpaint or img2img with After Detailer for fixing the face plus ControlNet, which might be easier; but a LoRA allows capturing the dress from multiple angles with good enough detail, although it might be difficult to capture very complex details like logos/text/embroidery. SD in general is not that great at 100% matching of objects.

  • @BabylonBaller
    @BabylonBaller 11 months ago

    Perfect training video! Question: what was your prompt to create the regularization sets of full-body shots? I have to fight with SD to stop it giving me so many portraits and close-ups.

    • @AI-HowTo
      @AI-HowTo 11 months ago +2

      masterpiece, best quality, pretty young woman, long hair, full body, standing, wearing a dress, looking at viewer
      Negative prompt: low quality, worst quality, bad anatomy,bad composition, poor, low effort, badhandv4
      Steps: 30, Sampler: Euler a, CFG scale: 6, Seed: 1782376303, Size: 512x768, Model hash: ec41bd2a82, Model: photon_v1, ENSD: 31337, ADetailer model: face_yolov8n.pt, ADetailer prompt: "masterpiece, best quality, pretty young woman, 1girl, portrait, intricate details, photorealistic, hires, 4k, detailed skin texture, detailed eyes, realistic skin ", ADetailer confidence: 0.3, ADetailer dilate/erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.4, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer version: 23.7.5, Version: v1.4.0

    • @AI-HowTo
      @AI-HowTo 11 months ago +1

      For some I used After Detailer to get good class images, for some I didn't. I also did some 512x512 and some 512x768 images, and I used multiple prompt variations, for instance generating some full body, some cowboy shot, and some portraits.

    • @BabylonBaller
      @BabylonBaller 11 months ago

      @@AI-HowTo Ahh perfect, I will give that a try. Thanks. Have you tried combining LoRAs? For example, taking the Instagram model you made a LoRA of a few weeks ago and making her wear the blue jeans or blue dress from this tutorial? That was amazing.

    • @AI-HowTo
      @AI-HowTo 11 months ago

      I did merge some LoRAs where one had a slim body and the other was slightly fat, and it gave some good results, like an average of both. Merging produced better aesthetic features; it is useful to merge sometimes and can produce great results. Features changed a little from both LoRAs, but the result was prettier than either.

    • @AI-HowTo
      @AI-HowTo 11 months ago

      I must say, however, that merging only worked for LoRAs or LyCORIS of the same network rank that I had developed before; for instance, this LoRA will not merge with the previous 128-rank one, because this one is rank 32 only.

  • @ibhmedia287
    @ibhmedia287 6 months ago

    Really helpful video, thank you! Do you think there is a way to automate this whole process, to request a LoRA training by API call, on demand, for a specific product? Thank you for your time and dedication.

    • @AI-HowTo
      @AI-HowTo 6 months ago

      I think it's possible, but it requires some coding. Since Kohya can run from a command script, and all these tools are basically Python code called through the Gradio user interface, it can be automated, but it requires some work and expertise in Python and these training libraries, so that you can automatically caption the images, train, even automatically test, and set a suitable number of steps such as 2500, which may not be perfect for all data; it depends on what you want exactly. I am not sure how effective that can be in terms of quality. Soon we will get better tools than this for cloth fitting, such as humanaigc.github.io/outfit-anyone/, but that is not open source and will very likely be commercial to use.
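      As a rough illustration of that idea (not something shown in the video): the Kohya GUI ultimately calls sd-scripts' train_network.py, so an on-demand run could be wrapped in a small Python script like the sketch below. All paths, the checkpoint file, and the hyperparameter values are placeholder assumptions; verify the flag names against the sd-scripts version you have installed.

          # Minimal sketch: launch one LoRA training run from Python (assumed paths/values).
          import subprocess

          def train_lora(train_data_dir: str, output_name: str) -> None:
              cmd = [
                  "accelerate", "launch", "train_network.py",  # sd-scripts entry point
                  "--pretrained_model_name_or_path", "models/base_checkpoint.safetensors",  # placeholder
                  "--train_data_dir", train_data_dir,          # e.g. "datasets/dress/img"
                  "--output_dir", "output",
                  "--output_name", output_name,
                  "--resolution", "512,768",
                  "--network_module", "networks.lora",
                  "--network_dim", "32",
                  "--network_alpha", "16",
                  "--learning_rate", "1e-4",
                  "--max_train_steps", "2500",                 # the rough step budget mentioned above
                  "--mixed_precision", "fp16",
                  "--caption_extension", ".txt",
              ]
              subprocess.run(cmd, check=True)                  # raises if the training process fails

          train_lora("datasets/dress/img", "dress_lora_v1")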

  • @DavidBrown-tv8fx
    @DavidBrown-tv8fx 11 months ago

    I just wanted to say a big thanks for your amazing videos! Now I want to train a glasses LoRA; how should I prepare the training data? When training the glasses, I always seem to have instances of people, like their clothes and faces, showing up in the result. How can I avoid this kind of situation?

    • @AI-HowTo
      @AI-HowTo 11 months ago +1

      The training data must not have repetitive people (same faces); the more varied, the better. Secondly, a regularization set of people (women/men) can make learning the faces in the dataset weaker, so stopping training early becomes important to avoid learning the people too, while the glasses are learned more strongly since they repeat in all images. Also include a couple of pictures of the glasses alone, without people, as in this video, then experiment. Different trigger words can be used for different glasses types. So basically, all wearable objects can be learned in a similar way to this video.

    • @DavidBrown-tv8fx
      @DavidBrown-tv8fx 11 months ago +1

      @@AI-HowTo Thank you so much, I will try it!

  • @matthewmounsey-wood5299
    @matthewmounsey-wood5299 11 months ago

    Hey! How long should it take to train a LoRA? I have an NVIDIA RTX 3080 GPU with 9.5 GB of VRAM on Windows. My first attempt took 48 hours. I saw one error message about TRITON not being installed. Could this be the reason it took so long, or is this amount of time usual?

    • @Seany06
      @Seany06 11 months ago +1

      Hours, wtf? It takes like 15 mins to train a LoRA; even hypernetworks and embeddings don't take days lol

    • @AI-HowTo
      @AI-HowTo 11 months ago +1

      For this clothes example, without regularization it took 10 minutes, and with regularization it took 20 minutes on an RTX 3070 8GB laptop. For larger datasets with hundreds of images it takes hours. You seem to have driver problems; your NVIDIA RTX drivers might not have been installed correctly. TRITON is not important because it is related to Linux; it won't slow you down.

  • @marcosf2445
    @marcosf2445 6 months ago

    Thanks for this tutorial. Is it possible to train, in a single LoRA, a person (woman) and, in the same training, a body style (fat, bodybuilder, etc.)? If so, how can I erase the faces from the bodies without hurting the training? I don't want Stable Diffusion to mix the faces from the body styles with the person I'm training.

    • @AI-HowTo
      @AI-HowTo 6 months ago

      You are welcome. You can erase the face in any paint software by making it white; SD will know there should be a face in that white location when it generates the image. A LoRA can figure out both the body style and the face, but I recommend training the face separately and using a prompt in After Detailer to inpaint the face again, since SD is not very good with full-body images without After Detailer anyway, and a dedicated LoRA for a face can produce better results. You can test both in the same training set too, because the process is purely experimental.

  • @sunnytomy
    @sunnytomy 11 months ago

    This is a great tutorial. Have you ever tried to build a LoRA model to replicate clothing with a detailed design, like a logo, rather than plain-colored clothing?

    • @AI-HowTo
      @AI-HowTo 11 months ago +2

      Thank you, I have not, unfortunately; I only did this video to explain the principles. I think the same principles apply: if the logo/detailed design appears in all pictures it will be learned best; if not, the logo might not be learned properly, just like human facial features. If a LoRA can learn a detailed face properly, it should learn detailed clothing designs too.

    • @sunnytomy
      @sunnytomy 11 months ago

      @@AI-HowTo Thanks for your prompt response. As I have tried it myself and seen many others sharing, there are no successful results so far. My thinking about this issue is that SD starts with an initial guess from the class prompts, like clothes, and iterates to adopt specific information from the LoRA model. The final outcome depends on how close the initial guess is to the subject in the LoRA model and other factors like the complexity of the scene, as more prompts mean more balancing and fusion effects. I am still studying this problem; even if maybe we will never get exactly what we want, I think improving the result by fine-tuning a LoRA model and supplementing it with other AI algorithms to achieve the goal would be a good project :)

    • @AI-HowTo
      @AI-HowTo 11 months ago

      True, randomness and diffusion affect the LoRA, especially for accurate things. SD is mostly about stochastic diffusion and transitioning from noise towards the final output with the help of the text encoder, and control over accurate output is very difficult. I have seen notes that SDXL from Stability performs better in this regard, such as in cases of generating text or more accurate things, but training with SDXL is a pain; it requires a stronger GPU.

  • @10186708
    @10186708 10 months ago

    Great video! Did you also try with patterns? I tried some, and it doesn't work out at all on my side. Any idea?

    • @AI-HowTo
      @AI-HowTo 10 months ago

      LyCORIS could be a better option for patterns than standard LoRA. I didn't experiment much in this area; normal patterns like the one on this dress are learned well, but complex patterns that contain text, for instance, are more difficult to learn accurately, which is why using ControlNet for such more accurate replication could be better. With ControlNet even text can be regenerated, as in the ControlNet video th-cam.com/video/13fgBBI-ZXU/w-d-xo.html. SD in general is not so good at precise drawing generation, based on my experiments with learning boots that had complex patterns and logos.

    • @10186708
      @10186708 10 months ago +1

      Thanks! @@AI-HowTo

  • @mothishraj4463
    @mothishraj4463 2 months ago

    Hey, I have two questions:
    1) How did you get the image output for each epoch? I'm getting only the tensor data.
    2) Can I train a color and a pattern (leopard-pattern fabric) and use it on any garment (by eliminating anything related to leopard or animal pattern)?

    • @AI-HowTo
      @AI-HowTo 2 months ago

      1) From the sample images config section, as in th-cam.com/video/wJX4bBtDr9Y/w-d-xo.html, we choose for instance 1 for epochs to generate 1 image each epoch, and in the sample prompts box we write the prompt we want to display; it must be written as shown in the sample text, complete with the image size to display (see the example below).
      2) Yes, as in the th-cam.com/video/RT2jj-5t8x8/w-d-xo.html training guide, which is for a style; this helps train styles/patterns rather than objects. And yes, we eliminate anything related to the pattern (leopard or animal pattern) from the image descriptions for the training images and keep everything else in the description.
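      For reference, a single sample-prompt line written out with the image size could look like this (the values and the trigger word are placeholders; --n, --w, --h, --s and --l are the negative prompt, width, height, steps and CFG scale in sd-scripts' sample-prompt syntax, so verify against your installed version):

          photo of a woman wearing mydress_tw dress, full body --n low quality, bad anatomy --w 512 --h 768 --s 28 --l 7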

  • @user-lh2xj2yn4j
    @user-lh2xj2yn4j 7 months ago

    Thanks for the lesson. However, the whole process should be much faster: photos on a phone of any clothes on a hanger or on a table, with any lighting, one click - and the item is dressed on the selected human model. I think such functionality is being developed for the near future.

    • @AI-HowTo
      @AI-HowTo 7 months ago

      True, development in this area is very fast. There is now a new ControlNet model called IP-Adapter in Stable Diffusion which can do that to some extent, though not as accurately as training. But like you said, this will change 100% soon enough, because training is a time-consuming process and doesn't always produce reliable results.

  • @mireusted499
    @mireusted499 7 months ago

    Is this the most accurate method to get consistency in the dresses? What about creating a 3D model from an image file of the dress and then somehow putting it into Stable Diffusion? Would that be possible?

    • @AI-HowTo
      @AI-HowTo 7 months ago

      Accurate dressing is best done using 3D models.
      In Stable Diffusion, we always get some level of inconsistency because of its random elements and the diffusion process. You cannot mix 3D objects into Stable Diffusion unless you train them as images, the way we do in this video. There are alternative tricks such as using IP-Adapter, but training produces the most accurate results here (which are still not that accurate).

  • @shailpatel3553
    @shailpatel3553 8 months ago

    This is really great. Have you ever thought of making a PDF or something?

    • @AI-HowTo
      @AI-HowTo 8 months ago

      Thanks, this subject is interesting in general, but I don't think I can allocate enough time for it in the upcoming months.

  • @adrianwasilewski9889
    @adrianwasilewski9889 8 months ago

    Great video. I would like to ask if there is any possibility of teaching the model more than 2 or 5 objects?
    I make my own furniture and I have about 150 units that I would like to teach the model with, but I do not really know if it is even possible.

    • @AI-HowTo
      @AI-HowTo 8 months ago

      Yes, it's possible; you can increase the network dimension if you have more objects. But often, if you have too many, it is best to train your own checkpoint using DreamBooth, which can give better results. If it is only about 5 objects, it is possible using the method in this video.

    • @adrianwasilewski9889
      @adrianwasilewski9889 8 months ago

      @@AI-HowTo Do you have any tutorial on how to do it?
      Or can you give me some contact so I can ask more detailed questions?
      I will naturally pay for your time and support.

    • @AI-HowTo
      @AI-HowTo 8 months ago

      Unfortunately, I'm not working in this field, nor do I know anyone who does. You can check Civitai; you might find some people who offer their services for a fee in this area and could provide more detailed assistance.

  • @fingerprint8479
    @fingerprint8479 9 months ago

    Hi, great tutorial, thanks.
    What is the impact of removing or not removing the background of the images I am going to use as the source for training?
    Any suggestion on what software to use for the purpose?
    Thanks

    • @AI-HowTo
      @AI-HowTo 9 months ago +1

      Thanks.
      The best background removal tool I found is Adobe Photoshop's Neural Filters; it is the most accurate I have seen. Alternatively, you can use the free A1111 background removal extensions.
      I often find it better to have a simple background (or to remove the background altogether) for training, because the LoRA will then store only information related to your object. What matters most about the background is that it should be easily separable from the object; for example, a blurry background is also good. If the object is difficult to separate from the background, SD may find it difficult to learn the correct features in some cases.

    • @fingerprint8479
      @fingerprint8479 9 months ago

      @@AI-HowTo Hi, thanks.
      I found some dress-on-mannequin images like the ones you used in the tutorial above, and I was able to reproduce your results step by step.
      I tried your suggestion of the A1111 background removal extension and the results are very good, thanks.
      But what I am noticing is that the positive prompt is very, very sensitive; any change will affect the appearance of the dress, and the best dress reproductions I got are those where the face of the AI model is either the mannequin or very similar to it.
      Is there anything I can do to improve the fidelity of the dress reproduction across AI models? Thanks

    • @AI-HowTo
      @AI-HowTo 9 months ago +1

      100% accuracy in dress details might be very difficult to achieve. If the mannequin starts appearing, then maybe an earlier epoch could be a good option, and using After Detailer to automatically inpaint the face is important for full-body shots. It is entirely experimental: testing new class images, reducing the number of images that contain the mannequin, removing captions that describe the dress, etc.

    • @fingerprint8479
      @fingerprint8479 9 months ago

      @@AI-HowTo Thanks for the response. What I am observing is difficulty separating the mannequin from the dress: when the dress fidelity is good, I see traces of the mannequin.
      Today I will try removing the heads from all images and see what happens.
      Do you think this separation can be improved with prompts?
      Thanks

  • @gokuljs8704
    @gokuljs8704 9 months ago

    Why are you using classification images? Isn't Stable Diffusion already pretrained on those images?

    • @AI-HowTo
      @AI-HowTo 9 months ago +1

      Classification images are only there to disrupt training and make the LoRA richer and more flexible for new patterns/settings; they help reduce overfitting, but they are not a must when training LoRAs. In this very instance, I used them to disrupt the 1girl (or woman) concept, because my images had a mannequin and I wanted to reduce the effect of the mannequin in my LoRA and give more weight to the clothes.

  • @mariovinicius8765
    @mariovinicius8765 9 months ago

    I read in the prerequisites that only Python and Visual Studio are required. I want to know if it works with an AMD graphics card.

    • @AI-HowTo
      @AI-HowTo 9 months ago +1

      As far as I know, SD on A1111 and LoRA training can be done on an AMD graphics card too.

    • @mariovinicius8765
      @mariovinicius8765 9 months ago

      @@AI-HowTo Thank you!

  • @valorantacemiyimben
    @valorantacemiyimben 7 months ago

    Hello. How can we make modeling photographs that contain only dresses appear on the model?

    • @AI-HowTo
      @AI-HowTo 7 months ago

      After you train the dress as a LoRA, it becomes a matter of prompting in Stable Diffusion, as shown in the video; however, training a dress alone can be less effective, if I understood your question correctly.
      If you have pictures of a model and you don't want her face to appear in the training results, you can paint over the face with white, for example, and Stable Diffusion will still understand that this is a person, but this should not be done for every picture.

  • @user-gq2bq3zf1f
    @user-gq2bq3zf1f 6 months ago

    How do I create a clothing LoRA and apply it to an existing image to change the clothes? Is there a related video?

    • @AI-HowTo
      @AI-HowTo 6 months ago

      This video was about training a LoRA for a piece of clothing. In general, dressing in Stable Diffusion is not accurate; you can check this video for approximate dressing, th-cam.com/video/shc83TaQmqA/w-d-xo.html, which is a quick option. Hopefully we will soon get better open-source tools that do perfect dressing mixed with SD; right now I am not aware of any open-source tool that does that.

  • @narcizyzzt
    @narcizyzzt 4 months ago

    Can you help me? I have a problem with Kohya SS: when I start gui.bat, the following error always appears: ''ImportError: cannot import name 'set_documentation_group' from 'gradio_client.documentation'
    I've already followed all the steps; this part is the only one I can't get past.

    • @AI-HowTo
      @AI-HowTo 4 months ago

      This seems to be related to the Gradio installation. You can try installing the Gradio client from inside the Kohya ss installation folder, such as with "pip install gradio_client"; if that didn't work, try "pip install gradio_client==0.8.1", as some people reported it works from within the Kohya ss folder. Hopefully, if this is a recent bug, it gets fixed soon so nobody sees this error again.

  • @lilillllii246
    @lilillllii246 6 months ago

    The configuration file suddenly appeared at around 18:02 without the process of making it being shown; where was this made? What do I do if I want one?

    • @AI-HowTo
      @AI-HowTo 6 months ago

      See the settings after this timestamp; these are the configurations. You can check each of them and save them as a config file. This file is not required, it is just to explain that point; I basically didn't use it, as I explained each setting we used (batch, network dim, alpha value, etc.).

  • @buga3473
    @buga3473 11 months ago +1

    Can it train a LoRA for items like rings, watches, and other small items? If yes, can you make a video tutorial? Thanks

    • @AI-HowTo
      @AI-HowTo 11 months ago

      Sorry, I won't be preparing another video about objects; it would be redundant. The same principles apply to any kind of object: you train the ring/watch as it is worn on a person, with close-ups. Still, when the shot is distant it won't appear properly without inpainting. Another thing: SD may not learn the finest details of the object, such as engraving or text written on a ring. Lots of iterations will be required, with regularization, without it, and with LyCORIS; it might confuse them a little.

    • @buga3473
      @buga3473 11 months ago

      So the graphics on a shirt are hard to train correctly?

    • @AI-HowTo
      @AI-HowTo 11 months ago +1

      If the graphics contain text, for instance, it may be difficult for SD to train on them correctly. While it is easy to learn some logos/symbols/detailed clothing/styles, SD does not seem to have been trained properly on text, which makes it difficult for a LoRA to train on things that contain text too, based on my trials.

  • @stepphen_ffc5606
    @stepphen_ffc5606 10 months ago

    Can I train a model on RunPod? Can you show us an example of using RunPod with Kohya to train traditional clothes?

    • @AI-HowTo
      @AI-HowTo 10 months ago +1

      Yes. In this video, th-cam.com/video/arx6xZLGNCA/w-d-xo.html, I explain in about 4 minutes how you can get Kohya up and running on RunPod. The newer SDXL videos are all done on RunPod too. Nothing changes: you just need to install following that video, then train as you always do. That video also shows in more detail how I trained a model for SDXL on RunPod. Basically you just need the first video to get it up and running; the steps/procedures are the same because with Gradio live you get Kohya running with the same GUI as on your PC.

  • @muharremacar5255
    @muharremacar5255 9 months ago

    All I want is to put a fabric image on a shirt or jacket template. Is there any way to do this? I'm not talking about partial dressing: the entire shirt, including its folds, should be covered appropriately with that fabric.

    • @AI-HowTo
      @AI-HowTo 9 months ago +1

      Not quite sure, but I think you are referring to the style of the garment, regardless of what the garment is. If so, then training must be done as a style, over different garments with that fabric, so it captures the fabric information rather than the garments themselves, as in this video about style training: th-cam.com/video/RT2jj-5t8x8/w-d-xo.html. Please keep in mind that if you are considering this for a commercial application, it could be difficult to achieve with a high level of accuracy, because SD is not the right tool for that; 3D software / Photoshop is the way to go instead.

    • @muharremacar5255
      @muharremacar5255 9 months ago

      @@AI-HowTo While AI is capable of creating amazing things, it is not yet sufficient for ensuring high accuracy and consistency. What I actually want is something like a mockup: I want to introduce a properly taken photo of a fabric into the system and have the system automatically dress it onto a clothing template - in other words, to dress my black and red plaid fabric pattern onto a plain white hoodie template. It's possible to do this with different systems, but I had thought that using AI would be more flexible and beautiful. However, it seems that this might not be feasible at the moment.

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 10 months ago

    What class would you set the reg folder to? Outfits? I didn't see this setup.

    • @AI-HowTo
      @AI-HowTo 10 months ago +1

      For this example, I actually set the woman class for the regularization set - women (wearing clothes) - which helps avoid learning the mannequin. Since regularization is not a must for learning in the first place, one can also test having the class cloth for the regularization, which is the norm when training, but my goal for the reg set was slightly different for this object.

    • @AI-HowTo
      @AI-HowTo 10 months ago

      The video also shows a case with reg and a case without reg, since regularization is not a must.

    • @___x__x_r___xa__x_____f______
      @___x__x_r___xa__x_____f______ 10 months ago

      @@AI-HowTo Is the class "clothes" preferred over "outfit"? I guess maybe clothes is preferred for photo generation, in contrast to outfit for anime? Did you check CLIP by prompting "clothes", or did you come to it from convention (reading about it)? Thanks

    • @AI-HowTo
      @AI-HowTo 10 months ago

      It's entirely experimental; you should test based on your dataset. I used the class woman because I wanted to disrupt the mannequin in my dataset, which is described in my clothing captions as 1girl. Normally we should put clothes as the class and use images of clothes, for instance generated by SD. I followed this approach based on my understanding of how class images work, tested it, and it worked well enough, so I kept it in the video. If you are developing a LoRA, I suggest you test both and see what works better for you depending on what is inside your image dataset. If you don't have a mannequin, then don't use woman for the class; just use generated images of clothes, or don't use class images at all. Test both cases, it's really useful.

  • @willpulier
    @willpulier 11 months ago

    Do you freelance by chance? Great video 🙌

    • @AI-HowTo
      @AI-HowTo 11 months ago

      Thank you; I am not freelancing or accepting business inquiries at this time.

  • @gokuljs8704
    @gokuljs8704 9 months ago

    Can you point me to some resources for learning to write the prompts you put inside the sample prompt box in the GUI?

    • @AI-HowTo
      @AI-HowTo 9 months ago

      In Kohya I just used the default suggested by the Kohya training kit; the others are just experimental. One can find some examples on Civitai.com and experiment. Unfortunately, I don't have any specific resources for this, because I just write based on experimentation, since SD has a lot of randomness in generation and doesn't honor everything written in the prompt, depending on many factors that are out of our control.

  • @stepphen_ffc5606
    @stepphen_ffc5606 9 months ago

    Can I train traditional clothes that don't share the same design or the same color? Is it possible to train traditional clothes with a bunch of different styles? How can I get better results?

    • @AI-HowTo
      @AI-HowTo 9 months ago +1

      Yes. When you mix a bunch of different clothes with a certain style, it becomes a style LoRA, or style training: the model will capture the clothing style itself, with its shapes, and learn to generate new pieces based on your training data. This video details how styles are trained: th-cam.com/video/RT2jj-5t8x8/w-d-xo.html. Basically it requires more images to capture the style; the class is cloth, so when SD generates anything with cloth it will mainly take things from your LoRA. SD has already learned the cloth concept, so it will just tune itself to the style and shapes of your training dataset, and your training data will have a bigger effect than the original SD cloth data.

  • @khoo1985
    @khoo1985 11 months ago

    Hi, can I train an SDXL LoRA with Kohya on an RTX 3060 with 12GB VRAM?

    • @AI-HowTo
      @AI-HowTo 11 months ago

      Yes, it works on a 12GB RTX 3060, but it is extremely slow. You can turn on the SDXL option, choose gradient checkpointing, and the Adafactor optimizer (for both the optimizer and the learning-rate scheduler) if Adam 8-bit didn't work, plus --no-half-vae. Check the guide in the last tab to the right of the Kohya ss GUI for additional confirmation.

  • @user-dx8qi6hp6i
    @user-dx8qi6hp6i 3 months ago

    Is it possible to do an "exact" clothes swap by uploading a clothes picture and a model picture (both as input pictures)?

    • @AI-HowTo
      @AI-HowTo 3 months ago

      At the time being, as far as I know, no, it is impossible to do that using Stable Diffusion.

    • @AI-HowTo
      @AI-HowTo 3 months ago

      IP-Adapter may help in getting a close approximation quickly in some cases using an input image, but the results are not exact, and can be very far from exact.

    • @user-dx8qi6hp6i
      @user-dx8qi6hp6i 3 months ago

      @@AI-HowTo Yes, you are correct; as far as I researched, I couldn't find any method in Stable Diffusion.
      Hoping new releases will have some features to address this.
      Picking specific clothes from the internet and putting those same clothes on our model is the real need, as it brings control over fashion outfits.

  • @TentationAI
    @TentationAI 5 months ago

    Hello, I'm looking to train an accessory, a belt. Is it possible with this method?

    • @AI-HowTo
      @AI-HowTo 5 months ago

      Yes, if the accessory/belt appears clearly in the training dataset. But if a ring is too small, for instance, it may not be detected properly during training. It is possible to train basically any kind of object or wearable as long as that object repeats and appears clearly in all images.

    • @TentationAI
      @TentationAI 5 months ago

      Thanks a lot for your answer @@AI-HowTo

  • @samwerawtder1164
    @samwerawtder1164 9 months ago

    Can you share the test dataset you have, such as the clothing, etc.?

    • @samwerawtder1164
      @samwerawtder1164 9 months ago

      Also, may I know the specs of the machine you are using?

    • @qazifazliazeem6433
      @qazifazliazeem6433 6 months ago

      @@samwerawtder1164 He wrote in the comments above that an NVIDIA 3070 with 8 GB of video RAM was used, and likely 16 or 32 GB of RAM.

  • @hmmrm
    @hmmrm 9 months ago

    Thanks. Where can we download the class images for the regularization set?

    • @AI-HowTo
      @AI-HowTo 9 months ago +1

      You are welcome.
      Sorry, I didn't keep the exact set of images I used for classification in this video, but I took a subset of these: huggingface.co/datasets/AIHowto/woman_class_images_better_1832/resolve/main/reg_images_general_fullbody_portraits_upperbody1832_images.zip. As explained in the video, class images should ideally be clothes, but I wanted to reduce the effect of the mannequin (and disrupt the 1girl token), so I used women with clothes on, which also works, since the basic role of class images is to disrupt training and reduce overfitting. Usually, though, the class must match the training image class (you can also not use class images at all).

    • @hmmrm
      @hmmrm 3 months ago

      @@AI-HowTo Do you know where to download ready-made config files for clothing, style, etc.?

  • @Shortsjoy_official
    @Shortsjoy_official 3 months ago

    Great content.
    Two questions -
    1. Is it possible to do all this via the command line (automated web app)?
    2. Can I also have predefined male/female models?
    I want to create an app where a user can upload clothes, like 4-5 images, choose the male/female model, and it should create the output.
    Is this possible, and how?
    Best regards

    • @AI-HowTo
      @AI-HowTo 3 months ago

      Thank you. All these tools are based on Python command-line scripts; what you are seeing is just a Gradio web UI calling Python code, so they can all be called from the command line, automated, and put on a web server that hosts Stable Diffusion and Kohya. But it requires programming expertise, time, and resources, because training with Stable Diffusion can take hours of GPU usage, which is expensive; even image generation can cost a few cents per image on a server.
      Choosing the (male/female) model can also be automated, because you can automate the prompt creation and use extensions such as After Detailer/ReActor for better face details, etc. Generally speaking, though, Stable Diffusion doesn't give 100% accuracy, and automating LoRA model creation may not always produce quality results on the first attempt. I think there are many possibilities with these AI image generation models, but it requires time, expertise, and resources to test these experiments.

    • @Shortsjoy_official
      @Shortsjoy_official 3 months ago

      @@AI-HowTo Great, thanks for your reply. I know it requires a GPU, but I see some apps doing this, so I thought I might try that too; btw I am also a developer. I normally work in React, Node, Laravel, and some other frameworks.
      I am looking for a solution that is easy to implement and that I can establish as SaaS.
      If you have any idea how to do it, please let me know, or maybe we can do something together.

    • @AI-HowTo
      @AI-HowTo 3 months ago

      Unfortunately I do not work in this area; if you look on Civitai you might possibly find some people who would collaborate or work with you. Best of luck.

  • @rasydev
    @rasydev 5 months ago

    Can you show how to train a model (full body, face)?

    • @AI-HowTo
      @AI-HowTo 5 months ago

      I think you mean something like this: th-cam.com/video/vA2v2IugK6w/w-d-xo.html. Check the channel content; you might find something useful there.

  • @fintech1378
    @fintech1378 11 months ago

    I see videos of people just using the default Stable Diffusion app to do this without further training?

    • @AI-HowTo
      @AI-HowTo 11 months ago +1

      This is an example of object training; the point is not to produce pretty clothes/people, the point is to produce something specific.

    • @fintech1378
      @fintech1378 11 months ago

      @@AI-HowTo Thanks for the reply. Is it possible to do another tutorial with ControlNet? Without training, producing a changed, specific outfit worn by a person too.

    • @AI-HowTo
      @AI-HowTo 11 months ago +1

      I may do that, but ControlNet doesn't let you produce the exact design of a wearable, so its purpose and effect are quite different; a LoRA allows the production of almost exact clothing designs/objects, etc.

    • @fintech1378
      @fintech1378 11 months ago +1

      @@AI-HowTo Got it, thanks, because I saw one comment on this channel too that says it's possible with some 'tricks'.

  • @awais6044
    @awais6044 8 months ago

    We only have images of the outfit; can we train on that, with no human or any such object in the image, so only the outfit or t-shirt is generated? Is this possible? Kindly make a video.

    • @AI-HowTo
      @AI-HowTo  8 หลายเดือนก่อน +1

      It is best to have a person or a mannequin wearing the dress in the majority of the images. You can also mask out the face or remove it (paint it white), and the checkpoint will replace the face later through prompting. Using the clothes alone will unfortunately produce worse results, because Stable Diffusion finds it harder to determine how the dress should be filled out properly.

  • @gokuljs8704
    @gokuljs8704 9 หลายเดือนก่อน

    Trying this for shoes; for some reason, sample images are not being generated in the model folder.

    • @AI-HowTo
      @AI-HowTo  9 หลายเดือนก่อน +1

      Possibly you forgot to set a value of 1 for the "Sample every n epochs" field. If the prompt is explicitly written, the sample folder is defined in the basic tab, and 1 is entered in the samples tab for "Sample every n epochs", then the images will show up; double-check all of these.
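      For reference, the sample prompt box is plain text with one prompt per line, which is also how you get more than one sample prompt. A purely hypothetical example is shown below; the trigger word is a placeholder, and the per-line --w/--h size options should be checked against your sd-scripts version before relying on them.
```
myshoes, photo of a woman wearing myshoes, standing, simple background --w 512 --h 768
myshoes, a woman wearing myshoes, full body, outdoors --w 512 --h 768
```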

    • @gokuljs8704
      @gokuljs8704 9 หลายเดือนก่อน

      @@AI-HowTo Thank you! That was indeed the problem.

  • @stepphen_ffc5606
    @stepphen_ffc5606 9 หลายเดือนก่อน

    Can you share your classification folder on Google Drive? Is that possible?

    • @AI-HowTo
      @AI-HowTo  9 หลายเดือนก่อน +1

      I took a subset of these: huggingface.co/datasets/AIHowto/woman_class_images_better_1832/resolve/main/reg_images_general_fullbody_portraits_upperbody1832_images.zip ... As explained in the video, class images should ideally be clothes, but I wanted to reduce the effect of the mannequin (and disrupt the 1girl token), so I used women with clothes on. This works too, since the basic role of class images is to disrupt training and reduce overfitting; usually, though, the class should be the same class as the training images (you can also skip class images entirely).

  • @samwerawtder1164
    @samwerawtder1164 9 หลายเดือนก่อน

    Hi, can you share the dataset you have for women?

    • @AI-HowTo
      @AI-HowTo  9 หลายเดือนก่อน

      Sorry, I didn't keep the exact set of images I used for this video, but I took a subset of these: huggingface.co/datasets/AIHowto/woman_class_images_better_1832/resolve/main/reg_images_general_fullbody_portraits_upperbody1832_images.zip ... As explained in the video, class images should ideally be clothes, but I wanted to reduce the effect of the mannequin (and disrupt the 1girl token), so I used women with clothes on. This works too, since the basic role of class images is to disrupt training and reduce overfitting; usually, though, the class should be the same class as the training images (you can also skip class images entirely).

  • @Champsterz
    @Champsterz 10 หลายเดือนก่อน

    Hey. Is there a way i can contact you?

    • @AI-HowTo
      @AI-HowTo  10 หลายเดือนก่อน

      Sorry, I am not accepting any private/business contacts for the time being, only replying on TH-cam when possible.

  • @stepphen_ffc5606
    @stepphen_ffc5606 10 หลายเดือนก่อน

    Hello. Can I train traditional clothes?

    • @AI-HowTo
      @AI-HowTo  10 หลายเดือนก่อน +1

      Yes, you can train any kind of clothes, and even the clothing style itself if the traditional clothes come in different types and you want to capture their style regardless of colors and the little variations they may have.

    • @stepphen_ffc5606
      @stepphen_ffc5606 10 หลายเดือนก่อน

      You are a GOD. Can you show us even more tutorials please? Thanks man @@AI-HowTo

    • @AI-HowTo
      @AI-HowTo  10 หลายเดือนก่อน

      Thank you, I will try to do some every now and then; hopefully they can be useful to some, and to me as well.

  • @generalawareness101
    @generalawareness101 11 หลายเดือนก่อน

    If I am training on a style instead of an object, then what? Let's say I am training armor. For an object I would remove all references to armor in my tags; what about a style?

    • @AI-HowTo
      @AI-HowTo  11 หลายเดือนก่อน

      Armor is the same as clothes: you describe everything except the armor, such as the person wearing it, the background, etc. For styles, I may prepare another video; styles are learned implicitly by having a number of pictures that share the same style, so the style gets discovered rather than captioned.

    • @generalawareness101
      @generalawareness101 11 หลายเดือนก่อน

      @@AI-HowTo I would love that video. :)

    • @generalawareness101
      @generalawareness101 11 หลายเดือนก่อน

      @@AI-HowTo What if the armor isn't the same? I mean there are a lot of different armor types.

    • @AI-HowTo
      @AI-HowTo  11 หลายเดือนก่อน

      In that case it becomes more like a style, and the same captioning principles apply: we describe everything except the armor, because you want the armor to be learned implicitly. You can also use a trigger word for each type of armor to increase the likelihood of capturing the features of that type. If we do caption the armor, the LoRA will still work, but then we need more captions in the prompting process later in SD. Like any other LoRA, a lot of testing is required since the process is basically experimental, and results can turn out better or worse than expected. I also suggest testing LyCORIS LoCon for this; it may work better than a standard LoRA if your LoRA results are not satisfying.
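      As a purely hypothetical example of that captioning approach (not taken from the video), the caption .txt for one armor training image might read as follows, with a made-up trigger word and everything except the armor itself described:
```
platearmor_v1, a man standing in a stone courtyard, short brown hair, holding a sword, castle wall background, daylight
```
      The armor's own features (color, engravings, shape) are deliberately left out so the model attaches them to the trigger word instead of to explicit tags.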

    • @generalawareness101
      @generalawareness101 11 หลายเดือนก่อน +1

      @@AI-HowTo I only use Locon, yes. I hope you make that vid to help in understanding better. Thank you.

  • @chadgtr34
    @chadgtr34 8 หลายเดือนก่อน

    How do you train boots/shoes?

    • @AI-HowTo
      @AI-HowTo  8 หลายเดือนก่อน

      We follow the same principles here for any wearable. Since they are worn by people, class images should be boots and people wearing boots of other types; we should ideally have these boots worn by people, plus some shots where they are not worn. However, since boots are smaller objects and can have complex patterns, trademarks, and writing, it is difficult to do accurate training in SD 1.5. With SDXL you have a better chance, but achieving 100% accuracy is still almost impossible in SD.

    • @chadgtr34
      @chadgtr34 8 หลายเดือนก่อน

      @@AI-HowTo For me, as long as the legs are not disproportionate, it is very useful.

  • @farizyal8108
    @farizyal8108 10 หลายเดือนก่อน

    Where can I download the class images?

    • @AI-HowTo
      @AI-HowTo  10 หลายเดือนก่อน

      Sorry, I didn't keep the images I used for this video. I think I took a subset of these: huggingface.co/datasets/AIHowto/woman_class_images_better_1832/resolve/main/reg_images_general_fullbody_portraits_upperbody1832_images.zip ... As explained in the video, class images should ideally be clothes, but I wanted to reduce the effect of the mannequin, so I used women with clothes on. This works too, since the basic role of class images is to disrupt training and reduce overfitting; usually, though, the class should be the same class as the training images (you can also skip class images entirely).

    • @farizyal8108
      @farizyal8108 9 หลายเดือนก่อน

      @@AI-HowTo Thanks for the help. This is what I need.

    • @samwerawtder1164
      @samwerawtder1164 9 หลายเดือนก่อน

      Hi farizyal, can we connect on Discord or somewhere?

    • @AI-HowTo
      @AI-HowTo  9 หลายเดือนก่อน

      I am sorry, I cannot for the time being. I have other commitments, so I am not planning on making a Discord channel or being available for private messages or freelancing for at least another year. If you have a question, you can post it here and I will reply as soon as I can if I have the answer to it. @@samwerawtder1164

  • @RRR-rp1bl
    @RRR-rp1bl 10 หลายเดือนก่อน

    How do you make your own safetensors file for an AI model?

    • @AI-HowTo
      @AI-HowTo  10 หลายเดือนก่อน

      the LoRA model is a safetensors file too.

  • @vinnybane-ki6eq
    @vinnybane-ki6eq 11 หลายเดือนก่อน

    Do you have an email I can reach you at?

  • @F27MARKETING
    @F27MARKETING 4 หลายเดือนก่อน

    For this much time spent, why wouldn't you just mask and use Photoshop, then regenerate in SD?

    • @AI-HowTo
      @AI-HowTo  4 หลายเดือนก่อน

      This video is about training a specific subject: a specific dress with a specific style/design. That has its own applications despite the time needed, and it extends to training any specific subject in general, not just clothes.

  • @lkewis
    @lkewis 10 หลายเดือนก่อน

    If you're training with pants and dress classes, your regularisation sets should match those, not be a general combination such as a woman with both types of clothing items in the image.

    • @AI-HowTo
      @AI-HowTo  10 หลายเดือนก่อน

      True, and I think I mentioned that in the video, but my goal with regularization was different. Remember that regularization is not strictly necessary to begin with, so we can train even without it; my goal was to reduce the effect of the mannequin. The primary role of regularization is to disrupt the learning of a certain concept, and since the women in the class images already wore clothes, the clothing concept is also deduced indirectly, because both the woman and cloth concepts are learned. I should have made another part with a different set of class images containing only clothes to show the difference, but these videos are already too long as they are.
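      For anyone following along, the usual Kohya folder convention for pairing training and regularization images looks roughly like the sketch below. All names are placeholders: the number prefix is the repeat count, the training subfolder is "<repeats>_<trigger word> <class>", and the regularization subfolder is "<repeats>_<class>", where the class word is whichever you choose (dress, clothes, or woman, as discussed above).
```
dataset/
  img/
    40_mydress dress/   <- training images + matching .txt captions
  reg/
    1_dress/            <- class/regularization images (no captions required)
```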

  • @pelicankim
    @pelicankim 10 หลายเดือนก่อน

    This is amazing!
    Can I get your email address?
    I want to ask you something about LoRA clothes training.

    • @AI-HowTo
      @AI-HowTo  10 หลายเดือนก่อน

      Sorry, for the time being I am only replying to comments here when possible, and not accepting private or business inquiries elsewhere.

  • @rbdesignguy
    @rbdesignguy 2 หลายเดือนก่อน

    Why not just crop in Photoshop and save yourself a step?

    • @AI-HowTo
      @AI-HowTo  2 หลายเดือนก่อน

      I think I did that at some point

  • @toygunsonly8093
    @toygunsonly8093 11 หลายเดือนก่อน

    Can you make a tutorial that is 10 minutes long and straight to the point? This tutorial has a lot of repetitive talk.

    • @AI-HowTo
      @AI-HowTo  11 หลายเดือนก่อน

      Really difficult, but hopefully I will for future training videos, if any; I will certainly try to stick to under 10 minutes and avoid the repetition from previous videos or in the talk, as you suggested. Sometimes I repeat concepts on purpose because I think they need emphasis; other times I just forget :)

    • @molk
      @molk 10 หลายเดือนก่อน +2

      @@AI-HowTo I don't agree; it's very well explained. If someone finds it repetitive, they can play it at 2x speed.