SDXL LORA Training Without A PC: Google Colab and Dreambooth

  • Published Dec 2, 2024

Comments • 242

  • @allyourtechai
    @allyourtechai  10 months ago +1

    ✨ Support my work on Patreon: www.patreon.com/allyourtech
    💻 My Stable Diffusion PC: kit.co/AllYourTech/stable-diffusion-build

    • @CodeMania-y3e
      @CodeMania-y3e 4 months ago

      @allyourtechai Hey man, in the process you showed in this video, do DreamBooth and LoRA both run simultaneously?

  • @shankoty1
    @shankoty1 8 months ago +2

    Please reply. I've tried five different times to train a LoRA, but when I load it in Fooocus it ignores the LoRA file and does nothing. It was working very well before, and I managed to create many LoRAs with this method, but now it does nothing. What is the problem? Can you please help me?

    • @allyourtechai
      @allyourtechai  8 months ago +4

      I’ve had pneumonia for the past two weeks. I’ll try to look once I’m better

    • @shankoty1
      @shankoty1 8 months ago +2

      @@allyourtechai Oh, I'm sorry to hear that! Take all the rest you need. Thanks for the reply, and I wish you a quick recovery. ❤

    • @Fanaz10
      @Fanaz10 7 months ago

      Did you figure it out?

  • @kronostitananthem
    @kronostitananthem 4 months ago +3

    For anyone having the problem of Fooocus not loading your model:
    The current version of the script should output two files. One of them ends in "kohya"; that one will work with Fooocus. The other one is the wrong format.

    • @allyourtechai
      @allyourtechai  4 months ago

      Thanks for sharing this!

    • @peteryu4919
      @peteryu4919 3 months ago

      Do you know how to have the current script return the kohya file for more than 500 steps / non-default parameters?

  • @nadiaprivalikhina
    @nadiaprivalikhina 10 months ago +8

    Wanted to say thank you for this video! I've been looking for a tutorial like this one, and most of what I found is total BS about making money with fake influencers. Appreciate your detailed explanation and scientific approach to the topic

    • @allyourtechai
      @allyourtechai  10 months ago

      I really appreciate that, thank you!

    • @kayinsho
      @kayinsho 9 months ago

      Yeah, this is an amazing vid. How can we make alterations to our LoRA and save them? Let's say the face needs to be thinner, wider, etc.

    • @Fanaz10
      @Fanaz10 7 months ago

      How is it BS lol? Why are you mad?

  • @R3dBubbleButNerfed
    @R3dBubbleButNerfed 9 months ago +5

    Your issue with style is forgetting to uncheck Fooocus V2, Enhance, and Sharp. They drive the model toward realism.

  • @AstaIllum
    @AstaIllum 8 months ago +11

    I have tried following this guide step for step, but my LoRA doesn't do anything. I can download and add other LoRAs and they work perfectly, but not when I add my own.
    I am running Fooocus 2.2 on a Google Colab machine. My model is juggernautXL_v8Rundiffusion.safetensors and the LoRA is trained on stable-diffusion-xl-base-1.0.
    I followed the guide 1:1 and used DreamBooth LoRA, with 8 pictures of a celeb, and made the prompt the name of that celeb. The training takes around 2 hours and completes correctly, but when used in my Fooocus it looks nothing like my LoRA :( Can you help us?

    • @AstaIllum
      @AstaIllum 8 months ago

      Yes, I tried that. Unfortunately it gives the same result. @ea03941d

    • @mestomasy
      @mestomasy 8 months ago +1

      My LoRA doesn't have any effect either. I had very good data and used keywords, but it doesn't work with any model, including sd_xl_base. Very upset :(

    • @talvez_priest_wow3720
      @talvez_priest_wow3720 8 months ago +2

      I made about 10 LoRAs with this guide 2-3 weeks ago and they worked fine. I was away for a week, and when I tried again now, my old LoRAs work but my new LoRAs do nothing. I tried different training images and tried using them in Fooocus, A1111, Fusion, and ComfyUI; they don't work. They never showed up in the LoRAs tab of the A1111/Fusion interface, but they still worked. Now only what I trained 2-3 weeks ago works; the new ones do not.

    • @arnabing
      @arnabing 8 months ago +2

      I'm having the same issue. I'm using the model the video suggests, SDXL base 1.0, and my LoRA does nothing. Any solution here? :(

    • @ImSphex7
      @ImSphex7 7 months ago

      Hoping for a solution, I have the exact same issue

  • @MindSweptAway
    @MindSweptAway 10 months ago +6

    Coming from your awesome Fooocus colab tutorial! When it finishes the training steps, it keeps repeating something along the lines of “Running jobs: [],” followed by “GET /is_model_training HTTP/1.1” in the output for a few hours. Is it supposed to do that? My dataset contains around 50-100 images.

    • @allyourtechai
      @allyourtechai  10 months ago +1

      That many images would likely take 10-20 hours to train. I haven’t ever tried that large of a data set on colab. Does it show a progress percentage at any point?

    • @MindSweptAway
      @MindSweptAway 10 months ago +2

      @@allyourtechai It does give me a percentage at the beginning, but when it finishes it keeps outputting “Running jobs” with no percentage at all. I think it’s because I used Firefox, which is usually the culprit in problems like this, so I might try running colab in Chrome from now on. Thanks for listening! 😊

    • @DerKapitan_
      @DerKapitan_ 10 months ago +4

      @@allyourtechai I'm having the exact same issue here, except I'm using Chrome and I only used a dataset of six images, following the same steps and settings outlined in the video. It took about 1 hour and 45 minutes for all 500 training steps to complete, but after that it gets stuck executing system() > _system_compat() > _run_command() > _monitor_process() > _poll_process(),
      and it was still repeating “Running jobs: [],” followed by “GET /is_model_training HTTP/1.1” at the four-hour mark.

    • @utkucanay
      @utkucanay 9 months ago +1

      @@DerKapitan_ Did you find any solution?

    • @DerKapitan_
      @DerKapitan_ 9 months ago +3

      @@utkucanay It turned out that it did create a LoRA file that I could download and use before it got stuck 'running jobs'. It didn't work well when I tried to use it, but I don't know if that was because of the glitchy process or poor training settings.

  • @baheth3elmy16
    @baheth3elmy16 8 months ago +9

    Just a note: 99% of the time, a free GPU connection is not available on Google Colab. For that, the user must change the setting from fp16 to bf16.
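    For context on the fp16/bf16 setting: bf16 generally requires an Ampere-class GPU (compute capability 8.0+, e.g. an A100), while the free-tier T4 (7.5) only supports fp16, so the right value depends on which GPU Colab assigns you. A minimal illustrative helper (the function name and its use are a sketch, not part of the video's notebook):

    ```python
    def pick_mixed_precision(compute_capability: tuple) -> str:
        """Choose a mixed-precision mode from a CUDA compute capability.

        bf16 needs Ampere (sm_80) or newer; older cards such as the free
        Colab T4 (sm_75) fall back to fp16.
        """
        major, _minor = compute_capability
        return "bf16" if major >= 8 else "fp16"

    # On a live Colab GPU you would feed in torch.cuda.get_device_capability();
    # here we just show the mapping for a T4 and an A100.
    print(pick_mixed_precision((7, 5)))  # fp16
    print(pick_mixed_precision((8, 0)))  # bf16
    ```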

  • @Rachelcenter1
    @Rachelcenter1 2 months ago

    7:33 But my mascot doesn't look like anyone. What do I do in that case? The Arnold Schwarzenegger reference confused me, because it didn't look like you used that information at all when plugging things into Google Colab. 8:01 Why does Google Colab not ask for any text prompts like other trainers such as civ?

  • @Akashwillisonline
    @Akashwillisonline 7 months ago +2

    My LoRA is ignored when I generate (2) ☹ Please help! I did the same process before and it worked, but it is not working now. There are a few changes in the AutoTrain interface (e.g. there is something new in the training parameters section: "vae_model": "",). I don't know what that is!

    • @MISTERPASTA12
      @MISTERPASTA12 7 months ago +1

      I ran into the same issue. I noticed the slight differences in the training parameters, and my LoRA does not seem to have any effect. I wonder if it was updated and there is a simple fix. Just for context, I am running SDXL through ComfyUI.

    • @CRIMELAB357
      @CRIMELAB357 5 months ago +1

      I do all my LoRA training on Civitai. Too much BS with colab

  • @Productificados
    @Productificados 6 months ago

    What if I forgot to put custom words in the "enter your prompt here" section? :(

    • @allyourtechai
      @allyourtechai  6 months ago

      I don’t think you will be able to use the files generated in that case. There would be no trigger word to prompt the system to use your LoRA

  • @nicolas.c
    @nicolas.c 8 months ago +2

    Excellent tutorial! Sadly, Google Colab keeps shutting down in the middle of training... like at 64% (training only 10 images). I've tried this for several days. Any solution? Anyone? Thanks in advance!

  • @ShashankBhardwaj
    @ShashankBhardwaj 8 months ago +2

    My Google Colab is stuck on this error after loading the 4/7th pipeline component:
    INFO: 2401:4900:1c31:6d1e:3d79:ce0c:9144:588d:0 - "GET /is_model_training HTTP/1.1" 200 OK
    INFO: 2401:4900:1c31:6d1e:3d79:ce0c:9144:588d:0 - "GET /accelerators HTTP/1.1" 200 OK
    It repeats this every second. Help please?

  • @Rohambili
    @Rohambili 1 month ago

    Hey brother man!
    What resolutions and aspect ratios do you use when you feed in your pictures? I've watched 3-4 of your LoRA training videos, but you never get to that topic... ChatGPT says that if you use the resolutions and aspect ratios closest to what SDXL was originally trained on, you get faster workflows/results:
    640 x 1536: 5:12
    768 x 1344: 4:7
    832 x 1216: 13:19
    896 x 1152: 7:9
    1024 x 1024: 1:1
    1152 x 896: 9:7
    1216 x 832: 19:13
    1344 x 768: 7:4
    1536 x 640: 12:5
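    The list above matches SDXL's commonly cited training buckets. If you want to snap a source photo to the closest bucket before uploading, a small sketch (the helper name is illustrative):

    ```python
    # SDXL's commonly cited (width, height) training buckets.
    SDXL_BUCKETS = [
        (640, 1536), (768, 1344), (832, 1216), (896, 1152),
        (1024, 1024),
        (1152, 896), (1216, 832), (1344, 768), (1536, 640),
    ]

    def nearest_bucket(width: int, height: int) -> tuple:
        """Return the bucket whose aspect ratio is closest to the input image's."""
        target = width / height
        return min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target))

    print(nearest_bucket(3000, 2000))  # a 3:2 landscape photo -> (1216, 832)
    ```

    You would then crop/resize the photo to the returned dimensions in your image editor before adding it to the dataset.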

  • @nobody_dude
    @nobody_dude 8 months ago +1

    What can you do if the trained LoRA model is not visible in Stable Diffusion Automatic1111? Other XL LoRAs are visible.

    • @allyourtechai
      @allyourtechai  8 months ago

      You can use anything other than A1111. A1111 doesn’t support the format yet

  • @taintofgreatness
    @taintofgreatness 10 months ago +2

    Kickass tutorial man.

  • @apoorvmishra2716
    @apoorvmishra2716 7 months ago +1

    Thanks for the video. Concise and very clear. But I am facing an issue which, judging from the comments, many others are facing as well. I created a LoRA using the above instructions (not on colab, but on a GCP VM), but when I tried to use it in Fooocus with sd_xl_base_1.0 as the base model, the LoRA does not get loaded. Other LoRAs downloaded from civitai load and work perfectly.
    On debugging, I found that Fooocus expects LoRA keys in the following format:
    'lora_unet_time_embed_0', 'lora_unet_time_embed_2', 'lora_unet_label_emb_0_0', 'lora_unet_label_emb_0_2', 'lora_unet_input_blocks_0_0', 'lora_unet_input_blocks_1_0_in_layers_0', 'lora_unet_input_blocks_1_0_in_layers_2', 'lora_unet_input_blocks_1_0_emb_layers_1', 'lora_unet_input_blocks_1_0_out_layers_0', 'lora_unet_input_blocks_1_0_out_layers_3', 'lora_unet_input_blocks_2_0_in_layers_0', 'lora_unet_input_blocks_2_0_in_layers_2', 'lora_unet_input_blocks_2_0_emb_layers_1', 'lora_unet_input_blocks_2_0_out_layers_0', 'lora_unet_input_blocks_2_0_out_layers_3', 'lora_unet_input_blocks_3_0_op', 'lora_unet_input_blocks_4_0_in_layers_0', 'lora_unet_input_blocks_4_0_in_layers_2', 'lora_unet_input_blocks_4_0_emb_layers_1', 'lora_unet_input_blocks_4_0_out_layers_0', 'lora_unet_input_blocks_4_0_out_layers_3', 'lora_unet_input_blocks_4_0_skip_connection', 'lora_unet_input_blocks_4_1_norm', 'lora_unet_input_blocks_4_1_proj_in', 'lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_q', 'lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_k', 'lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_v'
    Whereas the actual keys in the LoRA are in a slightly different format:
    'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora.up.weight'
    @allyourtechai do you know how to resolve this issue? Or can anyone else help? Thanks!

    • @allyourtechai
      @allyourtechai  7 months ago

      I haven't come across this one, but ChatGPT did provide a way to map the keys properly:
      chat.openai.com/share/34c5ada6-f3b5-4ab8-8bb7-f92638d8e922

    • @apoorvmishra2716
      @apoorvmishra2716 7 months ago

      Thanks for the reply. I found an easier fix: adding the following hyperparameter: --output_kohya_format

    • @peteryu4919
      @peteryu4919 3 months ago

      @@apoorvmishra2716 How did you do that? I tried it in the JSON file and it doesn't work. Help please!
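    A quick way to tell which of the two formats a given .safetensors file is in is to peek at its first key. Note that the two namings also differ in the block names themselves (diffusers' down_blocks vs. kohya's input_blocks), so a simple string rename is not enough; retraining with --output_kohya_format, as the reply above suggests, is the easier route. A hedged sketch of such a format check (function name illustrative):

    ```python
    def lora_key_style(keys) -> str:
        """Classify LoRA state-dict keys as kohya-style or diffusers-style.

        kohya-style keys (what Fooocus loads) look like
          'lora_unet_input_blocks_4_1_..._attn1_to_k.lora_down.weight'
        diffusers-style keys look like
          'unet.down_blocks.1. ... .attn1.to_k.lora.down.weight'
        """
        first = next(iter(keys))
        if first.startswith(("lora_unet_", "lora_te")):
            return "kohya"       # should load in Fooocus
        if ".lora.down.weight" in first or ".lora.up.weight" in first:
            return "diffusers"   # needs conversion (or --output_kohya_format)
        return "unknown"

    # In practice you would read the keys with the safetensors library:
    #   from safetensors import safe_open
    #   with safe_open("my_lora.safetensors", framework="pt") as f:
    #       print(lora_key_style(f.keys()))
    ```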

  • @Tokaint
    @Tokaint 10 months ago

    So now it looks like me mixed with my celebrity look-alike. Is it because their name is in the prompt? Any way to have it look just like me?

    • @allyourtechai
      @allyourtechai  10 months ago

      How many images did you train with?

    • @Tokaint
      @Tokaint 10 months ago

      @@allyourtechai I think 13 images. It gave me better results on the second test, when I just wrote my own prompt instead of a celeb look-alike. Probably because I have a unique look

  • @Gygbu8245
    @Gygbu8245 10 months ago +1

    Awesome tutorial - dig your channel!

  • @olvaddeepfake
    @olvaddeepfake 6 months ago +1

    Wouldn't it be better to save a checkpoint to Google Drive every so often? I know I'll come back and it will be disconnected and the LoRA file will be gone

    • @allyourtechai
      @allyourtechai  6 months ago

      You definitely can, as an option. I always stick around during the training personally, but not everyone does
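      A minimal sketch of that backup idea (paths and the function name are illustrative; on Colab you would first mount Drive with `from google.colab import drive; drive.mount('/content/drive')` and point `backup_dir` inside it):

      ```python
      import shutil
      from pathlib import Path

      def backup_safetensors(output_dir: str, backup_dir: str) -> list:
          """Copy every .safetensors file from the training output folder to a
          backup folder (e.g. a mounted Google Drive path), so a Colab
          disconnect doesn't take the LoRA with it."""
          dest = Path(backup_dir)
          dest.mkdir(parents=True, exist_ok=True)
          copied = []
          for f in sorted(Path(output_dir).glob("*.safetensors")):
              shutil.copy2(f, dest / f.name)
              copied.append(f.name)
          return copied
      ```

      You could call this once after training finishes, or periodically while the cell is still running.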

  • @palashkumbalwar4798
    @palashkumbalwar4798 8 months ago +1

    Hi, thanks for the tutorial.
    I tried generating a LoRA with the same method using 24 images, but when I tested it in Fooocus it didn't work.
    It's not generating the subject it was trained on at all

    • @daansan26
      @daansan26 5 months ago

      Same here...

  • @djpolyester
    @djpolyester 7 months ago +2

    Omg, Johnny Sins making a tut on SD!

    • @allyourtechai
      @allyourtechai  7 months ago

      🤣

    • @bradydyson65
      @bradydyson65 7 months ago

      I thought he looked faintly like the guitarist from Rise Against

  • @havelicricket
    @havelicricket 9 months ago +1

    Please help: colab automatically disconnects after 90 minutes while training is running, and training usually takes at least 2 to 4 hours. How do I finish the training?

    • @allyourtechai
      @allyourtechai  9 months ago

      How many images are you using? Running this myself completed in under an hour

    • @havelicricket
      @havelicricket 9 months ago

      There are 11 images I am using @@allyourtechai

    • @havelicricket
      @havelicricket 9 months ago

      ❌ ERROR | 2024-02-11 06:40:05 | autotrain.trainers.common:wrapper:91 - train has failed due to an exception: Traceback (most recent call last):
      File "/usr/local/lib/python3.10/dist-packages/autotrain/trainers/common.py", line 88, in wrapper
      return func(*args, **kwargs)
      File "/usr/local/lib/python3.10/dist-packages/autotrain/trainers/dreambooth/__main__.py", line 312, in train
      trainer.train()
      File "/usr/local/lib/python3.10/dist-packages/autotrain/trainers/dreambooth/trainer.py", line 406, in train
      self.accelerator.backward(loss)
      File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1962, in backward
      self.scaler.scale(loss).backward(**kwargs)
      File "/usr/local/lib/python3.10/dist-packages/torch/_tensor.py", line 492, in backward
      torch.autograd.backward(
      File "/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py", line 251, in backward
      Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
      RuntimeError: Expected is_sm80 || is_sm90 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
      ❌ ERROR | 2024-02-11 06:40:05 | autotrain.trainers.common:wrapper:92 - Expected is_sm80 || is_sm90 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
      @@allyourtechai

  • @rioo3332
    @rioo3332 5 months ago +1

    I've been waiting 2 hours, even more, but the cell is still running endlessly; it never finishes.
    How come?

  • @ryxifluction6424
    @ryxifluction6424 10 months ago +1

    My results didn't come out that well. Any troubleshooting tips? I got images of other people (they didn't look like the person or celebrity I put in)

    • @allyourtechai
      @allyourtechai  10 months ago

      How many images did you train with, and which software are you using to generate images after training? Are you using base Stable Diffusion XL to generate the images?

    • @mattattack6288
      @mattattack6288 10 months ago

      @@allyourtechai I was getting the same thing. I switched from Automatic1111 to Fooocus, and it works now. For some reason Stable Diffusion is not recognizing the LoRA

  • @_AlienOutlaw
    @_AlienOutlaw 8 months ago

    I was pulling my hair out trying to figure out how to train locally on a Mac and eventually found this video. Thank you! One question: I used a 16-image dataset to see just how real a headshot I could generate, and I'm currently at 10h 5m. I ended up getting a Colab Pro subscription after my first attempt was halted at 6 hrs. Any insight on large jobs like this? I'd hate to lose progress while sleeping lol

  • @乾淨核能
    @乾淨核能 6 months ago

    04:55 Why are my training parameters fewer than the ones you showed?
    Did you use Full rather than Basic?
    I also have one extra parameter: "vae_model": "",

    • @allyourtechai
      @allyourtechai  6 months ago +1

      It is possible that they updated the scripts

    • @乾淨核能
      @乾淨核能 6 months ago

      @@allyourtechai thank you!

  • @vespucciph5975
    @vespucciph5975 7 months ago +1

    It doesn't work for me. I have tried it multiple times, with and without a celebrity, and also with different images. The settings are correct and I'm running it in Fooocus. It seems to load and create images no problem, but they don't look like me. Not even close. What could have gone wrong?

    • @vespucciph5975
      @vespucciph5975 7 months ago +3

      Update: I got it working using the .safetensors file with "kohya" in the name. In my case there are two.

    • @rubenbaenaperez6183
      @rubenbaenaperez6183 7 months ago

      @@vespucciph5975 😮

    • @Fanaz10
      @Fanaz10 7 months ago +1

      @@vespucciph5975 Hey bro, did it work well? I saw there was a kohya file but used the regular one; now it's deleted and I have to run the training again........ :(

    • @vespucciph5975
      @vespucciph5975 7 months ago

      @@Fanaz10 Yes. It works as well as in his video.

    • @Fanaz10
      @Fanaz10 7 months ago

      @@vespucciph5975 Yeah bro, I ran it with the kohya file and it works. HOWEVER, whenever I try to add even one word to the prompt, the end result is unrecognizable. Do I have to do something with the weights? I'm just trying to make a simple corporate portrait, like "tom cruise man, corporate portrait"

  • @lukejames3534
    @lukejames3534 8 months ago

    I couldn't train the model. I'm getting this error: "ImportError: cannot import name 'text_encoder_lora_state_dict' from 'diffusers.loaders' (/usr/local/lib/python3.10/dist-packages/diffusers/loaders/__init__.py)". Please help me resolve this.

  • @ChrisChan126
    @ChrisChan126 10 months ago +1

    Man, you're the best! I have a question: if the training gets interrupted/stopped by accident, do I need to start everything all over again?

    • @allyourtechai
      @allyourtechai  10 months ago +1

      Thank you! Yes, if it fails and there is no progress or movement, you may need to restart, unfortunately.

  • @TheHeartShow
    @TheHeartShow 9 months ago +1

    Anyone having an issue using the LoRA produced with this method in A1111? Every single LoRA shows up in A1111 except the ones trained this way.

    • @allyourtechai
      @allyourtechai  9 months ago

      I use Fooocus or InvokeAI (or even ComfyUI). No idea why A1111 would have issues, though.

  • @jjsc3334
    @jjsc3334 9 months ago

    Some of my LoRAs disappeared, and clicking Refresh did not work. I went to Extensions → Apply and Update; still doesn't work. The disappeared LoRA files are still in the Lora folder. How do I fix that?

    • @allyourtechai
      @allyourtechai  9 months ago

      If you refresh, I believe it launches a new colab instance and you lose anything related to the old one.

  • @benpowell1463
    @benpowell1463 5 months ago

    The tutorial was great! I tried this 4 months after the video came out, and the interface and options have changed a lot. I got a LoRA that worked in Auto1111 and Forge UI, but the quality was unusable. I used around 20 headshots of myself, all with different lighting, backgrounds, and expressions. I was trying to use an SD 1.5 model since my 4 GB card can't run SDXL very well. The results didn't look like me unless the LoRA weight was at full strength, and then it seemed to be not flexible at all, just grungy images that are nearly identical to the headshots I uploaded. I was sad to realize that it was asking me to pay for credits to try the training again, so I only got that one shot at it. It seems weird that I followed the instructions so exactly but got nowhere near the results you did. I'd appreciate any help; I'm pretty much a beginner at this.

  • @JieTie
    @JieTie 9 months ago

    Great tut, but you could explain how to add captions to images, or maybe how to check what caption was used during training.
    Edit: or maybe just drop in image1.png, image1.txt, image2.png, image2.txt and it will be fine?
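    The sidecar naming in that edit is exactly the convention kohya-style trainers read: image1.png next to image1.txt containing the caption text (whether the AutoTrain colab in this video honors these files is not confirmed here). A tiny illustrative helper for writing them:

    ```python
    from pathlib import Path

    def write_caption(image_path: str, caption: str) -> Path:
        """Write a kohya-style sidecar caption: same filename stem as the
        image, but with a .txt suffix, containing the caption text."""
        txt_path = Path(image_path).with_suffix(".txt")
        txt_path.write_text(caption, encoding="utf-8")
        return txt_path
    ```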

  • @cheick973
    @cheick973 9 months ago

    Hi, thank you so much for your good explanation. Everything went well for me during training, but at the end I don't see any output folder with a safetensors file. I tried several times. Any idea?

  • @edmoartist
    @edmoartist 9 months ago +1

    Great tutorial! Clear and to the point. Anyone know if you can input .txt files with captions instead of the ? Cheers

  • @ItsSunny-ym5jh
    @ItsSunny-ym5jh 10 months ago +1

    Hey, umm, how can I put this model on Hugging Face without downloading it?

    • @allyourtechai
      @allyourtechai  10 months ago

      If you have a Hugging Face account, it will automatically upload the LoRA to your account when the generation is complete

  • @leilagi1345
    @leilagi1345 8 months ago

    Hi! Really helpful video. Thank you so much for the info!
    I just wonder: if I want to train my style of drawing (flat, a little in a modern Japanese style but not anime, mostly full bodies of girls or boys in the streets), what trigger word should I use? Just "drawing", or my unique gibberish word like "uhfvuhfuh"?

  • @TatsuyaNFT
    @TatsuyaNFT 2 months ago

    Make a video on a Stable Diffusion setup that can be used on any device with Google Colab.
    Most of the ones I found have options to download a model from CivitAI, but don't have additional options like adding LoRA models or embeddings.
    Would be helpful if you could make one

  • @mastertouchMT
    @mastertouchMT 9 months ago +1

    Great tute!! One quick question: the LoRAs work fine with Fooocus, but they don't work in A1111?

    • @allyourtechai
      @allyourtechai  9 months ago +1

      They seem to work everywhere but A1111, and I haven’t figured out why that is yet

    • @mastertouchMT
      @mastertouchMT 9 months ago

      @@allyourtechai Odd. Also noticed that they have SD1 listed as the Stable Diffusion version, if that has something to do with it

    • @allyourtechai
      @allyourtechai  9 months ago +1

      @@mastertouchMT That's interesting. Worth digging into more. I'll see if I can find anything

    • @mastertouchMT
      @mastertouchMT 9 months ago

      @@allyourtechai I just got it working in the new Forge platform. You have to go into the JSON file and change it to SDXL and it's good to go!

  • @guttembergalves3996
    @guttembergalves3996 10 months ago +1

    Thanks for the video. I'm training my model right now, following your tips. The question I have is: do you know of any colab I can use to run this .safetensors file and generate images based on the model I just trained? Thanks again and good luck.

    • @allyourtechai
      @allyourtechai  10 months ago +1

      I'll see what I can find!

    • @allyourtechai
      @allyourtechai  10 months ago +2

      Here you go!!! : th-cam.com/video/kD6kD6G_s7M/w-d-xo.html

    • @guttembergalves3996
      @guttembergalves3996 10 months ago

      @@allyourtechai Thanks. I'll check it out now.

  • @venom90210
    @venom90210 7 months ago

    My model came out as a JSON file. What did I do wrong?

  • @jeffbezoz-z6d
    @jeffbezoz-z6d 9 months ago

    The generated LoRA is working great with Fooocus, but it doesn't do anything in A1111. Are you aware of this issue?

    • @allyourtechai
      @allyourtechai  9 months ago

      A1111 needs an update to allow for the format that the colab puts out. Not anything I have control over, unfortunately

  • @wordsofinterest
    @wordsofinterest 10 months ago +1

    Do you prefer training this way, or does Kohya produce better results?

    • @allyourtechai
      @allyourtechai  10 months ago +1

      Kohya provides more flexibility, with regularization images and higher training steps, assuming you have the VRAM. Generally you will get a higher-quality result from local training unless you spend money on a larger-memory colab instance... but depending on your use case, the quick, free training could be good enough. (I hope that helped answer it)

  • @esalisbery
    @esalisbery 9 months ago

    Hello, I just want to ask if this works for training a specific card.

  • @MarkErvin-yg1kd
    @MarkErvin-yg1kd 9 months ago

    Great tutorial! I made it all the way through training, but when I try to access the PyTorch file, my file structure looks completely different. Mine is a long list of folders starting with bin, boot, content, datalab, etc. I can't navigate up a level in the file menu to where yours is on screen. Any ideas?

    • @allyourtechai
      @allyourtechai  9 months ago

      There is no output folder with a safetensors file? That’s odd

  • @nochan6248
    @nochan6248 9 months ago

    I get this error after I hit the run button:
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    lida 0.0.10 requires kaleido, which is not installed.
    The error is longer

    • @allyourtechai
      @allyourtechai  9 months ago

      Are you training with SDXL? It sounds like a missing package, but I'm unsure from the error. Can you post the full error if you can?

    • @nochan6248
      @nochan6248 9 months ago

      Yes, SDXL. I could not copy-paste the whole error into the chat; the YouTube algorithm likely detected it as spam.
      I did not save the error.
      @@allyourtechai

  • @animatedjess
    @animatedjess 10 months ago

    I tried to use this method to train the SSD-1B model, but I got an error during training. Have you tried training an SSD-1B model?

    • @allyourtechai
      @allyourtechai  9 months ago +1

      I haven’t tried that yet. Let me see if I can get it to work though

    • @animatedjess
      @animatedjess 9 months ago

      @@allyourtechai thanks!

  • @rpc8169
    @rpc8169 9 months ago

    Hi, thanks for the video! I tried this method, but testing the LoRA in SD I'm getting images nothing like the training images. It's supposed to be a shirt, but I'm getting images of a beaver lol! Not sure what to do...

    • @allyourtechai
      @allyourtechai  9 months ago

      What was your trigger word, and what was your dataset for the LoRA?

  • @vadar007
    @vadar007 8 months ago

    Is there a recommended resolution for the training images, e.g. 1024x1024?

    • @CRIMELAB357
      @CRIMELAB357 4 months ago

      Yes, 1024x1024, hence the name SDXL

  • @fundazeynepayguler8177
    @fundazeynepayguler8177 10 months ago

    Hello, thank you for the tutorial. I'm curious about how to use captions in this context. I have around 100 images with captions that I prepared using Kohya, along with a considerable amount of editing afterward. I'm wondering if it's possible to use them.

    • @allyourtechai
      @allyourtechai  10 months ago

      I’ll do a tutorial :)

    • @Deefail
      @Deefail 9 months ago

      @@allyourtechai I need this so that I can train an art style

    • @JieTie
      @JieTie 9 months ago

      Any luck with those caption .txt files? :)

  • @Zorot99
    @Zorot99 10 months ago +1

    For some reason it only works when training for SDXL; when I try SD 1.5, I get an error.
    Anyone experiencing the same issue?

    • @PSYCHOPATHiO
      @PSYCHOPATHiO 10 months ago

      Same

    • @allyourtechai
      @allyourtechai  10 months ago

      I haven’t tried that specific colab for 1.5 training, but none of my old colabs I used for 1.5 work anymore, so it would seem that something major changed

    • @Zorot99
      @Zorot99 9 months ago

      @@PSYCHOPATHiO Still didn't find a solution?

    • @PSYCHOPATHiO
      @PSYCHOPATHiO 9 months ago

      @@Zorot99 I did the SDXL one, but it kind of crashed at the end or timed out; basically I got nothing

    • @Zorot99
      @Zorot99 9 months ago

      @user-qc7rz1ep9d I only tested training SDXL to see whether it actually works, since SD 1.5 is not working for me.

  • @monstamash77
    @monstamash77 10 months ago +2

    Awesome tutorial, thank you so much for sharing this video. It's going to help a lot of people like me with crappy GPUs

  • @TitohPereyra
    @TitohPereyra 10 months ago +1

    Thanks! I love this kind of video! I love Automatic1111

  • @onfire60
    @onfire60 10 months ago +1

    It's amazing to me how people just upload their images to wherever. Do you know where those images go and how they may be used after you upload them? I mean, this is really cool and all, but I'm not sure I would suggest people upload personal images of themselves to random sites, especially in this AI world. Just my opinion, take it or leave it. Cool tutorial though!

    • @allyourtechai
      @allyourtechai  10 months ago +3

      This is a cloud instance of a virtual machine. The files go to Google Cloud, run on Colab, then disappear the moment you disconnect and the virtual machine is destroyed. Pretty safe, all things considered.

    • @onfire60
      @onfire60 10 months ago +1

      @@allyourtechai What about the site where you see what celebrity you look like?

    • @allyourtechai
      @allyourtechai  10 months ago +2

      Yep, that one for sure. In general, if you are doing a LoRA of yourself, chances are someone in your life has already told you who you look like, so it might not be necessary. Use your own judgement of course, but good point.

  • @elmapanoeselterritorio7343
    @elmapanoeselterritorio7343 7 months ago

    Hi, I followed all the steps carefully, but when I trigger the prompt (in my case "jim carrey man") I keep getting images of Jim Carrey and not mine... What's the problem? Amazing content btw, thanks

    • @elmapanoeselterritorio7343
      @elmapanoeselterritorio7343 7 months ago

      I even trained the LoRA again with a combination of words that only represents me, and it's still not working

    • @Fanaz10
      @Fanaz10 7 months ago

      same here bro... did you figure it out?

  • @pollockjj
    @pollockjj 10 months ago

    What size is the final LoRA using the parameters you suggest? How does the size/quality compare to the local method you published earlier with those parameters?

    • @allyourtechai
      @allyourtechai  10 months ago +2

      It’s about 23MB in size versus 1.7GB for the version trained locally. Part of the reason for that is the 15GB VRAM limit on the free version of Colab; my local guide requires about 20GB of VRAM to train. I also used 2000 training steps locally versus 500 in Colab.
      So, the local version is higher quality, but is it 100X higher quality? No!
      I’ll do some side by side tests and we can see :)
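The size gap is mostly a function of LoRA rank (network dim): each adapted linear layer of shape in_dim x out_dim adds roughly rank * (in_dim + out_dim) parameters, stored at 2 bytes each in fp16. A rough sketch of that arithmetic (the layer shapes below are hypothetical, chosen only to illustrate the scaling):

```python
def lora_params(layer_shapes, rank):
    """Approximate LoRA parameter count: each (in_dim, out_dim) linear
    layer gets two low-rank factors, A (rank x in_dim) and B (out_dim x rank)."""
    return sum(rank * (in_dim + out_dim) for in_dim, out_dim in layer_shapes)

def fp16_megabytes(n_params):
    # 2 bytes per fp16 weight
    return n_params * 2 / (1024 ** 2)

# Hypothetical example: 100 attention projections of 1280x1280, at rank 8
# versus rank 128. File size scales linearly with rank.
shapes = [(1280, 1280)] * 100
print(round(fp16_megabytes(lora_params(shapes, 8)), 1))    # 3.9
print(round(fp16_megabytes(lora_params(shapes, 128)), 1))  # 62.5
```

So a ~23MB file simply reflects a low rank, not a broken export; raising the network dim (where the trainer allows it) grows the file and the capacity together.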

    • @levis89
      @levis89 10 months ago

      @@allyourtechai If I have the lowest tier of Colab purchased already, what number of steps would you recommend for the best results? Also, some people suggest changing clothes, expressions and environments in the sample photos for better results; do you agree with this?

    • @allyourtechai
      @allyourtechai  10 months ago

      I would go with 2000 steps of training for a better result. I would definitely try to get variation in both the expressions and clothing. Mine, for example, tends to put me in a grey polo since the bulk of my images were taken in a hurry with one set of clothes. Normally I try for varied lighting, clothing, etc. to create the most flexible model possible.

    • @pollockjj
      @pollockjj 10 months ago

      @@allyourtechai Thanks for the additional info. I was able to get your exact settings working on my 12GB 4070, but I get how Colab is essentially a free video card, so I shouldn't complain. :)

    • @levis89
      @levis89 10 months ago

      @@allyourtechai Gotcha! Appreciate the reply. Appreciate the content.
      Got a question I could really use your opinion on. If my final aim is to make a comic-art-style avatar of myself, should I think about training the LoRA on a different base, something that has already been trained on the particular style I'm aiming for? I've read that SDXL and Juggernaut are designed for realistic images, and the Google Colab method has a fixed set of bases I can use; any in particular that you would suggest for this?
      Either way, you have earned my sub, looking forward to future videos!

  • @AmirAliabadi
    @AmirAliabadi 9 months ago

    Does this not need regularization images? That seems like an important part of LoRA training.

    • @allyourtechai
      @allyourtechai  9 months ago

      They help but are not required. They're simply not an option at all in most cases when you aren’t training locally.

    • @AmirAliabadi
      @AmirAliabadi 9 months ago

      @@allyourtechai How about captioning? Does this support caption text along with the training images?

  • @rendymusa3527
    @rendymusa3527 9 months ago

    Do the photos to be prepared have to be the same size? Or can they be random?

    • @allyourtechai
      @allyourtechai  9 months ago

      Different sizes and aspect ratios are fine. You no longer need to crop all of the photos

  • @animatedjess
    @animatedjess 10 months ago

    Does this work for training styles? What would I need to enter in the prompt field?

    • @allyourtechai
      @allyourtechai  10 months ago

      It does! You would just provide a prompt trigger that describes the style and ensure that trigger is also in the text annotation files for the pictures you use to train the model. It might be something like “neon glow style”, for example

    • @animatedjess
      @animatedjess 10 months ago

      @@allyourtechai In the Google Colab/ngrok app I don't see an option for text annotation. In the tutorial I just saw that you uploaded images only.

    • @Deefail
      @Deefail 9 months ago

      @@allyourtechai Are you saying that we can just upload .txt files alongside the images, with the same name but a different extension obviously, and it will work?

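For reference, the Kohya-style caption convention discussed above is one .txt file per image, matched by filename stem. A minimal sketch for checking that a dataset folder follows that convention (whether this particular Colab actually reads the captions is unconfirmed):

```python
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

def missing_captions(folder):
    """Return image files in `folder` that lack a same-stem .txt caption."""
    folder = Path(folder)
    images = [p for p in folder.iterdir() if p.suffix.lower() in IMAGE_EXTS]
    return sorted(p.name for p in images if not p.with_suffix(".txt").exists())

# Usage: missing_captions("my_dataset") lists any images still needing a caption.
```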
  • @researchandbuild1751
    @researchandbuild1751 9 months ago

    You have to buy compute credits now to use Colab

    • @allyourtechai
      @allyourtechai  9 months ago

      No more free credits? Or do you mean after you use all of your free credits?

    • @mikrodizels
      @mikrodizels 9 months ago

      Don't use Colab for a couple of days and you will regain access to a GPU, but keep in mind that it's about 4 hours total max before you lose priority, and not during peak hours. It sucks, but hey, it's free

  • @方奕斯
    @方奕斯 8 months ago

    Does the link still work? I got disallowed in the middle of my training

    • @harristamsi9682
      @harristamsi9682 5 months ago

      I think it doesn't work anymore

  • @OurResistance
    @OurResistance 7 months ago

    I am very frustrated that it does not allow me to use text files to describe the images. That makes it useless for most LoRA training purposes!

    • @allyourtechai
      @allyourtechai  7 months ago +1

      Yeah, it's hard to find anything that allows for that unless it is run locally. If I find anything I'll let you know.

  • @diamondhands4562
    @diamondhands4562 6 months ago

    I created one today. I don't think this works anymore; I can't get it to work at all. Has anyone created one lately that works?

    • @allyourtechai
      @allyourtechai  6 months ago

      I’ll have to see if there is another one we can use. These change so frequently

  • @faraday8280
    @faraday8280 6 months ago

    Can you make a video on how to install and run it on local hardware rather than Google Colab?

    • @allyourtechai
      @allyourtechai  6 months ago

      Yep, I have a couple of videos on that already :)

  • @popovdejan
    @popovdejan 9 months ago

    We can't use that LoRA with SD 1.5 in Automatic1111??

    • @allyourtechai
      @allyourtechai  9 months ago +1

      It would seem that A1111 doesn’t support the format. It seems to work everywhere else

    • @glassmarble996
      @glassmarble996 9 months ago

      @@allyourtechai There is a setting in both Forge and Automatic1111, named "show all loras" or something like that. Enable it and the LoRAs will work. My question is, can we raise the network dim and alpha dim? 22MB for SDXL is decreasing quality.

  • @faraday8280
    @faraday8280 6 months ago

    Tried a 1.5 one and the LoRA does nothing in Automatic1111 D:

    • @allyourtechai
      @allyourtechai  6 months ago +1

      You might try Fooocus. I haven't had any problems, but it's possible that enough code has changed that the colab doesn't work. I see that all the time; these systems have dozens of updates a week in some cases, and they break things.

  • @ImNotQualifiedToSayThisBut
    @ImNotQualifiedToSayThisBut 6 months ago

    Do we not need to tag the images anymore?

    • @allyourtechai
      @allyourtechai  6 months ago

      Ideally, yes, but running this remotely just doesn't allow for BLIP captioning.

  • @240clay98
    @240clay98 8 months ago

    Can this be done on a Mac?

  • @CRIMELAB357
    @CRIMELAB357 4 months ago

    SDXL checkpoint pls

  • @方奕斯
    @方奕斯 9 months ago

    Hello. Does this work with anime characters also?

    • @allyourtechai
      @allyourtechai  9 months ago

      Yes, although I would probably train on top of an anime-specific SDXL base model. You might still get good results on SDXL though, but I haven't tried.

  • @alirezaghasrimanesh2431
    @alirezaghasrimanesh2431 10 months ago +1

    Thanks for your great content! Very helpful.

    • @allyourtechai
      @allyourtechai  10 months ago

      You are very welcome, thanks for watching!

  • @freeviewmedia1080
    @freeviewmedia1080 9 months ago

    What about the caption text?

    • @allyourtechai
      @allyourtechai  9 months ago

      Unfortunately that’s a limitation of the colab. I’ve been looking for alternatives, but so far this is one of the best I have found

  • @axelrigaud
    @axelrigaud 10 months ago

    Thank you! Am I the only one to get this error after it processed the files? "You don't have the rights to create a model under this namespace"

    • @allyourtechai
      @allyourtechai  10 months ago +1

      Do you have the correct API key for Hugging Face entered, and does it have write access like it needs?

    • @axelrigaud
      @axelrigaud 10 months ago +1

      @@allyourtechai Yep, I figured that out by reading again what was asked in the notebook :) My token was "read". Thank you for replying!

    • @allyourtechai
      @allyourtechai  10 months ago

      @@axelrigaud Awesome, it's always nice when it turns out to be something simple!

  • @Fggg-hz9tl
    @Fggg-hz9tl 7 months ago

    Why does my finished model weigh 20 MB?

    • @vespucciph5975
      @vespucciph5975 7 months ago

      Same here. I believe it's just a weights preset or something like that

  • @BabylonWanderer
    @BabylonWanderer 10 months ago +3

    Great tutorial... but please, please use dark mode in your browser; that white screen is blinding 😎

    • @allyourtechai
      @allyourtechai  10 months ago +2

      Haha, I just changed over to dark mode, and my eyes thank you too 😂

    • @BabylonWanderer
      @BabylonWanderer 10 months ago

      @@allyourtechai 🤣👍

  • @RodrigoIglesias
    @RodrigoIglesias 10 months ago +2

    Very good tutorial! Finally I can have my own XL LoRA; I couldn't with an RTX 2070 8GB 😊
    Edit: Do you think I could train a Juggernaut XL LoRA with this? It fails with the default settings 🤔

    • @allyourtechai
      @allyourtechai  10 months ago +3

      Let me take a look!

    • @Dean-vk8ff
      @Dean-vk8ff 10 months ago

      @@allyourtechai Great tutorial; I still can't get around the error it produces when trying to use Juggernaut

  • @क्लोज़अपवैज्ञानिक
    @क्लोज़अपवैज्ञानिक 9 months ago +1

    Thanks

  • @krazyyworld1081
    @krazyyworld1081 8 months ago

    My LoRA is ignored when I generate

    • @allyourtechai
      @allyourtechai  8 months ago

      What software, and are you using the trigger in the prompt?

    • @krazyyworld1081
      @krazyyworld1081 7 months ago

      @@allyourtechai I followed your steps in the tutorial to the dot

  • @Hariom_baghel08
    @Hariom_baghel08 10 months ago

    I am an Android user, please help me 😢

    • @allyourtechai
      @allyourtechai  10 months ago

      What questions do you have?

    • @Hariom_baghel08
      @Hariom_baghel08 10 months ago

      @@allyourtechai Can what you showed be done on Android or not?

  • @gamersgabangest3179
    @gamersgabangest3179 8 months ago

    You look like "JerryRigEverything" lol

  • @Hariom_baghel08
    @Hariom_baghel08 10 months ago +1

    I am an Android user

  • @jahinmahbub8237
    @jahinmahbub8237 5 months ago

    Oh god, not Taylor Swift! You got a death wish???

  • @rorutop3596
    @rorutop3596 9 months ago

    Training is killed right after starting? Probably because I have a whopping 270+ images in the dataset, as I'm training for a style, but I don't know how to figure it out...
    {'variance_type', 'dynamic_thresholding_ratio', 'thresholding', 'clip_sample_range'} was not found in config. Values will be initialized to default values.
    > INFO Running jobs: [2564]
    INFO: 180.242.128.113:0 - "GET /is_model_training HTTP/1.1" 200 OK
    INFO: 180.242.128.113:0 - "GET /accelerators HTTP/1.1" 200 OK
    > INFO Running jobs: [2564]
    INFO: 180.242.128.113:0 - "GET /is_model_training HTTP/1.1" 200 OK
    > INFO Running jobs: [2564]
    > INFO Killing PID: 2564

  • @davidelks7766
    @davidelks7766 10 months ago +1

    Thanks a lot 🤍 keep going

    • @allyourtechai
      @allyourtechai  10 months ago +1

      Thank you too

  • @animeful7096
    @animeful7096 9 months ago

    My code gets stuck on
    > INFO Running jobs: []
    INFO: 103.133.229.36:0 - "GET /is_model_training HTTP/1.1" 200 OK
    INFO: 103.133.229.36:0 - "GET /accelerators HTTP/1.1" 200 OK
    and it keeps returning the same lines again and again and never stops executing.
    What is the problem here? Please help!

    • @AleixPerdigo
      @AleixPerdigo 9 months ago

      me too

    • @animeful7096
      @animeful7096 9 months ago +2

      @@AleixPerdigo Actually that wasn't an error, it was just an indication that the task completed. While it runs, a .safetensors file should appear in your folder; that is your LoRA file
