Fluxgym Full Guide for Flux LoRA Training: Easy WebUI and Low VRAM

  • Published Dec 26, 2024

Comments • 156

  • @brianmonarchcomedy
    @brianmonarchcomedy 3 months ago +9

    After I hit start training, in about 10 seconds it says "Training Complete. Check the outputs folder for the LoRA files." Could it be because I have multiple GPUs in my system? I noticed the train log does say something about multi_gpu. Not an error, but I thought maybe it's not supported. My GPUs aren't linked or anything.

    • @ChrissyAiven
      @ChrissyAiven 2 months ago

      A few seconds? And did the LoRA work? If so, gimme your computer 😁

    • @trexkioarts
      @trexkioarts 18 days ago

      @@ChrissyAiven Haha, that's the issue. It just gave me a weird file; I don't remember what its extension was, or if it was an "orphan" file (that's what I call files without an extension like .jpg etc.). Also, the size of the file was just a few KB.

  • @prakashjan15
    @prakashjan15 3 months ago +2

    I was looking for affordable/free LoRA training. God bless you!

  • @jason54953
    @jason54953 3 months ago +1

    If you issue the command python3.10 -m venv venv to create the environment and then activate it, it helps to resolve any incompatibility issues.
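
    For reference, a minimal sketch of that setup on Windows, assuming Python 3.10 is installed and that Fluxgym's dependencies are listed in a requirements.txt as described in its README (the exact file names and paths should be checked against the repo):

        rem Create the virtual environment with Python 3.10
        py -3.10 -m venv venv
        rem Activate it in the current cmd session
        venv\Scripts\activate
        rem Install the listed dependencies into the venv
        pip install -r requirements.txt

    After activation, python --version should report 3.10.x before you launch the WebUI.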

  • @doriandowse
    @doriandowse 2 months ago +2

    My hit rate on these tutorials has been pretty low. There's always something that doesn't work no matter what you do (ReActor 🤨).
    This tutorial, however, not only worked, I was amazed at the results. This really is a game changer. I can now make my own LoRAs and that is fking cool! Thank you.
    PS. Although it's trained on Flux.1-dev, the resulting LoRA works great on Q5_K_S.

    • @TheFutureThinker
      @TheFutureThinker 2 months ago

      Nice 👍 have fun with it.
      Yes, a LoRA trained on Flux.1 Dev can work on other Flux GGUF quantization versions.

  • @cadudatoros
    @cadudatoros 1 month ago

    Thank you! It works! One detail: depending on the Python version, some conflicts can happen. The setup on the page right now (Nov 2024) works with 3.10 (64-bit) on my computer (AMD).

  • @EternalAI-v9b
    @EternalAI-v9b 2 months ago

    Hello, what is your default .py reader? I don't usually have a preview of .py files on the right when browsing with Windows Explorer, for example at 6:40.

  • @trexkioarts
    @trexkioarts 3 months ago +2

    It just completed the task in a few seconds and I have a train LoRA .bat file in the output folder? I don't get it.

  • @incrediblekullu7932
    @incrediblekullu7932 2 months ago +2

    FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\ishwr\\Music\\fluxgym\\outputs\\lora1\\sample'
    I've tried a lot but I keep getting the same error.

  • @ademarco7769
    @ademarco7769 2 months ago

    Nice job. Easy explanation, precise. Great results on the LoRA. Thanks!

  • @ArtificialHorizons
    @ArtificialHorizons 2 months ago +1

    Great video. I tried to set it up on my laptop but it didn't work. Can you create a Colab notebook for this? I tried to make one but I'm having trouble setting up the WebUI hosting part; it points to localhost.

  • @hmmyaa7867
    @hmmyaa7867 1 month ago

    Thank you for the amazing tutorial bro

  • @RamonGuthrie
    @RamonGuthrie 2 months ago +1

    Is it possible to use a T5 fp8 instead of the T5 fp16 model for faster training?

  • @TiagoTorressoria
    @TiagoTorressoria 3 months ago +4

    I get an error after 10 seconds of training where it seems that the GPU is not configured. Can someone help me?

    • @kristian5747
      @kristian5747 2 months ago +1

      @@TiagoTorressoria Same here. It ran for about 10 minutes and finished early; when I checked the process it said error code 1.

  • @researchandbuild1751
    @researchandbuild1751 1 month ago +1

    I just get stuck where it says it's creating epoch 1/16 and it just sits there, no progress reporting at all

    • @2manification
      @2manification 1 month ago

      same problem

    • @aidendeans5569
      @aidendeans5569 28 days ago

      Yeah, I have the same issue.

    • @researchandbuild1751
      @researchandbuild1751 26 days ago

      @@aidendeans5569 I got some replies on Reddit; people said it is working, and the missing progress reporting is a bug the Fluxgym dev apparently gave up on. But it's still working correctly. I myself switched to using Kohya_ss with its GUI directly instead (using the SD3_Flux.1 branch of Kohya) and it's working for me.

  • @imoon3d
    @imoon3d 2 months ago

    Image preparation is also very important and you did not mention it in the video above.

    • @kalakala4803
      @kalakala4803 2 months ago

      Prepare metadata, number of images?

  • @dwoodwoo65
    @dwoodwoo65 2 months ago +1

    Can you train using the flux NF4 safetensor model?

    • @dwoodwoo65
      @dwoodwoo65 2 months ago

      Never mind, I read in the comments that you said you haven't tried other quantizations. I have a 4080 (16GB VRAM), so it sounds like I should try the standard Flux.1 dev safetensor model.

  • @pawelthe1606
    @pawelthe1606 3 months ago +2

    FileNotFoundError: [WinError 3] The system cannot find the path specified. I got this error; can anyone help, please?

  • @timbersavage90
    @timbersavage90 3 months ago +1

    I have installed this 6 times and still get this error: RuntimeError: use_libuv was requested but PyTorch was build without libuv support

  • @hitmanehsan
    @hitmanehsan 1 month ago

    For some reason, after I press "Add AI caption with Florence-2", my cmd gets stuck on downloading pytorch_model.bin. How can I fix this? Is there any way to skip this download? I mean, is there any way to download it manually and replace it? Where should I download it from, and where should I place it?
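
    For anyone hitting this, one hedged workaround is to pre-download the Florence-2 captioning model into the Hugging Face cache so the captioner finds it instead of downloading mid-run. The exact repo id Fluxgym uses should be visible in its console log; microsoft/Florence-2-large below is only an illustrative guess:

        rem Pre-populate the Hugging Face cache (huggingface-cli ships with huggingface_hub in the active venv)
        huggingface-cli download microsoft/Florence-2-large

    transformers then loads the files from the cache (%USERPROFILE%\.cache\huggingface\hub by default) rather than re-downloading pytorch_model.bin.
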

  • @ChrissyAiven
    @ChrissyAiven 2 months ago +1

    Hi, my Fluxgym has been running for around 12 hours now. I have a 16GB 4080, so should I stop it? The sample files showing at the bottom are really strange too, nothing to do with the model. Or is that normal?

    • @RonnieMirands
      @RonnieMirands 2 months ago +1

      Mine has been running for about 4 hours, I thought it was just me lol. So maybe there is something wrong.

    • @ChrissyAiven
      @ChrissyAiven 2 months ago +1

      @@RonnieMirands Hm, maybe... btw I have a 4060, I got the number wrong. Trying it with Flux Trainer inside ComfyUI now, but it seems the same: GPU is at 100%, estimated time 15 hours.
      Well, I don't mind IF it works after this, but it seems wrong anyway :D

    • @RonnieMirands
      @RonnieMirands 2 months ago +1

      @@ChrissyAiven I've searched more about this, and it seems it really does take a lot of hours! That's why at the beginning they said training was only possible with high-end cards. They've optimized it a lot, but it still takes a long time :(

    • @ChrissyAiven
      @ChrissyAiven 2 months ago

      @@RonnieMirands Ok, thank you, then I will wait patiently. Not gonna make a LoRA every day, right? :)

    • @ChrissyAiven
      @ChrissyAiven 2 months ago +1

      @@RonnieMirands Oh btw, did you try this version and also the trainer within Comfy? Gonna make a LoRA with each and then compare. My face is from Fooocus and I wasn't able to rebuild it in Comfy, so I hope the LoRA works :)

  • @prakashjan15
    @prakashjan15 3 months ago +2

    In my case the LoRA file is not generated after the whole process. I have 32 GB RAM and a 12 GB VRAM GPU.

    • @neolamas147
      @neolamas147 2 months ago

      SAME HERE. I've been trying to get it working for 2 days now and it just keeps downloading giant model files onto my drive and never even starts training anything haha

    • @kristian5747
      @kristian5747 2 months ago

      @@neolamas147 I give up on this. My last hope is to just run it on Google Colab, but I have to upgrade to Colab Pro (it's 9.99), because standard Colab has limited runtime.

  • @VictorDmAlves
    @VictorDmAlves 1 month ago

    Do you know how to train using multiple datasets, like with bucketing? I'd like to create a LoRA using 512, 768, and 1024 images at the same time, but I don't know how to do it (more precisely, where to store the additional images, since the bucketing option is available in the advanced tab).

    • @TheFutureThinker
      @TheFutureThinker 1 month ago

      The good news is that Flux LoRA training can take multiple image sizes in one dataset. So you can put all the different image sizes into one training dataset. :)
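
      For reference, a minimal sketch of how mixed resolutions are handled by the kohya sd-scripts engine that Fluxgym wraps: bucketing is set in the dataset config it generates (the file name and the values below are assumptions, not checked against Fluxgym's generated files):

          [general]
          enable_bucket = true          # sort images into resolution buckets automatically

          [[datasets]]
          resolution = 1024             # target training resolution
          min_bucket_reso = 512         # smallest bucket edge
          max_bucket_reso = 1536        # largest bucket edge

            [[datasets.subsets]]
            image_dir = "datasets/my-lora"   # hypothetical folder holding the mixed 512/768/1024 images
            num_repeats = 10

      With bucketing enabled, all the images can live in one folder; sd-scripts groups them by size and aspect ratio at load time.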

  • @cgpixol
    @cgpixol 1 month ago

    Can this LoRA work for SDXL as well?

  • @satyajitroutray282
    @satyajitroutray282 3 months ago

    Great tutorial. I have a question: how do we resume an interrupted training run from the last state?
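
    Not answered in the thread, but for what it's worth: the kohya sd-scripts trainer underneath Fluxgym has save/resume flags, so a hedged approach (assuming extra sd-scripts arguments can be passed through, e.g. via the Advanced tab) would be to append something like:

        rem While training: also write resumable state alongside the LoRA checkpoints
        --save_state --save_every_n_epochs 1

        rem To continue later: point --resume at a saved state folder from the outputs directory
        rem (the folder name below is illustrative; use the one actually created on disk)
        --resume outputs\my-lora\my-lora-000004-state

    The exact state-folder naming depends on the sd-scripts version, so check what appears in the outputs folder before passing it to --resume.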

  • @ademarco7769
    @ademarco7769 2 months ago

    One question: it seems that if I train a LoRA with one model and use it with another model, it does not perform well. Is it possible in Fluxgym to set a different training model, like by editing some settings file?

    • @ChrissyAiven
      @ChrissyAiven 2 months ago

      From what I know you can only use them with Flux; for SDXL and the like you need a different trainer.

  • @Ishmaam
    @Ishmaam 3 months ago

    Thank you so much for your effort. I'm having difficulty with part two (Add AI caption with Florence 2). Could not sort it out.

  • @dariocardajoli6831
    @dariocardajoli6831 3 months ago

    Fluxgym is underrated. The only thing that confused me at first was how the instructions said to download some flux-dev.sft file instead of the usual safetensors extension... I simply copied my already existing safetensors into the Fluxgym model unet folder and renamed it to have an .sft extension, did the same with the VAE, and it worked. Any thoughts?

    • @TheFutureThinker
      @TheFutureThinker 3 months ago +1

      .sft is short for safetensors. Both work. I have my VAE named .sft as well.

    • @dariocardajoli6831
      @dariocardajoli6831 3 months ago

      @@TheFutureThinker Thank you, I was thinking it could stand for supervised fine-tuning or something 😅 Also, another "rule" I broke was using the fp8 T5 text encoder during training by renaming it to fp16, and the scripts seem to correctly recognize it as fp8! Got any experience with using fp8 over fp16?

  • @NGIgri
    @NGIgri 3 months ago +1

    Well... this doesn't work for me. It ends training right after I start it. No LoRAs in the output folder.

    • @brianmonarchcomedy
      @brianmonarchcomedy 3 months ago +1

      Same here... it says "Training Complete. Check the outputs folder for the LoRA files" after about 10 seconds. Did you figure it out? By chance, do you have multiple GPUs in your system?

    • @NGIgri
      @NGIgri 3 months ago

      @@brianmonarchcomedy No. I have one RTX 4070 Ti. I just let it be and will wait for another tool, like Kohya ss. Maybe those guys will get it right. Or something else.

    • @brianmonarchcomedy
      @brianmonarchcomedy 3 months ago

      @@NGIgri I did notice that they might have come up with a new version since this video was made. I wonder if there's a glitch in the new version. You can see that there's a couple differences. Like on my version you type in the resolution you want. But in this version in the video, you have to check off 512 or 1024.

    • @NGIgri
      @NGIgri 3 months ago

      @@brianmonarchcomedy Yeah. And I found it strange that the resolution changes by itself. I tried to train the LoRA at 1024, but it automatically changes to 512. Maybe it's because of 12GB of VRAM.

  • @jocg9168
    @jocg9168 3 months ago +2

    I have some errors I'm trying to figure out, not sure if anybody had the same: [INFO] RuntimeError: use_libuv was requested but PyTorch was build without libuv support [ERROR] Command exited with code 1.
    For some reason it doesn't see libuv. I did the same on another, slower machine and it works. No idea what to do; I tried different versions of PyTorch and got the same result.

    • @jocg9168
      @jocg9168 3 months ago +1

      I found the solution: basically it's because I have two GPUs, so the script was trying to use both RTX 3090s, and for that reason it doesn't work.
      I just added the extra arguments " --num_processes=1 ^ --num_machines=1 ^" to the "train.bat" file and now it's using only one GPU without any error.
      I guess there must be something extra to install for multiple GPUs to work. Thanks again for the tutorials, very useful.

    • @timbersavage90
      @timbersavage90 3 months ago

      @@jocg9168 Wow, thanks for the info. I also had 2 GPUs; now it works.

    • @ckhmod
      @ckhmod 1 month ago

      @@jocg9168 Having the same issues. How do you add those extra arguments to the train script? I edit it and then the edit disappears?
      Where is train.bat located? Not seeing that. Thanks for this info, as it's definitely not the PyTorch install etc. I'm a newb at this part.

  • @sistemci
    @sistemci 3 months ago

    How can we make seamless textures on Flux? I have trained my own lora but I can't get seamless results

  • @TheLuc1890
    @TheLuc1890 3 months ago

    Are you using 1024 images? I've tried this and ComfyUI Flux training, and ComfyUI gives me a much better result (likeness) than Fluxgym. Have you tried comparing LoRAs from this and ComfyUI? Nice simple vid though, gonna try it later using your settings. And it seems like the size of the dataset is also crucial? (Size as in bytes per image, not how many files.)

    • @ChrissyAiven
      @ChrissyAiven 2 months ago

      Did you compare yet? I am about to do it too, but maybe you already got some experience?

    • @TheLuc1890
      @TheLuc1890 2 months ago +1

      @@ChrissyAiven Yeah, I tried it and this one had bad results. I prefer AI Toolkit LoRA training using 1024px images.

    • @ChrissyAiven
      @ChrissyAiven 2 months ago

      @@TheLuc1890 Ok, thanks, but you can use 1024 px on the Comfy trainer too.

  • @GooveG
    @GooveG 2 months ago

    Finished after a couple of hours but didn't generate any actual LoRA output.

    • @malikcooper1969
      @malikcooper1969 2 months ago

      I keep having the same thing happen. I've trained maybe 7 models but only got my LoRA output model once.

  • @NapalmCandy
    @NapalmCandy 3 months ago +1

    I have a 10GB VRAM GPU (3080) and I am getting an out-of-memory error.

    • @lom1910
      @lom1910 3 months ago

      Same here with 8GB

    • @he-xs5le
      @he-xs5le 2 months ago

      Change your model to dev-fp8; that will work.

    • @lom1910
      @lom1910 2 months ago

      @@he-xs5le Have you tried it yourself?

    • @he-xs5le
      @he-xs5le 2 months ago

      @@lom1910 Yes, I got the result.

    • @lom1910
      @lom1910 2 months ago

      @@he-xs5le On 8GB, right?

  • @lom1910
    @lom1910 3 months ago

    How do I run it on an already existing Flux model that isn't fp16?

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      It's made to train on fp16, as mentioned. I haven't tried GGUF or other versions.

    • @lom1910
      @lom1910 3 months ago

      @@TheFutureThinker I have 8GB VRAM and 16GB RAM, but I instantly get out-of-memory with the default models. 😕 Pinokio either crashes completely or I get an error in the terminal.

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      Not enough VRAM, yes, it will not perform. The trainer needs a minimum of 12GB VRAM.

  • @mr.entezaee
    @mr.entezaee 3 months ago

    Has a tutorial for the new kohya_ss update come out so that it can work with the Flux model?
    I recently installed it, but even though it was the latest update, it did not support Flux. If possible, teach us how to update it. Thanks.

    • @Carl-md8pc
      @Carl-md8pc 3 months ago

      @@mr.entezaee @6:00

    • @Carl-md8pc
      @Carl-md8pc 3 months ago

      See @5:50

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      @@Carl-md8pc Good, you paid attention 👍

  • @ronbere
    @ronbere 3 months ago +4

    Not working.

  • @madarauchiha5433
    @madarauchiha5433 3 months ago

    How are you getting them hands so good?

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      Just using Flux to generate it, haven't changed much. You can see it in the last part.

  • @wereldeconomie1233
    @wereldeconomie1233 3 months ago

    This is a much simpler WebUI, thanks.

  • @JACARITUBE
    @JACARITUBE 3 months ago

    Great video, can this run on a GeForce RTX 2060 super 8GB?

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      As long as you have 12, 16, or 20GB VRAM, then it's good to go. With 8GB you can try, but it might take you a day to run.

    • @JACARITUBE
      @JACARITUBE 3 months ago

      @@TheFutureThinker 😂 ok thanks.

    • @pateltushar53471
      @pateltushar53471 2 months ago

      @@TheFutureThinker When I try to run LoRA model training on a 2060 6GB, after 10 hours only 6% of the training is complete.

  • @唐钰明-n5z
    @唐钰明-n5z 3 months ago

    with torch.enable_grad(), device_autocast_ctx, torch.cpu.amp.autocast(**ctx.cpu_autocast_kwargs): # type: ignore[attr-defined]
    What's the problem here, please?

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      Please try with a GPU.

    • @唐钰明-n5z
      @唐钰明-n5z 3 months ago

      @@TheFutureThinker Thank you. What should I do to use the GPU, please?

  • @demian98765
    @demian98765 3 months ago

    I love you!!!! thank you!!!!

  • @FirozKochi
    @FirozKochi 1 month ago

    Not working, plus an error 😔

  • @Larimuss
    @Larimuss 2 months ago

    Kinda crazy how just more VRAM gives a 3x speedup.
    I just wish NVIDIA sold a 48GB VRAM graphics card at a 4070 Ti price point. It costs them nothing to double the VRAM. And given the price of the 4070 Ti, we got ripped off hard with 12GB VRAM. So sad. Even in 4K games you benefit from 24GB VRAM. They just want to upsell. Given the price of a 4090, it should be 48GB VRAM too.

  • @unknownuser3000
    @unknownuser3000 3 months ago

    Can a 3080 train Flux now?

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      Again, as in the other comment: it doesn't matter which card model you have. 12, 16, 20 GB VRAM, that's what they have in the settings.

  • @黄勇刚-v2z
    @黄勇刚-v2z 3 months ago

    Can I interrupt it?

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      If you want to start over again, then yes, you can stop it in the middle of training, but it won't resume.

  • @MilesBellas
    @MilesBellas 3 months ago

    Maybe show a few examples?
    Great video.
    Music suggestion: GuyJ

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      I don't use artists' music, it's copyrighted.
      I'd rather use stock, play it myself, or generate it.

    • @MilesBellas
      @MilesBellas 3 months ago

      @@TheFutureThinker
      Yes, that makes sense.

  • @rakibislam6918
    @rakibislam6918 3 months ago

    How do I train an SDXL LoRA?

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      Search for the older video; I remember someone did it, and I did it too.

  • @havemoney
    @havemoney 3 months ago

    The character is a very simple example, I suppose transferring the artist’s style will require 200-300 initial scans, and the training time for 12-16 GB will increase to 72 hours?

    • @Elwaves2925
      @Elwaves2925 3 months ago

      I started a character LoRA on my 12GB VRAM card. It was 25 images at 1024, 1500 steps, and I think everything else was default. Once it got to the epoch part (which took a while), the estimated time was 23 hrs. I didn't continue, but I will try again with fewer images, steps, and epochs.

  • @insurancecasino5790
    @insurancecasino5790 2 months ago

    A really low-end GPU would be more like 4GB, since many people have old machines.

    • @TheFutureThinker
      @TheFutureThinker 2 months ago +1

      So we should define "real low", "AI low", "server low".

    • @insurancecasino5790
      @insurancecasino5790 2 months ago

      @@TheFutureThinker Roop runs on almost anything. There was Fast SD CPU, but the trend went to high VRAM and now back to lower VRAM. There were the LCM LoRA models with Fast SD CPU. Most of this is just Photoshop on steroids, but there should be a more manual way of doing it locally for almost anyone who doesn't have the VRAM. IMO.

  • @luisellagirasole7909
    @luisellagirasole7909 3 months ago

    Thanks, I'll check with my 12GB card. I think it will take no less than 5 hrs :)

    • @TheFutureThinker
      @TheFutureThinker 3 months ago +1

      You can try it. Let us know how long it takes on 12GB. :)

    • @luisellagirasole7909
      @luisellagirasole7909 3 months ago

      Ok, no, it's working, expected to finish in about 8 hrs :)

    • @TheFutureThinker
      @TheFutureThinker 3 months ago +1

      Right on😎👍 but....omg 8 hours... Put that on the cloud server or something.🥹

    • @luisellagirasole7909
      @luisellagirasole7909 3 months ago

      @@TheFutureThinker It took 8 hrs 19 min.

    • @TheFutureThinker
      @TheFutureThinker 3 months ago +1

      @@luisellagirasole7909 Go to sleep, take a rest, then it's done 😄

  • @MilesBellas
    @MilesBellas 3 months ago

    Beavis and Butthead dance during the metal music.😅

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      Hehe yeah yeah hehe yeah yeah... 😂😂

  • @jason54953
    @jason54953 3 months ago +2

    This thing doesn't work at all. All processing goes to my CPU rather than my GPU.

    • @dariocardajoli6831
      @dariocardajoli6831 3 months ago

      That's your problem. My RX 7600 XT GPU is used to its full potential and my CPU is just chilling.

    • @jason54953
      @jason54953 2 months ago +1

      @dariocardajoli6831 Okay, not sure how this contributed

    • @dariocardajoli6831
      @dariocardajoli6831 2 months ago

      @@jason54953 You didn't even say what your GPU is lol

    • @jason54953
      @jason54953 2 months ago

      @dariocardajoli6831 I have an RTX 4070 Ti.

  • @fredpourlesintimes
    @fredpourlesintimes 1 month ago

    12GB of VRAM is still too high.

  • @kalakala4803
    @kalakala4803 3 months ago

    I am trying with my 3090.

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      20GB, 20 photos, 51 mins on a 4090. Let's see what you get on the 3090.

    • @ColoraceCG
      @ColoraceCG 3 months ago

      @@TheFutureThinker I also have a 4090. How much data is enough for training with good results?

  • @Piotr_Sikora
    @Piotr_Sikora 3 months ago

    I got much better results using SimpleTrainer. And I think it is more optimized than sd-scripts.

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      Cool, I will try that one later, thanks.

  • @chouawarasteven
    @chouawarasteven 3 months ago

    Low VRAM... 12GB VRAM 😂
    (😢 4GB VRAM 😢)

  • @pointy7771
    @pointy7771 3 months ago

    Dude, just give $0.80 to Replicate and be done with your LoRAs fast.

    • @TheFutureThinker
      @TheFutureThinker 3 months ago

      Yes, good solution for entry level.

    • @pointy7771
      @pointy7771 3 months ago

      @@TheFutureThinker Tried a couple, no fine-tuning of course. It was alright for characters, but I did not try entire styles. I guess getting the keywords right, having the text files, and training with more detail might give better results.

  • @ckhmod
    @ckhmod 1 month ago

    Once I hit training, at the bottom I'm seeing:
    [INFO] RuntimeError: use_libuv was requested but PyTorch was build without libuv support
    [ERROR] Command exited with code 1
    [INFO] Runner:
    Out of my element, like Donny, on this one. Do I have to install it with this?
    pip install torch --no-cache-dir
    If anyone has some insight, I'd appreciate it. Everything works perfectly until this point.
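
    A hedged pointer for this error: it usually isn't about reinstalling torch. As noted higher up in the thread, forcing a single process with --num_processes=1 --num_machines=1 on the accelerate launch line avoids it on multi-GPU machines. On Windows you can also try disabling the libuv TCPStore backend that newer PyTorch builds request by default (environment variable name taken from PyTorch's distributed docs; verify it applies to your version):

        rem In the same cmd session, before launching the training script
        set USE_LIBUV=0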