Hey, a little tip: set your workflow up to render multiple images each time.
1. Add a branch that bypasses the LoRA and renders the prompt with NO LoRA, as a control.
2. Have multiple LoRAs set up, so it tests them all in one go (I have a workflow where I do all this).
That way you can be sure that every epoch (LoRA) you test gets the same settings and seed, so you can better see the difference between epochs; and again, remember the control where NO LoRA is loaded.
In your case you did not use the trigger word. You don't always need one, but without a control it might just be Flux that generated the office images, since the word "office" was in the prompt. That is also why I recommend trigger words like "aY5KaZaD": if putting just that in the prompt gets you the result, you know it is not because Flux already knows aY5KaZaD = office...
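On the trigger-word point: any token the base model is unlikely to know already will do. A minimal sketch (plain Python, nothing FluxGym-specific; the function name is my own) of generating a nonsense token like "aY5KaZaD":

```python
import random
import string

def make_trigger_word(length=8, seed=None):
    """Generate a random alphanumeric token to use as a LoRA trigger word.

    A nonsense token (e.g. "aY5KaZaD") is useful because the base model has
    no prior association with it, so if a render prompted with only this
    token shows your subject, the effect can be attributed to the LoRA."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))

print(make_trigger_word())
```

Pass a `seed` if you want the same token reproducibly across runs.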
Wonderful guide! You can also put the startup steps in a .bat file (activate the environment, start the app, etc.) so you don't have to type them into a command prompt every time. I just added a timeout in the .bat between the steps:
timeout /t 3 /nobreak
I am not a programmer, so there might be a better way, but it certainly speeds up starting up.
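A sketch of such a startup .bat (Windows batch; the install path is taken from a path mentioned elsewhere in this thread and the venv folder name `env` is an assumption, so adjust both to your setup):

```bat
@echo off
REM start_fluxgym.bat - assumed layout: fluxgym cloned to C:\FluxTraining\fluxgym
REM with a virtual environment created inside it via "python -m venv env".
cd /d C:\FluxTraining\fluxgym
call env\Scripts\activate
timeout /t 3 /nobreak
python app.py
pause
```

Double-clicking the .bat then activates the venv and launches the Gradio UI in one step; `pause` keeps the window open if app.py exits with an error.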
I got it working, added my images, and had it add captions. I hit "Start training" and after about 10 seconds it says "Training Complete. Check the outputs folder for the LoRA files.", which is obviously way too fast. Any idea why it thinks it's done when it didn't do anything? I kept all the settings at default and I have a beefy NVIDIA card.
Delete the fluxgym directory in the API folder, then reinstall.
Hey :) Can you train in FluxGym with different LoRAs, like a skin texture LoRA and an iPhone-style LoRA?
And which is better, fal.ai or FluxGym? Haha, great video.
Can you provide links to the checkpoint used in the attached workflow? Can the trained LoRA work only with this checkpoint and VAE?
@@ckleinhendler This trains a LoRA based on Flux Dev, so it will work with any Flux model. To use Flux, you will also need the Flux VAE.
Looks good. I had a test folder that already had jpg/txt files from a kohya_ss trial run. Made sure to add my LoRA name to the text files as well. It stalled, but I could not get back in to retry. What is the process for opening FluxGym again? Whatever I did worked the first time... if I could get up and running again, I could be more precise about my error. Thanks. Forgot something: I couldn't get the Gradio HTML page to pop up.
@@metairieman55 You would need to activate the virtual environment, then run the app.py script. It will print a URL; paste that into a browser and the UI should load.
So you have to put all 4 safetensor files in the Lora folder? Does that mean you have to load all 4 when generating the images?
@@andrewbarrett6403 Sorry for the confusion, no, you don't need all of them for it to work. Each safetensors file is a checkpoint saved during the process. The higher the number, the further into training it was saved. The one with no number, just the file name, is the final LoRA, which is usually the best one to use.
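That naming convention can be checked programmatically. A sketch (the `-000004`-style numeric suffix is assumed from typical kohya/FluxGym output names, and the helper name is my own):

```python
import re

def final_lora(filenames):
    """Return the first .safetensors file WITHOUT a numeric epoch suffix
    (e.g. "mylora.safetensors" rather than "mylora-000004.safetensors"),
    i.e. the final checkpoint of the training run, or None if absent."""
    epoch_suffix = re.compile(r"-\d+\.safetensors$")
    finals = [
        name for name in filenames
        if name.endswith(".safetensors") and not epoch_suffix.search(name)
    ]
    return finals[0] if finals else None

files = ["mylora-000002.safetensors", "mylora-000004.safetensors", "mylora.safetensors"]
print(final_lora(files))  # mylora.safetensors
```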
Hi, can you link the Comfy workflow? I can't find one for Flux Dev with multiple LoRAs and the Flux Dev UNet.
Can you show in more detail how to use ComfyUI to run those safetensors files? I ran your workflow but it shows an error about missing modules.
@@gileneusz Which modules are missing?
I get this error: "Cannot execute because a node is missing the class_type property: Node ID '#43'"
@@user-nb6kx3qn3p Did you install the missing Comfy nodes?
@@AIFuzz59 Thanks, it's working fine now. However, I get an error in the productfactory1 workflow; it shows "Error occurred when executing VAEDecode:"
Is there any way to create a shortcut on the desktop, so I don't have to activate the virtual environment manually every time I want to open it?
@@KovekProducciones yes we will be posting a video on that soon
Thanks for the walkthrough! Is anyone else getting this at the step around 8:40?
C:\FluxTraining\fluxgym>python -m venv env
'python' is not recognized as an internal or external command,
operable program or batch file.
I trained a model on 16 images and it took about five hours with a 3090, default settings. I feel the model it made is good; it is acceptable. But I wanted to ask: some models are 18 megabytes and some reach one to two gigabytes. What factors determine the file size? And another question: how do you make a model quickly, but with quality, that matches what we want?
I forgot another question: how do you get flexibility across different styles? The character I made seems to work well only in realistic photos.
And I did add the captions. Are captions really important? Sorry, I have too many questions.
@@mr.entezaee To make it quickly, you would need to limit the number of images in the dataset, lower the repeats to 1, AND lower the number of epochs to 8-10.
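Back-of-the-envelope arithmetic for why that advice speeds things up: in kohya-style training, total optimizer steps are roughly images x repeats x epochs divided by batch size, so lowering any factor shortens the run proportionally (a sketch; the function name and example numbers are mine):

```python
def total_steps(num_images, repeats, epochs, batch_size=1):
    """Approximate optimizer steps for a kohya-style run: every image is
    seen `repeats` times per epoch, and steps are split across batches."""
    return (num_images * repeats * epochs) // batch_size

# A 16-image dataset at 10 repeats over 16 epochs:
print(total_steps(16, 10, 16))  # 2560 steps
# The reply's suggestion (repeats 1, epochs 10) on the same dataset:
print(total_steps(16, 1, 10))   # 160 steps
```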
@@mr.entezaee You don't need captions, but if you want it to train on specifically what's in the image, for example for a style LoRA, you will need captions.
It doesn't work with 8 GB of VRAM. Also, how do I change the --highvram flag in the training script?
Any ideas why my sample images during the LoRA training were amazing and just what I wanted, but I cannot replicate them at all in Comfy/Forge/etc.? The more it uses my LoRA, the more the overall quality turns to rubbish, and I don't know what to change.
@@RyMaz0 so the outcome when using the Lora is bad?
Excuse me, one more question, what does this warning mean?
lora key not loaded: lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_out_0.lora_down.weight
Maybe it didn't recognize my model, or is there something I didn't understand?
@@mr.entezaee Make sure you have run all the requirements.txt files in the fluxgym folder and the sd-scripts folder.
Thank you so much for the video.
How do I stop it from downloading models? I already have the models downloaded; even after I added them to the folder, it still downloads them.
Thank you.
20:09 you didn't use the trigger word "TTR" in the prompt you set while training 😁
@@kiransurwade3576 good eye thanks
@@AIFuzz59 welcome 😊
I had to go to AI Toolkit since FluxGym keeps having errors with Torch. Will FluxGym work with multiple GPUs? (I have two 3090s and two P40s.)
What voice changer do you use
@@Jaysunn huh?
@@AIFuzz59 Your voice changer. Don't act like you don't; it's decent but not perfect. It has obvious artifacts in spots.
@@AIFuzz59 your audience is AI enthusiasts, don't think we can't spot something so simple lul
How can I do checkpoint training on low VRAM?
i'm so newbie, how to make your node looks like this? Mine was like a wol, lol
@@semakusut make sure you update ComfyUI
Can we resume an interrupted training session?
@@satyajitroutray282 what do you mean “interrupted”?
@@AIFuzz59 The training time on a low-VRAM PC is several hours. If I have to stop the training in the middle, how do I resume it from the last state?
Is she from California? Such a Californian accent.
Why make a video telling us how to install it and then just say "oh, I'm not going to do it because I already have it"? Just delete it and do it step by step.
@@jamaljenkins8136 thanks for your suggestion!
So you didn't even train it successfully and still uploaded the video anyway, lol?
@@MrWizardGG It's more about sharing the info. We have trained many successful LoRAs just fine with FluxGym, bro.
@@MrWizardGG we love you
you just posted cringe
It doesn't run. I don't know what I did wrong; it's pretty straightforward. Here is the error:
(env) E:\AI\ComfyUI_windows_portable_nvidia\FluxLora\fluxgym>python app.py
Traceback (most recent call last):
File "E:\AI\ComfyUI_windows_portable_nvidia\FluxLora\fluxgym\app.py", line 19, in
from library import flux_train_utils, huggingface_util
File "e:\ai\comfyui_windows_portable_nvidia\fluxlora\fluxgym\sd-scripts\library\flux_train_utils.py", line 17, in
from library import flux_models, flux_utils, strategy_base, train_util
File "e:\ai\comfyui_windows_portable_nvidia\fluxlora\fluxgym\sd-scripts\library\flux_models.py", line 363, in
class ModelSpec:
File "e:\ai\comfyui_windows_portable_nvidia\fluxlora\fluxgym\sd-scripts\library\flux_models.py", line 366, in ModelSpec
ckpt_path: str | None
TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
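That TypeError is the classic sign of running Python 3.9 or older: the `str | None` annotation syntax (PEP 604) only evaluates at runtime on Python 3.10+, so the usual fix is to recreate the venv with a newer Python. For illustration, a minimal sketch of the same pattern (class and field names borrowed from the traceback, the rest is mine) that also works on 3.7-3.9 thanks to the `__future__` import, which stops annotations being evaluated eagerly:

```python
from __future__ import annotations  # makes `str | None` legal in annotations pre-3.10

from dataclasses import dataclass

@dataclass
class ModelSpec:
    # Without the __future__ import, this line itself raises
    # "TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'"
    # on Python 3.9 and older, exactly as in the traceback above.
    ckpt_path: str | None = None

spec = ModelSpec(ckpt_path="flux1-dev.safetensors")
print(spec.ckpt_path)
```

Since sd-scripts is third-party code you shouldn't patch by hand, upgrading the interpreter is the practical route.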