New Super Fast Flux Fine-Tuning - th-cam.com/video/rKs2o1gBw3Y/w-d-xo.html
This was a fantastic explanation of training a Flux model! I've been looking all over for exactly this and you've done the best job at explaining it without rushing over the smaller details. Greatly appreciate you sharing your knowledge and expertise!
Fantastic walkthrough on fine-tuning FLUX.1! Really appreciated the detailed steps and practical tips. This made things much clearer! 👍
Thank you
Thank you Thalaiva. I am going to try it today. I have become a fan of you!
Haha Thank you nga!
Great video! I have made four now and they work perfectly.
Glad it helped
Great Video. W content 👑
Thank you
Very nice and clear tutorial !
Thank you, David
Excellent... I was searching for this :)
If you happen to try, let me know if your quality was good
Good quality video Friend!
Thank you, I spent a lot of time on this, lot more than average video. I'm glad that's reflected 🙏🏾❤️
Amazing work.
Great video!
There are many AI artists asking me how to train a Flux LoRA for free, so I am suggesting your channel to them
Thank you for your guidance.
However, after completing all the steps, when I run a prompt, the output image appears in a very short time (less than 2 seconds), then it immediately disappears, leaving only a placeholder image icon and the text "Output" instead.
I always encounter this issue. How can I fix it?
Thank you so much!
You need to try inference on the Flux LoRA explorer. Where did you try it?
I tried using both on "lucataco/flux-dev-lora" and Replicate => View Profile => Lora. I followed the step-by-step instructions for LoRA training and copied and pasted the HF_LoRA link into lucataco/flux-dev-lora.
The Output (JSON) always shows "status": "succeeded", but the Output (Preview) fails to display the image result.
Great tutorial! I'm going to use this for Halloween lol
Haha great
Thanks so much for this video. I followed it and it worked great. Question - you mention you can run locally on a Mac after the trained model is produced. How would you do this, please?
Thank you. Just published th-cam.com/video/3uuxp0v3FSQ/w-d-xo.html
It's showing "This field is required" when I upload the zip file. Is there a size limit for the zip?
It seems the age of trusting in photos is over! Having a picture taken with camera was a gold standard to capture reality, be it for authentication (e.g., IDs) or for reporting a real event. Seems this was short-lived, maybe only 80 years.
Thank you so much for the tutorial! Very clear and straightforward! When I try to upload a zip folder with photos, it does not upload anything. Any idea why it's not uploading the zip folder?
I use Flux locally on my computer. Can I do this on my PC, or do I still need to use Replicate? Thank you
When you were trying out the fine-tuned model on Replicate, did you have to deploy it on a server to try it? Per Replicate pricing, it says: "Unlike public models, you’ll pay for setup and idle time in addition to the time it spends processing your requests." Thank you
@@ramp2011 The model is already there; I only paid for inference. I also did inference by hosting the LoRA on Hugging Face
How about using a ControlNet and an upscaler with it, in order to produce crisper images?
Can we create one for multiple characters? Does it have an initial-image option?
Technically you would have to create them as two different LoRAs
I wonder how you can see the autocaptions, so that you can check whether any noise/incorrect captioning made its way in. Similarly, whether you can run multiple samples of the autocaptions, to see how much they vary and if there are any better captions.
Auto-captioning is based on LLaVA 1.5, which we can probably try separately. After I made this video, I think they changed their UI so you can download the captions as well. But I would still suggest using something like GPT-4o or Gemini to make captions; I guess that would be a lot better. That said, when I did the fine-tuning I did not use any separate caption file and it still turned out well
@@1littlecoder Perhaps, but it might be better if the captions are closer to the embedding space that Flux actually uses for text? That is, a higher quality caption might not fit the native (so to speak) distribution and might not be as effective? However, that’s just a theory.
@@1littlecoder I would like an example of the TXT caption file. Filename: a_photo_of_JACK. Caption TXT file: "a_photo_of_JACK standing up talking"? Is that how it works? Please explain @1littlecoder
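For anyone asking the same thing: trainers in this family generally expect one .txt caption file per image with the same base name, with the trigger word appearing in every caption. Here's a minimal Python sketch of that layout (the filenames and the trigger word JACK are made-up examples, not from the video):

```python
import os
import tempfile
import zipfile

# Hypothetical dataset: each image gets a .txt caption with the SAME base
# name, and the trigger word (here "JACK") appears in every caption.
images_to_captions = {
    "jack_001.jpg": "a photo of JACK standing up and talking",
    "jack_002.jpg": "a photo of JACK smiling at the camera",
}

workdir = tempfile.mkdtemp()
for img, cap in images_to_captions.items():
    base, _ = os.path.splitext(img)          # jack_001.jpg -> jack_001
    with open(os.path.join(workdir, base + ".txt"), "w") as f:
        f.write(cap)

# The zip you upload would contain the real images alongside these .txt
# files; here we only bundle the captions since the images are placeholders.
zip_path = os.path.join(workdir, "dataset.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    for name in sorted(os.listdir(workdir)):
        if name.endswith(".txt"):
            zf.write(os.path.join(workdir, name), arcname=name)

print(sorted(zipfile.ZipFile(zip_path).namelist()))
# ['jack_001.txt', 'jack_002.txt']
```

The key point is that the caption text, not the image filename, is where the trigger word lives.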
Hi sir, can you tell me how to run it on a Mac?
Hey, can I train two models of different individuals and get outputs with both of them in one image?
thank u human
I'm new to this, can you please explain:
1. The downloaded weights are the Flux LoRA fine-tuned model file, right? And we can use them in Google Colab, Hugging Face, or any other platform.
2. In the hf_lora field, can we reference a repo on any other platform, or can the repo only be stored on Replicate to use it later via Replicate?
It's a very awesome tutorial, but I would like to know how to add realism to the LoRA we create.
I wish you had shown how to make the captions .txt file; I'm not sure what you meant by your example. Does every photo have a different name? Confusing.
@@CorkyBallasdancewithme it's not mandatory. It'll be done automatically if you skip it.
❤
Noob question - do I have to train it again and again if I want to use it in the future?
@@imyashw Nope, you use the trained LoRA
It is trained only on that particular image. If you have a customer with another image, you have to train it again on that different image.
Hey, it is asking for payment. It didn't in your case.
It's a paid service. You can see it in the billing section
lets go
🚀
If you include captioning, do you still need the trigger word?
Captioning is for the algorithm to understand the picture better; the trigger word is what invokes what you trained
Very grateful. I tried this and was able to train my LoRA. Now, when I try it in ComfyUI using the nodes from x-flux, I get the error: Error occurred when executing FluxLoraLoader: Error(s) in loading state_dict for DoubleStreamBlockLoraProcessor: Missing key(s) in state_dict: "qkv_lora1.down.weight", "qkv_lora1.up.weight", "proj_lora1.down.weight", "proj_lora1.up.weight", "qkv_lora2.down.weight", "qkv_lora2.up.weight", "proj_lora2.down.weight", "proj_lora2.up.weight".
I did it here. It worked fine th-cam.com/video/3uuxp0v3FSQ/w-d-xo.html
@@1littlecoder Yes, I figured that out after realizing the LoRAs trained over at Ostris are not compatible with the LoRAs from XLab (and their nodes).
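For anyone hitting the same error: the missing "qkv_lora1"/"proj_lora1" keys mean the XLab loader expects its own key naming, while Ostris/ai-toolkit-style LoRAs use a different convention. A rough sketch of a key-name check you could run before picking a loader node (the key patterns below are illustrative examples, not dumped from real files; inspect your actual .safetensors keys to be sure):

```python
def detect_lora_format(keys):
    """Guess which Flux LoRA key convention a state_dict follows.

    The pattern strings are assumptions based on the conventions the
    different trainers are known to use; verify against your own file.
    """
    keys = list(keys)
    if any("qkv_lora" in k or "proj_lora" in k for k in keys):
        return "xlabs"        # x-flux / XLab ComfyUI nodes expect these
    if any(".lora_A." in k or ".lora_B." in k for k in keys):
        return "diffusers"    # Ostris ai-toolkit / PEFT-style naming
    if any("lora_down" in k or "lora_up" in k for k in keys):
        return "kohya"        # kohya-ss-style naming
    return "unknown"

# In practice you would read the keys from the file first, e.g.:
#   from safetensors import safe_open
#   with safe_open("my_lora.safetensors", framework="pt") as f:
#       print(detect_lora_format(f.keys()))

print(detect_lora_format(["qkv_lora1.down.weight"]))
print(detect_lora_format(["transformer.blocks.0.attn.lora_A.weight"]))
```

If the format doesn't match what your loader node expects, you need either a matching loader or a key-conversion step; the loader can't guess the mapping on its own.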
Hey bro! I'm not able to upload the zip file in the designated place; nothing is being uploaded. Any idea why?
Oh. Did you already add money to Replicate? You can try this, which is much cheaper: th-cam.com/video/rKs2o1gBw3Y/w-d-xo.htmlsi=0f1W0h3BXPjyrBZe
@@1littlecoder No, thanks for the save, brother!! Gonna try this one!
Is it possible to run on an M3 Mac with 18GB RAM? If yes, how?
Ugh. My fine-tuned model looks nothing like me. I used 50 pictures.
50 is too many and unnecessary. Did you check the video?
How did you get $95 for free on Replicate?
I don't know why, but I'm getting very bad quality results, and the LoRA insists on copying the clothes from the input images into the new images 😢
Try reducing the LoRA's strength when you do the inference
Unable to upload the .zip file. Can you please help me out on this?
Try Google Chrome. It wasn't working for me in Firefox.
Bro, where do I add a payment method on Replicate? I'm struggling to add it, please help.
@@gyahoo I guess it will be inside your account's billing section
@@1littlecoder Ok, thanks bro
Confusing title... is it LoRA training or checkpoint model training?
Doesn't the title say LoRA fine-tuning?
I am a little new to all this. How can I set all of this up locally, and not on the Replicate site that you've linked?
Do you have an NVIDIA GPU with at least 24GB graphics memory? A minimum of 18GB VRAM, I guess
@@1littlecoder I have an RTX 4090 and I want to train a few LoRAs locally. Please make a video on training LoRAs on a personal computer
Hi, how can I use my own model on Google Colab or Hugging Face?
Do you mean you already have a LoRA?
@@1littlecoder Yes, I already trained it; I just want to know how
Has anyone found that you can't upload a zip file to Replicate? Anyone else facing that issue?
same here, were you able to resolve the issue?
@@gulatiramit Try a different browser; I was using Brave
If that doesn't work, use the API. I hosted my files in S3
Can you download the LoRA after training and use it in ComfyUI running locally on a PC?
Yes, I showed how to download it. I also showed how to use it locally here: th-cam.com/video/3uuxp0v3FSQ/w-d-xo.htmlsi=lq9N3QlPbq8qaWmG
@@1littlecoder Thanks
🤯
How do I run inference locally?
Just published th-cam.com/video/3uuxp0v3FSQ/w-d-xo.html
Can you make a tutorial for Forge UI?
How much did this cost you to make?
I guess around $5. Why?
@@1littlecoder Thanks. Just wanted to know before I chose between doing this and running the Ostris toolkit on RunPod.
Then I would suggest you compare it with this, which is much cheaper: th-cam.com/video/rKs2o1gBw3Y/w-d-xo.html
@@1littlecoder cool, thanks for the suggestion, I'll check it out now!
So does that mean this thing is not free, but paid, even if at a low price? Am I right, or can those who can't afford it use it for free too?
Didn't understand
Are you a hardcore Dhanush fan???
Same question here!
Definitely not. I wanted someone without white skin and also someone who's not in the training data, and settled on Dhanush!
@@1littlecoder thanks for all your content bro!
Is this method free?
Nope
What the freaking hell! Can we trust the internet anymore?
You almost can't!
I read this in the comment section of Fireship's video on Flux: "I saw in my lifetime the shift from 'show me the picture that I may believe' to 'I want to see it that I may believe'"
Anyway, photos didn't exist 100 years ago, so we are back to the times, when people didn't trust paintings! The age of celluloid and digital photos seems over now!
@@sammathew535 brilliant
Can we do this on a low-end device?
This was happening on a cloud service
Grayan
Can I train a LoRA by providing a single grid of multiple face angles?
Would you be open to the idea of training a Lora with me? I am willing to help you with anything you want. Love the idea of building something of my own. This would mean the world to me.
Could you please elaborate? Do you want me to train a LoRA for you, is that what you mean?
@@1littlecoder I will work with you however you ask. I don't have the required hardware, nor do I have as much knowledge as you do.
I feel my project would benefit the community too.
@@vanshwadhwa1332 point of this tutorial is that you don't need the hardware. You can just spend less than $10 on that website and then train your own model and then use it anytime you want
How to use the fine-tuned flux loRA locally - th-cam.com/video/3uuxp0v3FSQ/w-d-xo.html
Hi
Varun Mayya inspired? 😅
Actually no, I'd recorded it a day before, but I saw his Hulk yesterday on Twitter, and his Hulk looked better. In my Dhanush Hulk, the face doesn't have a lot of muscle. His was better in that sense.
@@1littlecoder what could be the reason?
@@webhosting7062 Probably a more detailed prompt, or Varun has a bit more cheek than in the Dhanush pics
@@1littlecoder got it 👍
I really don’t like the look of Flux.
@@therobotocracy Why? Anything in particular?
@@1littlecoder Yeah, the composition is very much the same, and the contrast has its own look, very much like MJ in the color. Something about the blacks not having detail, maybe a vibrancy in the mids. It's like it's been trained on a lot of modern movies with professional color grades. Looks realistic, but very much the same every time.