Update #2: I've just gotten word from Replicate that they have re-opened the $10 link! It should be working again. :)
Update #1: Apparently the link expired for Replicate. I was not told that the link would expire. I apologize for this. I honestly had no idea. I reached out to them and asked what happened, and they told me that their finance team decided to turn it off because too many people were using it. I'm going to do what I can to rectify the situation, even if it means finding a different method people can follow to do it for free. Once again, I sincerely apologize. This decision was made by Replicate, and they did not inform me that they would turn the link off if it got too much traffic.
Yes, it doesn't work. Maybe you can find other free tools to show us.
Thanks for your dedication. A lot of us can't afford to pay for GPU time and would really appreciate a free alternative to train Flux LoRAs, maybe Kaggle perhaps.
I'm a little late unfortunately, but thanks for letting us know. Love your videos ❤
Amazing thank you for your work!
@@mreflow Thank you sir 🔥🔥
All I want is to train locally, generate locally, and use a nice UI.
Your GPU will get really hot, so watch out; make sure you've got an AC unit nearby.
I would love to do it if I had an Nvidia H100 at home, or even an A6000, but there's no way my 2060S will handle anything worth the time.
@@luizpaes7307 You can generate some pretty nice stuff with Stable Diffusion 1.5 on a 2060 with the right Checkpoint/Lora but yeah, not gonna work with Flux.
@@francisco444 I'll just wait for this winter to be sure :p
It's all been available for a long time; as long as your PC can handle it.
OK, there were a couple of things I've learned from other vids that may have been missed with this one. 1. It must be set to public (which I think he addressed later). 2. When creating your TOKEN you have three options - Fine-grained, Read, Write - you want to set it to WRITE, and that solved all my problems. I hope it solves yours. Good luck.
The fine-grained setting is what got me; it must be set to Write.
Can I make more than one LoRA with the same HF token?
Thank you so much! The error was driving me crazy. Unfortunately the results look nothing like me. It's probably my fault with the images I chose? There were I think 50. Maybe I should have prioritized the best ones.
Hey Matt. We followed your instructions and generated some impressive images. Thanks for the detailed practical way you do the demos. Your dedication and passion are a cut above many.
For anyone getting the error Command '['python', 'run.py', 'config/replicate.yml']' returned non-zero exit status 1:
I also got the error, but it worked on the 2nd try with a couple of changes. I'm not sure exactly what was causing it, but you can do all of these, which will hopefully work for you.
1) Specify the model name instead of leaving it blank (black-forest-labs/FLUX.1-dev)
2) Rename all of your training images to be shorter (ex: person, instead of photo_of_person) (Edit: Probably not needed it's just something I randomly tried)
3) Make sure you request access to the black-forest-labs/FLUX.1-dev model on Hugging Face (I already did this prior, so probably not the issue, but who knows)
4) When creating your access token, scroll down to "Repositories permissions", select the location for your Lora, grant all permissions
It is likely step 4 that was the issue and what fixed it.
I would imagine first creating the repository and then the access token would also fix the issue.
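If you want to rule out a bad token before re-running training, here is a minimal sketch using huggingface_hub. `whoami()` is a real API call, but reading the role out of the response is an assumption about its shape (fine-grained tokens may report differently):

```python
import os

def check_token(token):
    """Look up who a Hugging Face token belongs to and, when reported, its role.
    Raises if the token is invalid."""
    from huggingface_hub import HfApi  # pip install huggingface_hub
    info = HfApi().whoami(token=token)
    # auth/accessToken/role is an assumed response shape; fine-grained
    # tokens may report their permissions differently
    role = info.get("auth", {}).get("accessToken", {}).get("role", "unknown")
    return info["name"], role

if __name__ == "__main__" and "HF_TOKEN" in os.environ:
    name, role = check_token(os.environ["HF_TOKEN"])
    print(f"token belongs to {name}, role={role}")
    if role == "read":
        print("warning: a read token cannot push the trained LoRA")
```

If the call raises, the token itself is bad; if it succeeds but the role is read-only, that matches the "set it to Write" fix from the comments above.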
I did 1 and 4 and it worked (had 3 done already). Thank you.
It's specifically step 3.
@@avivolah9401 Thanks!!!! I think this is it.
I still got an error after doing all that; then I resized my images and, with Claude's help, found two corrupt photos that I had to delete from my zip. Now it is working fine.
Your prompts are fabulous Matt keep it up!
That’s right man I love him
Matt, this was a BRILLIANT Tutorial, worked like a charm on my first attempt !
Unfortunately I got a Python error when I tried. Pretty lame lol
Regarding your question of positioning of the trigger word: LLMs in general pay more attention to the beginning and end of a prompt than they do to the middle (something which human brains *also* do, by the way 😁). So moving the trigger word to the beginning or end will indeed have a measurable effect on how often it includes the trained face/concept correctly.
This is dangerous. You really don't want to train anything of your face on someone else's servers. I'm looking for a local solution. I would hate to screw up my life or someone else's by blindly trusting that they're an ethical company.
You are not wrong. However, we live in times of unlimited likeness. Anyone's face or voice is replicable unless you are not on the internet in any way. I'm not saying that's good; it's something I think about a lot.
It worked like a charm.... but I want to use the model I created on hugging face on my local ComfyUI. Can you make a short tutorial on how to do this please!! Love your videos... keep up the good work!!
You download the safetensors file from your Hugging Face repo and put it in your LoRA folder. From then on, you use it like any other LoRA. I haven't tried yet, but that is what I will do when I create mine.
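A sketch of scripting that download with huggingface_hub; the repo id and filename are placeholders for whatever your trainer actually pushed, and the destination assumes ComfyUI's default LoRA folder:

```python
def download_lora(repo_id, filename="lora.safetensors",
                  dest="ComfyUI/models/loras"):
    """Fetch one file from a Hugging Face repo into a local LoRA folder."""
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub
    return hf_hub_download(repo_id=repo_id, filename=filename, local_dir=dest)

# usage (placeholder repo id -- use your own, and pass token=... if private):
# download_lora("your-username/your-flux-lora")
```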
There seems to be a giant issue in the script of the ai-toolkit that causes it to fail every time now.
It seems to be successfully downloading EVERY image ever supplied to it. When you check the logs, you can clearly see that it is downloading multiple sets of images with other people's names that (obviously) can't be found anymore, or that are so big in total size that they can't all be saved locally anymore.
Seems like a 'little' oversight on the dev's part 😂
I get other people instead of me?
yeah that might be why i got my dog instead of myself sometimes.
Ohh, that explains a lot. It's not really me sometimes. I feel so nostalgic; they look like family members I never met. Not much like me though.
Just watched this video, and the link is not working, unfortunately.
Would you give us a new link?
*Wow, I'm definitely gonna try this method! Great stuff Mr. 🐺*
⚠ It looks like that invite has already been used. Please ask whoever sent it to you to send a new one.
🚫"You need to have a payment method set up to run this model"
This is literally one of the most valuable YT vids ever put out there. Holy f*ck, does it work!
Hey Matt,
I just wanted to drop you a quick note to say how awesome I think your tutorials are. I’ve gone through the steps, and even though I haven’t had the chance to test yet, I can already tell that this is going to be a game changer for me.
It's looking like we're getting some rainy weather next week, so that's when I'll really dive into it. Just wanted to say thanks for making everything so easy to follow. You're doing an amazing job, and I'm excited to see where this takes me. - Anna
Hi, the credit has expired, could you send a new one? Thank you!
They must have re-upped it.
The $10 credit is now expired... *sad face* I watched this video the day you uploaded it but did not actually try to use it until now... Ughhh, that's what I get for procrastinating...
Thank you so much Matt for looking out for us as always!
Sir, for free credits it shows 'It looks like that invite has expired. Please ask whoever sent it to you to send a new one.'
Can you please send a new invite link?
Thank you
I followed your instructions, but after clicking "create training" it always says: Command '['python', 'run.py', 'config/replicate.yml']' returned non-zero exit status 1. Can you help?
Another problem might be that you have set the Hugging Face ID wrong.
Hi brother, I am facing the same issue. Have you found a solution?
Cannot understand how you only have 630K subscribers. By far, this is the best AI channel ever. Thanks Matt
You haven’t watched enough
I followed the instructions and it worked perfectly. Only issue I have, is if there are more people in the generated image, they pretty much all look like me lol
I faced this issue too - let me know if you found a solution.
Would be great to have a tutorial on using the generated LORA with ComfyUI
You get better results because the first sentences of the prompt have more importance than the last ones. It always works like that! Basically, I always try to put the part of the prompt related to my LoRA first or last.
Outputs are realistic af... Thanks Matt & Luca
For anyone wondering, you can use your created LoRA locally on your PC. I'm using it with Flux NF4 on Forge UI, but the results do seem a lot better running it on Flux dev on Replicate though.
Link not working :(
This is my first formal AI training, and I loved it. Thank you for your clear instructions.
Finally got it running with some tweaks mentioned in the comments. Unbelievable quality! I got the best results when I used the name of my AI alter ego often in the prompts
BRO!!! Thank you! Your editing is amazing; love the new glitch-style transitions, and the information presentation / lesson / $10 free. Wow, this is one of your best videos yet. You the man!!
In the video you said to choose the "public" option instead of "private" for the model. What happens if I choose the option of the model being private?
I'm making a freelance AI animation project where it would be good if the people in the animation appear exactly as themselves in real life, but it is too risky to create a public model that anyone can access to generate any sort of images of these people, especially when one of them is very famous.
Did you find an answer for that? I have the same question
@@AndresArosemena Click on 'Settings' in the created repository. There, you will find an option to change the model's visibility. Set it to 'Private'.
That's a good question, have you been able to solve it? Otherwise I find this to be quite risky from a privacy point of view.
@@houmie Very easy: in that repository you will find a Settings button -> click on that -> there should be a 'Change visibility' option -> set it to Private. That's it.
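The same switch can be made from code; a minimal sketch using huggingface_hub's `update_repo_visibility` (the repo id and token are placeholders):

```python
def make_private(repo_id, token):
    """Switch an existing Hugging Face repo to private."""
    from huggingface_hub import HfApi  # pip install huggingface_hub
    HfApi(token=token).update_repo_visibility(repo_id, private=True)

# usage (placeholders -- substitute your own repo id and write token):
# make_private("your-username/your-flux-lora", token="hf_...")
```

Note that a private repo then needs an authenticated token anywhere the LoRA is downloaded from.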
Question: the $10 coupon no longer works... is there a page where I can upload or use my Replicate LoRA? That is, use it and continue generating images on another page? (I have downloaded the Replicate LoRA in a zip file.)
Wow, thank you very much for providing the credits and your continuing great tutorials, really highly appreciated. I truly enjoy your content and am always thrilled when you upload something new. I especially love that you cover so many different AI topics and that you always take the time to teach your findings in an easy-to-follow way. Keep up the great work and thanks again for sharing.
Hey Matt, did you ever get Leonardo AI models to train properly on yourself? I AM STRUGGLING BIG TIMEEEE
Tutorial videos are always welcome. It’s how us little folk get a leg up on the rich and powerful. 😊
"It’s how us little folk get a leg up on the rich and powerful" .... - Dream on
Love the candy shrimp 😁 Thanks Matt for this tutorial!
“A lot of time has passed”. No it hasn’t. That’s the amazing thing.
Very cool video - wanted to give it a go but it seems the Replicate Code has expired. Saw your pinned comment - sorry to make you do this again, but do you have another link? Keep up the good work in making clear and easy to follow videos like this
Really appreciate that $10 Replicate credit. "Immediate value and no fluff" looks like the motto for this channel. I'm locked in.
I created my LoRA and it went great, thanks!
Can I use this locally with my GPU, through either Comfy or Forge?
This was AWESOME, Matt! Love all your videos and how you break it all down. Keep going! :)
Man, this is definitely one of your best videos: simple, straight to the point, and super impressive. Thumbs up. Edit: I wish I saw more examples at the end, because it's stunningly good. (I don't know, maybe it's about "rewarding" the viewer more "fairly" after 20 minutes of watching; my brain needed more than these few examples 🙂🙂)
Thanks for sharing, but the AI training model is no longer available under the lucataco account. What should I do? Please advise. Thanks
Definitely gonna try this out. Thanks for sharing everything in such detail man 🙌
A question - what happens if you don't want to upload the model to Hugging Face at all? Do you get a download link, or something else?
I read about this toolkit just yesterday but with the VRAM requirements it was a no go for me. Didn't know about Replicate but it has me interested and seems easy enough. Cheers.
It looks like that invite has expired. Please ask whoever sent it to you to send a new one.
How can I keep my model private on HF and still use it in Replicate?
So many hoops to jump through!!! Come on someone - make this super duper easy and make it $9.99 a month. People will pay for itttttt.
Replicate: "It looks like that invite has expired. Please ask whoever sent it to you to send a new one."
Thank you Matt for the clear instructions. I managed to follow along and create some amazing images.
Thank you Matt. The replicate credit is expired, can you update it if possible?
Super appreciate this. I followed your tutorial from last year, and these are light years better! Thanks for posting and to you and Luis for hooking us up with the coupon. Super appreciated!
Hmmm.. I should know how to do this.. but how do I use my Hugging Face LoRA model in a local ComfyUI? Do I give the lora.safetensors file a custom name and stick it in the models/loras folder? Gonna try that and find a Flux workflow using a LoRA. YEP.. that worked in a Flux workflow. This was a fantastic video. Now I have to make several LoRAs for characters and see if I can use 'em all in a generation. Very nice. Thanks.
Please keep breaking the tools down like this I am taking notes here. Your tutorials have been invaluable and a new wave of awesomeness is clearly upon us.
Thanks Matt! It's finally getting easier and easier.
A very nice and useful video! Thank you for that!
Can a model be retrained? Can I add a new set of photos to the model?
not working
It worked perfectly, amazing pictures came out, very creative, thanks matt!
Hey Matt, I love your work, but this really does feel like promotion. Just saying how it comes across. Anyway, when will we be able to do all this locally? Can we create Flux LoRAs on 1111?
Following your instructions, I successfully created AI images of myself. It was such fun. Thank you very much!
Great job Matt, as usual. Just a small request: would you mind switching your browser to dark mode? I can hardly watch, it's so bright and white now. Thanks in advance.
You nailed it with your presentation; everything was spot on. Many thanx.
Very good instructions. I followed them to the letter and all went well. Thanks Matt!
0:49 This is Matt at the 2024 Paris Olympic Ceremonies! :-)
Do the images I train the model on have to be a certain size?
Because I keep getting an error after I click on the train itself
I have the same problem.
Thank you so much for this video. It was very easy to follow along and get a working model. Cost me $3.10. It does realism very well but I'm struggling to make an illustrated version of myself. It seems like the Lora is squashing any stylization from the prompt. When I stylize with another Lora, the realism is still prominent in the image. Running locally in Comfy.
This is super good. The coupon code isn't working :-(
AWESOME video Matt! You're the GOD of AI! Thank you for sharing your insights. Just tried everything you taught here and enjoyed it a lot!
Wow wow wow this is just crazy!
A question: why did we actually connect it to Hugging Face?
Does this allow me to run the model through there?
In addition, is it possible to connect to the API and create the images through there?
Thank you very much for everything!
Coupon link has expired.
If I wanted to have multiple phrases for one image, how would I go about doing that? For example, cute, adorable, etc
Very cool. I’m amazed that you figured out this workflow… that is a lot of connecting the dots. I won’t do this one but I like the tutorials. Thanks Matt 😅
It's like the golden era of the internet again but better than downloading entertainment. We are the entertainment now...
If you want to step it up a bit, you should generate long descriptions of your images with ChatGPT vision. If you're giving it 12 images for fine-tuning, describe 4 of the images yourself in detail (not just photo_of_mreflow), and then ask ChatGPT to generate a detailed description of the 8 remaining. The model was probably trained on such long prompts to begin with, so you'll be within the bounds of what it knows and it'll learn better. Plus, you'll have a certain variety in how things are expressed (if you don't force it), so that it knows how to handle prompts that are longer or shorter, with the name of the subject at the start or at the end, etc. I've never tried it, but after reading papers it seems a more promising way to me. I'm not sure if that would mean you would need more data though, so just test it out 👍🏽
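A rough sketch of that captioning idea with the OpenAI Python SDK; the model name, prompt wording, and trigger word are all assumptions here, so adapt them to whatever vision model you actually use:

```python
import base64
from pathlib import Path

def caption_image(image_path, trigger="photo_of_mreflow"):
    """Ask a vision model for a detailed caption that names the subject
    by the LoRA trigger word. Model and prompt are assumptions."""
    from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY
    b64 = base64.b64encode(Path(image_path).read_bytes()).decode()
    resp = OpenAI().chat.completions.create(
        model="gpt-4o",  # assumed model name; swap for your captioner
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Describe this photo in two detailed sentences. "
                         f"Refer to the person as {trigger}."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content
```

Write each returned caption to a .txt file with the same stem as its image, which is the usual caption convention these LoRA trainers expect.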
Midjourney weights favour the words at the beginning of the prompt as well. It's a thing.
Great tutorial Matt! Props for the credits you landed for fans. I do enjoy Replicate and playing with all the models with Flux indeed being the best.
Great video!! Is this same workflow capable of training and producing product photos, like shirts and glasses?
Please update the link
You are an AI image prompt Optimizer. Your role is to take the prompts that I give you and optimize them so that the image generated is higher contrast, has more brilliant colors, and has beautiful aesthetics. The subject of the prompt will always be ASHFlow; this is the trigger word to use my face within the image. The prompt should always mention what camera angle should be generated. We want the subject ASHFlow to always be the main focus of the image and his face to be seen in the image. Whenever an image prompt is submitted, respond with three optimized prompts to get a better version of the same idea. Don't give any extra context; just reply with the optimized prompts.
Matt the replicate $10 link no longer works my friend.
Hello! Great video. I have a question: if I want to do several people, can I put in all the images and train it, or can it only be done with one person? I tried it, but it seems to mix the appearance of all the people into one.
I have the same question!
Refresh the link
Hi Matt, the link to the $10 credit coupon has expired. Can you please ask Lucataco if he can renew it? Cheers, Aldo
This is really cool, love this, thanks for walking us through it, just had a go and looks really good! Sweet! Also thanks for the credit allowing us to try!
Love your videos Matt. This is one of the very best. Thanks.
Fantastic video. Thanks for your effort in teaching people like me who have no idea how to code to use this magical tool.
Thank you for the detailed instructions! Great video. You look much cooler when you show off your skills than when you pretend you don't know anything about the subject.
"You need to have a payment method set up to run this model." When I hit create model on replicate. Help.
The free $10 code is expired again. Is there any miracle on earth to make the code available again? :(
Very informative. I was stuck at the Hugging Face part and your video helped big time. Also, that Claude trick is good too. Thanks Matt!
Thank you so much for making this video. I was just telling myself 3 days ago. I need something like this.❤
The skin texture is too smooth and the skin too glossy/shiny. How can I make it more realistic?
Link no good anymore.. ;(
I really liked the way you presented it.
I tried with 14 photos of an object.
LoRA trained fine, but attempt to generate images of the object did not have the object in them.
Does this only work with people?
I have about 50 images. Is there a benefit or downside to using more images for the training?
Thanks Matt - keep ‘em coming😊
Hi! Thank you for the tutorial, but the invite link doesn't work. Am I doing something wrong, or am I too late to use it?
Yeah it expired
Thank you for the clear instructions as well as the $10 credit.
Am I able to train multiple models on replicate, and then use them together to make an image with multiple trained subjects?