You can create a free Hugging Face account and fill in the boxes "hf_repo_id" and "hf_token" before training. Then you should be able to download the model from Hugging Face.
Yeah, if you wear the same outfit in all the training images, it will be able to replicate that. Make sure to use text descriptions that also describe the outfit (either in the autocaption prefix or txt files you upload).
Hi! Thank you for sharing. Can we also upload branded products to a model on Flux? For example, instead of uploading a set of our photos with different expressions, we’ll upload photos of our branded product such as baby diapers. Would really appreciate anyone who could help answer🫶🏻
I think thanks to AI everyone can become a superhero quickly. Maybe this is a new business opportunity, where AI services let you create videos, music, and pictures of yourself becoming a superhero in moments.
This is awesome. Question: is it possible to get two people in one image? What I mean by that is can I train it for myself and another person and have us in the same image? Thinking of cool ways to use this for my band ☺️
What about when I've trained Flux (ragwort) on myself and want some other people in the picture? When I use the trigger word, all the people in the picture end up looking like me, no matter if they are female or male. Is there a trick to get random faces, for example a girl next to me, the white man with the trigger word?
Bro, is there any way to make a video of the same character using only a reference of the character's face, with no input image? Please make a video on this if such an AI exists.
Another nice video. Thank you for the tips. This is not related to your video but I noticed that you forgot to turn your Super Thanks on, or maybe you did it on purpose.
Thanks I appreciate the support! I have donations turned off, the best way to support the channel is to share it with someone else who is interested in Ai 👍
Midjourney cannot train a custom model for you. If you want to create pictures of yourself or something else with a high level of resemblance, training a custom Lora will work much better than Midjourney
Replicate won't accept my zip file. I use WinRAR for compressing, but the file is a .zip. I named each photo like you did, with only double-digit numbers 01-10. I wonder if Replicate won't accept files zipped by WinRAR? I've been trying for an hour.
Ok, I sent the zip file to Google Drive and downloaded it to my iPad. I was able to go to Replicate and upload the zip file from my iPad. It looks like Replicate does not accept files compressed with WinRAR. My model is training now. I am excited.
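For anyone hitting the same upload issue: a plain deflate-compressed zip made with Python's standard library is about as compatible as an archive gets, so a sketch like this sidesteps WinRAR-specific settings entirely (the folder layout and helper name here are hypothetical, not from the video):

```python
import zipfile
from pathlib import Path

def make_training_zip(photo_dir: str, out_path: str) -> int:
    """Pack every image in photo_dir into a plain deflate-compressed
    zip archive (the widely supported format). Returns the file count."""
    exts = {".jpg", ".jpeg", ".png", ".webp"}
    photos = sorted(p for p in Path(photo_dir).iterdir()
                    if p.suffix.lower() in exts)
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in photos:
            # store files flat, without parent folders, e.g. "01.jpg"
            zf.write(p, arcname=p.name)
    return len(photos)
```

Any tool that writes a standard zip should work the same way; the point is to avoid archiver-specific options saved under a .zip extension.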
You speak of using "Flux" to generate images, but then immediately switch to "Replicate". Forgive me, but I fail to see where "Flux" figures in this? Can we not simply upload pictures from our own PC directly into "Replicate"?
This is cool. I’d rather not use photos of myself though, because of privacy issues. I prefer to work with fictional characters, and be able to depict them consistently in every scene.
Your 1-minute introduction is better than tons of similar content in entire TH-cam videos. Your content is straight to the point and saves me time.🎉
Thank you, I've been working on my video intros a lot 👍
And he is cute 🙂
I love you, my friend. I'm from Brazil; I don't write English well, but I understand the language, and your videos are the best out there. Thank you for your work, always direct and never treating your audience as if they were stupid.
That is very respectful.
The level is always very high and well explained.
Thank you very much, I'm happy to know these videos are helping you. Things are changing very fast, it's good to stay ahead 👍
Damn!! Tao dropping absolute gold. Thanks man
I got you man! it was a bit weird to see the Ai version of me 😂
Agreed, Tao is The Man. One of the Top AI channels ever!
WOOOOOWWWWW Tao!!!! Sensei!!!! really cool done!!! keep doing amazing content!
Thank you! It's a strange experience to see yourself in an Ai video 😂
Wow...thank you dude. Your explanation is straight to the point and easy to understand. Great video!
Glad you liked this man, it's a surreal experience to see yourself in an Ai vid 🤔
You look epic as heII, my Bro, in those AI shots!! 😅 Many Thanks again for your Awesome Content. You're a Great Teacher. Mega Respects, Bro. 🙌🏾
Thanks man, I appreciate that! I can be in a movie now 😂
@@taoprompts 😆 Very welcome!! YES, do it, man!! 🔥🔥🔥🔥🔥 I've been planning to do the same with an AI version of myself. Thanks again for all your great work. Cheers
You're really helping so much, buddy. One of the best channels and people I have watched yet.
Thank you, I really appreciate that a lot! I try to make my videos easy to learn from and useful
Finally, I've received the notification
Good to hear you got it working, YouTube sometimes doesn't send those notifications properly
Thank you Tao for all your techniques ❤
Glad you like these guides, I have a lot of fun working with these Ai tools also
Amazing timing on this... I was just starting my deep dive into this exact topic and now I'm going to walk through this as I go. Thanks!
OMG I just finished training my model and it's AMAZING!! Thank you for such an excellent tutorial, once again!
That's great! It's pretty incredible how closely the generated images will look like your photos 👍
Very informative and interesting as usual.
I love your AI content Tao… it’s always different from others are doing!
Thank you! It's amazing to see that we can put ourselves into Ai videos now!
Minimax is by far the best generator; it's beyond the realism of other models, and once image-to-video comes out...
I should say that your prompt guide really helps, though! Thank you for taking the time to do it!!
I got you covered, I always find it helpful to see other people's prompts 👍
it's straightforward and practical, much appreciated!
The fact that you can put yourself in an Ai video is amazing!
Best Teacher Ever ❤
Thanks Ghost!
Great video and all steps worked well! Thank you!
Awesome to hear this worked for you man!
Amazing like always ! ❤
Thank you!
Thanks for the info, I am curious about creating AI videos, and you made it clear and interesting. Thank you. Great video !
That's great man, now is a perfect time to start getting into Ai video 👍
@@taoprompts Where/how should I start
This is 1000% awesome bro 🙌
Thanks man 🙏
This is all amazing, Grt walk thru!
Thanks man, It's a crazy feeling to see yourself in an Ai video!
That's awesome dude, you're famous!
Thanks man, some of these Ai videos look just like TV shows
Always come in Clutch brother... I do remember someone else posting about this same technique in another video. But again, I believe you explain it better. Thank You!
I got you covered! There's different ways this technique can be done, I tried to find the easiest way possible
Holy shit, that's insane....gonna continue watching
It's gotten a lot better even from a few months ago
You do such a great job with these. 👍🏻
Thanks man, I'm loving all these new updates
Wow i love this tutorial 😭🙏
Thank you, I appreciate that 🙏
@@taoprompts Sorry, can you explain the payment for it? I'm a newbie 🙏
Thanks man. I'm tapping in now
Good luck! It's a bit weird to see an Ai version of yourself
tnx for your help
😉
excellent tutorial Tao. Giving it a try now
Thanks man, you're the expert on putting yourself inside Ai!
COOL video!. I like your style.
I appreciate that Fernando!
Great video ! you give many cool ideas...
Thank you! it's a surreal feeling to see yourself in an Ai video
im obsessed
awesome !
would be great to know how to do this locally too!
I'd love that too! Which kind of hardware do you have?
@@SongStudios High end build using a 4090
I will try to plan more guides in the future about running stuff locally. I think for most users, running things in browser is more accessible because of hardware limitations
So, it's settled, you shall be my primary AI sensei! My next video upload will be the result, please bell me so you can be instantly notified. thanks a lot bro. Your style of teaching I'm feelin it.
Thanks man, I'm excited to see what you come up with 👍!
Wow 😮 Really cool
replicate made a very different (and sometimes monstruous) face when I pre-train it on ostris as you said. Could it be because I trained the model with both back and front photos and incorporated facial expressions like disdain or concern? I didn't use any expressions that distort the face, though.
A LoRA model tries to learn similar "patterns" across your training data and associates them with the "trigger word". It could end up blending different facial expressions, which may create weird effects. I would keep the training data consistent for a single LoRA (at least for what you want the AI to learn). In my experience it's fairly common to train multiple LoRAs for different face movements.
Thank you for your sharing! How can I download the lora trained with my own photos?
Inside the trained model page, next to "Run trained model", there's a "Download model weights" button.
@@taoprompts Thank you!
This is truly a creative and interesting video. You always put effort and heart into each video 👍I’m wondering about the Generating AI image of you part, does the AI randomly pick from my input images for each command to create the result? Is it possible to control which input image will generate which output image?
Thanks Jennie! The AI is able to generate specific features from photos in the training dataset. The "autocaption" option generates text descriptions for your dataset during training, and then when you prompt the finished model, the language you use in the prompts is correlated with those descriptions. It will typically blend the input images together, but if you have specific keywords in the prompts it can sometimes match keywords in the training-dataset captions, although that is hard to actually control.
Amazing video. Thank you I made a music video for my bf for christmas with this. So helpful. Thank you!! Is there a possibility to train the model to generate two people, e.g. for a couple shot?
That's awesome! Here's a twitter post that shows how you can get multiple characters: x.com/techhalla/status/1853716057107619953
This is terrific Tao, thank you! Question: I'm having a very difficult time with getting a pixar type character consistent. There's variations on it like fur colour, hairdos and so on...I'm currently using Midjourney, do you have other suggestions or do you think I should wait a few more months for other programs? Thank you again!!
Hey, if you want consistency with specific features it will be hard to get that in Midjourney. You can try to be more specific in the prompts with exactly what you want, however the results may not always be consistent.
If you train a Lora it will do a better job of keeping the resemblance the same (make sure to include text descriptions in the training data that describe exactly what you want). Ultimately, if you want a high level of resemblance & consistency, you will always need a customized (finetuned) model for the character.
Oh, LoRA does animated characters? How would that work? Perhaps I should email you? @@taoprompts
Thanks for sharing! Just wondering, is there any way to train your own LoRA locally with Flux in ComfyUI or something else? Thank you 🙏🙏🙏
Yes you can train this locally, some other channels have made tutorials for that. However, depending on the hardware you have it may be a lot slower, or your machine may have memory issues.
I have been watching your videos for the past few days and I can't find anything better than yours. I am thinking of buying either Kling or Runway for my product advertisements, but I am confused. Can you please make a tutorial video about "product promos" or "product advertisement" using one of these AIs, so marketers like me get an idea? 😊
What kind of product videos are you trying to make? The best way would probably be to create images first with an AI image generator and then animate them in Kling or Runway. Based on what I've seen, Runway is pretty good at animating different physical products.
Wow :) Can I import the trained model to use with local Flux on my computer, or do I have to use it exclusively on Replicate? Thank you
When you train the model, there's an option to link a free Hugging Face account, and Replicate will push your trained model to Hugging Face, from where you can download it.
Its so amazing ❤❤🎉
Thank you 🙏!
Amazing! Easiest tutorial ever. Do you have a tutorial to train locally maybe with Fooocus and SD? Thanks!
Thank you! I haven't made any guides for Fooocus or SD currently
@@taoprompts thanks for reply! I managed to create some stunning photos with your tutorial. Also I just found a free and opensource image upscaler called Upscayl. Have you tried it?
Excellent video.
Amazing dude, doing mind-blowing stuff. Can you tell me how I can give it a script that the model speaks in the video? I want my video to speak text I provide.
I have an updated guide that includes Ai voices here: th-cam.com/video/ltuRxvaCdMs/w-d-xo.html
Great video. This is the AI stuff I like to see
Thanks! These recent updates in Ai have so many new possibilities
Hi Tao, thanks for all the info you share with us. I was wondering if you have a convenient alternative to FinalFrame (which totally changed) for capturing the last frame.
I use ezgif to get image frames. ezgif.com/video-to-gif
I use the "video to png" tab.
How do you remove the watermark from Kling videos?
Hover over the download button; if you have a paid plan you will see an option to download without the watermark.
For me Kling is not working, it's freezing at 99% while generating. Any help for me?
The answer is to pay. It does not happen to subscribers.
Let it run overnight. It will almost always return a result after several hours.
Excellent tutorial. I would not know where to start without your great video. How much you pay for Magnific tho? 😅
Thank you. It's pretty expensive, I have the $40/month plan. A cheaper free alternative is Upscayl, I might do another tutorial on that
@@taoprompts Cool. Do you pay for Kling, too? I find it's faster and more reliable than Dream Machine so far.
Hi! Thanks for the great tutorial. I just trained my first model, and here is the question. I tried to make two scenes. One is myself sitting next to a female in front of a laptop, and... she each time looks more or less like me. Sometimes it's even exactly me. How do I avoid that, so she is simply a random person?
Second, I tried to generate myself giving a lecture at a conference with an audience... and the audience members look like my siblings. How do I make them random people? Thanks!
I would be very specific in the prompts when describing the physical characteristics of other people in the image, otherwise it may blend their appearance with yours.
Alternatively here's a separate method on how to get multiple characters inside using Flux + lora: x.com/techhalla/status/1853716057107619953
This is amazing! I've trained my own LoRA. Is there any chance to download the LoRA file to use it on another platform?
If you create a Hugging Face account, then during training fill out the options "hf_repo_id" and "hf_token"; it should save the model to your Hugging Face repo.
Thank you!
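The flow described above, filling in "hf_repo_id" and "hf_token" at training time, could be sketched like this with the Replicate Python client. The option names come from the thread, but the trainer version string, repo names, and token below are placeholders, not real identifiers; verify them against the trainer's page.

```python
def build_training_input(zip_url: str, trigger_word: str,
                         hf_repo_id: str = "", hf_token: str = "") -> dict:
    """Assemble the training options; keys mirror the form fields
    discussed above (treat exact names as assumptions to verify)."""
    inp = {
        "input_images": zip_url,       # the uploaded training photo zip
        "trigger_word": trigger_word,  # unique token the LoRA binds to
    }
    if hf_repo_id and hf_token:
        # Optional: Replicate pushes the trained weights to this
        # Hugging Face repo so you can download and reuse them elsewhere.
        inp["hf_repo_id"] = hf_repo_id
        inp["hf_token"] = hf_token
    return inp

# With the Replicate client this might be kicked off like (placeholders):
#   import replicate
#   replicate.trainings.create(
#       version="ostris/flux-dev-lora-trainer:<version-hash>",
#       destination="your-username/your-model",
#       input=build_training_input("https://.../photos.zip", "TOK",
#                                  "your-username/my-flux-lora", "hf_..."),
#   )
```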
Do we have to give our pictures special names like some other videos are saying, or do we upload them in the zip file named as is?
You don't need to name the pictures. You do have the option of attaching an additional text file with text descriptions for each image which is what the other tutorials are referring to, but that is not necessary.
@@taoprompts thank you for always replying ♥
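To illustrate the optional caption files mentioned above: the usual convention is one `.txt` per image sharing the same stem (e.g. `03.jpg` plus `03.txt`). A minimal sketch of how such a dataset pairs up; the helper name is mine, not part of any tool.

```python
from pathlib import Path

def caption_pairs(dataset_dir: str) -> dict:
    """Map each image to its optional caption: an image like 03.jpg may
    ship with a 03.txt holding its text description. Images without a
    .txt map to None, which is fine -- captions are optional."""
    exts = {".jpg", ".jpeg", ".png", ".webp"}
    pairs = {}
    for img in sorted(Path(dataset_dir).iterdir()):
        if img.suffix.lower() not in exts:
            continue
        txt = img.with_suffix(".txt")
        pairs[img.name] = txt.read_text().strip() if txt.exists() else None
    return pairs
```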
How do I do this locally if I have the PC power? I'd rather run an application on my PC than pay Replicate for training, etc.
this tutorial seems to explain it: th-cam.com/video/bN2uhrVKdPE/w-d-xo.htmlsi=AyAVol_TV4290rdo
Can you use these tips to generate and animate images of 2 persons together?
sure I just made a guide here: th-cam.com/video/h-7SIFL6gP4/w-d-xo.html
@ Ok great, I will follow this then. Otherwise I'm failing to train my model: when I hit "create training" it fails.
My pictures are not sharp at all. How do I fix this? Is it the quality of the input pictures? Nice video!
The quality of the photos you use to train the model does matter: if they are blurry, the model will learn to generate blurry images. You could also try prompting for "sharp details, highly detailed, high resolution", etc.
I will put myself in one of my future AI short films and this video was very helpful. However, I’m planning to use Midjourney to create the images for the videos. Do you find Flux or Midjourney better in consistent characters and do you know if there’s a Flux-based AI image generator service out there with consistent characters?
Training a custom model like Flux + LoRA will always be better than Midjourney. The way the consistent-characters feature works in Midjourney is not going to capture a resemblance this close to your own photos in most cases. Until there are further developments, Flux-based services with consistent characters will be using a process similar to this one to train a custom model.
@@taoprompts Thank you for your answer and information. I have to try a custom model!
Hello mister,
Can I train anthropomorphic animals (non-humans) with this method? Thanks for the answer!
yes, as long as the training images (in your 10-12 photo dataset) are consistent
But Kling AI's image-to-video generation process takes a century 😐
In free mode yeah. When you subscribe it takes a few minutes to generate videos.
@Trankilstef I use free mode and it's never taken more than five minutes, if that.
@@ShayDylan Well, then it's fine for you. My friend has long waits compared to me; she's on the free mode and I paid, so I think that was the explanation. But it may be something else.
What do you mean? It goes from 0 to 99% in a matter of minutes. It should be done any second now! ☠️
All my data is finished
Hey, how do you make the AI move? What app?
I used Kling: klingai.com/?kolToken=5QYDWZJF
Hi Tao! I really like your channel and thank you so much for sharing your amazing skills. I was wondering when I saw this technique, if it would be possible to take an object like a Coke bottle, and integrate it into a composition with other objects to make a product photo shoot? Maybe you have another faster technique to achieve this? This would be a great video topic, what do you think?
Definitely, you can train a Lora model on any concept: a person, objects, a pet, even a visual style.
You can also combine multiple Lora's, just add both of their trigger words into the prompt and add an "extra lora" when running the models. The results for multiple lora's won't work as consistently as a single Lora though
Hmm, you could probably use the images output from the original training to train another LoRA that's more theme-specific...
Not sure if I'm making sense here, since I haven't tried LoRA training yet.
It's definitely possible to use AI generated images as a dataset for training, people use Ai generated text to train models all the time
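The "extra lora" combination described a few comments up can be sketched as a small input builder. The parameter names `extra_lora` and `extra_lora_scale` mirror what Replicate's Flux LoRA models expose, but treat them, and the example LoRA reference, as assumptions to check against the model page.

```python
def build_two_lora_input(prompt: str, trigger_a: str, trigger_b: str,
                         extra_lora: str, extra_lora_scale: float = 1.0) -> dict:
    """The prompt must mention both trigger words; the second LoRA is
    loaded via the model's "extra_lora" input (names as assumed from
    Replicate's Flux LoRA models -- verify before relying on them)."""
    assert trigger_a in prompt and trigger_b in prompt, \
        "include both trigger words in the prompt"
    return {
        "prompt": prompt,
        "extra_lora": extra_lora,              # e.g. a HF repo or .safetensors URL
        "extra_lora_scale": extra_lora_scale,  # strength of the second LoRA
    }
```

As noted above, expect two combined LoRAs to be less consistent than a single one; lowering `extra_lora_scale` is the usual first knob to try.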
Thanks, Tao! Is there any way to save the settings from the trained model so it always does 2 images and save it as PNG?
hmm, I didn't see any way to save the settings when running the models
Note: to use the weights locally you need to use the fp8 dev model, not the bnb nf4 one, or you will get mixed faces.
thanks for the tip 👍
@@taoprompts Yes, thanks to you I already topped up my $30 Replicate account!!! Doing trainings all night!!! Now YOU owe me $30, LOL, just joking man... thanks.
This is like perfect for my music video
That's great! you can also do some basic lip sync in Kling now with the newest updates
Thanks for all the information. I just subscribed 😊
Great job! How you feel about MidJourney with the FaceSwap vs this?
This works much better than Midjourney with face swap in terms of creating a high level of resemblance to the training photos
Tao, thank you very much for this tutorial. It is easy to follow. Would you happen to know what AI can help me create a consistent creature (non-human) in 10 positions and backgrounds, to train an AI so I can do this process? Thanks in advance.
I don't think there's a great way to do that right now. In order to get a consistent creature you need photos to begin with. However, if you have access to a 3d renderer that could be a way to get a dataset like that
What about copying the body? Or is this only for the face?
You can copy anything inside your training dataset as long as the images are consistent
@@taoprompts But this video seemed very focused on the face. Was the AI also copying your body in those clips? I have a big dataset, hundreds of pictures, including a lot of shirtless pictures. Can the AI copy shirtless pictures/muscles? For example, I do bodybuilding, and it would be cool if I could make images and videos of myself that were accurate to my body, including my nipples, wearing just stage-competition trunks and hitting certain poses. Can the AI do this, or are half-naked bodies blocked somehow even though it's a male body?
@@dontreadmyname4396 It should be able to learn anything that is common between the photos in the training dataset. If you want to know if it's censored or not, you could just try to generate a regular image (w/out) Lora of a shirtless bodybuilder
@@taoprompts And the more pictures I have, the better it's going to "learn", I guess? Or past X number of pictures is it unnecessary?
@@dontreadmyname4396 Not necessarily, 10-30 images is enough. Also, this works best if the pose is relatively similar. It's quite common for people to train multiple different Lora models to learn different concepts, for example different poses.
Good stuff! Tried Kling, but it seems it only gets to 99% in the queue, for both 5 sec and 10 sec. And then it never produces a video :(
It takes several hours on the free plan. I usually let it work overnight.
It can be pretty slow if you use the free version, the paid version will be much faster
What if we generate the images in Midjourney and then face swap our face onto them?
I've found that faceswapping typically doesn't work as well as training a custom LoRA model, the results won't be as consistent. Although you may get lucky with the results.
Thanks a lot bro. I am from Myanmar
Oh wow, it's great to hear all the way from Myanmar 🙏
Tao which one do you think the best, magnific or topaz Gigapixel?
For image upscaling I like Magnific, however it costs a monthly subscription. Topaz is great because it's a one-time purchase fee and you get 12 months of free upgrades with it.
When I went to train my model I got this error message:
Training failed.
Unable to load weights from pytorch checkpoint file for 'liuhaotian/llava-v1.5-13b/pytorch_model-00001-of-00003.bin' at 'liuhaotian/llava-v1.5-13b/pytorch_model-00001-of-00003.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
Woww! It’s mind blowing! 🤯
What if I want to lipsync the subject singing? As I’m a singer, I would like to generate AI music videos of me singing
There are some platforms like Runway which have lip sync capabilities. You could generate the Ai videos using Runway and then use their lip sync Ai. The results aren't that expressive though.
@@taoprompts I saw a comparison video between runway lip sync and sync labs, and I think I would go for the sync labs
Thanks bro
🙏 I hope this was helpful, I'll make more guides on flux + lora soon
Any ideas if Kling or any other AI platforms are gonna have sales or price cuts any time soon, perhaps cyber Monday?
I'm not sure, Kling had a pretty big sale when it released. There might be a sale for new year
Is Runway still free to use? I remember last time it said they didn't offer free use due to huge demand
There is a free trial, but you only get 125 credits which is like 1-2 video generations
Hi, how can I download the trained model (Lora) of my image from the website Replicate?
You can create a free hugging face account and update the boxes "hf_repo_id" and "hf_token" before training. Then you should be able to download the model from hugging face.
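For reference, this is roughly where the weights end up, as a minimal sketch (the repo id and filename below are placeholders, not real names from the video; check your repo's "Files" tab for the actual filename):

```python
# Sketch: once training finishes with "hf_repo_id" and "hf_token" filled
# in, the weights are pushed to that Hugging Face repo. This builds the
# direct-download URL for a file in such a repo. The repo id and
# filename used here are placeholders.
def lora_download_url(repo_id: str, filename: str = "lora.safetensors") -> str:
    """Direct-download URL for a file in a Hugging Face model repo."""
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

print(lora_download_url("your-username/my-flux-lora"))
# → https://huggingface.co/your-username/my-flux-lora/resolve/main/lora.safetensors
```

If the repo is private, pass your token in an Authorization header when fetching, e.g. `curl -L -H "Authorization: Bearer $HF_TOKEN" <url> -o lora.safetensors`.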
Good tutorial. I had to fill in billing first, otherwise the process errors out.
Awesome stuff but I am stuck at "create training". I have zero credits there but I simply don't know what to pay for? Amazing stuff man
Thank you! If you go to the dashboard, you can set up billing. You only get billed for gpu time you use (ie. training a model or generating images)
This is great! Can you train yourself wearing a specific outfit?
Yeah, if you wear the same outfit in all the training images, it will be able to replicate that. Make sure to use text descriptions that also describe the outfit (either in the autocaption prefix or txt files you upload).
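As a rough sketch of that captioning workflow (the folder layout, the "TOK" trigger word, and the caption wording are all assumptions, adapt them to your own data):

```python
# Sketch: write one .txt caption per training image (mentioning the
# trigger word and the outfit), then bundle images and captions into a
# zip for upload. "TOK" and the caption text are placeholders.
import zipfile
from pathlib import Path

def build_training_zip(image_dir: str, caption: str, out_zip: str) -> list[str]:
    """Pair each .jpg with a caption file and zip both; returns image names."""
    names = []
    with zipfile.ZipFile(out_zip, "w") as zf:
        for img in sorted(Path(image_dir).glob("*.jpg")):
            txt = img.with_suffix(".txt")
            txt.write_text(caption)       # same caption for every image here
            zf.write(img, img.name)
            zf.write(txt, txt.name)
            names.append(img.name)
    return names

# build_training_zip("photos", "a photo of TOK wearing a red leather jacket", "training.zip")
```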
Cool! buddy!! I love u :)
Thanks for supporting!
Hi! Thank you for sharing. Can we also upload branded products to a model on Flux? For example, instead of uploading a set of our photos with different expressions, we’ll upload photos of our branded product such as baby diapers.
Would really appreciate anyone who could help answer🫶🏻
Yes, it learns anything in common between the photos of the training dataset
I think thanks to AI everyone can become a superhero quickly. Maybe this is a new kind of business, where AI services let you create videos, music, and pictures of yourself becoming a superhero in moments.
That's very possible, people are going to get used to seeing Ai versions of themselves soon
new subscriber for this video
Thanks for the sub!
This is awesome. Question: is it possible to get two people in one image? What I mean by that is can I train it for myself and another person and have us in the same image? Thinking of cool ways to use this for my band ☺️
Yes, you can train multiple LoRA models and prompt both of their "trigger words". You will need to use the "extra lora" button to do this.
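Roughly, the generation input would look like this (the field names follow the Replicate Flux LoRA trainer's convention, and the "TOKA"/"TOKB" trigger words are placeholders; verify the exact names against your model's input schema):

```python
# Sketch: one LoRA is the model you run; the second is attached through
# the "extra_lora" input. Trigger words and field names here are
# assumptions, check your model's input schema on Replicate.
def two_subject_input(trigger_a: str, trigger_b: str, scene: str, extra_lora: str) -> dict:
    """Build an input payload that prompts both trained subjects."""
    return {
        "prompt": f"a photo of {trigger_a} and {trigger_b}, {scene}",
        "extra_lora": extra_lora,        # e.g. a Hugging Face repo id
        "extra_lora_scale": 1.0,
    }

print(two_subject_input("TOKA", "TOKB", "playing guitar on stage", "user/other-lora")["prompt"])
# → a photo of TOKA and TOKB, playing guitar on stage
```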
What about when I train Flux on myself (ragwort) and some other people? When I use the trigger word, all the people in the picture end up looking like me, regardless of whether they are female or male.
Is there a trick to get different faces, for example a girl together with me (the white man trigger word)?
@@taoprompts thanks!!
@@NEWSonTour hi,managed to solve it? I have the same problem with others that look exactly like me :(
@@AICodziennie no solution since now. any tips volks, here?
Bro, is there any way to make a video of the same character using only the character's face as a reference, with no image? Please make a video on this if such an AI exists.
I'm not sure what you mean, you can use reference character to create Ai images and then animate those into video
Another nice video. Thank you for the tips. This is not related to your video but I noticed that you forgot to turn your Super Thanks on, or maybe you did it on purpose.
Thanks I appreciate the support! I have donations turned off, the best way to support the channel is to share it with someone else who is interested in Ai 👍
@@taoprompts Got it. I send it through your book store. Thanks again.
@@dasberlinlex Thank you so much! I really appreciate you helping the channel out 🙏
Can you input that training data into MJ as well? Have you compared the results using MJ?
Midjourney cannot train a custom model for you. If you want to create pictures of yourself or something else with a high level of resemblance, training a custom Lora will work much better than Midjourney
If I drag my zip file of images onto that little upload box, it doesn't do anything... What could be the cause? Thanks for the video
To upload the zip file, click on the box instead
@@taoprompts Thank you it worked!
@@taoprompts I tried to click the box, but it didn't work
Replicate won't accept my zip file. I have WinRAR for compressing, but the file is a .zip. I named each photo like you did, with only double-digit numbers 01-10. I wonder if Replicate won't accept files zipped with WinRAR? I've been trying for an hour.
oh wait, do I need to pay first maybe? I'm stumped lol
Ok. I added a debit card and website still won't accept the .zip file. I contacted support. Hopefully they can help. Love your vids by the way.
Ok, I sent the zip file to Google Drive and downloaded it to my iPad. I was able to go to Replicate and upload the zip file from my iPad. It looks like Replicate does not accept files compressed with WinRAR. My model is training now. I am excited.
Hey don't try to drag and drop it in. Click on the upload button and then add in the zip file from there.
Good to hear you got it working! Sounds like it was kind of a hassle, but the results will be worth it 👍
please some guidance for Kling 1.5
Hey I have the hardware how do I do this?
You speak of using "Flux" to generate images, but then immediately switch to "Replicate". Forgive me, but I fail to see where "Flux" figures in this? Can we not simply upload pictures from our own PC directly into "Replicate"?
Flux is an ai image generator. Replicate is a platform with cloud computation that can be used to run Flux
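In code terms, the split looks something like this (the model name and parameters are illustrative, and the hosted call is left commented out since it needs an API token; check the model page on replicate.com for the real input schema):

```python
# Sketch: Flux is the image model; Replicate is just the cloud service
# that runs it. You build an input payload and hand it to the hosted
# model. Parameter names here are typical of Flux models on Replicate,
# but are assumptions, not a definitive schema.
def flux_input(prompt: str, trigger_word: str = "") -> dict:
    """Prepend the LoRA trigger word (if any) and build the payload."""
    text = f"{trigger_word}, {prompt}" if trigger_word else prompt
    return {"prompt": text, "num_outputs": 1, "aspect_ratio": "1:1"}

# With the official client (pip install replicate; REPLICATE_API_TOKEN set):
# import replicate
# urls = replicate.run("your-username/your-flux-lora",
#                      input=flux_input("riding a horse on the beach", "TOK"))
```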
can someone build an app that just puts all these steps together for you in a simple to use interface? or do they already exist?
I haven't seen any app like that, it typically will take a few steps
have you tried this with minimax?
Minimax doesn't have the image-to-video feature needed to do this yet
This is cool. I’d rather not use photos of myself though, because of privacy issues. I prefer to work with fictional characters, and be able to depict them consistently in every scene.
This process will work on any characters you have, as long as you have photos of them
Is there a way to use kling without inputting your phone number
I think you only need an email