In MiniMax there is a trick I discovered for getting a consistent character: reupload a still frame from a generated video featuring that character. Upload the frame in image-to-video and prompt something like, "Fast blur transition to a new scene showing..." then describe the character in a new scene, from a new angle, or even in new clothes. MiniMax will try to recreate the character in that new scene. It may take several tries, but once you get a scene that looks perfect with the same character, download that video, pull a single frame from it, and upload that back into MiniMax to create a full 6-second clip of the character in the new scene. Sometimes I have also taken a video generated with the "Fast blur transition" prompt, cut off the first second or two in my editing software, and used the remaining 3 or 4 seconds (which show the character in the new scene) in my short film. This method CAN be a little time consuming, I admit, hehe, because sometimes it takes two dozen generations to get a perfect facial resemblance in a new scene, at a new angle, or with completely different clothing (if that is prompted).
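If you want to automate the frame-grab and the head-trim parts of that workflow, here is a minimal sketch, assuming ffmpeg is installed and on your PATH; the filenames and the 1.5-second trim are placeholders for illustration, not part of the original tip.

```python
# Sketch of the frame-grab and trim steps, assuming ffmpeg is on PATH.
# Filenames and the trim length are illustrative placeholders.
import subprocess

def grab_last_frame(video_path: str, frame_path: str) -> None:
    """Save a still from near the end of the clip (the frame you re-upload)."""
    subprocess.run(
        ["ffmpeg", "-y", "-sseof", "-0.1", "-i", video_path,
         "-frames:v", "1", frame_path],
        check=True,
    )

def trim_head(video_path: str, out_path: str, seconds: float = 1.5) -> None:
    """Cut off the blur-transition opening, keeping the new-scene footage."""
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(seconds), "-i", video_path,
         "-c", "copy", out_path],
        check=True,
    )

grab_last_frame("minimax_clip.mp4", "reupload_frame.png")
trim_head("transition_clip.mp4", "usable_scene.mp4")
```

Note that `-c copy` trims without re-encoding, so the cut snaps to the nearest keyframe; drop it if you need a frame-exact cut.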
To maintain real character consistency, we need to train LoRAs. Otherwise the results are like what you get from face-swap utilities: it is the same person, but also the exact same face, same position, same emotion.
Already uploaded, watch this: th-cam.com/video/yxuA6fXX7EQ/w-d-xo.html
@renascienza.bazarclub Not necessarily true. For users of MiniMax I have sometimes posted a comment showing a simple technique that can create a very consistent character down to exact details, one that doesn't involve ChatGPT prompts or LoRAs but instead reuploads still frames from generated videos in MiniMax. *I see MANY people whine about "character consistency," but NOT A SINGLE COMMENT about creating great characters and stories.* This is why MOST A.I. videos on TH-cam tend to LACK real originality: most people just follow TRENDS, such as whining about "character consistency" or using Hollywood copyrighted characters in "Super Panavision" videos that look nothing like real Panavision, instead of actually creating GREAT *ORIGINAL* CONTENT like I have. Hehe.
Plus they can't render consistent costumes either without training or altering the image or the clothing.
@dwainmorris7854 There are specific AI services on Hugging Face that allow you to put any clothes on your character if you have a decent pic of it. But I found out that with Flux models, if you have a REALLY good and detailed prompt about your outfit, you can make it really consistent across different generations. Any details out of place you can inpaint if you need to.
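A minimal sketch of that detailed-outfit-prompt idea, assuming the Hugging Face `diffusers` FluxPipeline, access to the FLUX.1-dev weights, and a CUDA GPU; the prompt text and sampling settings are illustrative, not a tested recipe.

```python
# Sketch: repeat a very concrete outfit description across seeds with Flux.
# Model access, prompt, and settings are assumptions for illustration.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# The more concrete the outfit description, the more repeatable it tends
# to be across generations: name colors, materials, cuts, accessories.
outfit = (
    "a woman wearing a fitted burgundy leather jacket with brass zippers, "
    "a cream silk blouse, high-waisted black trousers, and a thin gold "
    "pendant necklace"
)

for seed in range(4):
    image = pipe(
        prompt=f"studio portrait of {outfit}, soft window light",
        generator=torch.Generator("cuda").manual_seed(seed),
        num_inference_steps=28,
        guidance_scale=3.5,
    ).images[0]
    image.save(f"outfit_{seed}.png")
```

Comparing the four outputs shows how far a prompt alone carries the outfit; stray details can then be fixed with an inpainting pass as the comment suggests.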
You know what we need: a way to input an image we made outside the system into the AI and have it altered.
I don't understand what you're asking.
Thanks
My pleasure, keep watching
Cool
My pleasure, keep watching
This site doesn't work... The images won't display, there are only gray squares; only the logged-out mode works, and it gives square images, but of good quality.