Fantastic comparisons! Thanks!! 🤗
cool comparison video with some great examples :)
Thanks! 😃
Thanks for the review! Really helpful ❤
Glad it was helpful!
Thank you very much, cheers!!!!
The winner is Vidu. It lets you use 3 reference images, but that's only if you like their style. Kling also has an option, but it requires a ton of training on videos and is limited to subjects you already have footage of, not generated imagery.
Cool, thanks for the tip on Vidu. It sounds like Pika's Ingredients, which I love, so I'll check it out. I'm hoping to do a separate one on Kling because, like you said, they require video reference rather than a single image. Thanks!
This platform creates videos and images
Unfortunately, I had a poor experience with this feature of Hailuo, as it frequently produced bizarre artifacts, distortions, and severe disproportions, particularly with the characters' heads. The speed was inconsistent, and the results were far from what was advertised in the promotional material.
Just out of curiosity, were the characters you were using realistic? I ask because I nearly tested a non-human or illustrated character instead.
@aivideoschool I used human characters, like a child in a ninja outfit. Obviously, AI struggles significantly with dynamic, action-packed videos, especially fight scenes, but even accounting for that, I found its issues quite exaggerated.
Got it. I agree, it should only generate the action to a level where the face is still present/similar.
Amazing, thank you! What about multi-photo training? 😮
I think only Pika does two characters (04:00), but Kling just released their own version called Elements, so now you can also do two characters in Kling.
But they didn't have that kick of emotion in the voice and expressions, the background speed was off, and the focal point didn't match. Still, I'd give it 72 out of 100.
And salute to you, sir 🫡 Two years doing only AI is no small deal, especially understanding the mechanics of AI.
I think the best way to get emotion is using ElevenLabs voice to voice, combined with Runway Act One. Anything purely generated is a hard sell, but if there's a human performance driving it, it's much better.
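For anyone who wants to try that workflow, here's a minimal sketch of the ElevenLabs voice-to-voice step via its REST API: record the line yourself, convert it to the character's voice so your pacing and emotion carry over, then drive Runway Act One with your facial performance in its web UI. The endpoint path, the "audio" form field, the model name, and the placeholder VOICE_ID and file names below are assumptions to verify against ElevenLabs' current documentation.

```python
# Sketch of the "human performance drives the emotion" idea:
# convert your own recorded read of a line into the character's voice
# with ElevenLabs speech-to-speech, then use the result alongside your
# face video in Runway Act One (web UI).
#
# Assumptions to check against the ElevenLabs docs: the
# /v1/speech-to-speech/{voice_id} path, the "audio" form field, and the
# model name. VOICE_ID and file names are placeholders.
import os
import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]   # your ElevenLabs API key
VOICE_ID = "YOUR_CHARACTER_VOICE_ID"         # placeholder: target character voice

url = f"https://api.elevenlabs.io/v1/speech-to-speech/{VOICE_ID}"

with open("my_performance.wav", "rb") as f:  # your own recorded line read
    resp = requests.post(
        url,
        headers={"xi-api-key": API_KEY},
        files={"audio": f},
        data={"model_id": "eleven_multilingual_sts_v2"},  # confirm current model name
        timeout=120,
    )
resp.raise_for_status()

# The converted audio keeps your pacing and emotion in the character's voice.
with open("character_line.mp3", "wb") as out:
    out.write(resp.content)
```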