I made a tutorial on how to combine Lip Sync with consistent characters from Midjourney: th-cam.com/video/nL9UowfT0PE/w-d-xo.html
Hey, do you take paid jobs to make requested videos?
Excellent Video! Easy to follow, easy to listen to, and substantial value! Subscribed! Thank you!
Wow, I did my first lip sync in Runway. I did not know it could do it. Thank you so much!!!
It just came out Friday. Thanks for watching!
@@aivideoschool Hi, after converting from text to video in Runway AI, I'm able to download only once. How can I download more than once?
Where can I get the app?
Excellent video demo and comparison. Thanks for doing this!
Thank you for watching, it was pretty fun to test out
Thank you for this video. It saved me some time in trying that out myself. Highly appreciated!
This was really useful - thank you!
I'm glad it was helpful, thanks for watching
Yeah, lip sync was the keyword. I'm doing an AI insurance agent with its own subtitles. Crazy shit, so much fun. Now I've got the workflow down.
Great video. Subscribed. I got out of the AI tool landscape but have been itching to jump back in. I haven't had the time, and there are so many emerging tools it gets overwhelming sometimes. It's hard to know where to start, so this was a killer vid.
Glad it was helpful! I agree, it's overwhelming at times. I try to learn a little bit every day about something that interests me, even if it's not the latest, newest thing. That makes it feel normally whelming.
This was awesome! thank you!
I'm glad it was helpful, thanks for watching!
On Runway, does it not work with all characters? Like a cartoon chicken or something like that? I tried, but it doesn't "find a face in my picture" :(
It still has to look vaguely human
I don't see the Lip sync functionality in Pika anymore. Did it get removed?
It looks like Lip Sync is still there but you have to select the Pika 1.0 model and upload an image/video to see the Lip Sync icon
Great content bro! I just subscribed!
Do you know of any good free lip-sync tool that can handle videos several minutes long? What I need is to make an image talk.
Try the Hedra Discord server maybe? I think they're still free with a watermark. There are open source tools as well.
thank you Mike.
Great video! Really informative and interesting. It made me wonder how Runway handles different face types: can it do big beards, obscured mouths, or even non-human characters talking as organically?
Those all would have been good ones to test too. I wonder if Runway would have been able to do the bearded monk if he didn't have a beard. I'm working on another project and it's lip-synced some half profile shots of faces without beards.
Test it and find out 😮😮😮😮
In Pika Labs, do we have to subscribe to the Standard plan to get access to the sound effects and lip sync options?
Thank you! This helped me make my latest music video over on my channel!
Just checked it out and left a comment. Nice work!
great content my man, congrats
Appreciate it!
Awesome video! Thank you!! Just subbed.
Thank you, great video! However, Pika doesn't offer lip sync at the moment, and RunwayML didn't work with my animation videos; it also didn't work with faces in profile 😞
Try Hedra, it does better with animated/non-human characters. None of the generators do great with faces in profile yet.
I have another question: does it export in HD quality? I want to drop the clip into Premiere and keep editing. Would that be a problem?
I don't think they're exactly 1920x1080, if that matters, but they do have HD outputs.
So can you make facial expressions, such as smiling whilst talking?
I believe it's possible but not super controllable. Lots of prompting and redoing, which means blowing through a lot of credits. Others may have more insights though.
I'm going with Runway.
Same, especially after today's Gen-3 announcement
I'm waiting for an open-source text/image-to-video AI. I'm tired of the credits and subscription plans.
Have you looked into some of the Discord based tools? Some are still free like Morph Studio, VisionStory, Pika (I think?), and Moonvalley.
I hope that happens one day, but sadly I'm not sure it will.
Can you change the teeth? Say you wanted yellow, sharp teeth.
That would be easier to do at the image generation step where you could inpaint until you found the teeth you wanted.
I was trying to lip-sync in Pika Labs on the FREE tier. It takes forever and shows me the "File size should be less than 10MB" error. Do I need a paid plan for files bigger than 10MB? Thanks in advance.
I'm not sure offhand. Was the 10MB+ file the audio of the voice, or was it the video you were adding lip sync to?
If the head moved as well, even slightly and randomly, it would be much better. Thanks for a good vid!
Totally agree, many of the AI avatars have that sort of head movement and it helps sell it as real
Sorry I came late but can’t I do this with my phone?
If you're on your phone, try logging in with a browser instead of the Runway app. Also, I just saw that lip sync will work with Gen-3 alpha coming out in a few days!
Great stuff 🎉
Thank you for watching!
Thanks for this 👍
You're welcome, thanks for watching!
Thank you:)))
I don't have the "Generative Audio" option on my Runway.
On the Runway dashboard, it should be under the square that says GA at the top. It's not available on mobile. If that doesn't work, ask on their Discord. They have some good moderators there.
@@aivideoschool I found it. Thank you!:)
Great video. However, once that EMO lipsync platform (incredible moving image from picture + audio file) gets public, both Pika and Runway are nowhere. I just don't see how they can quickly keep up with next level things like Sora and the Emote Portrait Alive (EMO). The A.I. revolution goes so fast, many platforms that look impressive now will become obsolete pretty fast. I wonder if there will be fusions of platforms to combine features and resources to evolve faster and keep up with the (still unavailable) platforms that already show us new mind-blowing possibilities.
That EMO demo is amazing and if anything, I think Runway/Pika will find a way to integrate it like they've done with ElevenLabs voices. The smartest move for any AI company right now is not to build the best model but to offer the best set of models in an easy to use toolset. I predict within the next year we'll all be disappointed by Sora, either from something better coming out, the cost, or the content filters. We'll also be blown away by Sora but that's the example I think of for your observation about impressive technology becoming obsolete. Even the amazing tech that isn't out yet will be obsolete in two years.
All you had to do was ask two actor friends to say the lines in front of your camera. Done! And MUCH better.
But not everyone has actor friends. I've actually made indie features and shorts with real actors like you mentioned. They're totally different things and you can enjoy them both!
Thanks for sharing 👍🙏👍
I have a quick question, if I may. What's the secret, or the right AI, to get an animated character to blink?
I think with Runway's motion brush you can select the eyes and choose ambient noise. It's not perfect and sometimes they'll just blink on their own.
@@aivideoschool Thank you! You're awesome!
Great video
Thanks for watching
Where are the links to the websites?
They are pika.art and runwayml.com/
Can you make a video on Stable Video, please?
Here's one I made on Stable Video a few weeks ago:
th-cam.com/video/QvAORUYT-1M/w-d-xo.html
I just subbed! Also, I'm trying to do a music video using AI. Can you help me or give me ideas? ❤
I think the best way to start is to generate images first, and then bring those to life with image-to-video. Kling and Luma Dream Machine are both very good for this and have some free credits.
Gave up after 4:00. You gotta show previews of the finished product.
I show how each works and then show the results/comparison at 06:19
not a lot of these videos out there
What good is lip syncing an image if no other part of the face moves?
Agreed. The facial muscles and eyes move when talking; that's why it doesn't look realistic (yet).
Can they sing as well?
I've tried this with an isolated vocal track from a song, with mixed results. If you only have the vocals, it sort of works to mix the music back in afterward.
But putting music to these shorter lip sync clips without strange breaks between them is still a pain in the b... 😄
I've had a little bit of luck lip syncing videos instead of images. You can suspend disbelief a little longer with the right shot.
It only works when the face is full frontal. 😂
Pika does a little better at half profile (45 degree) shots. It's not as good as facing directly though for sure
Only problem: they look like people with no souls talking, no emotion, just like robots.
I want to be very respectful so my comment doesn't get removed, but honestly I tried both and they really, really suck. Don't even waste your time, guys.
Thanks, btw I'd never remove a comment like this. I still think it's cool I can generate an image, turn that image into a video, then make a person in that video talk using a generated voice. The quality is still uncanny valley but I'm cool with that for now.
@@aivideoschool That's honestly all I was talking about, the precision and quality. Other than that, it really is remarkable what they're trying to achieve.
Yeah, the lips need to improve.
Thanks for the great video!❤ Greetings from Germany 👋
Hello Deutschland! Thank you for watching, glad it was helpful!