This is insane ❤😂
Face actors are the new gold
Now you only need the face, not the looks, to make it big :)
I've seen a lot of videos on this effect. Yours is the best!
If RunwayML had its own voice cloning like Elevenlabs, it would be perfect
Yes, that would be amazing
Maybe... it could be possible in the future?
Wow can't wait for your next video on this topic. Act-one + Gen-3 + a little basic editing could be magic and so much fun.
I was digging for this. As a user of FCP, this is very helpful
The quality really is improving. It's kind of out of my price range because it takes so many credits to do anything. But it's SO much better at faces than Kling.
Yes, the unlimited tier is pretty expensive
Amazing, thanks for the video!
you are welcome :)
That's nice, thanks for posting.
Awesome! Looks better than live portrait! 🎉
Incredible looking feature. I certainly am glad to be here for all of these new technologies improving!
me too. people already do really stunning stuff with this
The girl rubbing lotion on her face has 6 fingers. AI still hasn't figured out how hands work.
We will enjoy this live! Things are moving so fast… I was still on the previous video about IC Light and now this, it's insane
you got to be the Flash to keep up ;)
You are a great actor 😃
Thank you, finally someone who loves my quirky act 🙂👍😅
NEXT LEVEL indeed, and I'm trying to remember the first platform I did this with a few months ago, but it didn't come with the voice animations, so super stoked for this!! Hollywood studios must be embracing this 100%, and as I said 2 yrs ago when I signed up with ChatGPT, those that embrace AI will survive past those that don't in many professions.
Yes, pretty sure they are all over this. I will talk about that with an Asian animation studio soon too :)
why did you take my chicken !!
Because Link needs to do the chicken dance ;)
To get to the other side.
Actually me and my son repeated this the whole day 🤪
Great video Olivio, thanks! Man, 2025 will be so wild for A.I. video generation!
Can't wait for all of the cool new tools we will get
Ok, this is pretty impressive
Almost looks real 🙂
Why did the chicken cross the road?
Because Olivio friken stole him!
😁
Crazy stuff. With tools to exaggerate expressions, I could see this replacing a lot of animators. Right now, tech is using mocap to animate 3d models, but this is actually just adjusting the final image, no 3D model is required.
Yes, really sick. If this gets even more head and body motion it will be insane
@@KDawg5000 why did the
Amazing work! Would you mind sharing the midjourney prompt to create the visuals from the last video?
Thank you. The prompt is pretty basic: "pixar style young woman in a forest"
I don't care about your video since I saw your t-shirt! I want it 🤣🤣🤣
It's from Qwertee.com you can get $1 off with code Olivio 🙂
But the question remains; why did you take my chickens?
amazing! is the walking black girl stock footage? it looks very real
yes, the body is stock and i show you the real stock woman in comparison. i love how well this picks up on the look of her face
Thinking about what this expressive sync could be in another modality - I wonder if there's a system that translates audiobooks while maintaining the performance? Plain TTS emoting isn't quite good enough for an audiobook to be an enjoyable listen, at least in the examples I heard, nor does it really seem even close. But if they could take cues from a human performing it in one language, maybe they could synthesize speech in another language following those cues - the same way this takes emotive cues from one face input when creating the lipsync and face motions in another?
i think we will see that sooner than later. that said, there are already very successful youtube channels, like warhammer lore channels, that use ai voice and ai text for narration
Can't wait for Hollywood studios, like Disney, to use this
Exactly, and this was always the end game all along
I don't understand your excitement for them to use this. It's not going to improve their quality, only make it cheaper/faster for them.
Or cool indie movies ❤️❤️❤️
@@OlivioSarikas That's the real use case, for creatives who don't have access to big studio funds to realize their visions.
@@pvanukoff actually big studios are researching this right now and trying to include it in their workflow. It takes a lot less time and money to create an AI scene and then fix some errors than to create everything from scratch. It's not perfect yet, but nothing was perfect when it first started
Olivio, what time is the livestream? This looks great!
8 PM
AI never sleeps. A new one has just dropped and it can add body animation. Dang, hard keeping up
Uhhh, got to check that
@@OlivioSarikas You prob heard of the channel 'Ai Search', he's always shouting out new papers ;)
@@armondtanz Which one is it? 👀
@@lylia5550 also a free one called Hyundo or sumthing like that.
Gutted we didn't find out why and who took his chickens!
Sorry for taking your chickens. They were crossing a road which I thought was dangerous, so I took them mid-journey.
lol, so nice that you took care of my chickens. You went the egg-stra mile
If only you could merge the quality of Kling 1.5 with the breadth of tools that Runway offers (including unlimited gens). It would be an absolute juggernaut.
technically you can import the kling videos and enhance them with runway. but yes, both together would be crazy af
This is some kind of live portrait no?!
Kind of, but it works better and no hardware or install required
so basically live portrait for people who can't use ComfyUI
This looks way superior though
live portrait does not work with too much body or head movement as far as i know.
All great toys. Too much uncanny valley though to be really useful for anything other than pixar or digital art styles. Would be interested to see how it works with something like Arcane style animation.
Actually I see a lot of uses for this in content creation and entertainment. People are totally OK at the moment with it looking strange. They basically expect it
only for face not body movement huh
ah! can you face transfer to a chicken or is that why it was taken away? 🐓
them chickens have no lips though
Okay it's been a while since I was impressed by something in this space (Well ok Hailuo AI was last time, but before that, really nothing for whole 2024. Flux was nice but nothing revolutionary.)
But this is really impressive!
👋 hi
Funny
This is scary
SOOOOO many fingers
You know what they say: finger licking good. More fingers. More taste 😅
1:31 you? Here you have to overact. Most actors, especially A-list actors, don't overact (perhaps because of the Botox?). Runway is still run by outsiders from the film industry. You have to have made at least a few movies to understand the needs and create the tools we need. Until they understand this simple and basic rule, they will keep failing. FYI, most universities here and in Europe are adding AI coding (Python) to, for example, medicine or law, because only doctors can create AI for doctors. I'm just quoting a big paper on mistakes made in data entry or in information added to AI models by people without domain knowledge. One simple mistake tagging files wrongly could ruin the results.
I overact to show the face movement. Wouldn't be much of a presentation if I just spoke without much facial expression ;) That said, there is a lot of overacting in Hollywood too. It just depends on the lore. Jim Carrey built a life on it ;)
10 years from now, people will look back and laugh at how basic this is.
i really hope so :)
I.....HAVE your CHICKENSSSSS!!!.....
Maaaa baaabiiiies. Treat them weeeelllll
Not very useful for anything other than the subject facing camera or limited rotations off of front facing footage. Most animated projects require a free range of motion and subject/camera positions.
rome wasn't built in a day, my friend.
Why are you holding your mic stand? Is that an in-joke?
He mentioned at the start of the last video he did. But it's just so he can show off the T-shirts and because it looks more dynamic!
Every wizard needs a staff of power. The rod of the lord. The scepter of the emperor. I think it's a rooster thing ;)
Doesn't work for Rule 34, tried to apply Rule 35 and failed...
runway isn't 34 land ;)
JUST PAID A SUB FEE - THE GENERATE BUTTON DOES NOT WORK. DO NOT USE. ITS NOT READY
nonsense. just contact customer support.
I think this is very dangerous
Runway ML has safeguards.
Besides you can't stop progress
Everything is very dangerous. It just depends on how you use it. Take water: too much is bad, too little is bad, wrong kind is bad. You pretty much need a very specific type of water to not be in danger
what a wonderful plagiarism app.