I'm getting it to produce exactly what I want by first creating an image and uploading it. I use the same prompt for the video as I did for the image. It is adding in my light reflections, ray tracing, and all of that. It listens to my camera angle and pan/zoom settings, but my issue is the duration and quality.
"creating your video first"… where?
@@5timesm He said image, not video
Yup. I think consistency in prompting helps. Duration will come-- Gen-1 started at 4 seconds before getting bumped up to 15. And quality will follow suit. I actually think there have been improvements to Gen-2 since the early Beta. It actually looks a lot better to me already.
(There was a rumor that the Web Version has been upgraded-- no idea if that is true, but there does seem to be a slightly cleaner look)
Do you mind sharing a prompt example you used? I'm having trouble getting camera movement and fast character movement.
🎯 Key Takeaways for quick navigation:
00:00 🎥 *Introduction to Gen 2*
- Overview and tutorial of Runway ML's Gen-2.
- Web UI version and differences from Discord UI.
- Introduction to the key interface elements.
01:22 📝 *Writing Effective Prompts*
- Formula for writing effective prompts: style, shot, subject, action, setting, and lighting.
- Tips for using keywords in prompts.
- Importance of keeping character descriptions simple.
03:02 🌟 *Prompting and Output Examples*
- Demonstration of prompts and their results.
- How locking a seed influences consistency.
- Exploring different prompts and their outcomes.
06:11 🖼️ *Using Reference Images*
- Experimenting with reference images in prompts.
- Challenges and adjustments needed for specific actions.
- Incorporating reference images as storyboards.
08:15 📈 *Upscaling and Output Quality*
- Comparing image quality between free and upscale versions.
- Benefits of upscaling for higher resolution outputs.
- Mentioning differences between Discord and web-based versions.
10:34 🧑💻 *Additional Resources and Patreon*
- Announcement of Patreon for community support.
- Encouragement to join a smaller community for collaboration.
- Future plans for the Patreon community.
Made with HARPA AI
Just got their Ultimate plan...yes, $100 a month...but I think it'll be well worth it. Always made some cool looking videos and can't wait to create more with Gen2! Great tutorial.
Thank you! I also did a video featuring the new update here: th-cam.com/video/k5CC_vg4Jqo/w-d-xo.html That $100 will be well spent, I think!
Please, how can I make a consistent character in Runway? I already generated a character of a man, and I want to use the face of that character in other scenes of the movie.
Thanks!
Thank you so much!!!
Another great video. Thank you. Thanks for showing not just your prompts, but the layout for us to use.
100% Thank you for watching! And yeah, I always like to showcase the logic behind a prompt. Teach a man to fish, and all that...
Also, I just realized I'm hungry...
You know what's crazy? I was playing with Gen 2 this morning by uploading an AI avatar of myself in Pixar style to Gen 2 as an image prompt, and I luckily used a very similar prompt structure to the one you mentioned - style, shot, subject, action, setting. I didn't include the lighting, though, and this was my first ever attempt at using Gen 2; I was actually impressed with the result. I mean, it had deformed hands and all, but I was generally happy with what Gen 2 gave me. I compare this to early versions of Midjourney - I bet in a year (or less) we're going to get some pretty fantastic results. Thanks again for your awesome video and tips ❤
Excellent to hear! And although I didn't really go over it here, I do think Gen2 does particularly well with animated looks. I think our brains accept the video a little better when you have a stylized look, as opposed to the "rubber people" that are fairly common with Gen2 in its current state.
I don't know if you caught it, but I had a pretty fun workflow with taking Gen2's output and popping it into Kaiber (plus lip sync for dialogue!): th-cam.com/video/eUrtX432KUI/w-d-xo.html
Agree! 💯 A year from now will be 🤯 And we're discussing making videos from words. Ridiculous!
@@TheoreticallyMedia I've got it saved to watch asap, after watching this video. Not sure how I missed it earlier!
A tutorial on using Runway Gen-2, from basic to advanced prompt writing. This video provides a very clear explanation of effective prompt writing, suitable for both beginners and professionals.
Appreciate you getting this up today. Why is today my busiest work day? All I want to do is play with the new toy!
Ha! Hope you snuck some time in!!
I started my journey creating AI content. I've had some fun using pan and zoom videos so far with midjourney, but will eventually try text to video if I can. Thanks for your videos. Take care!
It's a great time right now! I'm going to do a Gen-2 follow up next week. In the meantime, you might want to check out the video I did on Pika: th-cam.com/video/uLpuuRteU7Y/w-d-xo.html
@@TheoreticallyMedia signed up for the waitlist
Hello, Tim! I really like your videos, and I'd suggest you consider creating one guiding us through more involved storytelling for an ultra-short -- are you up for 1 min?? :o) I have just started with Runway G2, as I'm still trying to run Automatic 1111 locally, and that is hell. Anyway, are you getting "vanishing in and out" problems when objects move in the same zone? I didn't notice that in the NYC scene, but I didn't watch it closely enough. A typical prompt for you to get in trouble would be "cars moving in a busy street"... just fill in the rest as you want. If you figure out a solution, please do a video. I'll leave a note if I get any ideas about that. Thanks for the tip about "styling" with "fixed seeds". I'm hopeful in terms of where this goes!!
What I find frustrating in my experiments is the lack of precision on placement when it comes to prompts. E.g., if you want an element or character in your scene to be on fire, it will add the fire quite randomly to the surrounding area, not tight to any given region you try to describe.
Syntax of prompts at mark 1:34:
Style
Shot
Subject
Action
Setting
Lighting
Correct. I should have put together a PDF for it. Maybe I’ll do that later and add it to the pinned.
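(For a concrete illustration of that formula, a prompt built from those six slots might read something like the following. This is a hypothetical example, not one taken from the video:)

```text
Cinematic film still, wide shot, an elderly fisherman, mending a net, on a fog-covered pier at dawn, soft golden backlight
```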
Thank you. I am a subscriber. My question is: does Runway Gen offer a place for negative prompts? I don't want any morphing into a whole new person; I want to keep the same character throughout the video. Thanks.
I have a doubt: can we add dialogue via the prompt and get an output where the actors are talking, with some kind of lip sync?
So, not quite, but ALMOST: I did a pretty insane workflow in this video-- it works, maybe not photorealistically, but you can see that it'll get there: Create Awesome AI Animation with this Workflow: Murf.AI, Midjourney, Gen-2, Kaiber.AI & More
th-cam.com/video/eUrtX432KUI/w-d-xo.html
Are there ways of combining elements from these generated videos with actual films? Would masking work? Essentially turning video parts into pngs and pasting them onto others, then editing lighting as though it is all part of the same video.
Oh, 100%. You’d be in the land of After Effects, or some other compositing software, but 100% doable, and I think the results would be amazing.
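For anyone who wants to experiment before reaching for After Effects, here is a minimal sketch of that masking-and-pasting idea, scripted with ffmpeg via Python. It assumes ffmpeg is installed and on PATH and that the generated clip has a solid green backdrop to key against; the file names and threshold values are placeholders, not anything from the video:

```python
import subprocess

def composite_over_footage(base: str, fg: str, out: str = "composited.mp4") -> None:
    """Key out a flat green background from a generated clip and overlay it
    on real footage. Assumes ffmpeg is installed and on PATH, and that the
    foreground clip has a solid green backdrop to key against."""
    filter_graph = (
        "[1:v]colorkey=0x00FF00:0.3:0.2[keyed];"  # knock out pure green
        "[0:v][keyed]overlay=0:0"                 # paste onto the base plate
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", base, "-i", fg,
         "-filter_complex", filter_graph, out],
        check=True,
    )

# e.g. composite_over_footage("real_scene.mp4", "gen2_character.mp4")
```

A dedicated compositor will still give far better edge and lighting control, as the reply above suggests; this is just the quick-and-dirty version.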
Is this the only way to bring my characters alive? Make them walk, smile? For a music video reel of about 30 seconds.
Great content! I just subscribed to your channel and turned on notifications. So I can make videos or shorts with my script or text, and Gen2 creates the video itself, correct? If I don't need to find copyright-free photos and don't need to take pics to put together to make/edit a video (which I've never done before), I'll pay right away through your affiliate link.
Awesome! Thank you for the sub! Ummm, pop over to the Discord here: discord.gg/6PMENW3k and drop me a line in Project Help. I think that'll be a lot easier than chatting through comments. I have some thoughts for you...
I love the effect on the city examples. It's looking choppy. How did you achieve that?
Quick turnaround on this video :P Well done.
Good info. Thanks.
Ha! Thank you! I wasn't actually planning on doing a video today, but some afternoon work cleared up and I was like...ehhhhh...I want to be lazy, but...Gen-2 is so much fun!
Does the color wheel in the corner get removed if I get the upgrade?
It does!
Posting before thinking never works for me. I forgot I can read.
Thank you very much, things are immediately better with your method 👍. Is there still an advantage to going through Discord, or is it the same now?
Excellent to hear! There's also an updated version here: th-cam.com/video/k5CC_vg4Jqo/w-d-xo.html
I still need to check the Discord generator, but I think that's mostly dead at this point. By the way, have you checked out Pika? I've been having a LOT of fun with that, and it is free: th-cam.com/video/0NRT7K3YkPI/w-d-xo.html
@@TheoreticallyMedia I'm looking at Pika; it's not bad at all 👌😱
So I'm trying to animate an image from Midjourney, but Runway just takes it as a reference rather than simply animating the image I'm happy with when I give it a prompt. Is there a way to stop it from generating new imagery and just take the image as-is, using the text prompt to inform the action?
Check out the latest video; I think you'll be quite interested: th-cam.com/video/k5CC_vg4Jqo/w-d-xo.html Short answer: You can!
Any ideas on what to type to make the characters look as if they're speaking? I've tried "man talking," "woman speaking," "people having a conversation," and can't get their mouths to move.
Yeah, that's a tough one-- You can try the new update they just pushed out by uploading a reference image and it seems like they start "talking" then. I think in general, AI actors start to motormouth if they try to talk. I've seen some WEIRD outputs with speaking AI characters.
@@TheoreticallyMedia Appreciate the answer, I'll keep trying. Thanks again.
Thank you very much! Although I am sad that we can only generate a video of 4 seconds.
The free version is 4 seconds. The other version is 15.
The 15-second one is just Gen-1, I think. But-- it won't be long until that cap is lifted. They already have a slider built in.
For now. Give it a month or so. One trick I was using was to slow the footage down in a video editor... it's a good trick if you need a longer shot.
@@BassmeantProductions The paid version is 4 seconds too; hopefully they change that soon.
Anyone know how to create video loops with this, ones that are "seamless" in time?
There’s a trick where if you screenshot your last frame of video and then feed it back in as an image prompt, you’ll get a continuation. But, the video quality will degrade.
Unless you were talking about a boomerang type effect?
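To make that feed-the-last-frame-back-in trick less manual than taking a screenshot, here is a minimal sketch that pulls the final frame out with ffmpeg via Python (an assumption on my part: ffmpeg is installed and on PATH; the file names are placeholders):

```python
import subprocess

def grab_last_frame(video: str, out_png: str = "last_frame.png") -> str:
    """Save (approximately) the final frame of a clip as a PNG, ready to be
    re-uploaded as an image prompt for a continuation shot."""
    subprocess.run(
        ["ffmpeg", "-y",
         "-sseof", "-0.5",   # seek to half a second before the end of file
         "-i", video,
         "-update", "1",     # keep overwriting the same output image,
         out_png],           # so the last decoded frame is what survives
        check=True,
    )
    return out_png

# e.g. grab_last_frame("gen2_clip_01.mp4")
```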
Thanks so much for this video!
Oh, excellent!! So happy this was helpful to you! Updated version here: th-cam.com/video/k5CC_vg4Jqo/w-d-xo.html
I need to learn how to make the characters move. I have hyper-realistic characters for a music video reel I am trying to create, but it's hard to give the characters human-like movement: blinking, walking, smiling, etc. Any tips? Thank you, guys.
Good video; however, anyone unfamiliar with Discord and what it does may get a little lost. For instance, you make a reference to using Runway on Discord, but when I looked up Discord it didn't give me any info on this and just appeared to be a platform for creative communities to chat and discuss work. Anyway, I just don't have enough knowledge to utilise whatever Discord is offering.
So, that is a pretty old video now (at least in AI terms!), and Runway is no longer on Discord, but rather is a fully fledged website that you can find at runwayml.com/
I haven't actually done a full tutorial on the site (I should probably do that), but I do cover a lot of the new features, like the Motion Brush: th-cam.com/video/zVO16lU3AQ4/w-d-xo.html
A lot of interesting stuff has improved since this video! I think you'll be quite pleased!
All my generations are slow. Is there a way to speed them up? Adding 'fast' doesn't help.
Thank you.
Thank you for the watch! How are you enjoying Gen2?
Can you change the aspect ratio?
Not yet, as far as I know, but I'm sure they're working on vertical. Seems like a no-brainer for IG/TT and YT Shorts.
Awesome vid! Great how much smoother the videos are now 🎉
It's crazy how much better it is getting. Did you see the video I did on Kaiber? In that one I used a source video I previously used for a Gen-1 test. I mean, obviously Kaiber blew it out of the water-- but I wanted to check to see when I posted that original video.
It was 4 months ago. Gen-1 was four months ago.
I...was shocked. It feels like that was an eon ago.
Hmmm, interesting. I feel the more expensive Kaiber results were better, especially for the skater one you finished your previous video with. Hmm 🤔 not sure if it's too early to use Gen 2 until its results are a bit better.
To be honest, I think the current best "look" for straight AI video is running Gen-2 through Kaiber. I did a video about that and I really liked the look.
Personally, I feel like there are interesting things you can do with text to video, but the really interesting part is video to video. I've seen some interesting results lately out of Gen1, and I'd love to take that on a head to head with Kaiber.
As far as the super trippy stuff? Can’t beat Kaiber on that! It looks so good!
@@TheoreticallyMedia Nice insights, friend... That would be a great video, to see how current Gen 1 fares against Kaiber in the text-to-video realm. I seriously looked into Kaiber last night and am really seeing its worth, especially as it improves... It's like MJ vs Leo... And Gen 2 is still unavailable to the masses, so it would be intriguing if we could see the pros of when to use Gen 1 in the meantime. And then there's that part of me that looks at the editing/workflow time investment and ponders just using unique stock videos and strategic transitions in Filmora 11, with no funky hands/eyes lol. Also... your thoughts on D-ID for avatar animation vs what you had to do here for animating? It's quite expensive for commercial use, though. Thanks for your efforts and comprehensive tutorials. ☺️
I saw a post where a person seemed to use Gen2 to edit their own videos. But I don't see a way to do that. Is that possible yet?
I don’t think so? Runway has an online video editor as a separate product, so maybe they switched over to that?
It'd be smart if Runway's video editor automatically had access to your Gen2 videos, but I don't think that's happened. Then again, I can't fully say, since I don't use it.
Are you looking for video editing software?
@@TheoreticallyMedia (my bad other channel reply lol) I’ve been using the desktop version. But maybe he was using Gen 1? It seemed way too good for Gen 1 so I assumed it was 2.
No kung fu? This is ridiculous! Ha!
Thank you for the video. 👍 I hopped on there the minute I got the email. I can't wait for 6 months from now.
Appreciate your videos as always.
Haha, believe me, I TRIED for Kung Fu! And a while back I tried to do a Samurai Jack animated sword fight, but no go there as well.
But, I think that we'll get there soon. I have a sneaking suspicion that Gen-1 footage is being used to train Gen-2-- so, I'm going to start rolling up some Jackie Chan movies now!
Thx for the vid...very helpful insights :)
Goddamn, this shit's going to get out of hand real soon. In June 2024 we'll look back at this moment with endearment: "remember when AI videos weren't yet absolutely perfect and you couldn't make full movies with a short sentence?"
I know, right? I keep thinking that the whole "wonky" look is going to be looked back on with nostalgia at some point. Like, someone will make a movie in 2040, and they'll have a flashback scene to 2023, and everything will look like Gen-2!
Actually, that's an awesome idea. Remind me in 2040!!
My first attempt was a travesty; I tried doing a 16mm 1980s family home movie... I don't know what those kids were doing, but that's not how you eat ice cream! Second attempt, I used an image reference from MJ v5.1 renders and a similar prompt and got very near exact results. The hand anatomy is messed up like old MJ. The video quality is terrible, but hey, this is very promising.
Totally. And agreed-- I think the overall video quality is around MJ v2? But... I mean, it won't be long until we're at v4 video and beyond.
Currently, I think of Gen2 as a fun exploratory tool, where you can make some fun short films. 2 years from now? The mind boggles.
I found a workaround for the low 4 seconds you get. I just create a new DaVinci Resolve project and set it to 16 fps, then import the video, activate retime control, and drop it to 50%. My workaround for low quality is to import the video into DaVinci Resolve, then export each frame as a .png, then upscale it with chaiNNer. Put the upscaled .png files back in DaVinci, then create the video. Problem is, that's a lot of work... Not worth paying at this point, IMO.
That's brilliant! But yeah, a lot of work too. Sometime over the weekend I'm going to plug some Gen-2 footage into Topaz and upscale it to see what happens.
You can also pass it through the Stable Diffusion upscaler with denoising strength below 50.
@@user-pc7ef5sb6x Taking Gen-2 footage and then post-processing it with other tools is what I'm really interested in right now. Yup, upscale it and stabilize it, then hand it to an After Effects wizard to see what they can do!
(I am an AE Caveman, sadly)
@@user-pc7ef5sb6x Problem is, I'm making a short film, and that would take forever. But yes, many options.
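For anyone without Resolve, the workaround described in this thread can be approximated in a script. Below is a minimal sketch using ffmpeg via Python in place of DaVinci Resolve; the upscale step is a placeholder for chaiNNer, Topaz, or any other image upscaler, and all file names are illustrative assumptions:

```python
import subprocess
from pathlib import Path

def stretch_and_upscale(src: str, workdir: str = "frames", fps: int = 16) -> None:
    """Rough scripted equivalent of the Resolve workaround: export frames,
    upscale them externally, reassemble, and retime to 50%.
    Assumes ffmpeg is installed and on PATH."""
    frames = Path(workdir)
    frames.mkdir(exist_ok=True)

    # 1. Export every frame of the clip as a numbered PNG.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, str(frames / "f_%04d.png")],
        check=True,
    )

    # 2. Upscale the PNGs in `workdir` with your tool of choice (chaiNNer,
    #    Topaz, etc.) before running the next step. External to this script.

    # 3. Reassemble at the target fps, then double every timestamp
    #    (setpts=2.0*PTS), which matches the 50% retime done in Resolve.
    subprocess.run(
        ["ffmpeg", "-y", "-framerate", str(fps),
         "-i", str(frames / "f_%04d.png"),
         "-vf", "setpts=2.0*PTS",
         "-c:v", "libx264", "-pix_fmt", "yuv420p",
         "stretched.mp4"],
        check=True,
    )
```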
Did Gen 2 change their model or something? Remember those weird A.I. commercials, like the pizza nuggets one? I'm not getting any of those results. 😮
Yeah. They’ve slowly updated their model and now a lot of those super surreal results have been slowly vanishing.
I said it a lot back then: we were eventually going to hit a point where the weird was hammered out, and I was going to miss it.
Looks like we’re getting there now.
thanks
omg you look like the James Bond character!!!!
Haha, maybe one of the villains! I also look super silly in a Tuxedo! (or at least feel super silly!)
Hey, dude on the video... guess what?
Like a knock, knock joke, I'm required to ask: "What?"
Gen-2 is awful.
Most of what it churns out based on the image you prompt it with turns out to be total garbage.
Maybe 1 out of 20 results is usable for anything at all.
I'm canceling after my first month runs out. It's not useful for anything at this stage. I even trained a model with 15 images, and it won't use the model when I put it in the prompt.
Are you on the Pika waitlist? You may want to try that out. I put up a video on the 1.0 update today. I think you might like it.
The models all do different things. I still say the real gem of Runway is Gen-1; that's the thing everyone is sleeping on.
@@TheoreticallyMedia Rolls eyes... what can any of this garbage be used for... I don't know... it can't even get a simple pan right... lol
th-cam.com/video/qFC0qdAUTgU/w-d-xo.html
Just wasted time. The generated video makes my pretty girl images very ugly! Disappointed in this app.
Yeah, it's early days. It'll improve as time goes on. Your pretty girl will one day shine in full motion!
Set lighting is a tad low, bud... let's see that movie star face! 😬
Ha! Shall do! I'm always aiming for that "moody movie lighting," but you're totally right, this is YT.
I'm not going to use the stupid ring light, though. I hate that circular reflection as an eye catchlight. It kinda creeps me out!
@TheoreticallyMedia That's fair... and I agree, moody/cinematic is great. But it's best to start with too much light and bring it down in post... you can still get the look, but with more detail. Get some light on at least half of ya for a shadow... it will look great. Take care 👊