Why are none of the AI influencers talking about the Gen:48 competition? It's really strange, as if it were systematic or something. There's a playlist of hundreds of videos made by participants from all over the world with some genuinely good ideas, but no one is showcasing them or talking about them. I already watched them all and even made a top 10 list of those hundreds of videos. What's going on?
I can't speak to any of the other channels, but for me-- well, there's just TOO much to cover. I try to focus either on new and interesting tools that are appearing, or tutorials on those things here. While I love GEN:48 (and wish I had more time to actually participate!), I think that Runway themselves do a good job of showcasing the work.
To be honest: Lots of tabs open. Start Generating, go do something else for a few minutes (usually generating somewhere else) in another tab, come back and rinse/repeat. I actually think of it a lot like cooking a big meal in a kitchen. Lots of pots boiling and an oven to watch over.
The Starship Troopers Hollywood movie WAS the Parody of the original book, which was NOT anti-Military, anti-Capitalist, nor anti-American. The movie was STUPID on many levels, such as human troops taking on the bugs WITHOUT the battle suits. In the movie they were practically FEEDING the soldiers to the bugs, with their weak weapons and zero armor. The original books were nationalistic and proud of military service.
The book was very boring and didn't age well at all. Got me all amped about super suits, then barely used them. It pretty much glorified fascism. The movie was a vast improvement.
Waiting for an AI which can generate a 4-minute-long scene with character consistency, lip sync, singing, voice cloning, realistic body motion, facial expressions, and longer video-to-video. Most importantly, unlimited access for $100/year (excluding the top 10 rich countries), because most users can't afford huge amounts of money.
It's long been a problem, and there are "solves" for it-- but, admittedly, they're always kind of duct taped together. We're getting close though-- in the meantime: more duct tape!
I thought they finally got that figured out?! Ugh-- Well, they just got some funding, so hopefully they can get the whole EU compliance thing sorted. Super frustrating-- sorry!
It'll get there. To be honest, I like V2V-- mostly because it lets you have fun with a camera, lights, and actors. I think it's just another branch on a rapidly growing tree. The real key is to utilize every branch!
I think it should be coming soon. Vidu actually managed to pull it off (to a degree), you can check that out here: th-cam.com/video/UB7mRFewxIA/w-d-xo.html
Sure thing, spend 7,000,000 dollars in credits for something half decent. Might as well pay an SFX team to do it properly; plus, you can get another camera angle. This is a scam.
@@caryvill003 For most users, Runway is barely worth paying for and I'm switching to Kling. I have their monthly Unlimited Generations Plan and they have many issues and errors daily, as many people have encountered. They must get their act together if they wish to compete with Kling and even MiniMax, which is 100% free for now but has FAR BETTER prompt accuracy in its video generations.
Detailed Breakdown of the technique mentioned in this video here: th-cam.com/video/hjk4a9gxILc/w-d-xo.html
That's hilarious you bring up "Roughnecks", aka "Starcheap Troopers", as I called it. I worked as an animator on that show. LOL. They wanted us to give them "matching movie quality" animation, but gave us hardly any time, and no budget, and a way smaller crew than they had on the movie, and no tech crew to write plug-ins to do things that Lightwave couldn't do. It was an absolute nightmare gig.
That said, we did the best we could under the conditions, and while not great, most of us were surprised how much better it came out than we thought it would.
You said lightwave 😂😂😂
I know the feeling, believe me. I did an independent, low-budget animation using Daz and Blender. I spent 2 years making 3 episodes.
Discovered the show recently and honestly blown away by the quality. Had no idea it was possible to make something with that quality and scope for television in those years.
I watched that show as a 19yr old and loved it .
At the time what were the cutting edge tools you used? What do you think of this new stuff?
Fun! You only need to buy a proper red female wig, and you'll also get the realistic hair movement. Game changer, lol. Those self-made cardboard attributes add a lot of joy to co-creating with A.I. Still wish I could speed up time and use the full potential of all these technologies combined. But eventually we will get there... Thanks for the update, Tim!
I work at the Starbucks on USC CAMPUS and have been preaching the gospel for AI to the fledgling Creatives, and this is The Cherry 🍒 on Top! Especially for all those nervous Actors and Directors.
I especially love the cardboard to Cyber Punk!
Just Brilliant.
I've been saying this for awhile as well: V2V is really the rosetta stone of AI Filmmaking. Namely, because it IS still filmmaking. Camera, Actor, Lights? It's all there. It just provides you with some extra production juice to realize a vision.
Oh, and hey: Go Trojans, right?
I'm finally now getting excited about video AI 👍 👍 👍
I swear, whenever I see you post a video, I feel genuinely happy, and when I see you talk, I feel excited, even if some of the AI news doesn't amaze me. I just love watching you. There are many AI tools that I’ve come to love and now use daily. Thank you for all your incredible efforts. I'm one of your Arab followers from Morocco.
This is incredible. Can't wait to give this a try
It’s pretty awesome! A little unpredictable, but that’s also the fun of it!!
Does it create consistent characters when the same characters are used in every video u feed it?@@TheoreticallyMedia
Video to Video with temporal and character consistency, outputing layered .exr sequences = AMAZING
😊
I love the surfer on fire though
Not gonna lie: I do too!
Gen-3's a monumental game changer that stands leagues apart from the traditional 3D art, video editing, and VFX tools of the past decade. Now, with almost any generative AI, a simple snippet of natural language is all it takes to bring complex visions to life. This is simply mind-boggling on many levels, beyond human comprehension and imagination.
Ghost noir hunting sounds like a really cool idea TBH. A '30s or early '40s setting.
This is awesome! Quickly remastering or remixing entire movies is exciting!
I think we're going to see some amazing things coming from this tool.
Amazing as always Tim - I need to get in there and drop some Heavy Metal animation clips (1981) and see what rabbit hole it takes me down.
Hey man! And yesssss, that is a brilliant idea! It’s funny, Heavy Metal’s animation is one of those things where, when I see it in my head, it looks crystal clear and flawless.
And then I’ll see a clip and realize it only looks that way in memory!
(Not to say it isn’t still amazing and brilliant, but those rose-colored glasses are powerful!)
best AI Channel!
Best Comment!
THIS IS WHAT I NEED!
Thank goodness.
Thanks for another good breakdown , look forward to transforming some old music track with this tool
I'll be honest, I wasn't all that excited about the feature when I first saw the announcement for it. But when you showed the surfing guy in lava/fire, that made me rethink what can be done. So good job there. But I have to give you one point of criticism. You went through the whole surfing dude scenario and talked about putting it to 0.75, and then you showed what 0.25 and 1.0 looked like, but you never actually showed what the 0.75 looked like. So disappointed. :P But seriously, great video. Thanks!
Haha, noted! Sometimes I get carried away with maxing out the sliders, and I loved the Fury Road/Waterworld one so much, I couldn't wait to get to it!
Subbed. very useful content. Thanks!
ok, i think this is starting to get useful now
YES! I mean, again: big steps from where we were just last year in the video!
@@TheoreticallyMedia I'm noticing Runway always changes my skin tone. I am very, very white and it's quite frustrating.
It is looking like one possible future of filmmaking could be using a really quick and easy storyboard generator (with huge libraries of objects) for defining shots and composition, then running that through whatever Gen3 becomes to create the final output.
Awesome Tim! Thank you
With the cardboard props, in some cases one could use a video layer in an editor to add things that weren't in the original film clip, and then pass it into Gen3. It's interesting how image-plus-text-prompt competes with video-to-video.
I mean, things start to get interesting with video-to-video. I wish I could bring in extra material, for example a reference picture that gives Runway an idea of the scenery I'm looking for.
That should be on its way. That was something you could do in gen1, so I’d wager highly we’ll see it in an update
My opinion is that we should have the option to replace backgrounds only, to help with VFX composites. I have tons of green-screen shots to process. The camera track is great, but the resolution has to improve to avoid cropping. And why is Cinematic not an available style?
It's basically a ControlNet for Gen3. Promising!
That's v2v in a nutshell to me. A LOT of control! To me, it really is the rosetta stone of AI Filmmaking.
@@TheoreticallyMedia Can't wait to test it!
Thank gosh! Finally. :) respect gen 3!! Will there/is there already a video gen from a user input first frame to last frame?
Edit: just saw the starship strippers… WTF!? That’s so powerful. Dude, now anybody can remaster/remix any video content available… this is POWERful! 🎉
Tim, thanks for all you're doing for the community! Could you please add a link section in your description box with the GPTs you've written? I tried the search feature on GPTs and couldn't find anything you've made.
Looking forward to the next wave of graphics cards which upscale graphics to photoreal... be cool bringing older games back to life, and in different genres.
I've got some really cool stuff on AI in Gaming coming up later this week!
I hope they bring character animation from images, and the stylization of videos from a reference image.
That should be on its way at some point!
Vid to vid looks very nice
It’s a LOT of fun too!
Any chance you would do an episode in which you use props to make something amazing with video to video? Love to see what you could come up with.
This looks good, and it's given me a few ideas. Been ages since I tried out Runway. Also reminds me (bit off topic): has Kaiber had any updates recently? Not used that for a long while either.
I’m hearing rumors they’re on the verge of something. As soon as I hear, I’ll be all over it for you!
@@TheoreticallyMedia Look forward to hearing about what they been up to - thank you :D
Would love to support runway and join the basic tier for just some fun and tinkering, but it works out to basically $2 per generation last I remember checking.
F**K THAT !!
Once it gets down to the realm of about $0.50 per generation, I'll think about it
They were supposed to be revamping their pricing-- I'll check in with them on that. And by revamping, I do mean lowering, not raising! ha!
The $76 plan has unlimited usage at a "relaxed" rate. What do they mean by relaxed? How long on average does it take to generate a 15 second clip?
Relaxed is the lower queue-- so, basically, it takes a little longer to generate. It isn't by a TON though. I actually run relaxed a lot. I'll shoot off a few prompts, move tabs, and when I come back, they're done. Rinse and repeat.
Also, in those other tabs, I've usually got Luma, Kling, and Minimax going...so, really: it's like a lot of pots on the stove!
I can’t really explain how infuriating it was to watch you put that garbage in and expect anything other than garbage to come out. You uploaded the good clip to the bad video generator, made it a bad clip and then uploaded that to the good generator, and then complained about it. Their arms are stuck together bc the arms are stuck together in your charming garbage.
You're as bad as me 😌 attention spans in the gutter haha
"This guy basicly working at a vapestore now" 😂😂😂😂😂
Haha, “Tell the gaffer to cool it with his massive clouds! Bro is out of control!”
The retro preset is for Twin Peaks :D
YESSSSSSSS. Yes it is! Ok, off to go make some Ghost Noir Detective Story set in Twin Peaks!!
Dude I feel like we’re having a sordid love affair as my eyes always light up when you pop a new video
Haha, I'd say "don't tell my wife" but she'd probably just say: "Finally, can you talk about the AI stuff to someone else then!" haha
Waiting for the day when this is possible in ComfyUI, along with LoRAs.
I can't wait for the first blockbuster movie made by a solo, one-person full production.
I've seen some stuff super close to it already. I'd say-- next year?
"Working at a vape store' LOL
Jimmy the Gaffer is blowing CLOUDS!! Also, the whole set smells like Watermelon Jolly Ranchers.
@@TheoreticallyMedia 🤣
Does the Gen3 vid-to-vid handle blank green screen or regular footage better? Ex: I see the AI uses materials in the original shot and enhances them, but if I'm in the shot and don't want my own face changed, is that possible?
Starship Troopers is great!!!
Video to video is limited to 15 second clips. If I have a longer clip, can I break it into multiple 15 second chunks and get a consistent look if I use the same text prompts and seed?
Ten seconds...and sooooorta. I had a longer bit in there but cut it for time. Basically, you kinda can do it, and the character will be more or less the same, but the background details might change.
Best bet I found: keep one shot at ten seconds or less-- then cut away to something else. When you go back to shot "A" (second shot), I don't think most of the audience will pick up on the little stuff.
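If you do try the chunk-and-stitch route, a tiny planning helper might look like this (plain Python, purely illustrative; the actual cutting happens in your editor or ffmpeg, and `chunk_times` is just a name I made up):

```python
def chunk_times(duration_s, max_len=10.0):
    """Plan cut points so every chunk fits under the model's per-clip cap.

    Returns a list of (start, end) pairs in seconds; the last chunk
    may be shorter than max_len.
    """
    chunks = []
    start = 0.0
    while start < duration_s:
        end = min(start + max_len, duration_s)
        chunks.append((start, end))
        start = end
    return chunks

# e.g. a 25-second clip becomes three chunks: 0-10, 10-20, 20-25
print(chunk_times(25))
```

Then run each chunk through V2V with the same text prompt and seed, and cut between them on an action or away-shot so the small background drift is less noticeable.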
@@TheoreticallyMedia There was a community project on TH-cam where people recreated Star Wars, one short clip at a time, using any style they wanted. Could the same be applied to a generative AI film?
this channel needs runway gen3 for a new background
Uff, I see a lot of potential law issues coming up with this new tool. But yes, it is really cool!
We'll see. Not a Lawyer (obviously), but there will for sure be some legal headaches for a bit. The thing is, ultimately-- It's not going to matter. Considering that a number of other countries have already OK'd all training data (Japan being one)-- the US will have to sort it out, or totally lose the AI race.
I've been kind of sour on them...mostly for the overbearing censorship, but this is some interesting stuff.
Any speculation about the frame rate of the transformation? Older-style transformations were strictly 8 fps interpolated back up to 24.
Sharp
It's lookin' good!!
Ooo can we take claymation to live action ? 😮
I...think? That's actually a really cool idea. My guess is that it'll end up possibly looking fairly insane? But-- That also might be super cool! gonna go dash off to give it a shot!
what prompts did you use? Share the prompts please
I’m putting together the GPT that’ll help with all that. In the meantime, try out the presets I mentioned in the video, but experiment with modifying them.
I’ll have a bigger breakdown later. Just ran out of time today
The 10-second cap is a problem. These people have to understand that movies or short films have longer shots where actors take their time to deliver lines. Imagine running Gravity through this process, or the Snake Eyes opening shot. How do you handle problems like this? Domo can do 30- or 40-second clips, which proves it's possible to go longer. Perhaps it's high time for us, the users, to start asking for longer shots, meaning more R&D in that department; otherwise none of these tools can be taken seriously for professional use in films. Music videos and commercials are a different industry. My 2 cents on the matter.
For sure agreed...and it'll get there. I will say that most of the films/shows that exceed 10 second shots are the exception, not the rule. Gravity, Dune, 2001, Blade Runner-- All the stuff we love. Those films "get away" with it-- but, generally for MOST films/tv-- you're under 10 seconds per shot.
Again, that said: I still want 30 second shots too! We'll get there!
The $15 monthly plan gives 625 credits, but a 3-min clip costs 2k credits. Looks good, but I think it's a bit expensive.
How do you think this will compare with Adobe's Firefly Video?
Wait a second... you have to buy the app and then you have to buy each exported project?
It's crazy that Dreams got mentioned lol. I'm sad that game died
Same-- I mean, it's still out there-- my kids just discovered it (as they've taken over my PS5 anyhow), and have been rolling through all the created games on the platform. They just found The Backrooms, actually. So we've been having some fun playing that.
I'm always trying to get them to make a game on it-- but so far, no luck.
Yes, this is very good, so better versions of some videos can be made.
I think-- just for one, it has a great use case for revamping some of those old/lost 90s CGI shows. To be honest, Roughnecks probably wasn't that great-- but, I'd love to revisit it at some point with a fresh coat of paint!
@@TheoreticallyMedia Some older ones, sure. It can be like a new direction: instead of upscaling, you update the aesthetic. I'm sure if you run it 3 times it will get even better.
Waterworld is a fantastic movie; it gets way more hate than it deserves.
Haha, I know it was kind of a punchline when it came out, but I think over the years it's sort of found its core audience. There was for sure some really nice world building in that movie-- Actually, one of my favorite Costner Epic-World movies is The Postman...which also gets a lot of flack.
So, I get it.
The only thing I really care about is: when can I just take the stuff I worked really hard on in 3D animation and have AI replace the rendering process in a seamless, consistent way without "looking like AI"? I want that level of control and pride in the work; then I'm fine with AI.
I think you'll see that anytime between six months to a year.
@@TheoreticallyMedia That's what it feels like. I really like the idea of what it could do to the process. I really think the missing piece is giving AI the level of control you have with 3D architecture. Like 3D assets made of AI data instead of the various 3D formats. Octane Render, I believe, is working toward this, as is Unreal.
It also does not remove the need for hands-on artists making detailed assets.
"looks like a Mazda"
Said someone who's never seen a Mazda before.
can we make "arm sausage" a thing?
2:20 The lipsync issue. What's the solve for fixing that?
Oh, sorry, that whole section got cut. Basically: when you’re doing V2V, over-act everything. Go big in your performance almost like you’re a theater actor.
The models don't react well to subtlety. You gotta be really obvious with your mouth movements and expressions
Looking for one with which I can create animated movies + lip sync. Any clue? Thanks in advance 👍
we remade a movie with this new video to video: th-cam.com/video/prHIW-Pg3oc/w-d-xo.html
What software is best for animating 3D female models? There are some highly convincing examples on the internet.
Runway is too censored. Is there any other option out there for video-to-video that doesn't censor content?
So is this like restyle
pretty much-- but video!
I just can't get over the fact that the mobile infantry dude may or may not be doing his part.
Everyone else said they're doing their part, that's all I'm saying..
This is cool, but I'm not as excited about video to video as I am image to video. I want more controls in image to video. The main benefit for me of AI is so I don't have to go out and shoot any video. I guess AI video to AI video would be interesting to change styles or take bad AI and make it good AI or take video my friends post and turn them into clay as a gag. I guess once this meshes into part of a complete workflow, I can see some advantages.
I think continuing video will be far more interesting than just applying/transforming a different style to an existing video.
Actually, I got lip movement from my own cell phone recording; I just didn't get the sound. I also used a Blender render I made a while back and ran that through, and because the render was part of a challenge where each render was eventually turned into a montage and transitioned from one render to another, I was able to make something like this: th-cam.com/video/LB3WQzcPlb8/w-d-xo.htmlsi=5-2lq3c37lYq6n_p
Since 2023 I've been hearing "this is the worst it will ever be," and it's STILL UNUSABLE CRAP (and Runway is charging a great amount of cash to make you spend money on CRAP). There should be a law that if a clip doesn't work, you can't get charged for it. This business model is ridiculous. Who the hell wants to pay for trash that doesn't work 60% to 70% of the time?
Sir, this is a Wendy's. I only asked if you wanted fries.
Lol, gotta love the nobodies expressing complaints that nobody cares about, lol
I mean-- you are saying since last year. Might want to give it a minute.
Now, that said: I did like a thing that Vidu (almost) got right: A Generation at either low or no credits, which is low-res and not really useable. Then, if you like it, you can spend the credits to bring it up to spec.
That's a good system. I say "almost" because Vidu still charges you practically the same amount of credits. Don't love that.
OOOh, can I get a frosty?!
@@TheoreticallyMedia Be bout tree fiddy
Now we can use old video games to create new movies with stable environments.
Why are none of the AI influencers talking about the Gen:48 competition?
It's really strange. As if it was systematic or something. There's a playlist of hundreds of videos made by participants from all over the world with some genuinely good ideas, but no one is showcasing them or talking about them. I already watched them all and even made a top 10 list of those hundreds of videos. What's going on?
I can't speak to any of the other channels, but for me-- well, there's just TOO much to cover. I try to focus either on new and interesting tools that are appearing, or tutorials on those things here.
While I love GEN:48 (and wish I had more time to actually participate!), I think that Runway themselves do a good job of showcasing the work.
How do these ppl find the time to do all this :)
To be honest: Lots of tabs open. Start Generating, go do something else for a few minutes (usually generating somewhere else) in another tab, come back and rinse/repeat.
I actually think of it a lot like cooking a big meal in a kitchen. Lots of pots boiling and an oven to watch over.
5:36 were you Gen-iun3ly shocked? sorry
👋 hi
Heya Louis!!
The Starship Troopers Hollywood movie WAS the Parody of the original book, which was NOT anti-Military, anti-Capitalist, nor anti-American. The movie was STUPID on many levels, such as human troops taking on the bugs WITHOUT the battle suits. In the movie they were practically FEEDING the soldiers to the bugs, with their weak weapons and zero armor. The original books were nationalistic and proud of military service.
The book was very boring and didn't age well at all. Got me all amped about super suits, then barely used them. It pretty much glorified fascism. The movie was a vast improvement.
Waiting for an AI that can generate 4-minute-long scenes with character consistency, lip sync, singing, voice cloning, realistic body motion, facial expressions, and longer video to video.
Most importantly: unlimited access for $100/year outside the top 10 richest countries.
Because most users can't afford huge amounts of money.
Hi man I’m still here 😢
These ai companies need to figure out character consistency before they charge any money
It's long been a problem, and there are "solves" for it-- but, admittedly, they're always kind of duct taped together. We're getting close though-- in the meantime: more duct tape!
no skyglass in europe 😥😥
I thought they finally got that figured out?! Ugh-- Well, they just got some funding, so hopefully they can get the whole EU compliance thing sorted. Super frustrating-- sorry!
Guys, I'm just starting out as an AI enthusiast making similar content.
Would appreciate your feedback!
Honestly it is almost jarring to see. My brain has a hard time computing that drastic of a scene change so rapidly.
Video to video is basic. AI needs to learn how to create these from scratch; that's the next leap forward!
It'll get there. To be honest, I like V2V-- mostly because it lets you have fun with a camera, lights, and actors. I think it's just another branch on a rapidly growing tree.
The real key is to utilize every branch!
This all looks so creepy though..
Looks like it desperately needs character consistency.
I think it should be coming soon. Vidu actually managed to pull it off (to a degree), you can check that out here: th-cam.com/video/UB7mRFewxIA/w-d-xo.html
i still think it's far away from being worth what they are charging
Oh man you stole footage and showed it online. Don't steal footage. Interesting video please don't encourage theft.
Sure thing, spend 7,000,000 dollars in credits for something half decent. Might as well pay an SFX team to do it properly; plus, you can get another camera angle.
This is a scam.
It's like $1 for 10 seconds. Where you getting 7M 😅
I tried a lot, as AI video is my interest, but I'm not happy with the skin tone.
I will stick with Sora for the time being. Thank you for the video anyway.
Good luck waiting for Sora.
Have you (or anyone other than the select few) actually used sora? 😂
How can you stick with something that's not out yet lol
Is that stick with, or wait for? Honestly, jump in now! This is when the pool is fun!
@@caryvill003 For most users, Runway is barely worth paying for and I'm switching to Kling. I have their monthly Unlimited Generations Plan and they have many issues and errors daily, as many people have encountered. They must get their act together if they wish to compete with Kling and even MiniMax, which is 100% free for now but has FAR BETTER prompt accuracy in its video generations.
But these tools don't allow it for free users, sadly... also a mad stupid marketing strategy...