Head on over and check out Luma Labs Ray 2 model here: lumalabsai.partnerlinks.io/dd1jzuzx6o87
Great breakdown, sir! Thank you for spotlighting my video!
Always a fan, sir!! Keep up the great work!
Thanks for always keeping us updated!
2 video requests: Best image models to use for image to video and an update video on Veo2!
Best I2V is a tough call these days. Ray2 here is doing some pretty wild motion swings in the T2V arena, so I think it'll bring some big movement to I2V when that is released.
Currently, I'd say maybe a tie between Minimax and Kling 1.6. I generally test a lot on Minimax, but I spent some time yesterday rolling in Kling and was really surprised with how good 1.6 has gotten.
No word on Veo that I've heard. I'll poke the bear and see what we get!
@@TheoreticallyMedia Thank you! Do you have any recommendations on what IMAGE model to use for Image to Video? A lot of my Midjourney images have been struggling on the I2V models; wondering if you've had better luck with something other than MJ?
Ray 2 looks amazing! Thanks for sharing.
1000%! It's really great to see Dream Machine in 2.0 mode!
I would also like to mention Vidu; they also released a new model, 2.0. The img-to-img transitions are nearly seamless, and though their credit system is a little costly, the results are really impressive. Very underrated video generator, btw.
I've been playing with it all day. Video tomorrow. It's...yeah, it's really good. Really good.
Excited for that video
7:54 is, in my opinion, the most impressive example. The way the brush splays out and the paint gets distributed and colors the textured canvas in shades of blue, with some light reflections where the paint accumulated.
The painting stuff is always super impressive to me as well. There's a lot of nuance the model has to keep track of-- including WHAT is being painted.
Agreed: That's a great output!
Wow, I completely brushed over that one (no pun intended), because I actually thought it was a real video lol
Best AI news channel 😄
Appreciate it!! Also, kinda making me blush...
Haven't tried Luma before. Now I will! Thanks, Tim!
Lovely, thanks again ..🙏🥰
The best part is the commentary 🤣🤣🤣
Ray 2 is VERY Impressive and loved being in early access to try!
I got in a little bit late, so I had some catching up to do, but yeah-- really impressive model! Can't wait until we have all the other fun Dream Machine toys added in as well! This should be a beast!
Nice breakdown of the latest Luma Labs update
1:25 We now need "Tiger in a blue business suit walking down the street"
Yeah, we're really gonna need that image-to-video model, and I wanna see how well it keeps its coherence to the image
that's the big thing! I think given what I'm seeing with the T2V model, it bodes well. But, proof in the pudding, so to speak. Hopefully we'll know soon enough!
Always top tier content, Tim, thank you. What would be your best workflow recommendation for making a short, traditional continuity film with consistent characters and backgrounds, if remotely possible at this stage? If you have a moment - thank you.
It is! To me, the best play is to do Video to Video-- and then use stylization passes via Domo or Viggle. You can utilize an app like Skyglass (which acts as an auto green screen) for your backgrounds. From there, obviously 11labs for voices.
If you're wholly using AI Image to Video, it's still doable, but a little tougher. It all comes down to generating consistent images of actors and locations. Tough, but not impossible. A lot of Re-rolling.
From there, I'd say generate in Kling or Minimax, and then use Runway's Act-1 to provide the "acting" aspect.
You'll have to make some concessions here and there-- but y'know, you have to do the same in traditional production as well! It's all about rolling with the punches!
@@TheoreticallyMedia That's a superb appraisal, Tim, thanks very much indeed. Very helpful and specific.You're a star.
Thanks for this ❤❤❤❤❤❤❤❤❤❤
Great video, I had a really bad experience with Luma's renders so far, I hope they got better. BTW runway released the 4K for 20 credits option, and it is REALLY good
Vidu 2.0 is by far the best!
Playing with it right now and working up a video for tomorrow. Yeeeeeah, it's pretty good.
Very cool!
Wow! By now, almost all of the results are good. Thanks for the great info.
Always great reporting for those of us just interested in following the developments in ai image to video.
Also, I fear for our "man in a blue business suit."
Physics are looking decent for Ray, but I think Veo 2 is still in front. Hopefully Google builds some great features for creative control.
It's going to be amazing when Luma releases 4K export and image to video!
Thx again brother!
❤❤
Holy moly
Very impressive. Probably the best TTV I’ve seen other than Veo 2. Every other model has terrible TTV, whereas their ITV is always better.
2:58 are we sure the bulls just haven't been drinking?
So beautiful ❤
I love that you have channel lore. 😃
Haha, it's getting DEEPER too!! One day we'll need a wiki!
I love the continuity and call back...
By the end of the year, you can actually bring ALL the characters together in a connected narrative and see which AI tool does it best.
2:55
😆 🤣 😂 Actually, that's rather fitting...... being that they just exited a saloon!!
We're still in the text-to-slot machine era unfortunately. Would love to see some competition to Runway's Act 1.
OK, so I was ready to go with LTX for character-consistency pics and Kling for the videos. Now I have to pause and test a new version once again.
Remember I made a comment about the characters not leaving footprints when they walk on sand or on the moon? Nice to see AI paying attention to that.
I DO remember that comment! Haha-- uhhh, can you now make a comment that AI Generated Lottery Tickets should have winning numbers? Yeahhhh.....that's the big score!!
@@TheoreticallyMedia HA HA, that would be nice, but if they invent a way for me to have consistent image-to-image generation, it would go a long way toward giving me the tools to publish my own comic books. Then I won't need the lottery 😄
3:04 actual lol with snort
Like Sora, Veo 2 will not be released until other models catch up with it.
Hopefully not! Man, I want them to get that out there!
@@TheoreticallyMedia OpenAI and Google are scared that they'll get in trouble, but let's just hope...
Were the prompts for the animated scenes the same?
They were. So, yeah: consistency is still a big issue on the style side with Ray's T2V.
I2V will obviously stomp that out.
@@TheoreticallyMedia yeah it still looks a way off. Thanks for the reply.
And Ray 2 does have image to video; I've been using it in cpp
Are you sure about that? I'm in CCP too-- you can upload an image, but if you look at the settings when you do, it reverts to 1.6.
That said: you can kind of extend the 2.0 video as well-- it's just that on the extend, it reverts to 1.6 as well.
What does the keyword "Tarantino" mean at 2:22?
Tim, you've got so many subs now that I find it difficult to be one of the first commenters on your vids........... but that's a good problem to have (increasing subs)! 👍 👍 👍
Haha, I'll send you a chicken dinner anyhow!
Man in a suit Godzilla was the best, especially when Bobcat did it
Haha, I forgot about that! Pour one out for Bobcat!! Wait...Bobcat is still alive?! Man, I seriously thought he died years ago. Well...THIS IS A GOOD DAY!
@@TheoreticallyMedia I hope he's still alive. Now I'm gonna have to pay google a visit and keep my fingers crossed.
@ thankfully he's still around!
I asked them how we can RENAME the boards. Is there a way?
Uh, I haven't seen one. I just checked. I'll bug them about that. That seems like an easy fix.
(Ahem, says the guy who does no web dev. "I'm sure it's totally easy!")
@@TheoreticallyMedia yes because then we can organize our projects by title…. Our own titles. Thanks
So they're probably working on text (prompt) + dialogue to video, right?
2:30
That actually seems impressive to me
The footprints? Yeah-- super impressive. The first one kind of "fades" too quickly-- but, still-- I don't think anyone would actually notice it if it were in a project. That's just me being super nitpicky.
You mentioned "stutter step". What's up with that? I've seen it many times.
Have you made anything about the Switch 2?
Do you think more platforms will ever go the way of an unlimited generations plan like MiniMax has?
Or will the opposite happen, and MiniMax will eventually do away with its unlimited generations plan?
I also have a plan with Vidu but it's frustrating watching 24 credits be wasted on a video that didn't come out the way I hoped.
With MiniMax I can just keep tweaking until I start getting what I want.
I'm taking advantage of these AI image to video tools to make content. Luma Ray 2 looks impressive, I might consider using it again after having bad experiences with it before.
Tim you didn’t mention how to fix the lighthouse in the deserted island? 🌴
Ah, that part got accidentally cut! I didn’t realize until after! But yes, you can basically “chat” with Dream Machine to make changes to your video. BUT: it isn’t really Inpainting, more like issuing a new prompt without the hassle of writing a prompt. You can just say: “no lighthouse in the background” and it’ll generate a new scene sans Lighthouse.
But again, we’ve got consistency issues with it.
@@TheoreticallyMedia Ah I see. Cause in my mind I'm thinking that will create an entirely new video with different subject/environment, but perhaps not. Also, I've been wondering when they'd bring out an inpainting feature for video generation models. Might be kind of difficult keeping that coherence in just that masked area throughout each frame. But hey, they've gotten this far so fingers crossed.
Didney looking at all Pirates of the Caribbean references be like 🧐
Haha, I'm sure. Meanwhile, the actual character of Capt Jack is probably all: "I like it!"
It's close to veo 2
It's a little more decoherent when it gets wild with the camera as opposed to Veo2-- BUT, I'll say that it also takes some really good swings as well.
I'll say: When Ray2 hits (like with Pirate Gal), it's batting at the top end of the current video generators
Right now, Ray 2 is at a 5-hour wait. Their servers cannot handle all the requests.
ooooof. Between the main platform and all the APIs, it doesn't shock me though. Still...ooof.
But, that is the update day surprise. Pretty typical-- I'd say give it a day or so to let the GPUs cool off and things should stabilize.
@TheoreticallyMedia I am sure you are right. That is what happened last year when they first came out.
"Captain Jill Sparrow" was right there. So close!
This changes everything forever. I quit TH-cam and leave internet.
you can't!! Tomorrow everything is going to change! We need you here for that!
Yeah, and then live a life of regret by missing mankind's most insane opportunity in all of history.
@@Cyborg_VideoEditor LOL, can't stop laughing at the post and its replies!
@@BodySplendorChannel I'm Just a Bot. ;)
@@TheoreticallyMedia we desperately need a 1 minute video generation that has Character set up for actors & a background so you can actually have consistency
3:42 Speaking of Jane Sparrow... probably because AIs like this are all trained on copyrighted films and videos. And what's probably the most popular video that comes up if you search for pirates? That's right: Pirates of the Caribbean clips or trailers.
Oh, for sure we got a bunch of Pirates in that training data. That said, I think Capt Jack wouldn't have a problem with it. It's all part of the code.
👋 hi
I think Veo 2 is slightly superior because buildings in the distance, especially skyscrapers, aren't morphing with camera movements. Ray 2 still morphs, but it's slightly better than the current-gen best, Kling 1.6.
Made my characters weird; objects and buildings are fine.
What I don't get is why these AI platforms only give access to certain people. I see every damn person flaunting Google Veo 2, and they haven't given access to the entire damn world, so I can only assume Luma is doing the same damn thing.
Nah, this is out. You can use it right now.
They need to test with smaller sample groups. It's like beta testing to find bugs and edge cases before releasing to the general public. They gradually release it to larger sample groups until they can determine that it's safe and functional to release to the world. Very typical of the tech world, actually... not surprised at all.
@@TheoreticallyMedia oh I'll give it a try again, earlier at the time when I watched this video, I hadn't yet gained access to this.
The problem with these new updates is that nothing is special anymore. For Text to Video, MiniMax is in the lead by a mile; once they do 1080p and lip sync, it's game over for all the other sites. People say Kling, but I disagree: Kling is quite average.
Agree and disagree. Minimax is the best at following the prompt, but its prompt adherence is so aggressive that characters tend to overact, and it's difficult to control the sweet spot you'd like to hit. Kling is just average at prompt understanding, even with their latest 1.6 model. I bet the sweet spot between Minimax and Kling would be the best. And you can't ignore the quality of Pixverse V3.5. It's really good, with great potential, in the same league as Minimax and Kling.
@ Yeah, all of them have pros and cons! Minimax is my fave at the moment, and I haven't really used Kling. I will have to test out Pixverse.
Chapters 3, 8, and 10 have a strange humming in the soundtrack of your video.
So, it was a mess of a day when I shot that. Outside there was a crew chopping down a tree (with chainsaws and wood chippers going all day) and in the house, there was a guy sanding a wall. Haha.
Yeah, it was a loud day here.
So, the audio is a bit all over the place-- but, the video DID get done!
Can you do gore? And celebs? Or is it censored?
Gore is kind of in that typical AI Video level of gore. Probably just north of PG-13-- but, if you count typical AI weirdness Body Horror, you can probably get to a level that would freak out John Carpenter.
Can't prompt for celebs. Which, to be honest, is fine by me.
You know what model is getting long in the tooth? Ideogram 2.0.
They supposedly have a big update in the works. Yeah, I hear you...I'll say: They're still the king of AI Text though. No one has come close to them on that front.
So…I can use this to clone Margot Robbie? 😯
Haha- Pretty much!
strange
this is 1.6 i asked them there is 1.6
I still prefer Kling 1.6
She has two left feet ??
I have tried and paid for Ray2. As a normal user, I don't have unlimited or as many credits as you. Ray2 is the most disappointing video generation model I have ever seen. It almost never hits. Even when I followed your previous video for the prompting, none of my generations actually HIT. The physics, resolution, and camera movement are just all wrong. I compared it with Sora myself, and Sora always outperforms Ray2. I pay over $100 per month in total for different AI tools for my job and personal tests, and Ray2 is the worst money I have ever spent.
Don't fix your video! Shipwrecked on a deserted island with a tiger _AND_ a lighthouse makes it a lot more interesting. I mean, who puts a lighthouse on a deserted unknown island in the middle of nowhere? Gotta be a story behind that.
Make a video about face swap
:}
Another Cloud affiliate code video.
Compared to kling 1.6 this is pretty bad.
Mediocrity rules the day.
Annnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnd it still sucks
And 1.6 is just as pathetic compared to the competition.
TLDR... Luma is rubbish
My suggestion: please don't review stuff that is not out. I have a limited amount of time on earth, and I go searching for sh!t that doesn't exist and then I am exceedingly fucking mad.
Its out. It's still in beta, but it's out. You can go use it right now.
@@TheoreticallyMedia OK, THANKS. their interface is ridiculous, I can barely make sense of it
Their interface is crap. Their outputs are pure crap. Damn it. Thanks, Tim, but Luma is trash.
Looking forward to your Vidu 2 review. Naegiko - AI Animator has a great clip he used for demoing its anime ability th-cam.com/video/26oWZy6BI5k/w-d-xo.htmlsi=jLxIu5hN_IR9VyD7