That short film someone made on MiniMax using the new character reference feature is TOP NOTCH and is blowing up. I luv it.
Any more info on that to search for it?
which short film?
Very Nice ... the 3D stuff is ALMOST ready to be useful.
I think we’ll see it this year. I’m with you, it’s SO close, but still not quite fully baked.
Few more months in the oven, I think.
A major highlight is the introduction of DaS, or Diffusion as Shader, a 3D-aware video diffusion model, which represents a breakthrough in video generation capabilities. Thanks for sharing!
FaceLift looks really good, and it's already working with video inputs too, nice. Hopefully they can fix the jitter, because otherwise it's impressive. Hopefully it can work with more than just people too, because other single-image-to-3D options aren't nearly as impressive right now.
AI is getting crazy. Best time to be alive. Great video man, subscribed!
Actually, Tim, looking at the diffusion I couldn't help but think… it's Wonka Vision! 😂
Haha. You’re right, it IS Wonka Vision!
Where can I find that crazy video at 1:22?
You always bring the goods!!
"TransPixar ... The Pixar movie that is more than meets the eye."
You cheeky Tim you 😄
I am on the unlimited Runway plan, but I can't see the 4K upscale option. Weird.
Same. I don't see it.
Tim, long time fan and thanks for all the great content. I watch all my YT vids at 2x but yours is the only one I slow down just to catch all your charm in delivery. Really great stuff, thanks again!😀
He does add a funny sense of humor to almost every comment, which I also enjoy.
I tried the MiniMax image ref., and it works like a charm. Now waiting for the lip sync next, then we are done. I will upload the video soon.
I think ALL of us prefer an upscaler that doesn't change anything and only upscales ❤
Look into LatentSync. It does audio-to-video lip sync.
Illustrious one, I am in Mozambique and would like to know how to use Google VEO 2. I ask for your help.
Does the Hunyuan model let you upload your own videos to get sound generated for them, or only their own videos?
I’m not sure if they have that on platform, but the model itself is MMAudio, and it is open source.
If you use Pinokio, I believe you can download it there and use it locally!
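For anyone going the manual route instead of Pinokio, here is a rough sketch of what running it locally might look like after cloning github.com/hkchengrex/MMAudio. The script name, flags, and file names below are assumptions from memory, not a verified interface, so check the repo README before trying it:

```python
# Rough sketch: driving MMAudio's demo script from Python after cloning
# the repo. The script name ("demo.py"), flags, and file names are
# assumptions, not a verified interface; consult the README.
import subprocess

subprocess.run(
    [
        "python", "demo.py",
        "--video", "my_clip.mp4",          # your own video to add sound to
        "--prompt", "rain on a tin roof",  # describe the audio you want
        "--duration", "8",                 # seconds of audio to generate
    ],
    cwd="MMAudio",  # the cloned repository directory
    check=True,     # raise an error if generation fails
)
```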
Tx again matey 😎 BTW, what is the best way to get rid of watermarks on videos I generated with Hailuo? Cheers 👍
That's awesome! I was just wondering if they've added what they promised last year. I don't use Premiere Pro, so I'm not sure. I asked people who use Premiere and they told me that they only added a button:)
Yeah, I edit in Premiere, nothing yet. At least not in the timeline view that we saw last year. I think I’m supposed to chat with them soon, so I’ll see what’s up with that!
4:33 actually made me lol
Man, I just want to take a moment to appreciate Theoretically Media and Curious Refuge. I follow everything in the broader A.I. culture and get so sick of the clickbait, hyperbole, sensationalism and blatant inauthenticity. The guys in the AI Filmmaking space, Theoretically Media and Curious Refuge, are informative and engaging without all the nonsense. Much love!
Very nice ❤❤❤❤❤❤
Was that a reference to Independence Day at the start?😁🗽🚀
I tried out the character reference today, but there still seem to be some kinks in it. Overall, though, it's working well.
Yeah, agreed. I think the one shot aspect is a little limiting, but it certainly feels better than it did a few days ago.
I’d say in a few weeks, it’ll be a killer.
🤔@5:28 I don't get the joke? 'More than meets the eye'. What's the joke? ❤
9:57 Only one generation is permitted for newly registered free accounts, a bit stingy. Thank you TM for the heads up nonetheless. Wishing you well sir.
Seems like we have hit a bit of a wall, and most of these changes are just tweaks. Looking forward to the next big step, as most of this stuff, while impressive, falls short of being useful.
3:10 Give me the link, I want to download the anime girl please.
Like always, I'm rooting for you.
Your hair looks good even on your bad hair days.
Haha-- if you listen closely, you can hear chainsaws in the background. The neighbors were hacking down a tree today. It was just an overall mess of a Thursday!
My hair was pretty stressed out.
You know, I couldn't put a finger on it for a while, but I just realized your folksy delivery reminds me of Dan Rather.
Haha. Never heard that one, but I did grow up with him, Koppel, and Brokaw in the background. Maybe something seeped in?
DAS ist güt!
Huh cool stuff.
2:13 What you're explaining here is pretty much how any video model works… Luma was the first to implement this, but now all models follow the same architecture. So nothing new here.
Transpixar :D
Can’t wait for the Michael Bay reboot of TransPixar!
Yep, the next step I think is adding sound, but everybody is probably waiting for more than 5-10 sec generations. 2023 was 2 sec. 2024 was 5-6 sec. 2025 is 12 sec then? Lol 🎉
By 2080 we’ll finally have that full hour we’ve been waiting for! Ha!
Sound is almost there: audio-to-lip-sync works, TTS audio works, making music works, it's all there.
They're all wasting time trying to accomplish AI video while no single entity has properly figured out images yet. Photos of people are still poor across the board. Recraft, with its expensive creative upscale, is the only one somewhere close.
I think it's all sort of happening in tandem. And that's ok; we're still in the infant stage of this technology. Wild swings all around, but every once in a while, something hits and pushes the ball further than we expected.
Will it be this 3d method? Time will tell, or maybe it’ll spur an idea for another technology.
Y’know, I did a video on Recraft a month or so ago. Yeah, it’s pretty good!
Ummm... That's totally wrong. Generating humans was figured out almost a year ago, before Flux was even released.
Edit: Yes, I mean perfect hands too. You just need to add the right files, be that textual inversion, negative prompting, or LoRAs.
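For anyone wondering what "the right files" looks like in practice, here is a minimal sketch of those three techniques using Hugging Face diffusers. The embedding and LoRA file names are hypothetical placeholders, not real downloads:

```python
# Minimal sketch of the three techniques named above (textual inversion,
# negative prompting, LoRAs) using Hugging Face diffusers. The .pt and
# .safetensors file names are hypothetical placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Textual inversion: load a learned "bad hands" embedding as a negative token.
pipe.load_textual_inversion("./bad_hands_embedding.pt", token="bad_hands")

# LoRA: a small fine-tune that improves hand anatomy.
pipe.load_lora_weights("./hand_fix_lora.safetensors")

image = pipe(
    prompt="photo portrait of a person waving, detailed hands",
    # Negative prompting: steer away from the failure modes, including
    # the textual-inversion token loaded above.
    negative_prompt="bad_hands, extra fingers, deformed hands",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("waving.png")
```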
They need video for robots' world models; images are good enough. But images will get even better when they perfect video.
I like that your videos are a good, strong 10 minutes. If it was 20 or 30 minutes, I would've probably skipped through quicker. I've got more things to do, but 10 minutes of my time is OK.
Hey Tim, when you get a chance, please give me your thoughts on my latest MiniMax short film, hehe. 😂 I'm looking for more feedback. Much Luv, Brother 🙏🏾
Cheeseburbner
Only if it's as dry as an Applebee's meatloaf
You do know why people hate Adobe, right?
Oh, a few hundred reasons off the top of my head. And as someone who uses PS and Premiere, like daily- I get it.
But, this work is from Adobe Research, which is kind of a whole other thing.
We get mad at Google all the time, but I’ll never diss the DeepMind team.
What rhymes with orange? No it doesn’t.
I Set My Friends On Fire
Somewhere a Door Hinge is weeping.
Blancmange (pronounced: Bla-monj)
It'd pass in song lyrics
@morpheus2573 The great Morpheus. We meet at last…
...and you are?
Hi Tim - I'm early :D
A wizard is never early nor late!! You arrived at just the right time!
@TheoreticallyMedia 🙏🏻🙏🏻🙏🏻🙏🏻
Ok man, all of these titles are a little too hyperbolic.
Too many now.
Ummm, really? An appropriate comment? I guess you're not in CA? Or maybe it's ok 'cos we don't actually need Hollywood anymore anyway?
Is this the big update? Wow, I guess real artists don't have to worry too much. Love how this "expert" gets the 3D shader wrong. Just goes to show that when you have artificial intelligence, you don't need real intelligence. But maybe you should read a book or two, in case the bubble pops. Take care.
very bad review