it IS! LOVE the LTX gang. I'm actually covering that in just a bit. I don't think it's a surprise, but that model is (obviously) making its way to the LTX platform. What I'm curious to see is how the platform model performs against the open source one. Like, what tweaks did they make behind the scenes. Actually, you know what? I'll ask 'em! Look for a full report on LTX coming up....uhhhh, sometime this month?
You know some mad tinkerer is out there working on it right now! In a dark room, illuminated only by a computer monitor and surrounded by empty cans of Red Bull. We salute you!
It does feel that way. That said, I know the US devs are still cranking away and there is some stuff on the horizon which looks to tilt the scale in the other direction...at least for a bit. It's going to be like that for awhile-- basically, it's the space race all over again-- except this time, in Latent space!
@@TheoreticallyMedia When I was doing my music video made from creepy Midjourney images, I animated the head movements using the colab of a Chinese researcher. And then I kept bumping into others from there.
I’d say Tencent claims so, or at least in their wording “better or almost better” haha. I don’t think so, personally- BUT, this being open source, it could be tuned up by the community to become something really powerful. We’ll see what happens with it. Currently, we’re kind of in an age of embarrassing riches. So I’m just enjoying it all.
6:44 your translation is correct, but only covers the first part before he pauses and speaks the second sentence at this timestamp. The second part translates to "if you haven't eaten apples for 1 year, then you are allowed to steal some apples!"
THANK YOU! That makes so much more sense! Well, at least from a contextual standpoint. He's still going to get arrested for stealing those apples! Unless he's making a pie. In which case, we can let him off with a warning, as long as he shares!
Watch? Did you see those knockers bouncin perfectly?
It's all incredible. You barely get used to one advancement before another arrives.
Good stuff as always, Tim.
I feel like we're running on the treadmill like that AI Generated lady!! Just trying to keep up!!
One day soon, a new open-source two-hour video with special effects and better-than-Dolby sound, and we'll all become Steven Spielberg, and no one will care.
Ai is actually forcing competition
Just wait until we get a 24B-70B model; we just need 1-5 minute output
@2:24 Yes, it was her watch, fingers, and hair we were all admiring at how realistic the rendering was. lol
Every time I hear "Sora" I just roll my eyes unless we're talking Kingdom Hearts
😂 ME TOO!!! Lol someone else finally said it
SAME LMAO, I was a huge Kingdom Hearts fans until KH3 😓
@@Biajuuu at least we still have KH2 🥲
@@AustinsEdits peak KH game
Lmao I did the exact same
Interesting how ai video generation behaves just like dreams. Things happening all of a sudden and changing to different locations sometimes without relation to one another.
Right. I've noticed this incredible similarity at the very beginning of the diffusion of generative AI art, now it's such a surprise for me to find out that it was not just my impression😅
Soon it Will not
Think about applying this to a pre-existing movie. I love Blade Runner, imagine putting on a VR headset, and walking around IN the movie as it happens. Watching characters meet the first time, exploring the environment, talk about immersive.
Yeah you are right. I think in the future we could have the technology to take an old 2D movie and convert it to full VR compatible. Not just 360 SBS where your position is fixed, but allow you to fly around and watch the actors from any angle. It would be a large file but by then im sure storage wont be a problem either. Then when you go outside of the boundary then the AI chip in your computer could randomly generate what could be around, giving you immersive capabilities.
Tim, I love how you're so thorough, get to the video quick and don't bombard half the video with ad-reads (at least not yet for the ads, lol) and your dad jokes bring a fun, enjoyable twist to your content than just reporting everything like it's the 9 o'clock news.
Keep up the good work buddy, I'll keep coming back for all the latest tea!
Haha, I've got some ad reads coming up next month! Sorry-- but, gotta keep the lights on!
Just wait for those 60 second MANSCAPED ad reads! (kidding!)
@TheoreticallyMedia I know man, you gotta make money somehow. We're still staying here though, and I'm sure you'll incorporate more jokes into the ads😂
This is good👌Mochi 1 originally required 384 GB of VRAM and FOUR H100s to run.
The Hunyuan Video Model basically only requires 10% of that - and that's even without quantisation yet!
My god, was it that much at launch? That’s insane! But yeah, agreed on your point about optimization. One of the best parts about the model being open source!
I hope these requirements can come down for home consumers in the future, without needing such beefy hardware
11:10 regarding the 3D shape of an object, the term to describe the surface and shape of a 3D object is "3D topology". As more and more AI dive into 3D content, you'll probably need to start using the relevant terminology too! 😁
I work for an animation studio, though, and the artists here usually just call it "geo" (as in, geometry).
@@IceMetalPunk yeah, actually that is also another term that is being used to describe a 3d structure. different words for the same thing. lol
Thank you! I've actually been slowly brushing up more. And by brushing up, I do mean watching "Intro to Blender" tutorials over and over again. I think the 2025 resolution is to at least get to the intermediate level with it.
Tim, you are absolutely the best YouTuber on AI Video today! 👍👍⭐⭐
Appreciate that so much!!!
The reason it’s the best is because it can use any media and has no copyright restrictions. Runway isn’t allowed the same content.
For instance you can’t use this footage on any true film because you don’t have the rights cleared.
Tencent is a mega corp owned by the Chinese govt. These Chinese ai vid generators are concerning at best
@@WayOfTheZombie and so are all the western models. There has already been massive censorship issues with Google models and others.
80 Gigs of VRAM for Hunyuan! That's pretty insane for what it spits out. But on an Apple M4 computer it could work, not quitting your home lab. Will think twice about the RAM amount when I buy my next Mac.
I appreciate the subtle humor of the intro logo animation...
Minimax is the best. I'll be releasing a 30-minute Christmas movie.
Hah, there's a reason I keep saying they're the "Crowd Favorite"
Those guys bought ALL the good will. I haven't seen any other platform navigate the Free to Paid dance as well as they have. Kudos to Minimax!
Wut in Sam hell?...thirty minutes?...how many 6 second videos is that😂
@@willD.A.Vkarter I have a lot of time on my hands and yes they were each 10-6 second clips
I have to see this sub'd. 300 generations at least.
@ sorry the channel is glory house studios that’s where I make all my movies
Thank you, Tim! Great video, trusting that you'll really test it out for us !!
I can't run it locally on account of the horse power-- but, I have been thinking a good video might be a lesson on how to rent a GPU and run one of these models in the cloud?
4:15 is a new genre of ai comedy 😂❤
I love that video so much! Every time I watch it, I find something new that I'm laughing my head off about!
Where is that from? I want to make more of that
😂😂😂
This is literally the most valuable source of info for working creatives using AI on the internet
you're dead not creative
Blown away, awesome video squire!
Worldlabs in VR is going to be trippy!
Always a pleasure to watch your refreshing news with that extra spark of humor. 😄 "Do you think I should appologize?" 😅
I really love the idea of being able to explore a world created from images, being able to inpaint in things to modify it, etc. I love exploration video games, so that would be like being able to create my own, suited exactly to a preferred aesthetic.
We're getting to the point where we're seeing one jaw-dropping announcement after another every week. That is only going to become more frequent.
Ok so working as a video editor in TV News for many years ... we have to be able to see video at like 3x speed and cut very quickly. So the first time I saw the dude on the one wheel motorcycle I was thinking... "Hey Tim, which one of those 13 billion parameters influenced the choice of undergarments for the Biker Dude?" But then I thought I just saw an AI artifact... until the video replayed and I was like ... YUP... Hunyuan might have some issues LOL. And Tim... if you saw that and let it slide LOL .... This comment is for You: "Well Done Chap!" Either way ... great job as always... and I do ride motorcycles and judging from the dude's elbow I would suspect he's had at least one too many accidents... that was my 2nd guess... Before I remembered the color of his skin around the "oddly chosen underwear" I would have guessed that he was completely green under his gear and that his friend Pokey is riding behind him... "Horsing around" on a motorcycle that only has a front wheel. 🤣
I signed up for WorldLabs. Can't wait to try out some old photos
wow that was a huge update video, thank you for it !
Right?! This was one where I felt no shame in using the “GameChanger” thumbnail!
2:13 Speaking of someone knowing what they're doing, putting in the ad during this segment was a nice touch
A bit too many ads, but I had a bunch of good laughs, thank you. Also for this impressive information!
What I like about this guy's content is he isn't glazing the AI as much as some others, who are completely blind to the bad things.
5:54 oh man i almost actually died laughing
Haha, I kept thinking-- need a few more holes in that ghost costume, Charlie Brown!
I almost spit the stuff I was drinking
You leave ONE disembodied head in your room and suddenly people think you're a serial killer. 😉
Found him! haha-- best comment of the day!
OMG that head popping out of nowhere was kinda scary
😂
I do wish these companies would understand their audience. This model requires a $40,000 GPU.
Yeeeah, I mean-- I think you'll see this model on a few platforms. Those who are using APIs to run Luma/Kling/Runway etc-- but what I think will be interesting is those who choose to actually do something with it, and tweak it to make it something new.
That said, you CAN run this model by renting an A100 for like, 5 to 10 bucks an hour. It might be worth doing a video on that...I'll put that on the list.
That would be a great video! $5 - 10 an hour could be great value depending on the speed of video output.
Wasn't the first computer a whole room?
Thanks for this great showcase! BTW I love OpenAI.
Kudos to you for being able to focus on her watch and fingers...
Omg, nice, thank you for posting this video and the wonderful news, thank you
Oh man, your commentary is comedy capers on steroids. Keep it up!
So close with the translation Tim! Tiny nuance towards the end:
"I haven’t eaten apples for a whole year, so I went to the supermarket and stole a bag of apples. Do you think I should apologise? Just because you haven’t eaten apples for a year, does that mean you can steal them?"
Testing the model requires chinese mobile number to register though 😔
Thanks for the translation input!! I had the feeling that ChatGPT was acting a little sus there!
Interesting about the phone number-- I thought you could just download/clone via Github? I know a few have gotten it up and running already-- Fal.ai for one.
I figure that number issue will be removed at some point. They all seem to do it. I still have nightmares about that era of Kling and all the comments I was getting on that!
Nice one. Pretty funny. The app looks frighteningly good.
I Love how you always post links to everything you talk about.. others dont
Have you tried to generate a 3d set using AI with just imagery from Google Earth Street view? Does it create 3d objects that can be imported in Unreal or Unity?
I haven't done it yet personally, but I have seen it done. Yeah-- wild.
@@TheoreticallyMedia Which AI company did it?
Very Cool!
This one I'm perfectly happy taking any heat for the "Gamechanger" thumbnail!
YO Tim...I just laughed so hard at your commentary that my finger turned off my Ad Blocker LOL Very good job. {{{Wow I just had a Mystery Science Theater 3000 flashback]]]] Which thinking back it's kinda funny... a show with future AI robots heckling humans from the past lol (I guess we all get our time to laugh! 🤣)
Great, this will be available at the consumer level in six months
HA! I was hours away from starting to learn Domo for messing around with some anime ideas. Now you say MINIMAX, my favorite video model of all, has added these marvelous models? It really is Christmas. 😁
Haha, happy to have put a bow on it for you! I do like Domo as well. I think their big strength is V2V. The consistency in stylization is fairly remarkable there!
Someone is gonna have to think outside the box. The coherence bridge will have to be solved, otherwise we may be hitting the wall.
The coherence wall is going to be a hard one to overcome, the image gens can use conceptualization of learned token-to-visual patterns - but visual narration requires an extra layer of reasoning that I imagine will be a tough nut to crack.
I think the answer lies somewhere in 3d models that are then skinned over via Video to Video. All kind of in the backend and hidden from the user. But, using 3d as a "skeleton" has always been my thought.
But-- there are also much smarter people than me working on this!
@@TheoreticallyMedia Totally a theory but perhaps artificial intelligence itself needs to be harnessed as the bridge builder, comparing and contrasting video outputs and stitching them together in a prompted way, thanks!
Minimum: The minimum GPU memory required is 60GB for 720px1280px129f and 45G for 544px960px129f.
Recommended: We recommend using a GPU with 80GB of memory for better generation quality.
Yup. And not everyone has an A100 sitting around. Granted, you can always cloud rent one for like $5/hr.
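For a rough sense of where those GitHub numbers come from, here's a back-of-the-envelope sketch in Python. The 13B parameter count is mentioned in the video; treating everything beyond the weights in the quoted 60GB minimum as activation/attention overhead is my assumption, not something from the repo:

```python
def weights_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """VRAM needed just to hold the model weights.

    fp16/bf16 inference stores 2 bytes per parameter.
    """
    return n_params * bytes_per_param / 1e9

weights = weights_gb(13e9)   # ~26 GB for a 13B-parameter model
overhead = 60 - weights      # the quoted 60 GB minimum leaves ~34 GB
                             # for activations/attention on a
                             # 720x1280, 129-frame generation
print(f"weights ~{weights:.0f} GB, runtime overhead ~{overhead:.0f} GB")
```

That's also why quantization helps so much: dropping to 8-bit weights alone would roughly halve that first term.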
Imagine being able to create AI environments in real time: you tell the AI what environment you'd like to walk through, and it creates it while you're already walking through a generated environment, and that environment leads you into the next one. Long-winded, I know, but super exciting stuff.
Your commentary is class, hilarious. I think you should do comedy. Bet you'd have a killer stand-up.
1:47 I love that their motorcycle rider training footage includes dudes in g-strings LOL. Internet gonna internet. The opening sample (guy with rifle) had some amazing foliage. That is my dead giveaway... when leaves look more like popcorn than leaves, it's AI.
Haha, I was wondering about that motorcycle guy...I was like, is it? Yeah...you just confirmed it is! haha...well, not gonna judge, AI or Not, you live your best life, One-Wheel Motorcycle guy!
I have a pretty beefy PC, so I am going to look into this. Hopefully I can try this and the learning curve isn't too massive.
This was great. Funny too. Thank you❤
"Do a barrel-roll". Nice to know the grey hairs aren't lying. XD
100% dating myself with that reference, but as the kids say: iykyk!!
and this all is still just "the beginning of the beginning"...😲😵💫
These are frightening times for people (not only) in the creative business...
Mind supernovablown by Tim again.
Thx Tim. Appreciate the open source heads-up.
As you go through these products in the future can you briefly mention when they're restricted to paid plans?
Great video, thanks Tim!
Funny video at 4:20. Motorbike turns into bicycle and then into a rocket.
I LOVE that video so much. Every time you watch it, you see some new bit of insanity!
Some classic Tim-style jokes getting dropped in this episode. Too funny! 😂
The shared RAM on Apple silicon makes these system requirements more accessible
Being able to create scenes you can pan around and place characters in for scene purposes would be awesome I have to go check this out.
I laugh every time Sora gets mentioned.
From their github page: We have tested on a single H800/H20 GPU.
Minimum: The minimum GPU memory required is 60GB for 720px1280px129f and 45G for 544px960px129f.
Recommended: We recommend using a GPU with 80GB of memory for better generation quality.
What is the point?
Mac Studio and Mini can be loaded up like this
Wow, that's very nice quality video generation
Video memory usage is an issue, especially looking at what LTX-Video can do with a fraction of the video memory.
7:45 My eyes nearly popped out of my sockets at the ~80GB VRAM requirements, so I had a gander over at RunPod to see what they have to rent. Aaand... they actually have a 1152 GB monster available* - that's over 1TB of VRAM (!) - which just goes to show how behind I am with computer specs! Jeeeeeeeez!! 😂🤣
* the "6x MI300X" ! idk but I think it's overkill 🤣🤣
Dope vid as always, thank you, keep up the good work... also I like the jokes
When you spotted that head...OMG 🤣🤣!!
Haha. It was a literal jump scare!
@@TheoreticallyMedia 😂😂
Unfortunate how these local installs won't run on my 2020 16" MacBook Pro with 20GB RAM and 512GB storage; it's even struggling with the new Topaz video upscaler suite I just purchased
oh, man-- I'm running into that with my M1 Pro as well. I'm thinking it might be a good excuse to bump to the M4. I've been waffling on that a bit. That, or maxing out an M4 Mini Pro...
Or, just finally biting the bullet and building a PC. So much of the local installs rely on Nvidia cards. GPU hungry.
Wow Tim, please do the new Minimax anime video, the sketch option looks amazing, and I can't wait for 3D world access
gonna play with it today!! And I'll reach out to the WorldLabs gang to see if I can get us a sneak peek!
Hi Tim,
I offer this as information not criticism... since I know you've an eye for detail... They are terracotta warriors not stone... 😊
THANK YOU!! I do love those little bits of trivia. That information is now stored in my long term memory!! Appreciate it! Fingers crossed it comes up in the next Trivia Game Night! haha
@ My wife and I actually got to see them near Beijing in 2010. There are 8,000. I know I've seen a couple of movies where they've been brought to life with ill intent...
How does one use Minimax? is there a web UI?
There is! Should be linked below!
At 10:48, the "Jaws shot" is technically called the "vertigo effect," if I'm not mistaken
yup! I've also heard it called the Zolly Shot. I think the correct name is the ContraZoom...but, who knows. I mean, if you call it the Vertigo shot or the Jaws shot, everyone knows what you're talking about.
It's such a cool effect!
Very cool indeed. The basic trick isn't that complicated: you physically move the camera forward while zooming out at a matching speed, or vice versa, move the camera backwards while zooming in.
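To put a little math on that: the dolly zoom works because a subject's apparent size scales roughly with focal length divided by distance, so keeping the subject constant while the background warps just means scaling the focal length with the camera's distance. A tiny sketch (the function name and numbers are illustrative, not from any camera API):

```python
def dolly_zoom_focal(f0: float, d0: float, d: float) -> float:
    """Focal length that keeps the subject the same apparent size.

    Apparent size ~ f / d, so holding f/d constant gives f = f0 * d / d0.
    """
    return f0 * d / d0

# Start at 50mm with the subject 2m away; dolly back to 4m and the
# lens must zoom in to 100mm to keep the subject framed identically.
print(dolly_zoom_focal(50, 2, 4))  # 100.0
```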
Is it Krea or Korea’s new image editor??????? One of those I'm not too sure I want to try out..... Lol Might want to check your main video description man..
Haha, Krea. Stupid Transcription software!
This model takes an insane amount of VRAM to run locally. Open source but an A100 will set you back $18,000.
It's a pity that registration is only possible with a Chinese number.
Not this one. This is just downloading the code from Github.
Tencent is a Chinese company; maybe that's the reason.
By mid-2025 we will get a movie-making model
Agreed. I can’t even imagine what 2026 is going to look like.
Very interesting, thx you
Excellent. I've ordered myself a compatible NVIDIA GPU. I would recommend the NVIDIA HGX A100 640GB GPU - It's a bargain at about £160,000
Was this a joke, or are you rich? 😅
Check on Ebay! You can get a used one for the FAR cheaper price of 120k! haha
7:11 How do I do this?
"It's coming in pretty hot" I know what you are doing here 😉
Haha-- If there's two things I've learned from marketing, it's "Keep it Hot" and "Thirst Comes First!"
WorldLabs could be useful if you could export the generated world to Blender, for instance.
Yeah. I keep talking about this to all the AI devs/companies. 3D is going to be a BIG part of AI Video in 2025 and beyond-- but there is also this big disconnect between users. You have folks like yourself who are versed in 3d (the Blender/3DsX crew) and then the folks who see 3d UI as too complex.
For this to really get adopted, you need a platform that kind of caters to both. Make it simple enough that it resembles "Photo Mode" on any modern game-- but also have under the hood tools for the 3D gang. And various levels of export as well.
No one has quite cracked it yet, but I know they're working on it.
@@TheoreticallyMedia Tim, there's so much to be developed in 3D + AI field. But, as you said, they are worlds apart. And AI devs opted for TikTok entertainment. The only company working seriously on AI+3D filmmaking today is Cuebric.
I'm creating these backgrounds in PicLumen and exporting them to Blender. But it's still a slow process.
She has a watch?
There are windows?
Great video.
One question.
How safe is it to run this code on your PC in terms of spyware, malware, and privacy?
At least they didn't drop this on a Friday :)
that's Runway's secret game to drive me insane. haha
Something else to watch on your phone while you learn some European language.
New Ai OF $$$ 😮
It's weird how much these videos are starting to seem like dreams. Like they're images created based on what the universe looks like, but they don't have to follow any of the rules, so anything could happen.
The LTX Studio model is a lot smaller and available right now. 😊
it IS! LOVE the LTX gang. I'm actually covering that in just a bit. I don't think it's a surprise, but that model is (obviously) making its way to the LTX platform.
What I'm curious to see is how the platform model performs against the open source one. Like, what tweaks did they make behind the scenes? Actually, you know what? I'll ask 'em!
Look for a full report on LTX coming up....uhhhh, sometime this month?
Does World Labs generate a panorama image like Blockade Labs, or a 3D world that we can import into a game engine?
man... too many people can't wait for these advancements to bleed into 2025-2026 NSFW anime fantasy worlds
A 12GB VRAM quantized model: someone make it happen for the Hunyuan AI video model, please 🙏🏻
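Why quantization is the hope for 12GB cards: model weights alone take (parameter count x bits per parameter / 8) bytes, so dropping from fp16 to 4-bit cuts the weight footprint by 4x. A rough, hedged back-of-envelope sketch (the 13B parameter count below is purely illustrative, and activations/caches add more on top):

```python
# Back-of-envelope: VRAM needed just to hold model weights at a given
# bit width. Real usage is higher (activations, latents, framework overhead).

def weights_vram_gb(params_billion, bits_per_param):
    """Approximate GB of VRAM for the weights of a model."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# A hypothetical 13B-parameter video model:
print(weights_vram_gb(13, 16))  # 26.0 GB in fp16 -- no consumer card
print(weights_vram_gb(13, 4))   # 6.5 GB at 4-bit -- plausible on 12GB
```

This is why community quantizations so often make "datacenter-only" releases runnable on gaming GPUs.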
You know some mad tinkerer is out there working on it right now!
In a dark room, illuminated only by a computer monitor and surrounded by empty cans of Red Bull.
We salute you!
@TheoreticallyMedia 😭
So how/where are folks getting 45GB or 60GB video cards?
Dolly in, zoom out, a.k.a. the vertigo effect.
Yup! I’ve also called it the “Zolly” shot. Which is just fun to say!
Now imagine this in gaming
Tim can you cover a workflow for lip sync for characters in music videos?
Do you have an access code for Krea?
👋 Open source? 👀 👍 👍 👍 👍 👍
Open Source, but HEAVY. haha-- likely most of us won't be able to run it-- BUT...you COULD. And that's what's awesome.
@TheoreticallyMedia
I've got a 48GB GPU in my workstation, and if I can port the remainder over to my workstation CPU... I might be in business!
How to install?
We're lagging behind China in AI generative video.
It does feel that way. That said, I know the US devs are still cranking away and there is some stuff on the horizon which looks to tilt the scale in the other direction...at least for a bit.
It's going to be like that for a while-- basically, it's the space race all over again-- except this time, in latent space!
@@TheoreticallyMedia When I was doing my music video made from creepy Midjourney images, I animated the head movements using the colab of a Chinese researcher. And then I kept bumping into others from there.
I'm lost; so is it better than Kling and Minimax?
I’d say Tencent claims so, or at least in their wording “better or almost better” haha.
I don’t think so, personally- BUT, this being open source, it could be tuned up by the community to become something really powerful. We’ll see what happens with it.
Currently, we’re kind of in an age of embarrassing riches. So I’m just enjoying it all.