It’s funny, I was talking to my friend Allie about this; she observed she can’t get Veo or any model to create a sneeze, and I wondered if that’s because people cut sneezes out of their videos, so they’re not in the training data 😂
@@bilawalsidhu 😂😂 that’s interesting! I actually didn’t think about sneezing! Makes me think about what other human abilities ai hasn’t quite got a handle on yet
@@yea18899 that's BS. The only people hating on AI stuff are the ones who are going to lose their jobs. Consumers aren't going to give a sh*t about anything other than quality.
Though Google may surpass ChatGPT, they still have the issue of having fragmented all of their services, making them harder to use, whereas most of what ChatGPT offers is in one place. The latter is easier for a normie to use, and that's where the majority will be, and that's who will win. Hopefully they get this right.
Google's on top, if you ask me. I played around with their Gemini 2.0 Flash, and just the ability to toggle censorship? That's an instant win in my book!
I was enjoying the video... until I realized he said "homie" like 10 times.... I had to stop watching. It is so obnoxious and cringey. It also made me throw up in my mouth twice.
@spatialintel I'm older than you, son. I just find it obnoxious that you speak like that at your age. Maybe expand your vocabulary so you don't sound so try-hard.
Man, we don't need AI video, we need AI to clean my house and AI to wash my dishes lol
Ha, why not both! Check out this company called Physical Intelligence; they have a laundry bot that'll fold your clothes too
@@bilawalsidhu See that's something I can get behind lol
You going to CES btw?
Nah, have no reason to go to CES this year - but we have to link up IRL in 2025!
Google has a new AI for robots. China has been making great strides in robotics. The software just needs to improve a bit, so probably within the next 5 years or so.
exactly
2000s: hey check to see if that is photoshopped
2030s: hey check to see if that is generated by AI
2060s: hey check to see if that is a naturally generated human.
@@strength9621 2200: Hey check to see if this reality is simulated🙃
@@strength9621 2490s: hey check to see if *_this universe is real_*
You know what I see happening? I see people completely turning away from watching internet videos because we won't know if anything we see is even real anymore. It'll be a huge turn-off; people will just get bored and want REAL first-hand REALITY again
IMO it’ll be about the value of the content, not how it was made. Simultaneously, we’ll have a Whole Foods-style market, but my guess is most will be down with Mountain Dew straight to the vein, which is what Shorts/TikTok/Reels feel like anyway
sounds like a blessing in disguise, tbh
It will be hard to tell what's real and what's BS. Soon we will need government-approved news channels that are free from BS
Spot on. 😂
@MysteriousLand-f1u thanks!
This is getting CRAZY. Think about this in 3-5 years... it will be no different from real life. Even now it's hard to spot
Feels a year away from video fully shattering the video Turing test
@@bilawalsidhu It's absolutely terrifying what they can achieve. I can't even begin to imagine the complexity of building such a system
It just takes existing videos from a library of millions and millions of videos it was trained on and glues them together
Like when Marques Brownlee put "tech reviewer sitting at a desk talking about a smartphone in front of 2 displays" into Sora and got a generated video with the same plant sitting on the desk in the same spot as in his tech videos, even though he typed nothing about the plant
Most of these would 100% be fine as background footage in YouTube videos without anyone noticing. I still don't think it's good enough for client work, though. But it's crazy how good it is!
I discovered something EVEN better than this.
Truly mind-boggling.
It's called human contact.
Walking into a forest at night with a few tents. Lighting a fire, walking into the lake and taking a dive in the cold water, coming back, wrapping yourself up in a blanket, cooking meat. Talking. A glass of wine. Maybe smoking a big Dutch joint.
..
Eventually falling asleep while staring at stars as stoned as a kite.
Amazing how real that technology feels.
And who says you can’t live a good life inside the matrix?
😂
As an artist I find this technology to be the antithesis of human creativity. An absolute abomination.
Not only that, it is also criminal, because they are scraping everything uploaded on the internet without having a license. That is copyright infringement 101. But humanity is in such a state of stupidity that no one seems to care. If the law were applied, those corporations would automatically be out of business and the owners in jail. The penalties would be impossible to pay off.
Open AI took a massive L with the Sora thing
It's pretty audacious that they're asking for $200 a month for something that's clearly unfinished. You wouldn't buy a brand new car with half the components missing, so why is this any different?
Yeah OpenAI needs to drastically rethink how they’re pricing Sora
@@bilawalsidhu You'd think they'd offer it for free for a while, simply for the AI to learn from the users' creations.
@Larry Prolly too expensive to run for free, given the compute could go toward GPT usage. However, it does seem they ran a sizable early access program: www.theverge.com/2024/11/26/24306879/openai-sora-video-ai-model-leak-artist-protest
I see a problem with AI testing in that everyone's testing prompts make too much sense... Yeah, even a Godzilla-sized Corgi walking through the streets makes sense; the AI understands what it needs to do and can easily find inspiration to realize it. You need to challenge it a bit more, ask for stuff that doesn't make sense. All these clips you showed, it's all just normal stuff that can be found in any old movie. Even a skeleton playing guitar is not difficult for the AI to bring to life. Come on, show some damn imagination!! You have the most amazing tool in your hands, and you're letting your own imagination limit your use of it!
I don't think so. Sora can barely generate a video of someone cutting a tomato. I think Veo still wins
Yeah I agree. But Nebol, if you have some wild prompts you want me to try I’d be happy to do it!
fr, people never tried asking ai for a new color
Yeah, generic as fk, accommodating overfitting
@@16comic But then how could we see it, though? Also, I'm pretty sure anything the AI produces is going to fall inside the visible light spectrum
OpenAI should have released the full Sora model instead of the lackluster turbo model they released
Agreed, even if OpenAI only releases the big model to the $200 plan. Seems like they just wanted to copy Runway Turbo?
They did, and it's still far behind Google's
17:25 a "car looking" robot does a fucking backflip and turns into a car, then drives on the water into the camera shot. absolute cinema
The creative applications of text to video never made a whole lot of sense with models up until now. I'm absolutely stunned by how fast researchers have been able to get text to video working so well. Great work Google.
Me too! I still want more than text2video, but damn is it far more useful when prompt adherence gets a lot better
Now we know why it took so long for Sora to come out. They knew it was garbage.
Nice to connect brother, I spent 10 years as a global executive at Yahoo! Nice to discover your channel friend... namaste love this content, gracias!
Welcome brother
I am sure google is adding some kind of watermark in these videos, or other way of identification. They don't want to pollute their training material with already AI generated content.
Yes they are - Google calls it SynthID
Love that
I have been waiting so long for ai to be this good
They thought using AI would cut their costs and make more profits. In reality, their products will just get cheaper in quality and value; no sane person would throw their money at trash.
There are many things this could be used for, especially as the technology matures, but when I see this, I’m immediately excited about the future of virtual reality. I understand that this type of technology is still in its infancy, but when I think about what it will be capable of, combined with conversational AI and real-time adaptation, I feel like a kid standing in the distance, watching a theme park being developed.
I might be a bit overly optimistic, but I think we're starting to get close. If you disagree, please keep it to yourself. Don't be a wet blanket. Nobody likes you. 😅
Absolutely. It feels like speaking worlds into existence. And I can’t wait for the convergence of 3D + AI tech.
Whisk is only available in the US, no EU yet sadly
Sadly nothing AI is available in the EU at launch - not even Apple Intelligence. Hopefully it’s just a matter of time to meet compliance with GDPR, DMA, etc.
@spatialintel yeah most stuff releases a few weeks after US release
Can you start with 1) input images for it to use, 2) green-screen actors, filling in whatever’s wanted around them, even crossing in front of the real actor and interacting, 3) sound inputs, like voiceover for 3D gen, or having it “hear” music so an AI-generated crowd dances to the beat, etc.? That would be incredible
No video to video or inpainting yet - but I’m sure it’s on the roadmap
Wow--great overview!
Anyone know when this will be widely available for everyone to use? Looks impressive compared to others.
What about consistency? Multiple shots with the same person/creature/animal
You’ll get a good sense at 22:16 - surprisingly good at locking down the character but def not as good as fine tuning on someone’s likeness
I wonder if they've accidentally built in the ability to set locations? Given Google's giant dataset around location, especially with Maps, Earth and Street View, how narrowly can you define the location prompt? Like, if you just said London, you'd likely get a London-inspired generation, but if you were more specific, let's say Oxford Street or Harrods, how well would the model understand this to set the scene? If they've accidentally included this, or if they can implement it down the line, this would entirely change the game.
Well imaged locations definitely end up being consistent, but it’s not perfect. Eg look at the SF ferry building examples in my vid; the tower changes a fair bit. I suspect they will if they aren’t training on street view and aerial imagery already.
Excellent work by Google. As long as they keep their politics out of their tech they will be even much better.
1:18 - Why TF does no one test or talk about Hailuo? It's WAY better
I do talk about them at 9:33
Yeah it’s free and good
@@bilawalsidhu
Having mostly used Hailuo lately, I stopped watching at the halfway mark, so I probably just cut off before you talked about Hailuo.
I am disappointed in it. Videos are so bad lately. Used to be 2-3 megabytes, now it’s just 1-1.5 megabytes. Quality has dropped massively. I hope Google releases Veo 2 and has an unlimited plan. I'll definitely switch
Thank you very much for the interesting content!!!
Bro, how is this creating videos so perfectly when most still-image generations still get six-fingered hands?
most SOTA image models are pretty darn good with fingers now -- still not perfect, but the hit rate is MUCH better than it was last year
To be honest, the quality seems low for cinematic generations. Though the movement is better and more fluid, we have A LONG WAY TO GO for realism. Thanks for the update; I subscribed.
That's absolutely the state of the art!! For the next 1-2 months...
This and AR coming together is going to get crazy
Akin to reskinning reality
Holy crap that's insanely high quality.
When I see some creator riding the AI hype to grow a YouTube channel about it and advertise their courses, and you can tell they know next to nothing about it and just make stuff up as they talk, it's just so sad to see.
Thank you for providing actual and relevant information. 🙏
This shit is blowing my mind! I've been aware of AI for a while, but the last few weeks I've gone on a binge, and it's basically going to be magic within a few years!! This is going to change everything!!!
It actually gives me hope more than fear; the way the world has been since the early 2000s, us humans deserve it.
I just hope we don't fuck it up!
No uploading of your own image allowed for image-to-video makes it hard to use professionally.
From a video professional's point of view, they need more resources
I asked about this and it’s def on the roadmap - i suspect they wanna make sure 31:53 doesn’t happen 😂
Google is winning the generative AI race, but OpenAI has already achieved AGI, which means saying Google is ahead is like beating an F1 driver at football and saying you're the best athlete.
They haven't achieved sh1t. AGI would be able to do *everything* a person can do, at least when it comes to abstract stuff. O models are not even close.
@xviii5780 It's funny that you think "doing things" is what AGI is about. We've had mindless automatons that can do all kinds of abstract, amazing things for decades.
AGI is a machine that can think without a prompt. A machine that has dreams and passions.
So, purely thinking from a caveman investor "me likey money" standpoint, you're right they haven't invented a new tool...
They've given birth to artificial life.
12:57 Awesome! Soon I will be able to make my own Godzilla movies without Kong hijacking the whole damned thing.
*TEAM GODZILLA* Screw Kong!
Lmao love it
5:10 this is the future, we'll get to have waifu of our own lol XD
Bro, if there is a way to get it to remember characters & locations it's over, as well as 20-60 second clips
That was a fantastic video. Thank you
Glad you enjoyed it!
I'm looking forward to remaking some of my favorite movies I remember from when I was a kid, but were actually terrible when I rewatched them as an adult.
Would be nice if Veo 2 could be run locally, or uncensored... Wait, is it uncensored? I dunno, I don't have access to it lol
It’s fairly clamped down prompting-wise. You can’t put in names of celebrities/artists/IP; it also has a hard time with weapons and other stuff you’d find in a modern action movie.
@@bilawalsidhu AS USUAL WE WON'T SEE THIS MODEL FOR A LONG TIME, ESP IN EUROPE
Such great storytelling skills
Great vid. Soon we will be able to prototype movie scenes instead of explaining effin' storyboards.
💯
AI community and gatekeeping name a better combo
As a European, I can testify we do indeed eat our soup like Sora showed
This looks great. Hopefully there will be a camera control UI capable enough to be used for video production. And for that UI to reach a level usable in production, it should be grounded in how precise the camera work for a single shot needs to be on a film set or in existing CG pipelines, and how many fast, fine-grained revisions are required.
Agreed, could be a nice layup
I was excited to try it out but couldn't sign in, so sadly I signed up for when the general public is let in :(
Yeah, that's a shame. I hope they open it up soon, too! Meanwhile, try Google Whisk and ImageFX
Google's double standards are really fascinating. YouTube made a big announcement in December that they want to help artists, not replace them, and that artists can generally decide who uses THEIR art for training data, but of course Google excludes itself. The whole thing is just so ridiculous. Also, I don't understand what people are thinking here. If there is no value left behind art, it will become meaningless over time. Good times are coming when YouTube is so spammed with crap that people get complete sensory overload.
If it’s crap, it’ll only be seen by a few people and not get recommended. Maybe a few hundred views. In the future I see people using this to tell genuinely valuable stories and to make the proliferation of useful information more engaging.
OpenAI won’t let you use copyrighted imagery, but I guess Google is fine with those stormtroopers. Furthering your point: not just replacing artists, but not respecting copyright
@@houseofknox8066 They'll put guardrails on it before it's released as a public product. I'm sure that's why they have a small invite-only portion of people using it right now. It's to uncover edge cases.
Learn to use the technology instead of being a crybaby who's going to get left behind....
How do you know they trained on copyrighted material from YouTube specifically?
If this is the baseline now, then in a few years everything will be on par with cinema. Scary to think about.
How did you put your dog in there? You never explained it
Basically tossed a few photos of my mom's doggo into Google Whisk, which gave me a prompt that recreated the doggo down to the exact attire and ear shape. Since the Google models are the same across services, Whisk is a great way to do image-to-prompt, then toss the result into ImageFX or VideoFX
Is it using the Willow quantum processor?
6:09 some of those clips look straight from The Chosen
No one seems to care about how this software is stealing millions of copyrighted works in order to operate. Using millions of works without asking for a license is straight copyright infringement.
How are you getting 4k output? It says current outputs are limited to 720p in the VideoFX tool.
So when do we get to use it ourselves?
Apparently Veo is coming to YouTube next year; that’s been announced. No release date has been shared for the VideoFX tool right now
I genuinely see a future where there's a service where you type what kind of film you want to see and it makes it for you. This could kill cinema.
I think models like o3 and Gemini with thinking will help decompose a top-level movie prompt into a bunch of self-consistent shot-level prompts. We do need more models that output audio in tandem, and that tech is getting better too.
Yeah, OpenAI is gonna have to drop that price; Google is killing them
Google having their own TPUs is a huge advantage too
@spatialintel Using CUDA cores. And come to think of it, I only have a Titan X Pascal; I can only do so many matrix operations before my GPU is exhausted. It puts your entire system in an optimized state of processing, at its maximum potential, with CPU and GPU communication in overdrive
@spatialintel I'm about to change the world of processing. I'm testing my CUDA application in Blender; it puts your CPU and GPU communication in overdrive x10. The only thing I have to fix is an error I'm getting when rendering in Blender with Cycles; I should find a way around it soon. A build compiled for all NVIDIA GPUs will be releasing in the store soon. Note this works for all applications; I still have to find out why DirectX is crashing. It also works flawlessly in Unreal Engine. Testing all sorts of apps
I love how these guys are so excited by all this, not realizing that soon they will all be replaced by Google AI agents pretending to be real influencers, spewing better content at a faster rate, and oh, they'll all look sexy as fk.... Guys, you are being outplayed. Enjoy the ride!
Oh trust me there are days where every post or upload feels like I’m training my own replacement. But such is the case across knowledge work and really anything that can be done with a computer. I make content I like making, not content I need to make. So I absolutely will enjoy the ride!
@@bilawalsidhu As long as you're having fun, that's what counts. To be fair, it will probably end up split between people who want human-made content and the ones who are brainwashed by these amazing fake hot AI agents anyway lol
@@fragmcdohl8291 lol, really hope you're right. I do wonder what the split will be -- sometimes I worry that while there will be a Whole Foods-like "organic content" market, most will want Mt. Dew straight to the vein
@@bilawalsidhu Humans will always have a need for human connection; it's in our DNA. My guess would be to explore topics in a very human and relatable way, which would be harder for AI to emulate since, regardless of how good they are, they don't have feelings yet.
When does Veo 2 release?
For now it's a growing early access list. I assume it'll be this year -- maybe I/O? It'll probably come to YouTube even before then, but I dunno if that'll be a smaller model
@@bilawalsidhu So, do they provide access every day, or how does it work? I've been ignored by them since April, and it's so frustrating.
They do own YouTube, and they have quantum computing, so they can crunch a lot of videos
A whole lot of TPUs going brrrrr
Are you still taking prompts from the community?
Has a price for this been announced?
Not yet
OpenAI deserves the fall. I was so excited when OpenAI introduced Sora; they should have released it sooner
If they had launched in summer it would have been the leader for a while; then again, Chinese models like Kling and MiniMax kinda stole that thunder already
So when will Google's Veo 2 be released, Nov/Dec 2025??? I know it took Sora forever to be released, and when it finally was, it was a watered-down version of Sora, because it sucks!
They’re already opening it up way more than Sora during their early access - so I doubt it’ll take that long
Thank you this is dope
As a fellow creative I'm not sure it's 100% better than Sora? Side by side, some scenes are not as clear
Okay I see now the physics is better
Yup exactly
Great topic thanks 👍
The first video of the fishing looks like RDR2-style graphics
This is an internal research project only, so it's better to compare it to the internal Sora model, not the public Turbo one.
Yo, please generate a new street skateboarding trick! Thank you. Resurrect the iconic El Toro 20-stair with a skater doing a 360 flip down it and rolling away perfectly
Will try!
I see thousands and thousands of creative people losing their jobs that they've been working for decades to get good at.
Not to be, per se, critical... but when AI-created videos allow humans to BREATHE, like the motion of breathing, then it will be 👌👌👌
It’s funny I was talking to my friend Allie about this who observed she can’t get Veo or any model to create a sneeze - and I wondered if that’s because people cut sneezes out of their videos so it’s not in the training data 😂
@@bilawalsidhu 😂😂 that's interesting! I actually didn't think about sneezing! Makes me think about what other human abilities AI hasn't quite got a handle on yet
Well... there goes 5 years of learning animation and 3 years of taking film classes... Yaaay, I love AI
I feel like people would rather have a real person make animation than watch AI, as it's already pretty hated, hopefully...
@@yea18899 that's BS. The only people hating on AI stuff are the ones who are going to lose their jobs. Consumers aren't going to give a sh*t about anything other than quality.
What's up with all the latent space usage?
samanemami.medium.com/a-comprehensive-guide-to-latent-space-9ae7f72bdb2f
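To make "latent space" less abstract, here's a toy sketch (nothing to do with any actual video model's internals): compress 3-D points onto their top-2 principal components via SVD, treat that 2-D projection as the latent code, and decode back. The data, the `encode`/`decode` helpers, and the noise level are all made up for illustration.

```python
import numpy as np

# Toy data that secretly lives on a 1-D line in 3-D, plus a little noise.
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1)) @ np.array([[1.0, 2.0, -1.0]])
x += 0.01 * rng.normal(size=(500, 3))

mean = x.mean(axis=0)
# SVD of the centered data gives the principal directions (rows of vt).
_, _, vt = np.linalg.svd(x - mean, full_matrices=False)

def encode(p):
    """Project 3-D points into a 2-D latent space."""
    return (p - mean) @ vt[:2].T

def decode(z):
    """Map 2-D latent codes back to 3-D."""
    return z @ vt[:2] + mean

z = encode(x)        # shape (500, 2): the compressed representation
x_hat = decode(z)    # near-perfect reconstruction, since 2 dims suffice
```

The point is just that a lower-dimensional code can capture almost everything about the data when the data has structure; generative models do a learned, nonlinear version of the same idea.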
I'm glad Sora didn't cost an extra fee, because it is TRASH. I can't believe I waited so long for it, smh
Yeah, but Google only gives you a few seconds; InVideo AI gives you 4 minutes, plus sound and voice.
I think if InVideo had API access to Veo, their tool could be wild for longer-form creation -- it'd do the Veo prompting for you, basically
Sora destroys itself. It's crap
Shame, given how hype the results from the bigger Sora model seemed
@@bilawalsidhu So the larger, better model, not this cheap Sora you're seeing -- which none of you will be able to afford anyway
😂
How can I access it??
If Google owns YouTube, will their current AI technology fix their moderation by removing inappropriate bot pfps?
this
is the beginning
of the end
💀
AND Apple, Disney, Cyberpunk dancing on Jupiter and blah blah, AND anthropomorphic cars in 2060 with 5 dimensions
Good video !!👽More
We can finally make good Star Wars movies.
Fan film potential high
Now I can make my own movies...
Though Google may surpass ChatGPT, they'll still have the issue of having absolutely fragmented all of their services, making them harder to use, whereas most of what ChatGPT offers is in one location. The latter makes it easier for a normie to use, and that's where the majority will be, and that's who will win. Hopefully they get this right.
Agree. Google needs to be better at this. And not kludge it into some dev tool either.
It seems they have trained it with video games and 3D models made with Unreal Engine
Yeah seems like there’s a ton of synthetic 3d training data in there
Man, I know your voice, but I can't remember from where!!
Prolly from The TED AI Show?
This is the result of Google collecting our data all this time. Not bad.
Seems better than Sora, albeit somewhat marginally.
"What a time to be alive!"
The vibes every time I rolled a prompt 😂 now watch the feeling go away in a week cuz we adapt so fast
Fellow scholars!
Google's on top, if you ask me. I played around with their Gemini Flash 2.0, and just the ability to toggle censorship? That's an instant win in my book!
I am SO glad ai studio exists so we can toggle those settings. Now if only they had a nice mobile app!
@@bilawalsidhu I agree! Super excited!
As an aspiring 3D animator, I'm so cooked
Not a single frame without hallucinations 🤣😂😂
YouTube even asks us if they can use our videos to train AI
I was enjoying the video... until I realized he said "homie" like 10 times.... I had to stop watching. It is so obnoxious and cringey. It also made me throw up in my mouth twice.
Sorry homie, I’ll take the L on that one skibidi sigma
@spatialintel I'm older than you, son. I just find it obnoxious that you speak like that at your age. Maybe expand your vocabulary, so you don't sound so try-hard.
I watch this and assume all of them will have these same features and ability soon.
3:45 is just a Sekiro HUD lmao
6:46 uhhh who’s gonna tell him?..
What?
Hunyuan isn't free. You have to buy credits to generate prompts.
The model weights are available for free on GitHub, but you obviously need a machine, local or cloud-hosted, to run it.