We are about 4-5 years away from someone being able to remake the final season of Game Of Thrones into something good. I can't wait.
That would be the "killer app" for AI. Simply input the scripts from the first 6 seasons into Claude and ask for a suitable climax.
@PeterStrmberg007 Or, if George ever finishes the books, just ask an AI to adapt them into a script.
4-5 is a stretch. More like 1 or 2.
@@NinetooNine We’re at the point where we need to ask AI to just finish the damn books for him.
@@protips6924 buddy said “flawlessly” 😅
Welp, that's it folks. We literally can't trust videos as being real anymore. It's starting to get too realistic; I'm actually fatigued trying to detect the artifacts now. 1:41
They should really regulate this before it gets way too out of hand
I saw just a few things in a few examples. The panda and pterodactyl video had cars going backwards and melting into the bridge, but I think, much like CD-quality music being replaced by crappier-sounding MP3 and streaming audio, the average person is just going to get used to the small mistakes until they are ironed out of the models. It's a mindfk time to be alive, and we're at the start of the process.
@@YouCann0tSeeMe it will never be regulated, by its very nature. we live in a time where power is exactly synonymous with information. ai is a tool that can synthesize all information. we are going to become eclipsed by the machine by all of the metrics we've created for ourselves, and it is going to become a baseline. tools like this will become the new default.
Yeah, I always thought about this.
And the future will only get worse.
There should be a law where anything generated with AI must be explicitly labeled somewhere, or else the platform risks being taken down or banned online. It's the only way to distinguish fake from real.
I say this because people will abuse what is fake and what is real.
Many videos have very obvious errors.
Watching this vs Sora, this is much better.
However, I'll believe Google when they provide a finished product for us users to actually use.
I'd bet it has a better chance of entering the Google graveyard than of ever getting a public release.
@@Trahloc it's already available to many users for testing, so no, I don't think this is going to the graveyard
@@Trahloc idk why people drop so much hate towards Google. Google is the one that created AlphaGo and beat the best Go player in the world, they created AlphaFold and got a Nobel for it. They CREATED the transformer architecture and reinforcement learning advances, releasing everything in public research papers for anyone and everyone to use. Meanwhile, OpenAI is all secretive and shit, releasing failure after failure (not as a business, because clearly most people keep consuming their subpar products). In terms of LLMs, Google was behind, but it was also the latest to join the race and is now approaching the competition. Imagen 3 is easily one of the top image generation models, NotebookLM is a home run, so idk why anyone would think VEO wouldn't follow the same standards.
I don't care what papers Google is publishing until they actually deliver
they literally have? you can use Veo2 right now.
The prompts are massive; D&D DMs' futures are bright.
When this gets to VR it will be insane
We are not far away from the matrix becoming a reality
STUNNING QUANTUM SHOCK
We need the quintessential Will Smith scarfing spaghetti shot.
@@miki_wiki12 someone did it and it looks incredible. It won't generate Will Smith tho, probably because of censorship issues.
This looks INSANE 😮
No. Previous video AIs looked insane with things appearing out of nowhere and changing size etc. This looks normal. Which is insane.
I didn't know you watch this stuff bro😂❤
That muscle car's back tires were tilted like a drifting car's. That is crazy!
Great sequence except the last second of the scene after the right turn. The steering remained on full right lock after the right turn, which should have spun the car. It should have counter-steered to catch the drift. I guess the AI trainer does not drive.
I mean it's Google, and I would understand if they become #1 in the AI industry
Aside from having money, they have 90% of the world's video data. Crazy how far we came and are going. Can't wait to try this🔥🔥🔥🔥
@ 10:33 - The real-world woman is obviously looking directly at herself in the mirror, but the mirror image is NOT looking directly back; it is instead looking toward our lower-left.
Also the candles are in the wrong place in the reflection
Also, the faces of the men in the reflection looked very warped/low detail.
Thank you, I came to say this! When you think about it though, this is how a lot of shows and movies are shot, so that the actor is looking directly at the camera. Very interesting how it's picked that up!
Also it doesn't matter
@@spinninglink I was just thinking that - there might be conflicts between physics it learned from real footage vs special FX from movies.
I hope they took this into account in tagging the data
And 2024 isn't even over yet. AI video will be flawless by the end of next year or maybe sometime in 2026. After that, if we're lucky, video lengths will be much longer and censorship will be nil. I can dream, can't I?
I think consistency and cost will still be limiting factors for the next few years, but I have friends in film who are still adamant that this will never replace them. I understand how they feel, and perhaps it’s just blind optimism, but I’m like… you guys really need to consider what you’re going to be doing in 5-10 years. And I don’t tell them that that’s probably their most optimistic scenario.
@@wonmoreminute do we even need streaming platforms when we have an AI storyteller generating whatever movies we want every night?
@@2beJT well you won't have any money to afford them, since you and everyone else will be replaced by AI
Uncensored? Only for open-sourced models probably.
Easily the best vid generator, by far.
In the palace party(?) scene, putting aside how the room seems to be a weird mix of private dressing room and banquet room, the right-hand mirror becomes a doorway into a different room in the second half of the pan
This is the point where the movie industry will start to use it.
Interesting that Google waited to launch this just after Sora. Reverse OpenAI tactics, it seems.
We need a global registry of 'news' videos and the locations they were taken at, or no one will believe anything. I was born in '76. We used to whine about where our flying cars are; now we're just buckling up.
I'm blown away by V2's video generation capabilities. The level of realism and coherence is incredible. Can't wait to see what the future holds for AI-generated video!
This video looks like direct footage from Sam Altman's head... during his worst nightmare.
10 or 20 years from now, if you want to watch a movie, you'll just prompt a movie AI, telling it what kind of movie with which actors you want and it'll just create it.
Yea we are only 2 years in 😅
bro what !!! 10 20 years ?? 😂😂😂😂😂😂😂😂😂😂 more like 2025
It will also monitor your biometrics (hardware already moves in that direction; Apple Vision can already monitor pupil dilation) and will adjust the story based on your live reaction to it.
If you wanna watch soulless slop
@tonystarkagi Nah, there's not enough compute to go around yet. Also the videos aren't consistent enough yet. It will take a while before we get there but you are right it probably won't take 20 years.
Yessss Wes! New thumbnail picture looking not like satan! love it!
Really? He looks like he's being probed! 🤣
I keep thinking about the fact that the people and "imagined" places don't exist. It's really loopy- 🤯
You are correct. Hyperlapse is timelapse with a moving camera. While that shot looks great, it’s not technically hyperlapse.
Wow, as a storyteller I think this is the first time I've actually felt excited about having a new tool to tell stories with
I'm already exhausted by the idea of the barrier of entry being too low.
@@2beJT that's what I'm thinking
As a CGI artist this is the first time I've actually felt I won't have a job anymore soon
@@2beJT gatekeeping much?
Yup "If everyone every is special, no one is" @@2beJT
I am stunned and cannot move, not because of the AI video, but because I am, in fact, a carrot.
00:11 This thing seems to BEE 🐝🐝🐝
16:53 Crocodiles often rely on the element of surprise, lying in wait for prey to come close. Capybaras, by being in groups and alert, reduce this advantage. Capybaras live in groups, which can provide safety in numbers. A group of capybaras is more likely to spot predators early, giving them a chance to flee or hide. Moreover, their communal lifestyle includes warning signals; if one capybara senses danger, it can alert others. So maybe the crocodiles never really developed a taste for them since it would be super rare to eat one...
I think the coins glitch was a physics issue. It knows the ball should displace some coins and therefore raise the level of the coins, but that's not really how it works in reality
Now imagine when AI video can be quickly turned into an Unreal Engine scene, with the AI being intelligent enough to make all of the relevant assets, models, physics code, etc. Then have that in a loop structure, using the scene to make further video to iterate the scene.
But that would be 'The Matrix'
7:50 The ability to generate real-time storytelling, with only minute glitches and scenes that are perfectly coherent and consistent, is just striking. Hollywood will be knocking on Google's door to make a sweet deal. Google has set the bar really high because many of their generated videos have no glitches and the stories develop dynamically with perfect frame synchronization. I'm flabbergasted 😮😮wow.
They are definitely getting better but we're still in uncanny valley, especially for longer clips.
Dialogue, consistency between scenes, and artistic control will be the key. If they are able to provide that, then they will be in the movie business.
But still, this level already lands them in the VFX-for-ads business. Ad agencies will salivate at this and will flood our minds with more and more creative work that steals our attention, then presents a product once it's stolen.
Who needs Hollywood anymore.
For me, at this exact point in time, those AI videos just look too weird and broken, to the point it gives me a headache. There are some AI videos where you can't tell, but most are way too obvious and don't look natural. But I know that this is just the beginning and this will only get better over time.
Better than Sora you think? I’m on the waitlist. Can’t wait.
Competition is good; we just want good AI models, no matter which side they come from
This is much better than Sora
certainly is better. agreed.
I wonder how Google will screw this up. They have managed to nerf everything AI that they have released with perhaps the exception of NotebookLM 🙂
Omg it's almost like Google hates making money or something. They keep making the same mistakes over and over again
By being too censored and woke, maybe, to the point of the model refusing to generate the level of violence and nudity present in an average HBO series. This will give competition an opportunity to create a model that will be able to do it.
10:20 When the truck is driving on the road it has a video-game quality: it appears as if a pool of water is moving with the tires, rather than the tires driving through water that is already on the road. The uncanny valley is deeper than it appears.
The shot is great, but I'm surprised no one seems to notice how the front wheels are not behaving properly. First, a sudden change of direction doesn't do anything; then, coming out of the turn, the car seems to understeer while drifting from the back 😮
🏆 for physics! 🤯
RE: Victorian age woman with mirror
I didn't notice a significant issue, but I did notice the smudges on the mirror. Very impressive that a detail such as that was included.
Yes, very good. Now let's see full Sora.
brutally amazing
Looks really good, apart from the fact it has a "video shot on iPhone on a gimbal" look to it, while Sora has this "high-end commercial shot on Alexa" look to it.
How is every model out there incapable of generating readable text!!???
The problem with the reflection/mirror output seems to be that the reflection is at a different angle to the subject, and the subject blinks but the reflection doesn't.
Incredible.
I wonder how quickly little kids will get bored using this in the future?... ha
Stunned! xx
It's important to keep in mind that there is a difference between realism and eye candy. Definitely a lot of eye candy but we're still in uncanny valley. Keep in mind we've fooled ourselves for decades that we were almost out of the valley but thus far the boundary has always been a lot further than it appeared at first glance.
AR with genAI will be insane
The issues in the ballroom video are, imho, seen in the mirrors: the faces (like masks) in the bigger one, and inconsistencies over time in the smaller one, i.e. different content mirrored.
This is top notch; it even understands right-hand-side traffic in one of the videos (the other video was more like a one-way street). Videos & images of traffic mechanics in cities are mostly crap in other tools. Generating images with right-hand-side traffic and opposite lanes, with 100% logic in the traffic ecosystem inside e.g. New York, is kind of a challenge. 🤣
Is video AI able to create a character and then continually reference that character in multiple scenes thereafter? Can you, like, set a sort of image tag for the AI to call back to and place into various scenes?
Great AI video generation by Veo 2! The image of the woman in the mirror may be off in the angle of her face. Just my guess.
I would not have suspected that wheels, in relation to something's movement, would be one of the hardest things for AI to figure out... And in the clip where a woman is looking into a mirror, she is facing the mirror straight on, but in the mirror the image is at an angle.
In November 2023, I predicted that by the end of 2024 to mid-2025 there would be two new services:
1. AI-on-demand Netflix (enter a prompt and it generates a pilot episode, which you can continually modify as you like, then let it generate a whole episode, then multiple episodes to turn it into a series. Of course about any topic you like, in any art style, any genre..)
2. AI-on-demand gaming (enter a prompt to generate a playable demo "level", modify it, then generate a "whole" game from it; probably add the generated game to your library / share generated games with participating friends..)
Until about a month ago, I thought the timeframe I picked was too optimistic, but.. now I think mid next year seems quite doable
1800s mirror problem.
The candle in the box on the left.
Mirror shot: the woman is positioned at a different angle in the mirror reflection and her head is also tilted to the side.. I guess they wanted to show that the model doesn't understand mirror reflections correctly.
Absolutely crazy.
Size and Age: Juvenile capybaras are definitely vulnerable, and crocodiles will prey on them. However, adult capybaras are quite large (the largest rodents in the world!), often reaching over 100 pounds. This size difference can make them a challenging and risky prey for smaller crocodiles.
Capybaras' Behavior: Capybaras are highly social and live in large groups. There's safety in numbers. They are also adept swimmers and can quickly escape into the water, which is their preferred escape route when threatened. They're also surprisingly fast on land.
Habitat Overlap, Not Constant Interaction: While both species inhabit similar wetland areas, they don't necessarily interact all the time. Capybaras often graze in open areas while crocodiles tend to lurk in the water. This separation in time and space reduces the frequency of encounters.
Dietary Preferences: Crocodiles are opportunistic predators, but their primary diet consists of fish, turtles, snakes, and birds. While they will take larger mammals if the opportunity arises, it's not their main focus. An adult capybara is a substantial meal that requires significant effort.
The "Chill" Factor: There's a common misconception that capybaras are "chill" with everyone, including predators. While they are generally docile, they are wary of danger and will react defensively. They don't exactly seek out crocodiles for friendship!
Nice work Google
If you pause the video with the woman and the mirror at its very end, you will see that the reflection is looking at the candle on the left, but the person is not. I think that no flat mirror setup can achieve that.
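(A quick plane-mirror check of that claim, my own sketch assuming an ideal flat mirror and treating the eye as a point: the reflection of a point $\mathbf{p}$ across a mirror plane through $\mathbf{m}$ with unit normal $\hat{\mathbf{n}}$, and of a gaze direction $\mathbf{d}$, is
\[
  \mathbf{p}' = \mathbf{p} - 2\big((\mathbf{p}-\mathbf{m})\cdot\hat{\mathbf{n}}\big)\,\hat{\mathbf{n}},
  \qquad
  \mathbf{d}' = \mathbf{d} - 2\,(\mathbf{d}\cdot\hat{\mathbf{n}})\,\hat{\mathbf{n}}.
\]
If the woman's eye $E$ looks toward its own image $E'$, her gaze $\mathbf{d} = E' - E$ is parallel to $\hat{\mathbf{n}}$, so the image's gaze is $\mathbf{d}' = -\mathbf{d}$, pointing straight back at her. With a single flat mirror, a reflection that stares off at a candle she isn't looking at is indeed geometrically impossible.)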
So sad, the way we're heading. Hopefully this becomes a sub-category instead of replacing all the amazing stuff we have now
The person in the mirror isn't looking at their reflection even though they are staring directly at it. In the view from inside the mirror she is looking at the camera, but the other body is still
I am so happy to finally be able to create the series and stories I always wanted to. I'm so stoked!!!
But will it be able to keep faces and scenes consistent for that?
@joech1065 probably not initially but I hope it does eventually
I HAVE ACCESS!!!
Sus(kover?)
2:50 In that example it completely ignored all the stylization requested in the prompt, even though the prompt said several times to make it stylized and abstract, so the video quality in that one is good but prompt adherence isn't great.
Insane
Soon we will have AI generated videos telling us about AI video generators
It sounds like he did that for this video.
8:46 - The driver can "see" the reflections. Why are 'reflections' any different than 'steering wheel'?
Because it's not really seeing; how is that hard to understand
@Vinei yeah, same with the flamingos and their reflections.
17:37 car in right lane is moving in an impossible way.
10:40 missing camera in the mirror, Wes.
The woman who is looking at the mirror seems to be looking at it straight on yet at an angle in the reflection
Looks better than Sora IMHO. Sora looks grainy with a grey filter on top of it half the time.
we're finally getting to the point where anything you see online has an equal chance of being fake as it does being real. we're officially cooked
Well, Google does own the biggest video platform
Finally someone that gets why Google has everyone beat. Who knew how much of an assist YouTube would be, with the creator program incentivizing more YouTube videos for a little under 20 years 😅
In the old Victorian mirror shot, in the reflection, it looks like Jason from Friday the 13th is sat to the left 🕵♂
To the right if looking in the mirror. Definitely has a weird "masked" face.
Capybaras hanging out with crocodiles, sitting on their heads, riding them: those are AI generated, dude!
The results are excellent, but the implications of this tool could be quite dangerous. Regardless, it's still a good sign that AI is still making substantial leaps within the current technological environment
OpenAI had so many millions in investment and now they don't have a single frontier model.
We’re cooked
We are only at the end of year 2 😅 we are so cooked
The woman was looking straight at the mirror, but her reflection was looking off to the right.
@1050 I think it's the angle of the face. She's looking straight on but the reflection is 3/4. Could be wrong.
those thumbs are crazy
@10:04 a blink of the eye and a slight movement of the head do not reflect in the mirror.
Thanks for the video
15:48 is neither Hyper- nor Timelapse.
Does this mean that reversing the generation, from video to accurate instructions, is possible now? If that is the case, then I guess Google can also generate accurate instructions for robotic action just from watching video
10:49 the angle the figure in the mirror is at. It’s kind of creepy because it seems to follow you lol
12:54 infinite money glitch
10:57 the problem is the orientation of the woman not being the same in the mirror and in reality.
Wow. Couldn’t tell this was AI?
In the near future, people will give a prompt (like: make a new TWD season but with no Woke), and an entire movie will be generated.
10:33 The direction the woman is facing doesn't match the reflection.
Magic has become real. Very impressive.
Soon you'll be able to make a feature movie from a prompt
Holy sh!t 🤯
I hate how good this looks. Sadly, this whole process is difficult to stop, and it makes it more and more difficult to do something actually creative and fulfilling as a job instead of manual labor, but here we are, making AI do the things that are actually fun. Of course there will always be worth in something human-made. But companies will ask:
Why pay a human when the AI can do it faster, cheaper, and soon, at least compared to what an AI can put out? The only reason might be the feeling in it, also for actors, but yeah, corpos aren't in there for fun
I'm an artist to the bones, but we have to learn and change with the progress.
It's like, if someone is a doctor, they shouldn't be sad once the world doesn't need them anymore because people don't get ill.
@ Weird comparison, comparing an unintended state of the human mind or body with art. I mean, yeah, being sick or wounded is an undesirable state, and doctors would be happy if those states didn't exist. Being able to be creative, and through monetary incentive being able to do something with it, is a desirable one.
Yes, artists need to learn to work with AI, but that will still make a lot of the work way less your own. If you are an artist, most likely a lot of your job will be making AI do a thing and fixing what's broken.
So, is it safe to say that television shows and movies will eventually be rendered completely by AI and will therefore have rendered (pun intended) actors obsolete? How soon is this likely to happen?
Capybara is friend to all ❤
Frankly, if it put lens flare in every frame it would be obnoxious.
Can we ask it to give us a video of the founding fathers
Would rather one of the founding mothers
In the video, the Victorian woman blinks once or twice and the reflection in the mirror doesn't blink, and the facial expression just seems completely off: the angle doesn't match the reflection, and the mirror face isn't looking directly back at the woman as she looks at herself in the mirror. It's definitely a clunky shot
Bear in mind, they will only show the best of their videos. It likely has a lot of the same bugs as other AI video generators