Check out the Creative Video Upscaler Video Here: th-cam.com/video/5mwKcBPlfVM/w-d-xo.html
Looking forward to watching this video!
nice video thanks dude
For someone just getting started in the AI video space (I've played with Pika for a few days when it first came out):
If I wanted to make a video, where would I get started? It feels like I need to combine so many things just to get a few clips.
This is basically facial motion capture for 3D characters, but performed on already-"rendered" images. Super interesting
I find it astonishing how just two years ago, I was writing code to coax an AI model into producing what I called "trippy," distorted fever-dream visuals. Witnessing its evolution since then is both awe-inspiring and unsettling, hahah. I find myself repeating the mantra, "I can't wait to see where it will be in a year."
Haha. Right? Sometimes I like to go back a year or two and look at old videos on the channel. Really cracks me up, looking at Discord servers and being in awe.
…and then I think of where we'll be in 2 years when I go back and watch today's video!
Wild times
@TheoreticallyMedia It is insane. My sister channel "Common Sense Made Simple" (aka iamYork) actually gained most of its popularity because I think I was the first person on TH-cam running Google Colabs with Stable Diffusion, and now I look back at some of those videos and contemplate removing them because they look so rudimentary compared to what a paid app can do in minutes in 2024. I have never seen or heard of any tech in human history that has evolved this fast. I am looking forward to what the future brings. Keep up the good work, my friend.
I get so excited when I see that you've uploaded a new video. I always learn something cool and new.
That's awesome to hear! I'll have a new one this week again! (I'm thinking Thursday!)
Basically, a "lite" version of Hollywood's mocap features could be available for the public to use soon. It will greatly help with my current ongoing projects. Neat!
Yup! That's exactly it. And no need to put a million little dots all over your face!
I can't wait until a similar system exists for game devs, so I can quickly and cheaply create animations for characters.
People've been doing it in UE5 for a while now
@BB-mt5sb Which requires studying UE5's software and mechanics, so it's specialized for people who are in that field. This process simplifies everything through automation, just like Stable Diffusion did for drawings.
@noobicorn_gamer Not at all. You can do it in a very not-complicated way. This is from 3 years ago: th-cam.com/video/wSx472esQps/w-d-xo.html
We are looking into the minds of the machines and using those minds to pull the thoughts out of our own, because our brains don't come with an integrated display.
Just tried this on a 3D cartoon character I made. It's quite beautiful. The biggest issue is whether it is more or less costly to do the renders the standard way, using Arnold or Redshift, or to use this tool. I don't have any A100s lying around, so we need to rent servers in either case. However, it's not only a question of which is cheaper; for high-end production, it is also a question of which is easier. Old method: capture audio and ARKit face data, paste into Maya or MoBu, retarget to the character, fix the weirdness (eye flutter, whatever), complete the scene, export/import Alembic for dynamic hair and cloth, recolor, then upload to rendering, download the file, import the image sequence into DaVinci, composite. With LivePortrait: shoot the video, cut it into clips, upload, download, import into DaVinci, composite. Much easier.
I have had an app for over 7 years now called Mug Life that has been able to animate still-picture faces with 100 percent custom control; plus, you could type in text and make the picture talk.
Crap, now I may need to rethink the current project/video I'm working on. Thanks a lot, Tim! But seriously, the LivePortrait stuff looks really good. As so many times before, it looks like it could make other options irrelevant.
This is incredible. I have normally kept a hard stance against using AI because, as a creative designer, it feels like losing control of creativity. But just the other day, I softened my stance. I have a project that will require multiple speaking parts, and I don't have a large budget for hiring voice actors. So I stumbled upon text-to-speech and speech-to-speech AI generation that blew my mind. I was able to alter the delivery of the voiceover based on my own vocal audio, and it worked for male and female voices. My work pipeline instantly expanded!
But now this! The mocap translation is incredible. I could find a use for this without surrendering everything to AI.
So we’ll eventually be able to remake entire movies with characters and scenes from different films. Basically complete style transfer, voices included. Weird.
Pretty much. To be honest, I'm not that interested in that-- I'm much more interested in us making our own films. Although, I can see this tech eventually being utilized for reshoots. Like, you don't even need to call Bradley Cooper back to set.
@IWannaUnaliveMyself huh? Why?
@IWannaUnaliveMyself what strange things to worry about, lol
@IWannaUnaliveMyself he's right, it's odd
Rise of the film-DJ
I see a future with a ton of horrible content generated by technicians claiming it to be animation or film or storytelling, and it's just crap. Fortunately, this will last about 6 months to a year before people get sick of watching garbage and lose interest.
It will be good, and we won't lose interest. Every little niche interest group will have its own entertainment: stories, comics and novels, games, music, TV shows, and movies.
Making product photography a lot easier, more creative, and more dynamic.
This is really interesting to watch and ... wow ... such cool development is happening! I am excited and I appreciate the quality of your reporting and the introduction to some of the functionality that they provide. Great work!
Crazy tech: being able to generate an AI character out of thin air, then make it come to life using your motions, have it look realistic or animated, then change the background and lighting... wow
You are so knowledgeable and this was so well researched. Also, I can tell that you really love this kind of stuff
There was Avatarify quite some time ago. It animated images in real time, but it was really bad at generating the sides of heads.
I LOVE AI (unlike many colleagues in the media industry) because NOW (ok, more like in 1-2 years) ANY CREATIVE individual FINALLY CAN make ideas happen with little to no cost at all (what would've needed a multi-million-dollar budget just 1-2 years ago!)
1000%. You’ll still need amazing storytelling chops and a killer idea, but the raw sculpting material will be there for you.
I mean, you don’t even need to be aiming for a “hit”- these tools let you do all kinds of interesting things, like “I had a weird dream last night, this is what it looked like…”
@TheoreticallyMedia Yepp, AI can also give you a twist a human couldn't possibly come up with, which is an asset in my book. You don't have to implement everything it suggests, but it's nice to have such possibilities.
Thanks for all of your constant, up-to-date, thorough videos! Props
I've said this before, but it's only a matter of time until Topaz adds some of these things. Imagine turning 4:3 into 16:9, black and white into color, hallucinating missing or broken frames, or whatever. I can see AI completely restoring the missing classic Doctor Who episodes. Reimagining can help with lost media that was "recorded on a toaster."
You can do a lot with Upscayl. And there are ComfyUI setups that do some of the things you mentioned, probably all of them, but they are either very slow or you need a very beefy GPU. I'm wary of Topaz, because you just never know how long they'll stay with perpetual licensing. I'm very uncomfortable with a thousand subscription services. But Topaz is definitely pricey for the full suite.
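For the "hallucinate missing frames" part specifically, you don't even need AI to get a first taste: ffmpeg's minterpolate filter does classic motion-compensated frame interpolation. A minimal sketch, assuming ffmpeg is installed and on your PATH (the filenames are placeholders):

```python
# Crude, non-AI preview of frame synthesis: ffmpeg's minterpolate filter
# estimates motion between frames and invents the in-between ones.
# Assumes ffmpeg is on PATH; "old_footage.mp4" is a placeholder name.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "old_footage.mp4",
    # mci = motion-compensated interpolation; fps=60 is the target frame rate
    "-vf", "minterpolate=fps=60:mi_mode=mci",
    "old_footage_60fps.mp4",
], check=True)
```

It won't restore Doctor Who, but it's the same "synthesize the frames that aren't there" idea in miniature.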
Man, I really like what LivePortrait is bringing to the table; a great step in the right direction! EMO and VASA-1, etc., are exciting (annnd slightly frightening), but LivePortrait's full control is a much-needed, deliberate step. Not hating on anyone who wants to re-roll lip-sync until their eyes bleed, but it's easy to undervalue just how vital highly intentional acting is: the timing, delivery, pauses, and all the complexities that go into a captivating performance that makes you care.
Do you think this likely leads to a solution that auto-composites a LivePortrait 2D facial animation onto a Viggle- or Domo-style motion capture, to wrap a full character performance in a bow?
This AI stuff is making me rethink going to art school.
Art school grad here, and I’ll say: don’t stress it too much. Art school is a great time, soak it up and create/learn as much as you can. To be a bit harsh, the diploma is useless when you get out. It’s really all about taking what you learned and applying it to building your career. Network, grow your portfolio. Don’t be a jerk to other artists (they may get you your next gig), and just keep going…
Doing the art thing is like choosing life on “hard” mode, but it’s also how a lot of us are wired, and you won’t be really happy unless you’re creating.
AI stuff can help you with some of it. But don’t worry about it taking a job from you. Lean in and learn how to use it…and that actually might help you land a gig.
Final piece of advice for art school: invest in a French press. You’ll be drinking a lot of coffee, might as well make it the good stuff.
@TheoreticallyMedia - fantastic advice right here
You're doing great, Tim. This is an exciting time!
99.2k. Just dropped in for a looksie as I've been away. Great video... but 99.2K subscribers, Tim!!!
Haha, I know it’s insane! I’m trying not to look just in case I jinx it!!
I see so many TikToks where people have to label the character they're pretending to be with text to make the joke work. It's gonna be so different when you can actually make it feel like the video was filmed in real life with those people, but with different facial expressions and voices. Great video as always.
100K subs seem to be right around the corner. Congratulations.
It's just crazy. Thank you so much!
@TheoreticallyMedia Yeah. But you do a good job. You broke news about Udio to me.
There is also a ComfyUI workflow for relighting stills based on lighting maps. The only thing missing is a script to take exported stills and batch-apply the nodes to do a whole movie, or at least scenes. But that's way above my pay grade. Anyone good at scripting?
Whoa. I’ll have to look into that. Sadly, I am not the guy for scripting! Unless it’s a movie script: that I can do!
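For anyone who does want to take a crack at that script, here's a rough sketch of what it could look like, driving an exported ComfyUI graph over a folder of stills through ComfyUI's local HTTP API. The node ID, paths, and file names are assumptions; check them against your own workflow saved in API format:

```python
# Sketch only: queue one ComfyUI job per exported still.
# Assumptions (verify against your setup):
#   - ComfyUI is running locally at 127.0.0.1:8188
#   - workflow_api.json was exported via "Save (API Format)"
#   - node "12" is the LoadImage node (hypothetical ID -- check your JSON)
#   - the stills have already been copied into ComfyUI's input/ folder
import json
from pathlib import Path
from urllib import request

COMFY_URL = "http://127.0.0.1:8188/prompt"
FRAMES_DIR = Path("frames")    # exported stills, e.g. shot01_0001.png
LOAD_IMAGE_NODE = "12"         # hypothetical: find yours in the JSON

with open("workflow_api.json") as f:
    workflow = json.load(f)

for frame in sorted(FRAMES_DIR.glob("*.png")):
    # Point the LoadImage node at the next still, then queue the whole graph.
    workflow[LOAD_IMAGE_NODE]["inputs"]["image"] = frame.name
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = request.Request(COMFY_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        print(frame.name, "->", resp.status)
```

Scene-by-scene batching would just be one folder per scene.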
This would've been perfect for that movie with Kevin Spacey they re-shot
Haha. I keep thinking of Henry Cavill's 'stache in Justice League, but yeah, also that!
Hi Tim, did you ever try Hedra? I cannot remember if you mentioned it in your previous videos. I'm looking forward to the future of AI and your next movie trailer 🍿🎥
Definitely not blown away by either tech, but I love the people who are bringing us ever closer to blown-away results 👍 👍 👍
This game is getting more and more interesting
I truly enjoy this channel
How do you keep up with all these developments??
Haha, I gave up sleep in March of 2023.
...are you going to drink that coffee? I'll drink it if not!
@TheoreticallyMedia haha - have you done any videos showing the behind-the-scenes, methods, and equipment for making your channel? I'd be very interested... just started my own channel...
Hi! I know what Hugging Face is, but what is Comfy, please?
ComfyUI is an interface you can use to run these AI models locally on your own machine. That said... it is not... easy. Or rather, there is a pretty heavy learning curve, and it does require a fairly beefy machine.
I play with it here and there, but I don't wade into tutorials on it-- mostly since...well-- It can be really hard to explain!
@TheoreticallyMedia AH!! Yes, I saw that on other tutorials and the installation process went on forever!! LOL. I have an Nvidia GTX GPU and it's fast, but I agree, it looks like a pain to install. I want to get in and play! LOL. Thanks for getting back to me. I love your videos! So who put the app on HF? That was very kind of them.
Soooo, can you even get a landscape layout you can use in an AI video?
Are there any walkthroughs on making AI videos?
I’ll have a pretty big one coming up on Monday!
Always interesting videos Tim. ✌
How different is Magnific Relight from Magnific Style Transfer? They seem very similar in my brain...
Claude writes some of the best space-opera radio drama scripts. I wish there were an AI that could turn them into audio, including the sound effects and different voices. This is a snippet of a radio drama script it wrote.
Title: "Andromeda's Call"
[Dramatic orchestral music swells and fades]
NARRATOR: In the year 2157, the starship Odysseus embarked on a mission to explore the outer reaches of the Milky Way. But fate had other plans...
[Sound of alarms blaring]
CAPTAIN ELENA REYES: All hands, brace for impact! We're caught in the gravitational pull of an uncharted black hole!
[Sounds of chaos on the bridge - people shouting, equipment crashing]
SCIENCE OFFICER DR. AMIT SHARMA: Captain, our sensors are going haywire! The fabric of space-time is... tearing!
[A deafening roar builds to a crescendo, then sudden silence]
CAPTAIN REYES: Status report! Where are we?
NAVIGATION OFFICER LT. ZARA CHEN: [voice shaking] Captain... according to our star charts... we're in the Andromeda Galaxy.
[Gasps and murmurs from the crew]
DR. SHARMA: Impossible! We've traveled over 2.5 million light-years in an instant!
CAPTAIN REYES: [gravely] Crew of the Odysseus, our mission has changed. We must find a way back to Earth.
[Soft beep of the ship's computer]
A.I. AURORA: Captain Reyes, based on our current propulsion capabilities, returning to Earth would take approximately 100,000 years.
LT. CHEN: [despairingly] We'll never make it back...
CAPTAIN REYES: We can't give up hope. There must be advanced civilizations in this galaxy. We'll find a way.
[Inspirational music begins to build]
NARRATOR: And so began the Odysseus's quest to return home - a journey that would push the boundaries of science, test the limits of human endurance, and uncover the secrets of a strange new galaxy.
[Music swells]
DR. SHARMA: Captain! I'm detecting an artificial structure on a nearby planet. It appears to be... some kind of ancient alien gateway.
CAPTAIN REYES: This could be our ticket home. Prep the landing team. Our greatest adventure is just beginning...
[Music reaches a crescendo, then fades]
NARRATOR: "Andromeda's Call" - A tale of hope, discovery, and the unquenchable human spirit. Coming soon to your preferred audio drama platform.
[End theme music]
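A full EMO-style production is still a wish, but a bare-bones DIY read-through is already doable offline. A minimal sketch using the pyttsx3 library (pip install pyttsx3): it parses the SPEAKER: lines, cycles through whatever system voices are installed, and skips the [bracketed] music and SFX cues entirely, so no sound effects and no acting:

```python
# Toy "table read" of a radio-drama script: one system voice per speaker.
# Assumes pyttsx3 plus a platform speech engine (e.g. espeak on Linux).
import re
import pyttsx3

SCRIPT = """NARRATOR: In the year 2157, the starship Odysseus embarked...
CAPTAIN ELENA REYES: All hands, brace for impact!
DR. SHARMA: Impossible! We've traveled over 2.5 million light-years!"""

engine = pyttsx3.init()
voices = engine.getProperty("voices")
assigned = {}  # speaker name -> voice id, assigned round-robin

for line in SCRIPT.splitlines():
    # Match "SPEAKER NAME: dialogue"; [stage directions] won't match.
    match = re.match(r"^([A-Z][A-Z .'\-]*):\s*(.+)$", line.strip())
    if not match:
        continue
    speaker, dialogue = match.groups()
    if speaker not in assigned:
        assigned[speaker] = voices[len(assigned) % len(voices)].id
    engine.setProperty("voice", assigned[speaker])
    engine.say(dialogue)

engine.runAndWait()  # speaks the queued lines in order
```

Swap in a modern TTS API for the voices and layer in a sound-effects library, and you're most of the way to a rough cut.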
Did your studio lighting get better?
Haha. It was actually worse today with some super bright light coming in from a skylight, so I had to do some color correction, which I rarely do!
Hmmm, I’ll check the settings and maybe try it out tomorrow!
@TheoreticallyMedia just noticed something different. Love your vids btw 🙌
Instead of training on 2D images, could the corpus be volumetric data, or synthetic volumetric adaptations of 2D images?
Generative NeRFs = future?
DeformToon3D?
100K! What are you going to do to celebrate when it happens?
I was thinking the same - so close now
I'm a big fan of Magnific, love it. Best upscaler and style-transfer app. You just know how to set it up 😉
Magnific is 39 euros per month… no way am I willing to pay more for that than for my Netflix subscription 😂
Haha, believe me, I know. I'm hopeful at some point they lower the price, or at least have a few other subscription tiers.
LivePortrait, wow. Another big step towards creating (music) videos with singing/talking/expression. The quality looks great; it almost surpasses the EMO stuff. And the video-to-video examples I've seen today are crazy! Now we need a good (video) pose-transfer application to make the "acting" stuff complete.
The V2V stuff is so exciting! And I think it's a really great approach, since it still requires a human driver. That's something I've been preaching for a while now: we've got all these awesome AI tools, but it's all to bring a person's vision to life!
@TheoreticallyMedia Yes! Co-creation!
AI making 3D models and AI acting; now put that together with upscaling and you've got yourself a movie. Of course, we need larger-context models to write better scripts, but we can still use NovelAI to help polish ideas. Can't wait for people to make better versions of older movies. One I always wanted a decent version of was The Abyss; the childish ending and the plot after the middle part are just dumb.
Haha, it's been a while since I've seen The Abyss... and I almost went to check it out again with the rerelease. Your review makes me think it may live better in my head.
You are welcome!
10:20 Topaz also has a video upscaler
It does. And I commonly use it, but it doesn’t do the creative upscale part. Which I understand most don’t want, but I’m really interested in cranking knobs in Krea and getting some weirdness out of it!
Good stuff, Maynard! But also: why would someone not want to lock down the balloon head? And also 2: Have you thought about reviewing the new AI shorts coming out in the future?
Are there any AI tools where you can use a flat picture (say, a MidJourney gen) and get a live 2D VTuber model back? That's the dream for me. 😅
That first thing seemed kinda like that, but I dunno if it's live yet, or if it's using image-to-avatar, or what. (Gonna watch more of the video now to see if those get answered.)
Not in real time. And to be honest, most of the image-to-3D stuff is still kind of lacking, but it's getting there. @bycloud might have some good resources on that front!
As always, amazing!!! Already working on an NFT/web3-based animation series/show using AI only. Let's work together on it; maybe one day Netflix will stream it, why not. 👀👀
Thanks for all your hard work testing and researching all this. Much appreciated!
Great stuff as always!
Thank you so much!!
Correction, it's Dwight KURT Schrute.
The future of Hollywood, and pretty much all creative industries, is going to be wild!
1000% agreed. I'm really looking forward to seeing new and inventive films and shorts popping up all over the place!
Great vid, as usual. Isn't it supposed to be "mag-nee-feek," like the French word "magnifique"?
Haha. Yeah, that’s my barbaric American accent. You don’t want to hear me say croissant.
Doesn't look real at all. What are you talking about?
Yeah, I was 3 years away from making a feature film; now I'm 3 months away from making an AI-assisted film.
Insane, isn't it? I still say: get started today! In 3 months, we'll have a WHOLE new crop of tools to play with!
@TheoreticallyMedia I feel like it is getting better by the week, if not by the day. You rock!!! Thanks again.
The face my dog makes when he eats too much cheese
I don't know how she keeps managing to score it! Pepperjack was the worst!!
@TheoreticallyMedia 😂😆
Can you share the workflow?
Epic content as usual. Thanks for keeping us on the bleeding edge of AI!
In the near future, everyone will be able to make a movie by simply describing the characters, the scene, and the expressions on their faces, and pufff, a new movie is made.
Bruh, I act in amateur movies. These have been really rough years for artists who don't want to direct stuff and would rather do the craft itself, drawing, writing, or acting, than type prompts.
It looks like it has a long way to go, and we haven't seen much improvement in AI lately. It looks like a bad Photoshop warp that has been smoothed and animated, but it's going off-model and the volume is breaking.
I went to an MS Office course once; it was 6 months, and I could have continued for another year to become an expert, and that was just MS Office. So does anybody expect me, or anybody trying this for the first time, to understand all of this?
This is finally some newer AI that is AMAZING!! I ran some examples yesterday on Hedra.
Curious how the big movie and animation studios are embracing this.
Cool video. The relighting section was very cool; looks like a lot of fun to play with.
Super cool. Love your videos brother
Thank you so much!!! Really means a lot!
We live in such weird times lol 😆
Haha. Every day it gets weirder and weirder!
One day, nobody will be real... I love it 😂❤
Video-to-video is cool 😊
Great content. Thanks for your honest reviews!
If this gets good, imagine what it will mean for filmmaking.
Nice, I'm surprised with how positive the community here is
Another great video, Tim. Been playing around with the character-expression AI and it's very good. Not perfect, but given time it will be a very good tool to have in the AI toolbox. :D
I've got a look at some of that coming up later this week. Dropping a little AI movie tomorrow, and then I'll do a breakdown here. I'm thinking of taking a really hard look at what is and isn't working currently!
@TheoreticallyMedia Looking forward to the video, Tim 😀
Another great video
Thank you man!! Super appreciated!!
Great video. One good thing I can see maybe resulting from this tech is that a lot of people who love acting but who are not in their best appearance years may have a shot now at acting and appearing in films.
Still clunky, but once these tools are ready for widespread, mainstream use... yeeeeeesh. Not just a game changer; a whole new dimension.
Great channel!🌌💟
Thanks Tim, fruitful time indeed!
Hello! Are you a nu-metal father? Or heavy, classic?
Looks like I need to go to school to learn all this.
Yay! Finally time to bring out that dusty acting diploma! I KNEW I got that for a reason. And as always, thanks for the video Tim 🎉
Ha! It’s true though! I can’t wait to break out my webcam while explaining to my family that I can’t possibly mow the lawn right now because I’m “in character” and they need to respect my process!
@TheoreticallyMedia haha 😂
Okay... so what about AI voices?
Saw one site that does it, but every voice sounds like a 20-year-old or has a faked accent. Pretty worthless.
Any suggestions about that aspect?
Seen Fatboy Slim's new music video for "Role Model"? Good silly AI fun. :)
I have not! I'll check it out today!
I mean, Fatboy Slim already holds the record for best music video ever!
@TheoreticallyMedia Weapon Of Choice? :)
Someone in a bar once asked me if I was Fatboy Slim... like 20 years ago 😅
Hope you enjoy the song and video!
Rockstar
Next video I'm wearing a leather jacket and pants! Oh, and I guess I gotta grow my hair out again! Haha
I like Remini more for video upscaling.
This is cool af. The "people making their own shows and movies, seeing their own stories come to life one day" part, that is. Not the "crap-loads of people are gonna lose their jobs because of this stuff" part, though.
This rocks! Thanks for doing this video. This is a HUGE step. I just did my first one and it looks amazing. Some minor head-movement issues, but this is a major leap!
Hey hey! 100K is coming, Tim!
So crazy! Can't believe it's almost here!
Music video using Krea AI Scene transfer:
th-cam.com/video/5EnNpi-rfSQ/w-d-xo.htmlsi=q6Iy4H05CEcf6Eq2
This AI tool is very useful, but just as it can be used by people for good, it certainly can be, and most likely will be, used for bad.
Great as always, thank you
Pixar must be jelly right now. What used to cost millions to animate can now be done in 30 seconds on a personal computer that costs maybe a thousand.
I mean, practically a phone as well! Gates are down, time to get creating!
I'm a veteran feature-film animator who is working heavily with AI right now. None of this is close to being threatening to Pixar. It may look like it to the untrained eye, but this won't get you anywhere near what they can do as a studio. Give it 5 years... maybe? AI needs to break away from the 2D plane; it's still very much trapped in a flat world.
Are you related to Chael Sonnen?
Cool!
I'm pretty excited about it! I've got an AI film dropping tomorrow, and I really could have used something like what we see here!
Brace yourselves, an avalanche of AI-avatar TH-cam channels is coming.
Yeeeeeah, that's bound to happen. That said, TH-cam is hard... I mean, no two ways about that, and the algo will straight up bury low-effort content. Still, I think an AI avatar channel could find success, but the content will have to be top-notch. Someone will eventually crack it, and we'll have the MrBeast of AI avatars, but, like MrBeast, it'll just be the one... and a ton of terrible copycats that get, like, 3 views.
Thanks lov 🥰
Thank YOU!
😮👀👀👀
Thank you very much Tim.