Oh my god, I was actually a bit stunned at the end that it was an AI voice trained on your voice, and that good. I thought the whole narration was a bit "monotone" and flat in its delivery, so I was like "hmmm... okay, maybe that's the tone they're aiming for," but I didn't expect it to be AI (and I'm quite deep into audio editing and sound design...). Did you use Respeecher for the voice?
You also missed the part where it took them hours to fix the coloring and exposure to make it look real, and that's if they know what they're doing... it would take an amateur days, and they'd never get it this close. And that's just for one shot lol
@@GameUnkasa lol he said "now" and "with this tech," meaning they'll be able to without any more technological advancements, but at least you can read the first half 👍 Honestly he isn't wrong; it just needs to be noted that people shouldn't give up on art just because any kid can make art now. The point is that, yes, it's easier to create certain shots now and it will get even easier, but there are still very difficult steps to creating an entire film that will stay very difficult, probably forever.
Kids with phones have been able to make movies for several years now. 99.99% of it is pure shit. The ones who persist will get better if they learn from their mistakes and build on their successes.
This is a great idea. I recently started writing my own web series for YouTube, and I was worried about how I was going to do things, so this is a great idea. Thank you so much; this opened so many possibilities for me. I can do it :)
I'd love a tutorial on how to incorporate these assets into Blender and track a 3D camera for parallax. As much as I love this and am trying to figure out my own workflow, it looks like a green screen at best. There's something about it that my brain just won't believe. Maybe adding some kind of extra moving noise to the generated images would help among other things. It just looks like a well keyed green screen ... and that is not exactly convincing. Maybe some handheld movement - but in After Effects for parallax.
That’s not that difficult if you have basic 3D skills, just enough to create simple geo. Search for camera projection tutorials and also for camera tracking and have fun!
Cool concept. It seems to have the best results mimicking an indoor "set". With an inoffensive background it's a great way to extend a "greenscreen" set for talking-head videos. It could still be challenging to make sure the real-world lighting stays motivated by a fake environment. It feels a bit uncanny for the outdoor scenes: the outdoor lighting/shadows were a bit off, and the lack of background movement (wind/running water) felt unnatural.
totally agree! The generative fill is much better at "real environments" vs the apocalyptic world we tested in this video. One thing that really did surprise us about the generative fill is that most of the time it really tried to keep lighting coming from the correct direction, with the right color and hardness. Not perfect, but we were consistently impressed by how it referenced the image's lighting as a whole before generating the new sections.
@@jamesepiclight I noticed that as well but only after color correction and other post effects were applied. At first placement the elements have a paste-in feel, as you don't notice the basic continuity of light values. Color correction (continuity) makes so much difference in image editing in general relative to perceived dimensionality, etc.
Wow! I could not believe the environments in this video were created with Photoshop and enhanced with DaVinci Resolve. When the time comes, I will try it out. Thanks Epic Light Media.
4:47 on the left edge, you can see a little bit of the border between "real" and "fake" but then, you added a vertical frame feature in the background which justified that vertical line quite nicely. Human art aided by AI, this is the future!
I never subscribe to any channel solely based on someone in the video asking me to subscribe. Upon seeing the "do not subscribe" comment in the end, I immediately subscribed :)
Awesome video! If you hadn't told me this was made with AI, I wouldn't have known or been looking for imperfections like the water not moving. This looks like an easy process for even the most basic creator, especially if you're in a tight space.
This generative fill definitely opens up possibilities. I'm surprised to see how well the AI-generated content resolved in a 12K timeline. My understanding is that it produces content at 1024 resolution, so I guess the trick is to do a basic backplate and then regenerate smaller chunks of it to up the resolution.
For any AI programs that produce low resolution output, you can then feed that into an image scaling AI to get it up to 4K or 8K or whatever. But a lot of the programs do let you increase the render resolution if you poke around.
@@RavenMobile yes, there are lots of upscalers out there designed for various types of images (photo, anime, digital art, etc.), but it's tricky when patching up a higher-res image to know how big a selection to make so the fill comes in at the highest resolution available and still matches the rest of the image.
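The chunked-regeneration idea from this thread can be sketched as plain tile math. This is a hypothetical sketch: the tile size, overlap value, and the per-tile fill step are assumptions for illustration, not Adobe's documented limits.

```python
# Hypothetical tile math for regenerating a large backplate in ~1024 px chunks.
# tile/overlap values are assumptions, not Adobe's documented limits.

def tile_regions(width, height, tile=1024, overlap=128):
    """Return (x, y, w, h) boxes that cover the canvas with overlapping seams."""
    step = tile - overlap
    boxes = []
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            w = min(tile, width - x)
            h = min(tile, height - y)
            boxes.append((x, y, w, h))
    return boxes

# e.g. a UHD backplate: each box could be filled/regenerated separately,
# then blended across the overlap to hide seams.
boxes = tile_regions(3840, 2160)
```

The overlap matters: regenerating butted tiles with no shared border tends to leave visible seams, so each tile repeats some pixels of its neighbor and the results are cross-faded there.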
I subscribed immediately after seeing this video. I've been frustrated with the visual quality of my videos, so this should be a game changer. I hope you follow this theme, giving all the details on how to make this happen and refining the process... I think everybody wants to know more.
Awesome stuff! You have left no doubt in my mind that I will be buying a green screen in the near future :) Still, for professional stuff I can easily see the value in this.
Am I the only one who could immediately recognize that the voice was AI? There's something about the inflections that just isn't 100%. Really cool video! I can't wait to play around with the tools in Photoshop!
@@moomoocowsly Interestingly, I'm pretty much the same, but I follow all this stuff closely as well. ChatGPT is the easiest of them all, though: it basically uses the same templates for nearly everything, even if you request different styles, etc.
@@moviedorkproductions9465 The video editing part was demonstrated in DaVinci Resolve, but the AI part has to be done in Photoshop, as there is no generative fill in Resolve.
Man i was legit just thinking about how to use generative fill in videos. Of course, I wasn't the only one with those thoughts going through my head. Cool ideas!
Wow, really impressive. I don't know if I'm in the minority here, but I would have really liked to see a more granular and descriptive segment on the actual creation and editing of the import, and post-import. You either skipped, or zoomed past, a lot of small details that a newbie like me would really like to understand.
I think the room lighting thing was absolutely amazing. I did notice that the voice-over was AI. The only thing in the commercial footage I noticed was that the generated cement of the road before and beyond the actor was clearly flatter and faker than the actual concrete he was standing on.
The voice immediately stood out as AI... but those dining and room scenes were just amazing. With good enough light, TH-camrs won't need to deck out their sets again to have lovely-looking backgrounds. This is inspiring.
Very creative guys. Let the record show, Epic Light Media was the first to highlight this feature and all other videos were shameless copycats. The amount of copycat videos this week is honestly disturbing. I will not name names. But it doesn’t matter. This is the OG. All others are copycats.
Thanks for the creepy ending. 😆 Great tutorial; it definitely can be useful for vids that don't need to move the camera -- I could take my tiny space and turn it into a mansion! Or at least a larger studio to work with.
This is something that will save a lot of time for my corporate interviews. My question is, when do you disclose that you used AI in a project? Do you disclose when you've tweaked a lamp, couch, window, etc.? I know that Adobe's current iteration has some interesting terms of use as well. I used AI to spruce up a boring black background in a recent project and will definitely use the method again.
@@moomoocowsly good point. I hadn't considered that we don't disclose CGI (or even simple compositing) that alter the images we see. But is AI enough of a departure from a more hands-on approach to the art we create, that it warrants a disclosure?
@@brentwpowell YOU NEED TO DISCLOSE AI , it is in every AI software TOS , dont know how would you pose with that before your boss tho , probably it will take a few years and it will become mainstream.
@@poti732 I'm under the inclination that you should disclose, and have on most of my outputs, but not all software requires disclosure. The generations shown in this video were via Adobe Firefly, which is currently in non-commercial beta, so the disclosure is necessary for fair-use commentary. I've come across other implementations that don't require disclosure in their TOS. Many bosses will love this stuff, faster turnaround and higher concept productions, even if they have to disclose.
@@moomoocowsly using AI is different from photoshopping or using CGI since those two still involve the human touch, whereas the AI is generating the whole content for you; it's as if you were commissioning/outsourcing it to someone else. If you were to heavily alter the AI-generated content or simply use it as a reference, however, then a case could be made that you no longer need to disclose it.
Great video. I don't have the skill set you folks do, but I have a plain back wall and have played a little with generative fill, and your video has shown me how.
voice got me. I thought it had no emotion, but that happens when reading a script; it's definitely better than so many text-to-voice readers. What was used for the voice? As for the videos, from the get-go my brain did not accept them. I thought it was full green screen: the shadows were not right, too sharp, just wrong. One of those things where your brain goes "there's a problem here," and from the get-go you're looking at every little thing for some unknown reason because something feels off. Overall fantastic.
haha, i clicked on this video on a whim, and that post apocalyptic cell service commercial caught me waaaaayyy off guard....i literally laughed out loud. nicely done.
First of all if I could Subscribe 1,000 times I would so go ahead and hate me for that. This channel seriously upgraded my videos, basically within a week. I watched all your videos. If I could work with you I would. You guys are awesome.
@@EpicLightMedia Yes! You guys really got me thinking about affordable options, for example, a flag/scrim kit. Maybe you could make a “homemade” DIY version of the Modern Scrim Set?
Question: how do you make the generated background look more like natural moving video? Context: when we watch any high-resolution video we can feel motion, because the sensor captures a constantly fluctuating flow of light; even though our eyes can't see any visible grain when we focus on the content, we can sense it pretty well. Now, when you put a picture on top of a video, the video part gives that natural moving feeling, but what about the PNG part? Pro editors will agree with me that in low light and at low sharpness the video part gives that nearly unnoticeable grainy vibe, while the PNG will not.
For now, the generative PNG has a 2D-image appearance because of its flat color and lack of grain. Adobe needs a little more trickery before this feature can be used natively in Premiere Pro.
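One way to fake that grainy vibe on the PNG layer is to overlay fresh noise on every frame, so the still "boils" like a real sensor instead of sitting perfectly static. A hedged sketch (the `regrain` helper and the sigma value are made up; in practice you'd tune or measure sigma from the plate):

```python
import numpy as np

# Hypothetical re-grain sketch: add fresh Gaussian noise to a generated
# still on every frame. sigma is an assumption you would tune per shot.

def regrain(still, sigma=2.0, seed=None):
    """Return an 8-bit copy of `still` with per-call temporal noise added."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=still.shape)
    return np.clip(still.astype(np.float64) + noise, 0, 255).astype(np.uint8)

# One still, new noise each frame: two consecutive "frames" differ subtly,
# which is exactly the temporal shimmer the flat PNG is missing.
plate = np.full((4, 4, 3), 128, dtype=np.uint8)
frame_a = regrain(plate, seed=1)
frame_b = regrain(plate, seed=2)
```

Synthetic Gaussian noise is a crude stand-in; grain extracted from the actual footage (as the compositors in this thread describe) will match better.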
At 8:12, when you brought the image in, there was a color difference. I'm having this same problem, and the only tools I have are Photoshop and FCP. You went to Rec. 709 color. I used a tripod and a simple indoor background, but I still get this problem. Any suggestions? I wasn't sure how to change mine to Rec. 709, or how to know what to change to in order to get a match.
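Short of a proper Rec. 709 conform, a crude statistics-based color match can get a generated still closer to the plate. A hedged sketch (this is only per-channel mean/std matching, not what Resolve or FCP actually do, and `match_stats` is a made-up helper):

```python
import numpy as np

# Hypothetical quick-and-dirty color match: shift the generated image's
# per-channel mean and standard deviation toward the plate's. A real
# Rec. 709 conform does far more (transfer curves, gamut mapping).

def match_stats(src, ref):
    """Match src's per-channel mean/std to ref's (both HxWx3 float arrays)."""
    out = np.empty_like(src, dtype=np.float64)
    for c in range(src.shape[2]):
        s = src[..., c].astype(np.float64)
        r = ref[..., c].astype(np.float64)
        s_std = s.std() or 1.0  # guard against flat channels
        out[..., c] = (s - s.mean()) / s_std * r.std() + r.mean()
    return out
```

After the transfer, each channel of the output has exactly the reference's mean and spread, which removes the most obvious "pasted-in" color offset; fine matching still needs manual grading.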
4:18 I'm not sure it was a good idea; it looked like the light was motivated by the "window," plus you got a lot of banding, because we're on YouTube and it handles gray gradients badly (like Adobe software, though).
At 7:09, couldn't you just make simple masks where the lighting is prominent, select the blue channel with the color picker, and reduce saturation as a quick fix? I guess the fact that your shirt has blue hues and some of the lighting bleeds into the clothing's folds makes the process a little more complex. I still think it would take a relatively short amount of time to fix, given the symmetrical nature of the shot.
Wow this was pretty awesome. Great idea, dudes!
Instant pin! Casey is a GOD!
Such a great idea that other popular TH-camrs are making exact replicas of your video th-cam.com/video/-DMU_pJ0YNY/w-d-xo.html
@@EpicLightMedia 💯💯💯
The plot twist about the AI narration at the end left me speechless too. It's incredible to see how advanced AI technology has become. (Comment generated by ChatGPT)
Literally mind blowing. The entire thing was very well done.
@@TheRafark Yeah. I was like "this is some good narration..."
sounded like one of those skyscraper videos ngl, still kinda far from human
It was pretty obvious
@@ClaimerUncut yep
a few notes from a professional compositor. Whenever you add still images as a matte painting, you need to add back moving grain that matches the original footage. This can be done by using a denoiser on the original clip and then overlaying the raw noise it captured onto the PNG. I also noticed that some of the fills look either too sharp or too soft in areas where they came up against the original clip. You could probably fix this by increasing the Photoshop file's resolution and selectively blurring/softening it after adding it next to the clip.
Your results still look really good considering you guys aren't VFX artists. It is really impressive how far AI tools have come in the last year.
I also miss some actual overlap with the generated content; it feels really stiff when they're just standing/sitting in place instead of wandering around a bit with rotoscoping over it.
Could you give us a minute of your time and explain how to obtain the noise from the original matte? Or recommend a video where they explain how to do that? Thank you so much.
@@pabloromano6845 Not a VFX/compositing pro either, but you could capture a clean slate, i.e. several seconds of a still image from a tripod, and then extract the temporal noise by subtracting one frame from another. There might even be a matching blend mode (subtract, difference) where you would overlay the clip with itself, but shifted one frame.
If there is no clean slate available and your noise filter has no "retain only noise" option (which at least some audio noise filters have), you could denoise your clip and then use a subtract/difference blend mode to overlay the denoised clip with the original one.
That is probably not as efficient and precise as filming a gray card with the exact same camera, settings and under the exact same lighting conditions to get "pure" noise - but for sure it will be more authentic than just slapping some arbitrary film grain onto the clip.
Perhaps @LFPAnimations can tell us a couple more secrets of the trade.
Damn. Great tip!!
@@enricopalazzo2312 denoising and then blending between a noisy and denoised clip set to “difference” or “subtract” blend modes are the industry standard way of extracting noise. You basically guessed it which is cool ;)
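For anyone wanting to try it, the difference-blend extraction described above can be sketched in a few lines. A tiny box blur stands in for a real temporal denoiser here, and all helper names are made up for illustration:

```python
import numpy as np

# Sketch of difference-based grain extraction: denoise the frame (here a
# small box blur stands in for a real denoiser) and subtract it from the
# original. What remains is mostly grain, ready to overlay on a still.

def box_blur(img, k=3):
    """Small box blur used as a stand-in denoiser (grayscale float image)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def extract_noise(noisy):
    """'Difference' blend of a clip with its denoised self: noisy - denoised."""
    return noisy.astype(np.float64) - box_blur(noisy)

rng = np.random.default_rng(0)
clean = np.full((32, 32), 100.0)
noisy = clean + rng.normal(0.0, 3.0, clean.shape)
grain = extract_noise(noisy)  # roughly zero-mean grain layer
```

The extracted layer is roughly zero-mean, which is why overlaying it with an add/overlay blend on the matte painting restores believable grain without brightening the image.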
The narration voice was lowkey giving me serial killer vibes and was kinda scaring me subconsciously. The plot twist at the end made me feel much better.
Everything about this video is just incredible, but what really left me speechless is the plot twist about the AI narration at the end. AI is really becoming incredibly advanced.
Really? After like 5 seconds I was already wondering why the narrator was talking so weirdly, and by 10 seconds I knew I was listening to AI.
I don't know how this stuff still manages to fool people.
the “ha ha ha” becoming more and more sinister was a nice touch 😂
AI is still disappointing and sucky once you play with it for a while
@@PolymerJones Thats what she said
@@PolymerJones I don't see how it's disappointing, being able to give you information, sound, imagery, and many many other things in a fraction of the time a human would need. As for the narration, no, I did not realize it was AI; it sounded fairly fine. And if you just think about how young AI actually is versus how much it can already do... give it 3-4 years and it's going to be tremendously smarter than we are. I mean, it already is. The only thing that limits it right now is us as humans.
Absolutely epic stuff here, using AI for environments is such a smart move. It's like boost app social, an app I use for my social media, they're also using AI to create stunning stories and reels. Tech these days!
Been using it for a while now, great toolkit for socials!
Mindblowing!! Awesome!!
I miss you guys, another of my favorite TH-cam channels turned into a ghost town. hope all is going well with your business, maybe one day you will be back or at least you will feel inspired to give us a farewell video.
Love the cell service parody commercial. You guys could make a whole channel doing them. Keep up the great work.
For a small creator who struggles with space (I only have a corner of our second bedroom, which is also my office corner), being able to use this tool will let me create a much more professional-looking setup without actually needing one.
I would probably go one step further with this: when you're in Photoshop, remove the subject from the plate, so then you have a solid background.
HEY! The fact that we can't move the camera brings us back to the birth of cinema, where creativity worked exactly like that. NICEEE
We can't deny it's helpful and will change the industry forever. The only flaws I could spot were the river water not running and the tree leaves with no wind movement at all in the outdoor environments. Apart from that, it's perfect! Indoors, with unanimated objects, it's insane what it can do.
Thanks for bringing us this video, guys 👏🏼 I would love to watch a more detailed tutorial on matching the scenes in the video editing software.
The thing to keep in mind is that this technology is as bad as it will ever be; it will only improve from here. So the flaws you (rightfully!) mentioned will be gone sometime in the future, and at the rate this is all developing, they will be gone sooner than one might think.
No one's denying it's helpful or industry-changing, just that it's going to put a lot of people out of work in many industries. Last month 40,000 jobs were already lost to AI in the US. It's already happening, and capitalism is not prepared for the mass unemployment AI is going to eventually cause. Even jobs people claim are safe, like programming, are absolutely not safe: AI is already able to do most website coding and will eventually be able to code 95% of apps and software just based on user prompts.
it can only get better from here
I think this will be a great tool for educational videos. A cheaper way to create sets without the need for green screen.
Great process! Creating cinematic mattes with AI is so much fun. These lighting tips really up the game.
Next step: combine this technique with Resolve Relight. We're living in the future
Resolve relight is not very good tbh
@@Visethelegend sounds like you don't know how to properly use it 😱
@@GameUnkasa sounds like you are not a real colorist 🤯
I have to try
@@Visethelegend hello
New subscriber here, so the following may make more sense. Adding the blue light and talking about it prior to showing its effects may have caused a bias: it does seem like a neon light is hitting you from that direction, but that may be my bias. I did immediately notice the AI-generated narration, as I'm trying to do this with my own videos. It will be interesting to see how many commenters were surprised by this. I'm not sure if the tech is totally here yet; the narration was good, but only because I saw your face many times in the video, which told me those shots were not brought in from stock footage. Great video! Can't wait to see more videos like this!
Love to see that someone finally made a tutorial on this; I thought I was the only one thinking about it. I directed a music video for some pretty famous artists in Germany, which released yesterday (PA Sports - Doktor). I used the same technique combined with digital zooming and some stock video assets to bring in a little bit of movement.
Oh wow! I’d love to see it!
Just checked it out, great work! Very curious which parts were AI and what other kinds of tools you used.
Awesome work on your video, really dig your style!
Loved the video!
Congratulations guys. This link popped up on a mailing list I follow. Lucky for me, I had already not subscribed to the channel. You’re going to be getting a lot of new views.
The shot using the overpass looks kinda kludgey to me and might not "feel" right to some viewers, but the rest seems quite plausible. A real boost to lower-budget commercials and the like. Nice work, and thank you for sharing! I have a new project started where this may solve some budget problems.
You are right, some of the AI stuff does look "kludgey", but it's still pretty impressive at the beta stage. I'm excited about using these tools even at the concept or storyboarding stage. Also, Adobe's generative fill seems to do better with real-world objects/spaces vs "apocalyptic destroyed cities".
In the first 30 seconds I thought, did they get a new narrator? Then when you appeared on screen I thought "hmmm"; I knew something was up with the voice. It was too clean and unnatural.
As for the video, this is incredible. You have just changed my life. I have all the tools to do this, and it's solved a big problem I see in the near future for an upcoming project. Just wow. I just hope all the generated assets are owned by Adobe and don't screw over independent artists.
Creating new AI environments using the generative fill feature in Adobe Photoshop is an incredible way to explore creativity and push boundaries. Keep up the great work!
Adding a window to your room is really pushing the boundaries of creativity
the AI narration at the end left me speechless
Wow - even a twist ending! Bravo!
I think it is a great tool, and the question becomes whether you film on location, or whether the project is such that this is perfectly workable. Filming on location allows for more dynamics. And like many, many post tools, this could let you do a quick pickup shot, or save a shot when budget or time does not allow a reshoot or pickup.
The AI voice was the biggest shock tbh... Hardly noticed that bit at all.
The rest of it is convincing enough for YouTube. More than anything it will allow smaller budgets to have nice studio looks.
If you play into it a bit, it could be a good move for some.
Lost count of how many times my mind was blown during this video...
Wow, this video is an absolute game-changer! 🚀 The concept of using AI environments to create amazing videos anywhere is mind-blowing. I've always struggled with finding the perfect locations or dealing with unpredictable weather conditions, but this solution seems like a dream come true.
The way AI can simulate different environments and backgrounds opens up endless possibilities for creativity. It's fascinating to see how technology is revolutionizing the filmmaking process. This video has truly inspired me to explore new avenues and push the boundaries of my own video production.
I can already imagine the convenience and flexibility this brings, especially for content creators on the go. Being able to transport yourself to various settings without leaving your studio is incredible. The ability to create professional-looking videos without the need for expensive equipment or travel is a game-changer for aspiring YouTubers like me.
I'm really excited to dive deeper into AI environments and experiment with different scenarios. The potential to create immersive storytelling experiences is tremendous. Thank you for sharing this valuable information and empowering creators like us to take our videos to the next level. Keep up the fantastic work! 👍🎥
Thanks!!!
Oh my god, I was actually a bit stunned at the end that it was actually that good an AI voice trained on your voice. I kind of thought the whole narration was a bit "monotone" and flat in its delivery, so I was a bit like "hmmm... okay, maybe that's the tone they're aiming for", but didn't expect it to be AI (and I'm quite deep in audio editing, sound design...). Did you use Respeecher for the voice?
Elevenlabs
It seemed AI to me in the first 30 seconds.
@@wright96d yeah, immediately recognizable and incredibly offputting.
Damn it's getting crazy. Any kid with a camera will be able to produce his own film with this tech. So many new possibilities now.
And then there's the script and the acting, but sure.
You also missed the part where it took them hours to fix the coloring and exposure to make it look real, and that's if they know what they're doing. It would take an amateur days and they'd never get it this close, and that's just for one shot lol
Since ppl can't read... Using words like "getting" or "will be able" is speaking about things to come... It's only gonna evolve and become easier......
@@GameUnkasa lol he said the word "now" and "with this tech" meaning they will be able to without anymore technological advancements, but at least you can read the first half 👍 but honestly he isn't wrong, it just needs to be noted that people shouldn't just give up on art just cuz any kid can make art now. The point is that, yes it's easier to create certain shots now and will be even easier in the future, but there are still very difficult steps to creating an entire film that will stay very difficult for probably forever.
Kids with phones have been able to make movies for several years now. 99.99% of it is pure shit. The ones who persist will get better if they learn from their mistakes and build on their successes.
This is a great idea. I recently started writing my own web series for YouTube. I was worried about how I was going to do things, but this is a great idea. Thank you so much. This opened so many possibilities for me. I can do it :)
I'd love a tutorial on how to incorporate these assets into Blender and track a 3D camera for parallax. As much as I love this and am trying to figure out my own workflow, it looks like a green screen at best. There's something about it that my brain just won't believe. Maybe adding some kind of extra moving noise to the generated images would help among other things. It just looks like a well keyed green screen ... and that is not exactly convincing. Maybe some handheld movement - but in After Effects for parallax.
I wonder if you could cheat at least some motion by using a normal map of the image generated in Resolve Relight? Really want to dive into this...
@@davidbroughton5237 All of that will require substantial power to even preview, I suppose. But I was thinking the same thing haha!
@@davidbroughton5237 You need a depth map, but you can't push that too far before the effect will break down
That’s not that difficult if you have basic 3D skills, just enough to create simple geo. Search for camera projection tutorials and also for camera tracking and have fun!
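For anyone curious what the depth-map parallax trick in this thread actually boils down to, here is a minimal, hypothetical numpy sketch (the function name and parameters are my own, not from any tool mentioned above): each pixel of a still plate is shifted horizontally in proportion to its depth, faking a small virtual camera move. Real tools (After Effects, Blender camera projection, Resolve Relight) do this far more robustly with inpainting behind foreground edges.

```python
import numpy as np

def parallax_shift(image, depth, camera_x, max_shift=12):
    """Fake 2.5D parallax over a still plate.

    image:     (H, W, 3) uint8 frame
    depth:     (H, W) float32 map, 0 = far, 1 = near
    camera_x:  virtual camera offset in [-1, 1]
    max_shift: displacement in pixels for the nearest depth
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    # per-pixel horizontal displacement: near pixels move more than far ones
    shift = (depth * max_shift * camera_x).astype(np.int32)
    cols = np.arange(w)
    for y in range(h):
        # backward warp: sample each output pixel from its shifted source
        src = np.clip(cols - shift[y], 0, w - 1)
        out[y] = image[y, src]
    return out
```

Rendering one frame per timeline frame while sweeping `camera_x` from, say, -0.5 to 0.5 gives a slow drift; as noted above, the effect breaks down quickly if pushed too far, since revealed areas have no real pixels behind them.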
I did not expect the narration to be AI...
What a time to be alive!
Cool concept. Seems to have the best results mimicking an indoor "set". With an inoffensive background it's a great way to extend a "greenscreen" set for talking-head videos. Still, it could be challenging to make sure the real-world lighting stays motivated by a fake environment.
It feels a bit uncanny for the outdoor scenes. The outdoor lighting/shadows were a bit off, and the lack of background movement (wind/running water) felt unnatural.
Totally agree! The generative fill is much better at "real environments" vs the apocalyptic world we tested in this video. One thing that really surprised us about generative fill is that most of the time it really tried to keep the lighting coming from the correct direction, with the right color and hardness. Not perfect, but we were consistently impressed by how it referenced the image's lighting as a whole before generating the new sections.
@@jamesepiclight I noticed that as well but only after color correction and other post effects were applied. At first placement the elements have a paste-in feel, as you don't notice the basic continuity of light values. Color correction (continuity) makes so much difference in image editing in general relative to perceived dimensionality, etc.
Wow! I could not believe the environments in this video were created with Photoshop & enhanced with DaVinci Resolve. When the time comes, I will try it out. Thanks Epic Light Media.
"ha ha haa.... ha ha haaa.. hah haaaaa... " My favorite quote from this video
Waiting, in anticipation, until the next one!
4:47 on the left edge, you can see a little bit of the border between "real" and "fake" but then, you added a vertical frame feature in the background which justified that vertical line quite nicely. Human art aided by AI, this is the future!
I never subscribe to any channel solely based on someone in the video asking me to subscribe. Upon seeing the "do not subscribe" comment in the end, I immediately subscribed :)
That's it. I am going to live in a cabin in the woods and write manifestos on a mechanical typewriter.
That ending was perfect. It made my editor brain so happy I had to rewatch it three times.
Awesome video! If you hadn't told me this was made with AI, I wouldn't have known or been looking for imperfections like the water not moving. This looks like an easy process for even the most basic creator, especially if you're in a tight space.
OMG I just CANNOT believe this content exists! What a great job! You have a new subscriber here for sure. Thanks so much for sharing it!
This generative fill definitely opens up possibilities. I'm surprised to see how the AI-generated content resolves in a 12K timeline. My understanding is that it produces content at 1024 resolution, so I guess the trick is to do a basic back plate and then regenerate smaller chunks of it to up the resolution.
For any AI programs that produce low resolution output, you can then feed that into an image scaling AI to get it up to 4K or 8K or whatever. But a lot of the programs do let you increase the render resolution if you poke around.
@@RavenMobile Yes, there are lots of upscalers out there designed for various types of images (photo, anime, digital art, etc.). But it's tricky when patching up a higher-res image: how big of a selection should you make so the fill comes in at the highest resolution available and also matches the rest of the image?
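To make the "regenerate smaller chunks" idea concrete, here is a hypothetical numpy sketch (helper names are my own; this is not Adobe's API): split a large frame into overlapping tiles so each fill or upscale pass stays near the model's native ~1024 px working size, then paste the results back. A real pipeline would feather-blend the overlaps instead of overwriting them.

```python
import numpy as np

def tile_image(image, tile=1024, overlap=64):
    """Split a frame into overlapping tiles for per-tile generation/upscaling.
    Returns a list of (y, x, patch) entries for later reassembly."""
    h, w = image.shape[:2]
    step = tile - overlap
    tiles = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            tiles.append((y, x, image[y:y + tile, x:x + tile]))
    return tiles

def reassemble(tiles, shape):
    """Paste patches back into a full frame. Overlaps are simply
    overwritten here; a production comp would blend the seams."""
    out = np.zeros(shape, dtype=tiles[0][2].dtype)
    for y, x, patch in tiles:
        out[y:y + patch.shape[0], x:x + patch.shape[1]] = patch
    return out
```

Each patch would be run through the generator or upscaler before reassembly; the overlap exists so the seams can be matched against neighboring content, which is exactly the matching problem described above.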
I subscribed immediately after seeing this video. I've been frustrated with the visual quality of my videos; this should be a game changer. I hope you follow this theme, giving all the details on how to make this happen and refining the process... I think everybody wants to know more.
Awesome stuff! you have left no doubt in my mind that I will be buying a green screen in the near future :) Still, for professional stuff I can easily see the value in this..
Wow… That’s One awesome video and things to learn and try. 🎉🎉
Where are you? It has been months since you posted a video.
Am I the only one who could immediately recognize that the voice was AI? There's something about the inflections that just aren't 100%
Really cool video! I can't wait to play around with the tools in Photoshop!
Got me as well. It's interesting how some people notice instantly while others don't
@@moomoocowsly Interestingly, I'm pretty much the same, but I follow all this stuff closely as well. ChatGPT is the easiest of them all, though; it basically uses the same templates for nearly everything even if you request different styles, etc.
The reason you don't put a light close to a subject is because when they move it falls off too quickly. Falloff is fastest close to a light source.
I know I know
@@EpicLightMedia, would this also work with DaVinci Resolve? I just started using this editing software and I'm trying to keep it simple...
@@moviedorkproductions9465 The video editing part was demonstrated in DaVinci Resolve, but the AI part has to be done in Photoshop, as there is no generative fill in Resolve.
And the reason you put a light close to a subject is for softer lighting. So it's a matter of preference.
@@vbrooklyn It always looks a bit unnatural though - yes it's softer but it's also harsher because of overly quick falloff.
Man i was legit just thinking about how to use generative fill in videos. Of course, I wasn't the only one with those thoughts going through my head. Cool ideas!
Wow, really impressive. I don't know if I'm in the minority here, but I would have really liked to see a more granular and descriptive segment on the actual creation and editing, both during the import and after it. You either skipped, or zoomed past, a lot of small details that a newbie like me would really like to understand.
I think the room lighting thing was absolutely amazing. I did notice that the voice-over was AI. The only thing I noticed in the commercial footage was that the generated cement on the road before and beyond the actor was clearly flatter and faker than the actual concrete he was standing on.
Hey, is everything ok? I miss your videos on this channel.
I commend you for explaining this in a way we can fully understand the process, and it wasn't overly long 👏🏾👏🏾👏🏾👏🏾👏🏾
Where are you guys?? We miss your videos!!
We decided to stop YouTube to do a feature film. Maybe we will start up again in a year or 2.
@@thomasmanning9111 oh that's so sad
We have a small filming studio and such a video is really helpful as I wondered if I can incorporate Photoshop to enhance real footage. Thank you !!
Hey guys, where have u been
The voice immediately stood out as AI... but those dining room scenes were just amazing. With some good enough light, YouTubers won't need to deck out their sets again to have lovely-looking backgrounds. This is inspiring.
Now indie movies will upgrade to another level. Cheers.
Very creative guys. Let the record show, Epic Light Media was the first to highlight this feature and all other videos were shameless copycats. The amount of copycat videos this week is honestly disturbing.
I will not name names. But it doesn’t matter. This is the OG. All others are copycats.
Would've loved to see your Degrain and Regrain methods for this process
Thanks for the creepy ending. 😆 Great tutorial; it definitely can be useful for vids that don't need to move the camera -- I could take my tiny space and turn it into a mansion! Or at least a larger studio to work with.
Or you can take the 50k you'd spend on this gear and actually get a bigger space.....
This is something that will save a lot of time for my corporate interviews. My question is, when do you disclose that you used AI in a project? Do you disclose when you've tweaked a lamp, couch, window, etc.? I know that Adobe's current iteration has some interesting terms of use as well. I used AI to spruce up a boring black background in a recent project and will definitely use the method again.
@@moomoocowsly good point. I hadn't considered that we don't disclose CGI (or even simple compositing) that alter the images we see. But is AI enough of a departure from a more hands-on approach to the art we create, that it warrants a disclosure?
@@brentwpowell YOU NEED TO DISCLOSE AI. It's in every AI software's TOS. I don't know how you would pose that to your boss, though; probably it will take a few years and it will become mainstream.
@@poti732 I'm under the inclination that you should disclose, and have on most of my outputs, but not all software requires disclosure. The generations shown in this video were via Adobe Firefly, which is currently in non-commercial beta, so the disclosure is necessary for fair-use commentary. I've come across other implementations that don't require disclosure in their TOS. Many bosses will love this stuff, faster turnaround and higher concept productions, even if they have to disclose.
@@moomoocowsly using AI is different from photoshopping or using CGI since those two still involve the human touch, whereas the AI is generating the whole content for you; it's as if you were commissioning/outsourcing it to someone else. If you were to heavily alter the AI-generated content or simply use it as a reference, however, then a case could be made that you no longer need to disclose it.
Miss your great content guys!!!
Best generative AI tutorial yet
Great video. I don't have the skill set you folks do, but I have a plain back wall and have played a little with generative fill, and your video has shown me how.
The voice got me. I thought it had no emotion, but that happens when reading a script; it's definitely better than so many text-to-voice readers. What was used to do the voice? The video, from the get-go, my brain did not accept. I thought it was full green screen: the shadows were not right, too sharp, just wrong. One of those things where your brain's like "there's a problem here" and from the get-go you're looking at every little thing for some unknown reason because something feels off. Overall, fantastic.
haha, i clicked on this video on a whim, and that post apocalyptic cell service commercial caught me waaaaayyy off guard....i literally laughed out loud. nicely done.
Hey thanks!!!!! I’m glad you liked that part. Not many people have mentioned it
Thanks for sharing this! would love to see in detail your workflow in photoshop to get the final image. awesome video! new sub
I'm looking to start doing this inside my house. I plan on getting a backdrop. What color do you recommend for the backdrop or does it matter ?
First of all if I could Subscribe 1,000 times I would so go ahead and hate me for that. This channel seriously upgraded my videos, basically within a week. I watched all your videos. If I could work with you I would. You guys are awesome.
Hey thanks so much!!!!! We haven’t made a new video in a while… if you think of any new video ideas let me know
@@EpicLightMedia Yes! You guys really got me thinking about affordable options, for example, a flag/scrim kit. Maybe you could make a “homemade” DIY version of the Modern Scrim Set?
Speechless here!!! What kind of app did you use for the voice man? thanks for this video, loved it!
Great tips! What software did you use for the narration? Thank you!
It looks amazing. Totally realistic.
This was an amazing display of so many technological advances. Thank you for making it easy to understand and replicate!
Hey thanks so much!!
Love it. Going to try this. It will be very beneficial for third world country people who want to create good high quality videos.
HEY, you can add movement to those images and simulate slow zoom-ins and zoom-outs. Just take the image and slice it into layers.
I used VND-CPL filters from Kase for my movie. They're great for when we shoot outdoors.
Question: How do you make the generated background look more like natural moving video?
Context: When we watch any high-resolution video, we can feel motion, because the sensor captures a constantly fluctuating flow of light. Even when our eyes can't pick out visible grain while focusing on the content, we still sense it pretty well. Now, when you put a still picture over video, the video part gives that natural moving feeling, but what about the PNG part? Pro editors will agree with me that in low light and low sharpness, the video part gives off that nearly unnoticeable grainy vibe, where the PNG will not.
Until now, the generative PNG has had a flat 2D-image appearance because of its flat color and lack of grain. Adobe needs a little more trickery to make this feature usable in Premiere Pro natively.
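The fix for the grain mismatch described above is exactly what compositors do: overlay fresh noise on the still for every frame so it "shimmers" like the surrounding footage. A minimal numpy sketch follows (my own hypothetical function, not an Adobe or Resolve feature); a real comp would extract and match the footage's actual grain size, chroma, and intensity rather than using plain Gaussian noise.

```python
import numpy as np

def grain_frames(plate, n_frames, strength=6.0, seed=0):
    """Yield n_frames copies of a still plate with fresh Gaussian grain
    each frame, giving the static fill the temporal shimmer of real
    sensor footage. strength is the noise sigma in 8-bit levels."""
    rng = np.random.default_rng(seed)
    base = plate.astype(np.float32)
    for _ in range(n_frames):
        noise = rng.normal(0.0, strength, base.shape).astype(np.float32)
        yield np.clip(base + noise, 0, 255).astype(np.uint8)
```

In practice this runs once per output frame over just the generated (PNG) region, while the camera-original region keeps its native grain, so both halves of the composite flicker together.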
Mix it with camera tracking and you can create moving-camera footage. But even on its own, generative fill is completely awesome.
That last "hahaha" was sinister sounding lol
This is a creative way to use them. I love the idea. People become more creative using these AI tools.
the trailing 'ha ha' had me loling. Great work!
The work is inspiring and amazing. The only detail I'd point out is the stream in the river in the last shot (the commercial).
Super work! Can you also explain how you generated this narration voice?
This is the first time I am seeing your videos. Great work. Keep it up.
Great work guys. I still remember photoshop without layers... hahaha. We are moving into an exponential world with AI
Love the transparency disclaimer, way to go!
What soft bag do you use with your Nova 600? Thanks!
wow. What service you did you guys use for the AI narration? I can't believe that is AI!
Elevenlabs
@@EpicLightMedia Thank you for the reply.
There's always a bit of grain moving around in video, did you add anything like that to the plate?
Grain, blurring, halation and other effects were added in the final Davinci grade to help blend some of these compositions together.
I've been using PS AI for a while now but this is insane!!! I didn't even think of this potential! Thank you!!
At 8:12, when you brought the image in, there was a color difference. I'm having this same problem; the only tools I have are Photoshop and FCP. You went to Rec 709 color... I used a tripod and a simple indoor background, but I get this problem. Any suggestions? I wasn't sure how to change mine to Rec 709, or how to know what to change to in order to get a match.
How cool is that. I'm going to integrate this into my workflow, ASAP.
Nervously awaiting generative fill in premiere or after effects. It could make my life easier, or put me out of a job. Who knows
4:18 I'm not sure that was a good idea; the light didn't feel motivated by the "window", plus you get a lot of banding because we're on YouTube and it struggles with gray gradients (as does Adobe software, though).
Pretty cool. Like the idea of shooting vertical at something like 4K, then being able to down-res and zoom at 2K or 1080. Great video.
Really interesting use of generative fill and insight into the use of AI. Thanks for sharing this.
At (7:09) couldn't you just make simple masks where the lighting is prominent, select the blue channel with the color picker, and reduce saturation as a quick fix? I guess the fact that your shirt has blue hues and some of the lighting bleeds into the clothing's folds makes the process a little more complex. I still think it would take a relatively short amount of time to fix, given the symmetrical nature of the shot.
What software/tool did you use for the narrator voice? I'm impressed and want to try it myself.