Are you considering using this going forward??
Scary and impressive at the same time.
Definitely want to use it in the future. Ever since I saw these videos come out, I've been wondering if you could use it with a green screen? Meaning, have a green screen behind the subject (maybe because locations aren't available, or to cut travel costs, etc.) and then use this AI tool to fill the rest. Let's say you want a scene with the Eiffel Tower in the background and use a green screen for that, but then use AI in PS to fill in the rest. Any thoughts on that? Great video, thanks for the super simple explanation for a newbie like myself
What about tree movement? Your stage is unnaturally static. Can you manage this?
I can't afford PhotoShop!
yes
To sell this effect even more, you can add a node with noise and grain, so it has some subtle movement and doesn't look artificial.
yup !
@Mrlloydvideos True, but the point is that with this method you have more control of your background...now you're not in a park, you're in a vast remote vista.
@robbysalz8710 I don't think you understood
I like how you talk to us with clarity and complete transparency, whereas others are too busy trying to sell us anything and everything. Thanks a lot for being here and doing this. Cheers!
That works outside on a day with no wind if you have trees in your shot, but it would look weird with wind blowing only in the center. 😅
I've been using this at work a little bit recently. A couple of tips for anyone who is going to try this:
• Add a little noise or grain to the PNG when you put it on your timeline to help it blend in with your video footage a little more. All footage will have a little bit of chatter no matter how clean and well lit it is, so this will help sell the effect if people start looking closely! You could even do noise reduction on your video segment and then add grain over the whole thing to make the noise/grain match even better (there's a rough sketch of this right after these tips).
• Avoid trying to get it to create anything with words or symbols in it like a sign post or something. AI is still terrible at that and will give you some hilariously wonky results!
• Be careful if you are shooting in natural environments. If there is a breeze moving branches or leaves around you, or water running or rippling, trying to expand that stuff will be a bit of a giveaway when the generated material isn't moving the same way.
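If anyone wants to bake that grain into the still itself rather than using a grain effect in the NLE, here's a rough sketch of the idea (a hypothetical Python/Pillow script; the file names and the strength value are just placeholders, not anything from the video):

```python
# Hypothetical sketch: bake mild grain into the generated background PNG so
# it sits closer to camera footage. Assumes Pillow + NumPy are installed;
# "background.png" and the strength value are placeholders.
import numpy as np
from PIL import Image

img = Image.open("background.png").convert("RGBA")
rgba = np.asarray(img).astype(np.float32)

# Gaussian noise on the color channels only; leave alpha alone so any
# transparent "hole" for the video stays clean.
strength = 4.0  # tweak to roughly match your footage's noise floor
noise = np.random.normal(0.0, strength, rgba[..., :3].shape)
rgba[..., :3] = np.clip(rgba[..., :3] + noise, 0, 255)

Image.fromarray(rgba.astype(np.uint8), "RGBA").save("background_grain.png")
```

An animated grain effect in Resolve/Premiere will blend even better since it changes every frame, but this gets the static still closer to the footage's noise floor.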
Another tip: as of now, the resolution of generative fill is limited to 1024x1024px. If you select an area larger than this, Photoshop will just upscale a 1024x1024px image to your selection, losing quality. To get the best quality, select an area no more than 1024x1024px at a time, building the scene square by square.
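For anyone planning those squares out, a rough sketch of the math (a hypothetical Python helper; the canvas/strip numbers are made-up examples, and this assumes the 1024x1024 limit is accurate):

```python
# Hypothetical helper: carve a fill region into tiles no larger than
# 1024x1024 so each generative fill pass runs at native resolution.
# The example region below (a side strip of a 3840x2160 canvas) is made up.
def tiles(x, y, width, height, max_size=1024):
    boxes = []
    for ty in range(y, y + height, max_size):
        for tx in range(x, x + width, max_size):
            w = min(max_size, x + width - tx)
            h = min(max_size, y + height - ty)
            boxes.append((tx, ty, w, h))
    return boxes

for box in tiles(0, 0, 1312, 2160):
    print(box)  # select each box in Photoshop and run generative fill on it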
@andrew_nayes Is it really 1024? Not 512?
@JDRos Fstoppers and lots of other sources claim that the area being generated is 1024x1024 pixels. But I don't have any first-hand information from Adobe, so if you have some information from Adobe directly, then you should probably trust them.
1) You don't need Photoshop Beta anymore, it's in the base app
2) no movement in the generated content ruins the illusion
3) it's better to just shoot square video at 4-8K and frame it so you're in the center for both crops, then you can just crop
It's neat and all. But why not film in 4k horizontal, and crop it to 1080 vertical for verticals? ;)
do you gen fill in batches of ~500px, or don't you mind the upscaled fills?
I just had a great AI experience! I can see the AI in the thumbnail the way I learned to see wind turbulence (hang gliding) or see lens lengths from seeing a picture🙌
As a rule, I'd shoot horizontal and AI the vertical version when necessary. Even if we just crop a vertical out of a 1080 image, we can also use AI to blow it up to 1920 vertical. There's a lot less area to invent, and the invented bits are less relevant, i.e. a small patch of ground and sky, for example. Granted, inventing most of the frame is more impressive as a demo.
Caveat: Your background has to be perfectly still.
Absolutely incredible video brother! I love the idea of adding real estate above your subject for interviews if they aren't quite cropped properly and need adjustments. That right there could save you an entire reshoot!
At first glance you don't even notice it. The only thing that breaks the illusion is, for example, in the scene at 7:30: the branches in the middle section are moving but the branches in the outer sections are not. If you don't look for it, I'm not sure you would notice it right away; it would be more of an unexplained feeling that something is wrong :D
*We love you, keep on making great videos! Thanks*
Honestly? This is the first video about this topic that's simple, clean, straight to the point, and well spoken.
Great idea! I think the only thing that makes it look not so realistic is the overly shallow depth of field. When you shoot vertically, you're closer to the camera, and then when you widen the frame, the depth of field is much shallower than normal; I think that's why it looks a bit unnatural.
You spoiling us now. Loving the content.
Aye! Doing what I can!
I see it as a big win for creators just starting out who don't have a studio per se and only a small space to get talking head shots. At this point, even advanced creators renting studio spaces might as well just play smarter with the backgrounds for way less money overall. Not even mentioning where this might be in the next few years... Or imagine similar technology coming natively to Resolve, for example? This AI thing is going to get us in trouble at some point
Nice! You'll need to watch for the wind blowing the branches though. That would break the illusion
Awesome tip about shooting vertical and then using content fill in post to create a 16:9 video. I'm definitely going to start using this in my videos. Thanks again!
Great, straightforward video - simple instructions and upfront recognition of the limitations. Nicely done!
We've been applying this technique for quite a while. The one MAJOR issue with using this for any type of commercial production, both still and motion, is the resolution of the gen fill. Currently GF has a px limitation (I'm sure you've all read about or experienced that) which significantly impacts a "clean" shot with longer DOF. It is highly noticeable in the form of softness and hashing. Hashing (grid-like) and softness are especially noticeable if you zoom in or if you are displaying on a larger screen (a phone is not an issue). Currently the only way we've found around this is to open your lens for a shallower DOF, which allows a more gradual and natural blend of the gen fill while removing the hashing effect. Hard-edge fills are common, as are exposure mismatches. They're not difficult to correct but take time to get right. The utility of the fill is unquestionable, especially with stills. To take it a step further for video, you could import the still into an app like Particle Illusion or Boris FX Sapphire and try to match any tree movement.
Hm... But if you have a camera, why would you film vertically? Isn't it easier to film horizontally and then cut and repurpose for 9:16 platforms?
Everyone talks about the “issues & problems” with the technique. 🤦🏾♂️ Damn just thank the creator and take from it what you can. I’ll say it! Thanks for spending your time creating this video and sharing your knowledge. Ignore the dweebs on here that criticize and don’t have any content of their own.
Much love!!
Bro, I knew somewhat of this concept already but underestimated how easy it is. You gave me such a breakthrough idea for my YouTube video because of this, bless you
Much Love!!
Finally. A really good and simple video explained in a way I fully understand. Again thank you. Saved to my video editing list!
Firstly, thanks for the video! Awesome results. One thing I would caution anyone about to try this, though - if you are shooting outside, your lighting conditions will be constantly changing. The new 'mask' environment around your shot won't change, though, so you will see it straight away. Still, it's a great tool for setups where you can control the lighting and for short shots.
I think this is probably more convincing for interior shots that don't have moving elements (like trees moving)
Pro tip: if you turn the camera ninety degrees before starting to film, you can skip ALL of the above work and still be able to crop it into a vertical if you need to. You can move the camera, the background, and everything.
Thank you. Apparently common sense left production with Instagram and TikTok.
@@tonyhopecreative Perhaps I am not alone in doubting my own sanity when it comes to the formidable skill of holding stuff the correct way up.
You can scale it down with this method tho
It’s still helpful to know this other option.
Sometimes the video wasn't planned to be used with other videos of a certain orientation, so it can certainly be a handy thing. But otherwise, planning your shots is obviously the best thing to do.
Awesome subtle effect and super clear instructions, it looks great
2:50 the ai cat was amazing!
Bro, nice tips, nice video, I appreciate you and your work!
Would be cool to see an old movie scene filmed in a narrow aspect ratio filled in with AI to become widescreen without cropping from the top and bottom
You shouldn't even need to cut yourself out of the PNG. With Resolve, the black would be transparent on either side of the clip, so you could just pull the PNG in under the clip; only the edges will be visible around the clip, and the vertical will still be on top. Just a thought. Like the use of this though!
Great tip for interviews. It's wild that this is just a beta release. A lot of the results are great - some not so. Fingers crossed they'll add some video features (not requiring Premiere), at least something subtle like tree branches gently moving in the wind. That's the big tell right now. Also, I would love to see a less "professional" company (gimp?) do something like this because Adobe is crazy heavy on censorship. I tried to add legs to a character and got shut down based on the word "legs." Tried again without the keyword...and it drew legs LOL.
LOL! Need to try Legs on my next round of Ai!
This is such a great feature. At some point in the near future I'm going to implement this into my workflow and production. Thanks for taking the time to make the video.
I haven't played with Photoshop in about 10 years and don't have much experience with Premiere Pro. I did this for a short music video of mine and it was so simple, the AI task was nearly perfect, it all blew my mind. However, I did have hard lines in my generated image, and I didn't even think about soft edges like you did here. I'll definitely be doing that next time.
That was pretty cool, Chris. Thank you.
- Robert
Great video - learnt a lot - this is a game changer for me as I've moved countries and don't have gear or a studio anymore - this may allow me to create a sort of "studio" to film in - cheers mate! BTW, loved your relaxed style on camera, really enjoyed watching you.
You could bring the photo into the video and add a mask with a feather to save a step. You can lower the opacity to see the video, then align the mask and raise the opacity.
Cool! We like your channel, keep up the good work!
This could be good for an inside background, but for outside it doesn't feel real because there's no movement in the background. Even when it's not a windy day, trees would be moving.
Looking great- now all we need is a good solution for generating fake leaf shadow animation for that last shot.
You know you don't have to make that transparent "hole" in your generated BG; you can just put your original video above it in Premiere...
I've seen lots of videos on this. I've been thinking of using the generative fill to create a clean plate when I isolate the subject from the background. I would want to create a depth map for that clean plate and do a 3D displacement in either Fusion or Blender, and add shake to a virtual camera to add some movement. The sky is the limit for creatives with where tech is going. Thanks for another great video.
Man, your video is so simple and very interesting. Completely easy to understand. Cheers!!
great video, thanks for going into detail
Amazing. I watched YouTube videos on how to do that for a whole day and wasn't able to, until I watched this great, simple, clear, and well-explained video.
I love your idea of this…but wouldn’t it just be easier to record the video horizontally to begin with then just crop for vertical after?
I was thinking the same. For me, it would be: if I got a perfect clip, then this would help to get it to 16:9.
You would be throwing away a lot of pixels that way, leaving a lower-resolution product. Pixar did something similar with A Bug's Life, in which they extended the frame in some scenes instead of cropping it for the then-still-prevalent 4:3 video release.
@alphzoup Yet with your way, in order to get QUALITY pixels you'd need to generate only 1024x1024 at a time, since that's all generative fill can do! So AGAIN my way is the LOGICAL and QUALITY way to do it…sorry!
@@rogalaphotography What do you mean re: 1024? Are you referring to SDXL? This isn't SDXL.
@@HikingWithCooper HAHA do some research about Adobe Generative Fill and 1024x1024...then you'll understand. It ALL OVER TH-cam
Cool stuff. Basically, you can save presets? Can backgrounds/items be colored?
They can, but it's best to color your image first and then go into Photoshop, because the generative fill image can't be graded as well as video!
I don't know how I stumbled upon this video, but I am very glad that I did. NOTHING is real anymore. I had to subscribe and now going back to see what else you got for me. I'm hopeful that I can learn many things to help my social media presence along. Thanks!
EXCELLENT Work, Brethren
Thanks Bro!
Can you make a tutorial on how to extend background with a moving video?
Super creative... Who knows? Adobe might add this to Premiere in the future....
I like all the movement in this video.
Crazy how I already knew this; there aren't many videos that impress me or that I learn something new from. That's actually good because it means I'm always learning on my own.
I was experimenting with this concept. But my still had no hole. I put the movie as a layer over the picture. You have nice tips and instructions. Thanks for that. BTW I was wondering how you blended in the cat at 2:49 ;)
Lolol! She loves the attention!
Next step for AI: content fill with motion. Imagine if the leaves were moving in the breeze - that would be better again.
Dude! Mind blown 🤯
Hey, can you elaborate on the PS part? Great vid
Now you can do this in Premiere; you should do a new video showing off the video versions of the new AI tools in Premiere
thank you for this video!
Subscribed. Will use this in a current project with a plain purple background and it will save me a lot of time. But one question I do have. Which screenrecorder do you use to get your recordings in 4k? Best quality I have seen so far. Thank you in advance for your answer.
I use the QuickTime recorder built into the MacBook, then take that and bring it into Resolve for any tweaks!
Cool use of the feature, well explained. Let's get this included in Resolve directly
So I need a super GPU for Photoshop?
Not necessarily! But the experience would be better with one lol! Or any of the M series of Mac
Personally, I think this would be immensely beneficial when someone cannot go outdoors for video shoots on a rainy day or an intensely cold day.
Yo bro, I love the way you delivered this story. I've watched so many people explain this, but you did it and captured my imagination while doing it. Thank you, you have a new fan
Very happy to help!
Fantastic content fam, you've earned another sub with this one.
Why do you film outdoors cropped? You can film it as it is 😅, and then you don't need generative AI. I think Adobe AI is mainly a replacement for green screens and indoor shooting
Filmed vertically because it was for social, but I converted it for YouTube. Could do it either way.
💙 @ChrisFranklinJr
then the light changes on the footage and it doesn't work anymore.
the generated content is limited to 1024x1024, so you could refine it more by going in passes
Great job mate 👏👏
Do you have to use the soft brush for the part where you erase the middle of the shot or can you just erase with hard eraser too?
Either works if you do everything correctly! I had some instances where the hard brush was noticeable
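For the curious, here's roughly what the soft eraser achieves at the edge, as a hypothetical Python/Pillow sketch (the file names and the blur radius are placeholders, not from the video):

```python
# Hypothetical sketch of what the soft eraser achieves: feather the alpha
# edge of the transparent "hole" so the generated background blends into
# the video underneath. File names and the blur radius are placeholders.
from PIL import Image, ImageFilter

img = Image.open("background_with_hole.png").convert("RGBA")
r, g, b, a = img.split()

# Blurring the alpha channel turns the hard cut-out edge into a gradient.
a_soft = a.filter(ImageFilter.GaussianBlur(radius=20))

Image.merge("RGBA", (r, g, b, a_soft)).save("background_soft_edge.png")
```

The blurred alpha turns the hard cut line into a gradient, which is why the generated background melts into the real footage instead of ending abruptly.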
Ok... Love this... but most importantly, I need new glasses and what you have are awesome... can I ask where they are from? Finding glasses for a bald & big headed guy like myself has been tough 🤣
Yo! Thanks! These are actually Ray Ban sun shades that I had converted into prescription glasses!
@ChrisFranklinJr BRILLIANT…. Now I'm going to hunt these down. Thanks man. Love the content.
Maybe I’m an idiot, but to me it seems so much easier to film in landscape and then if you want a vertical shot, just crop it. 🤯
BEST VIDEO EVER!!!!!!!!!!!!!!
thanks for the video buddy - super clear
I was looking for this today for video. Oh gosh, it will need 4 min of 24fps, and yes, with lots of movement...
Bro, would you please tell me about your wireless mic? It sounds pretty good... please
Thanks! I use the DJI Mic set + vocal compression, raising the bass on the EQ
Amazing use of this
Thanks Chris
The problem is, you can't walk around much x.x You're restricted to staying within that small frame you set up, and the audience will eventually notice something is up. I also find it crazy that you're doing a video that every other content creator is also doing Dx Seems like a video made because it was an easy one to produce
This is why he said it's helpful for interviews
Thank you! ❤️🔥
My only concern would be: how would we make the BG dynamic, especially when there's a moving object in the original footage?
It's not possible
Keyframes and if necessary roto of subject
Awesome tips, thanks for sharing.
Great video and so nice good ol’ Ps comes in handy 👍
really cool content, love it!
Man something must be wrong with my generative fill. Every time I try to generate sides it turns into a funhouse nightmare.
It "might" be because you're trying to gen off a layer. You need to open the image rather than add it to an existing comp. Also, make sure your selection overlaps the image a bit.
@HikingWithCooper Will try it! Thanks
Nice, I've seen other editors doing this differently, by masking the main part in DaVinci/Premiere Pro etc., for example.
Thanks for this tip, as I'm getting into the AI thing as well.
this is game changing
Until the light changes... be warned.
Amazing! Tks
Well done good sir!
Hi! Many thanks for this!
I have a few Photoshop-related questions. I tried using the eraser tool for the middle part of my video, but it does not overlay well in DaVinci Resolve :( The size is much smaller and the part I erased is not transparent
Thanks again in advance
When saving as PNG, make sure to check the box for transparency. Also, the resolution of the video frame should match the newly created background. If the video is 1920x1080, then the new background file should also be 1920x1080.
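A quick way to double-check both of those before dropping the PNG into Resolve (a hypothetical Python/Pillow snippet; the file name and the 1920x1080 target are just example values):

```python
# Hypothetical sanity check for the exported overlay: confirm the PNG kept
# its alpha channel and matches the timeline resolution before importing
# into Resolve. The file name and 1920x1080 target are example values.
from PIL import Image

png = Image.open("background_overlay.png")
print("mode:", png.mode, "size:", png.size)

assert "A" in png.mode, "no alpha channel - re-export with transparency checked"
assert png.size == (1920, 1080), "resolution doesn't match the timeline"
```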
@muser4 Thank you so so much!
You're welcome 🙂 @KelvoVEVO
Your smile is amazing!
Thank you so much!! ✨
Why not ROTO yourself out and then apply the background? Then you wouldn't have to worry about borders.
THANKS LIL YOUTCHY
LOL
WoW Cool !!!!!
Now waiting for moving AI, so that the leaves also move
Great idea!
WOW,... subscribed.