Using traditional Compositing and rotoscoping for close to 20 years, this new workflow just blew my mind. Thank you brother and bless you
Same, I feel stunned 😂
You are a true innovator. I bet your failed workflows have more genius in them than most people's working ones!🤣
This is extremely impressive, you have found a method to put a subject into any scene and make them look native, applications are much bigger for videos than for photos as changing the lighting of a photo can also be circumnavigated by just generating a new ai photo, or using generative fill etc, yet for video this is a game changer
Bro we're getting there!
This is a bit beyond what I'm comfortable attempting, but it's refreshing to see a young tutorial creator on TH-cam that a) really knows his shit and b) is innovating and experimenting, not regurgitating the same basic info.
Creating history! Great workflow brother
'blew my mind' after 20 years of traditional compositing. It's not about replacing Hollywood's high-end VFX, but about democratizing access to quality visuals for indie creators and TH-cam producers like myself
Yes, these amazing workflows are really great, I've tried running it with comfyui on Mimicpc and the results didn't disappoint!
That's just plain false lol. We already use them as part of bigger, more complex comps. The main difference is that we train these models on custom datasets, sometimes even per show...
Absolutely.
As for replacing Hollywood's high-end VFX, I believe it's a matter of time, a short time.
@@Daniel_Bettega_52 i don't think so. Someone who uses the new AI tools will do it better than people who are not familiar with compositing
This is just insane brother what????
Well done
Bravo! Its like if corridor crew was one person
Pitz workflows and work arounds are slept on
this was the most dense video i ever watched. thanks bro, great job
A lot of crystallized thought and effort in your videos. You are really making use of AI. AI community surely values your work, Mick
This is awesome my man, you’re a killer in this space!!!!
God ... this guy is on another level... 100
i had to add: my favourite channel on comfy/ai animation
I usually just watch your videos for eye openings and I rarely comment but this time I really want to try this out. Thanks so much for this video.
I appreciate you for co-creating a magnificent future! 🙏
Man I love you ! Your workflows are the best 🤩
Simply brilliant, thanks so much!
My jaw hits the floor
Thank you for this knowledge
This is brilliant workflow
Crazy, thank you so much! You are just on time with this video 💯
Really impressive workflow, I'll definitely try it! Thanks for sharing.
wow you are really talented and thank you for the workflow :)
absolutely amazing, really have loved your videos on my ai+filmmaking journey thanks!
The power of Hollywood has come to the bedroom hobbyist!
*the greed of capitalism has killed Art & Intention
Maybe you saved my life. I will test this workflow
I haven't even gotten into the video yet and have peed myself in excitement from the intro!! KUDOS! 😍
Thanks a lot for sharing. Really precious content showing how to use advanced AI features to really make a difference.
your videos are so exciting and very easy to follow! If you are a VFX artist or supervisor with a team that is evolving frequently, THESE are the solutions for lower budgets and shorter turnarounds ... Thanks for all of your hard work Micky! You are trusted and valued! 🫡
Amazing work! Thank you for the free workflow!
crikey, your content keeps blowing me away
This is actually super amazing 😮
Like this would take so many hours to do, if not days, holy cow
Your work is amazing
Amazing work Bro!
Bro, I'm working on a horror series rn. Was going to try and create my own workflow like this after I was done.
You saved me so much time. Thank you, G!
Let's see it please
@@ernesto.iglesias It's called under the black rainbow. Episode 9 just dropped yesterday.
no cap
I actually think the whole process would be smoother on Mimicpc, utilizing comfyui to go through it
Great video man . Love from india🇮🇳
there's a SAM2 model which natively takes video input. It can make temporally stable masks with much better control (can take positive and negative points as well as prompts as input), and it's much faster too, I'd recommend you check it out!
What’s it called?
use track-anything if you have nvidia with cuda core
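On the jittery-mask point: independent of which segmenter you use, one crude post-processing fix is a temporal majority vote over neighboring frames. A minimal numpy sketch (the function name and default window size are my own illustration, not from the video or from SAM2):

```python
import numpy as np

def stabilize_masks(masks: np.ndarray, window: int = 3) -> np.ndarray:
    """Reduce frame-to-frame flicker in binary masks by majority vote
    over a sliding temporal window. `masks` has shape (T, H, W) with
    values 0/1; returns an array of the same shape and dtype."""
    t = masks.shape[0]
    half = window // 2
    out = np.empty_like(masks)
    for i in range(t):
        lo, hi = max(0, i - half), min(t, i + half + 1)
        # A pixel stays on only if it is on in most frames of the window.
        votes = masks[lo:hi].sum(axis=0)
        out[i] = (votes * 2 > (hi - lo)).astype(masks.dtype)
    return out
```

This removes one-frame dropouts and one-frame noise blobs at the cost of slightly lagging fast motion; larger windows trade responsiveness for stability.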
it's a great tutorial, thanks for sharing, it has improved our knowledge. pls keep it up
I would love to see an image version of this. A lot of the current image compositing workflows lack things like edge fixing and keeping the person in the image looking like they do.
You are making the best tutorials!
Thank you so much!
this video is gold af
you are a beast, thank you for all this knowledge
Dude!! you are awesome! liked and subscribed!!
You're so good bro!🙏 i wish i could do this!
ooh damn this looks amazing
amazing workflow
Love it, will try this😊
Interesting Approach ! great video :)
Wow… that’s fantastic!
Sweeet! Nice flow man 🎉
I think we can also use Boris Fx or other softwares for better Rotoscoping. Am i wrong ?
@@eccentricballad9039 Boris is better for manual rotoscope, still not very good to automatically roto out subjects perfectly. MatteAssistML is still quite jittery
Thank You, you are a Genius!! Brilliant Work 👍🏾👍🏾
You are a big time saver
Great work, hopefully IC Light comes out with a SDXL version.
Relighting is an absolute industry game changer.
Great job!
Modern tech is like magic and we can't even explain it 100% in the case of the black box called AI.
The next few coming years is gonna be crazy
you are exactly what's worth finding after a 2nd night of not sleeping well; now I will go get some comfort with Comfy ;-)
Holy shit man , this is very impressive work...
Thats so cool! would be awesome in music videos!
great video, thanks man
Superb video.
bro, you are the craziest, man
this is awesome
nice. thank you
Fantastic Exploration ~~~~!!!!
Thanks!
Nice work!!! Thanks!
Consider adding a way to match black/white values of the plate and foreground elements prior to relighting.
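The suggestion above about matching black/white values can be sketched as a simple percentile-based levels match; a minimal numpy version (function name, percentile choices, and the linear-remap approach are my own assumptions, not part of the video's workflow):

```python
import numpy as np

def match_levels(fg: np.ndarray, plate: np.ndarray,
                 black_pct: float = 1.0, white_pct: float = 99.0) -> np.ndarray:
    """Linearly remap the foreground so its dark/bright percentiles line
    up with the background plate's, before handing both to a relighting
    pass. Both inputs are float arrays scaled to [0, 1]."""
    fb, fw = np.percentile(fg, [black_pct, white_pct])
    pb, pw = np.percentile(plate, [black_pct, white_pct])
    # Map the foreground's [fb, fw] range onto the plate's [pb, pw].
    scale = (pw - pb) / max(fw - fb, 1e-6)
    return np.clip((fg - fb) * scale + pb, 0.0, 1.0)
```

Percentiles (rather than raw min/max) keep a few hot or dead pixels from skewing the fit; per-channel matching would be a natural extension for color casts.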
mind blown!
Man, this comfy UI thing looks wild! It reminds me a lot of BMD'S Fusion but like from another planet. The node-based UI feels familiar, but all its various functions and associated technical jargon are completely incomprehensible to me.
Might be fun to learn, but I already spend way too much time plugging things into other things "to see what happens" in other programs xD
This is utterly insane!!
What are the specs needed to run this thing locally? Will 12GB of VRAM be enough?
Dude you're awesome lol, yes it all seems a bit overwhelming.
great :) thank you
Such a wonderful video! 👏
Will you please consider updating the simple workflow for a subject image (PNG / JPG) and a background (which you already show how it works) instead of only video?
I've tried it myself via "Load Image" and bypassing some video-related nodes, but something always breaks along the way. (it works fine with video)
It would be great to have it as a workflow if you'll be willing to share, of course. thanks ahead 🙏
Great. 💪💯💪 Now I just have to learn how to use Comfy 😂😂.
amazing
Node-based interfaces always look intimidating until you actually just look at each node individually (and learn what it's doing) and take a step back to see the logic of the system.
18:52 How do i "use" your workflow? where do i get it from? I followed every prior step already
Always looking forward to your videos
This makes me wonder if I can save a lot of money on my next shoot by renting a big empty room instead of a green screen studio.
Thank you very much. Honestly, I didn't even expect to be able to get this setup up and running so quickly. The only problem is the blurred face. Could you please tell me which node I should use to fix this?
Your videos are amazing, i would like to ask you a question. Is it possible to generate the model sheet of the character without using the prompt? For example, if i draw a character myself in front view, is it able to create the rest of the model?
Woow wow wow
mind blown. Is this compatible with Mac?
Thank you for sharing.
Unfortunately the build is not working for me.
Error in Ksample, I have not been able to solve it yet.
No solution found on the forums.
18:52 I'm confused, where and what is that file? "COMP_SMPL v10" — where do i find it? where did it come from? help pls
Cool as usual! Now what about replacing SD with flux.1?
we need you to teach us how to use this in Google Colab or something like that
wow!
Where do we download the custom models from then?
Think about how intelligent this gentleman is... Now think about how our politicians can't hit the mute button on the zoom call. Seems a little backwards huh?
Bro you are boycotting the film industry 😂
thanks for the video again.
Super interesting!
i am entirely new to ai ... where do i have to start to create video content through ai? what is the very basic workflow to get to image generation and consistent character creation for video content?
@Mickmumpitz we are still waiting for Flux walkthrough or some classic inventions
I'm new to ComfyUI, can you make a video explaining Comfy and what system config is needed to run ComfyUI smoothly?
so far one of the best tutorials on this.. i need to try it in the cloud now... BUT.. i have a question.. what if I wanted to animate the entire video as a cartoon but keep one object (like a table) realistic.. how do i do this? Can it be done with this workflow?
Hello and thank you very much for this tutorial. I have just installed everything and set up my files, but the background is not rendering in the output. Is there a limit to the image size? I am using a still image in the background and it keeps showing an error on the image input of the repeatimagebatch node.
Assuming this is all being done locally? What is the maximum number of frames that can be rendered, and will you get consistent results if you have to do this in batches?
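On the batching question above: a common trick for long clips (my own suggestion, not necessarily what this workflow does) is to split the frame range into overlapping chunks, so each batch shares context frames with the previous one and the shared frames can be cross-faded to hide the seam. A minimal sketch, with the function name and default sizes as assumptions:

```python
def overlapping_batches(n_frames: int, batch: int = 48, overlap: int = 8):
    """Split a clip of n_frames into overlapping (start, end) index
    spans. Consecutive spans share `overlap` frames, which can be
    blended after generation to reduce visible batch seams."""
    step = batch - overlap
    spans, start = [], 0
    while start < n_frames:
        end = min(start + batch, n_frames)
        spans.append((start, end))
        if end == n_frames:
            break
        start += step
    return spans
```

The maximum per-batch frame count is then bounded by VRAM rather than clip length, and consistency across batches depends on how strongly each batch is conditioned by the shared frames.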