The acting is the key.
As always, a nice guitar doesn't make a good player.
I really need to see General Spade having his face eaten off by one of the creatures, and our main character looking over with an "I told you so" look on his face.
Yeah, seems like it should be the last enemy Spade faces.
The first one to incorporate this into a streamlined tool together with storyboarding is going to be the king of AAA preproduction and used to level up pitching scripts by a lot
Yeah, I think it's a nice way to do previz, and if you see my TH-cam short where I integrated JetSet's virtual production app into this workflow, there's some big possibilities. But it is too many steps. I think it can already be a single workflow in ComfyUI (there's a Mick Mumpitz TH-cam video that has relighting and background swaps, the only necessary addition would be Animate Anyone Evolved as a replacement for Viggle).
insane thanks!
This is really great work man
Thanks!
Gonna be my next music video
This is freaking amazing!!!
Love the lower third
Wow, amazing film!
I really like the workflow. Have you tried further editing the finished scene with Runway vid2vid? Maybe there is a way to create consistent runway vid2vid. It should actually work.
I don’t think there’s a way to be consistent, but I’ll give it a try. I suppose it might work using the same prompt and a seed.
As I was finishing this, I started playing with FaceFusion 3.0, and the results were very very good. Doing one more test and then I’ll put a video together about that fix. I think it removes the Krea step, and it can be run locally, so it’s free.
@@hyperbolicfilms
As far as "motion capture" is concerned, I know of three other AI platforms that have it: Chromox, Domoai, and GoEnhance.
I'm testing the platforms myself to see which one delivers the best result.
Sorry for the spamming, haha, but there's also a web-based tool called "Wonder Dynamics". Do you know it? The results are a lot better than the AI motion-capture platforms, and they can be downloaded as MP4 files.
@@danielschweig3177 Yes, Wonder Dynamics seems very good, but the limited amount of time you are given (2.5 minutes in the $16 plan, 10 minutes in the $84 plan) has kept me from using it.
@@danielschweig3177 Sorry, I thought I responded to this. I have tried it and it’s very good but they don’t have many character options to work with. I keep waiting to find a project to use it on.
Very cool, thanks Bro! Just subscribed. Cheers
Thanks for the sub!
This is great. Eager for open-source AI movie making. I'm guessing 1.5-hour videos aren't possible yet.
@@markdkberry The rendering would take a while. I’m sure as the models get more efficient and video cards get better, we’ll start to approach realtime with all this, but I think that’s still a few years away.
What do you do about the generated image taking on the physical characteristics of the human model? I like Viggle, but I've noticed that the image will take on the facial characteristics of the human performer.
I'm not sure there's a way around that, since it takes the expression and posture from the input video. I walk with a bit of a hunch, and I noticed that transferred over to some characters. Also, a broader person will result in a broader character, at least sometimes.
@@hyperbolicfilms Thank you!
Yeah I’m getting this
very cool!! :D excellent tutorial too! hahah, great work!
Thanks a lot!
Hey this video is amazing, I was just wondering what prompt you used to get the specific style of character you got?
Thanks! The Midjourney prompt was "Photorealistic, full body, latino soldier with stubble in dirty t-shirt and black pants, 40 years old, white background, f4, 35mm"
He ended up looking more concept art style than photorealistic. I later ran the image through Krea to get it to be more realistic looking, but that was in a video after this.
What site did you use for your character? I've tried Kling and Bing, but they always create characters with shadows on the face. I've tried prompting "no shadows," etc., but they always get added.
It's hard to not get shadows. You can try asking for even lighting, flat lighting, or diffused lighting and see if that works.
Have you had the chance to check out Runway Gen-3 video-to-video yet? Be cool to get your thoughts on it.
I like it a lot, but in my tests I couldn't get a second clip to match the first clip, even with the same prompt and seed. The background would change, and so would details on the character. So while it's good for short shots and definitely easier than this method, it's not as consistent across multiple shots.
Can you do a review of Runway Gen-3 video-to-video?
@@sundayreview8152 I may make a short film with it this weekend, if my cameraman doesn't have Covid.
How did you slice the 2 minutes into 10 seconds each?
In my editing app (Davinci Resolve), I rendered out 10 seconds of the performance at a time. It's very slow and tedious.
In Resolve, you can also set the Output to Individual Clips, and then break up your video into 10 second fragments. That works well.
@@hyperbolicfilms thank you
Awesome
Crazy
You can do it in Runway ML also.
@@FSK2 Video to video in Runway is not repeatable. Even with the same seed between shots, it will change a lot of the details. I hope they get there eventually, but it still has that limit now.
Can you use a 3d mesh in place of the 2d image?
Yeah, should work. This week I plan to test using a 3D environment and using a motion captured 3D character to do the action scenes for the same character that a human plays in live action.
Cool
Wow
What program did you use to alter the voices?
@@gwart41 Eleven Labs. They seem to have broadened which voices can be used for voice to voice in the past few weeks.
look at RVC
I'm definitely on the Viggle train! I made this a few months back using Viggle for live performances. I tried my best to include it within a coherent story (character, violence, action, plot, etc.). Viggle wrote a blog about it. Hope you like it:
th-cam.com/video/domp_g2Mayg/w-d-xo.html
Incredible work! That is easily one of the best films made with AI that I've seen. Are you working on part 3? Really inspiring.
@@hyperbolicfilms Thanks! I typically can only make one big project every few months. I definitely have a draft script, but I'm working on my storytelling skills a bit before jumping into the lab. But it'll definitely be a project for early 2025! Glad you enjoyed it
Is Viggle free? Or is there an open-source alternative?
As far as I know, Animate Anyone is similar, but I don’t think it’s as advanced. There may be another one for Comfy with Motion in the title, but I don’t remember its exact name.
Up your gain going forward please😊
It all looks like shit... But considering the ease and time necessary to do everything it's pretty impressive!
Typical comment from someone depressed who has nothing on their channel. Get your eyes checked, son. *It looks excellent.* You are unrealistically hoping for Hollywood level production here, and that's not happening at this point. The result that he created looks very clean. You have no creative content on your channel, and nothing good to offer.
Says the Karen with nothing on her channel. 😂 The result that our bro created looks very good and clean. Where is YOUR original content on your channel?? That's what I thought, hehe...
If you knew how hard motion capture or animation is, you would never comment anything like this. This video isn't for people like you; don't be a cyberbully.
@izzmotion9917 OK guys, I wasn't cyberbullying anybody...
My comment was just an exaggerated way of saying that this AI model is still unripe, though already impressive.
I wasn't criticizing the TH-camr's work in any way, but I think he understood what I meant, considering he liked the comment.
I know this is unrelated but give the Quran a read
Hi there! Just shot you an email, but in case you missed it - I lead Partnerships at Viggle, and we would love to connect and chat!