holy shit this is awesome. Finally some real world application for this tech
Thank you
A great athlete and a great artist. Thanks Ardy
very kind, thank you!!
Subscribed to keep seeing more Houdini and AI
cheers!!
pretty awesome! Thanks for pioneering these workflows!
appreciate it
Subbed, would love to see more Comfy + Houdini!
soon!! thank you
Very good use of ComfyUI, I think using it combined with other tools is a good idea.
This is a pretty good job, man!) I recreated your workflow. It works nicely with LCM too. Thanks a lot!
OMG! I will buy your course on this topic!
Another nice one. Very cool.
Congratulations again for your accuracy and research. A true pearl for innovation in motion design and VFX.
If you make a patreon or similar I'll follow you
cheers :)
Fantastic !
Thanks
Very informative tutorial, just one thing: where in the ComfyUI folder structure should the "controlnet-checkpoint.ckpt" be placed? Thanks
.\ComfyUI_windows_portable\ComfyUI\models\controlnet
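A minimal sketch of placing the file at that path, assuming the default portable ComfyUI layout (paths are illustrative; the `touch` is only a stand-in for the checkpoint you actually downloaded):

```shell
# Sketch assuming the default ComfyUI portable layout; adjust COMFY_ROOT to your install.
COMFY_ROOT=ComfyUI_windows_portable/ComfyUI
mkdir -p "$COMFY_ROOT/models/controlnet"
# Stand-in for the checkpoint you downloaded; use your real file instead.
touch controlnet-checkpoint.ckpt
# Move it into the folder ComfyUI scans for ControlNet models.
mv controlnet-checkpoint.ckpt "$COMFY_ROOT/models/controlnet/"
```

After restarting ComfyUI (or refreshing the node list), the checkpoint should show up in the ControlNet loader node's dropdown.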
Great stuff. Any plan on sharing the workflow?
I love this! I’m a Houdini user myself, but I have to ask… why go through all the trouble of setting up a very intricate creative workflow for AI in ComfyUI if you’ve already established your art direction from the references you’re feeding the AI? You clearly have the knowledge to pull this off in Houdini alone. Is it that you don’t use Redshift or Octane and want to leverage AI for that? I’m just not sure what you’re trying to accomplish. I’d love to have a conversation!
There are many reasons to do it this way. Aside from experimenting with AI, there are the unlimited possibilities you get by incorporating many elements and manipulating your render passes to achieve many interesting results. Each of these looks might take at least a day to pull off in Houdini.
@ArdyAla I love the fact that you’re experimenting with AI and animation. I’m on a similar journey myself, and I’ve come to realize that most of the time I find myself spending a day just setting up the workflow in Stable Diffusion to get some cool looks, but then I have to go back to Houdini and recreate the project to make it look finished (no artifacts, no shape blending, no nonsense…), and that is going to take another couple of days. So, in reality, it’s not an efficient workflow. I’d love to hear your thoughts - great work by the way!
Can you do a beginning to end tutorial?
Man, I loved this. Been using Comfy with Houdini for a while now and it’s been great. Was there a reason you’re not using LCM? Was that for better quality? I have a 3090 and I’m still waiting a decent amount of time for iterations. Your wait times seem very low - or are you just fast-forwarding the stream? Thanks again, I will definitely be watching all your content!
Yes, LCM is not giving me the quality I expect yet, and each iteration takes about 7-8 minutes for 30 frames, which is not too bad.
Super cool stuff! Isn't there a way to output a depth map of your scene straight from Houdini? That would be much more accurate than using Zoe, no? Is there a reason you decided to create the depth map after the fact?
Constructive criticism: you need a better mic.
It might not be the mic that is creating this uncomfortable sound. I'm listening with headphones and it's very uncomfortable, but we don't know what he is using to record with. The music is nice, but it could be a lot lower.