Great teaching brotha! I found a new life with A.I. generation. Using Midjourney for images and Runway to make them come alive. Now all I need is music generation to go along with the other two
Thank you! AI has totally opened up new ways to be creative for me as well. For music, I enjoy Suno but I mostly just use it for background music or intro/outro for interviews. Keep generating!
Thank you, that was great. You presented these functionalities really well.
Glad it was helpful!
Great video. Thanks for sharing.
How do we use the brush to change an image's expression, like a smile or an angry face, in the video, bro?
The closest I've gotten is painting the face and using a prompt "the person smiles" etc.
dolly zoom of u in subway was clean, the goats were a mess lol
I had high hopes for the goats
Cheers dude, I wish they'd let us see what it looks like with a low-res render first before we use credits though
Same, similar to how you can get thumbnails using their text to video. I know all of this uses a massive amount of computing, but I wish that, if you provided an image and motion, they'd give you the final frame as a free preview. Then you could tweak the motion settings before doing a heavy video generation.
@@aivideoschool Yup, definitely, I've only started on the basic package as a newbie to all this, and already I've got one eye on the clock, experimenting can be costly ha ha
Hey, I was wondering, do you know of any good software or websites that can lip-sync a still photograph for a newbie with zero skills? I'm not sure I'm ready for Adobe Animate or anything complicated yet; something simple where I can just input an image and it does the lip sync for me with limited knowledge.
Any tips, ideas, or keywords to search would be a great help. It's a bit of a minefield for me at the moment. I'd prefer a freebie, but I don't mind paying or subscribing for the right tool as long as it's easy for me to use.
Thanks man, new sub and loving the content so far, I know I've got a lot to learn, cheers again
It seems that no one notices that these brushes don't work very well. The result with separate brushes is roughly the same as before, when there was only one brush; I can create the same thing with a single brush.
That's interesting. I agree in some cases multiple brushes aren't needed. But when you want finer control of motion for specific elements, like the bird example, I feel like multiple brushes do help. My critique is that the brushes sometimes don't follow closely enough to the motion assigned to them.
❤❤❤❤❤
This app urgently needs anchor points; otherwise you can't have control over the motion. Similar to the Puppet Pin tool in AE. I'd also request pressure brushes: the harder you press, the faster it moves, or something like that. You need to control the anchor point, but also the speed of portions of your inpainted area. For example, with the arm holding the basketball, you'd want to move half of the arm at one speed and the other half at a different speed. I guess because I started in the industry as an animator, I see things this way.
I agree and would love puppet-pin-style control with the motion brush. That's one difference between traditional animation and generative AI: AE lets you dial in specific parameters, but GenAI is a new generation each time (unless you use the same prompt/seed, and even then it's not consistently precise). You can't move an element exactly 50 pixels; you just decide roughly how much you want it to move and the AI predicts whether you meant 20 or 100 pixels.
Great tutorial!!! I'm working on a project and it would be great if we could connect to discuss some ideas.
I've got an email in my channel bio. I don't check it often but that's the best way.