knowing Adobe they’ll charge like $20 for 5 clips
And you'll have to redo each idea a few times.
Right 😔
It's OK, that keeps people who won't pay or can't pay out of the door. Beta drops the 2nd week of October, can't wait.
The Adobe CEOs are extremely greedy, unlike anything seen at any other company. Additionally, it's worth noting that ON1 is set to launch their new ON1 Photo RAW software, which will exclusively feature LOCALLY generated images ... NOT ONLINE.
We hope not!
It's cool to have the Runway Gen-3 extensions... the problem is they're using Gen-2 to do the extensions, not Gen-3, so that's why the extensions look so weird and low quality. They really need to use Gen-3 for the extensions.
Didn't know that. Does that hold true for all subscription levels?
Interesting point!
That's not true at all and you made that up.
@@TukTukPirate it's alright to be wrong, lil bro, try to be better next time. 😏
MiniMax is the most impressive model out. It does great with expressing prompted emotions but I have to say that Kling’s pro version has been capable of that too. Great video, as always :)
100%, my friend. I've been using MiniMax every day for over a week, and it is by far the best at animating humans, as well as other things like birds flying and animals running.
@@FilmSpook just needs that image to video and it's top dog, for now.
Minimax is so impressive with text2video!
@@curiousrefuge I tried MiniMax after I watched your perfect video, and MiniMax is great, but without the ability to use picture-to-video it's useless for me, because you basically cannot produce something with one consistent character (human). Which is the best option for picture-to-video? I've personally never seen videos as realistic as MiniMax's, but if you have to produce many videos of one character doing different things, which AI tool do you prefer? Thank you once again for your YouTube channel :)
@@georgikozhuharov2293 Can't try MiniMax because it won't even open the page. Looks like it's too popular and overloaded right now.
I bet Adobe will make a separate subscription model for Firefly, just like they did for their 3D service, Substance.
We'll see!
Facts. And like runway, they'll make sure you waste your credits.
Dude that's Me!
r/Optopode here, and thank you so much for the reference 🪶
that video was hilarious!!! I loved it! -Mitzy
Amazing work!
Here’s what I want: 1) Stable Model 2) Stable Environment. 3) Creative Camera Work
If I can simply create characters and insert them into an environment, without either of them morphing into an acid trip, I’ll pay.
As of now, getting usable clips is not only time consuming with too many trial and error prompting, it gets expensive.
Whoever can accomplish this first is going to do very well. I hope it happens soon.
Exactly, not even the Super Nintendo had such crappy assets for motion capture LOL hahaha
That would be great wouldn't it : )
@yeah-I-know Runway costs you around $3K per year if you want to make movies, and it's not perfect, but it's damn good! I spent around $300 this month, but just for a hobby really.
@yeah-I-know Shill
@@dakaisersthul Another bot shill
Adobe jumping into AI video? Fantastic, another subscription for tools we'll barely use! Adobe's playing catch-up, as always.
On the other hand, it helps consolidate the toolbelt!
Adobe is the industry standard.
As soon as Adobe releases a 4th version of the firefly model, we'll have a robust image-to-video pipeline without subscribing to many different services.
Only heavily censored
@@AINIMANIA-3D Yes, but that's the case with every plug-n-play solution.
If you want privacy and uncensored generations, you've gotta go with Flux and ComfyUI
@@AINIMANIA-3D that's such a boomer mindset. Everyone will have some kind of text-to-video model, so censorship will no longer be an issue
@@KevinSanMateo-p1l Do you have a list of AI video generators that are uncensored? Ideogram is the only one I know of. Minimax appears to do Will Smith, Darth Vader and Mario, but I don't know if it would do Trump shooting a gun, for example (like in the Dor Brothers' videos).
@@KevinSanMateo-p1l That's such a "I don't know how to f-ing read" comment, so f-ing READ. The commenter said that when Adobe releases Firefly 4, they will be able to use ONE subscription service (Creative Cloud) rather than NEEDING to use multiple. Read, for God's sake.
As someone working on an XR concept in California, I've been following the legislation you mentioned. The lead on the language in that bill is the "Center for AI Safety," which is basically a non-profit consultancy. I'm not all that enthusiastic that they are leading the charge here in CA.
Great point...we'll see!
You overlooked the option to download the model as an STL, allowing one to print it! So cool!
Thanks for the info!
It's time to retire the Sora comparisons. We've got AI video options to create with TODAY.
Is anyone still even waiting for Sora?
@@dasberlinlex My grandfather. Sora 1.0 was announced in 1984 :)
@@MartinZanichelli I love it. Great joke. You have a nice sense of humor.
Sora did its job. It got the ball rolling on a mass scale to give us all these options. Sora was never about just Sora.
@@TPCDAZ Yeah, I'd say Sora just never was. Not even looking forward to it. 😎
I think a better comparison would be to have them all start with the same picture. Even just taking the first frame from Firefly would have been a good starting point to compare
True! That's a more accurate test!
I'm not sure that you can fairly compare the Firefly marketing videos to something you chucked together in a couple of minutes.
@@terryd8692 I mean, there will always be a bias, in that Adobe will simply choose their best example, but to give it a bit of a fight, at least start from the same premises.
17:43 It would be cool if there was an option for the AI to automatically rig the character.
Wouldn't be surprised if that's going to happen very soon!
UV mapping by AI is also something we need very dearly :)
After watching the freaky extend-video feature, it made me wonder if this is the real Skynet. Instead of an apocalypse and nukes, which we've seen coming, Skynet is going to create seriously disturbing videos that drive us into insanity.
You know what? You make a great point, because what could end up happening is people making AI videos that look like a real terrorist threat to try and start a war, and this is going to make it harder for governments to verify videos. Oh jeez, this is going to cause a hot mess of new fraud.
We hope not!
The tool I really want from Adobe: a jump-cut eliminator. If it worked like a cross dissolve, oh man.
Ooooh that would be cool!
There surely was such a thing when I left Premiere some 3 or 4 years ago. It was called Morph Cut, if I'm not mistaken. Don't know if it's still there and whether it has improved. It used to work around 30% of the time.
How tf do they get their movies to look so high-resolution in those films shown at the end? I know there are ways to "cheat" by adding fine grain and filters, but the resolution overall looks much better than what Runway puts out, even with good prompting and high-resolution input images. Especially "Seeing Is Believing," it looks amazing; the shot with the Asian woman is great!
Thank you so much! Technically, Runway's resolution is slightly higher (1280x768) than MiniMax's (1280x720), but I agree: the pixel density in MiniMax feels smoother. Especially with cinematic outputs, MiniMax has great consistency, and though not technically "sharp" or "high-res," it feels more balanced, kind of like a Blu-ray downsized to DVD that still retains its perceived sharpness. For "Seeing Is Believing," I didn't use Topaz or any other AI video upscaler. Instead, I just put all the 720p clips into a 5K Final Cut Pro project, which just "zooms" them out without additional upscaling or pixel interpolation. Then, as you mentioned, color grading and adding fine grain help give the shots that "hi-res" look, even though they technically aren't. :) You can watch the final 4K version of "Seeing Is Believing" here: th-cam.com/video/ghnk0rf5qPU/w-d-xo.html
@@particlepanic thank you for taking the time to answer in such detail, this is great input. I appreciate it! At first I didn't realize it was the creator who answered, haha. I'm looking forward to your future projects, keep it up :)))
Glad you enjoyed these!
Wait, which program generated the montage that's playing while you're talking about legislation (19:48, 20:26, 20:37, 20:45, etc.)? Those are some of the best I've ever seen.
It's a handful of different tools!
I just love AI so much, its the love of my life
We love it too!
AI also said it loves you, but it can make mistakes
Nice. Some of these look handy in one way or another.
After LITERALLY stealing thousands and thousands of photos, images, and video clips from their clients via their cloud service, of course they can generate great AI videos.
We appreciate you watching!
Yeah, how people are still happy to give them money is beyond me
@@curiousrefuge You're welcome! 🙂
@NetTubeUser ... Of course an idiot with no concept of how AI works would say this. I bet you failed basic math courses in high school, and photography was the only thing you could do.
And what do you think Runway, Midjourney, and the rest of them did? Fed their model training with copyrighted material? I hope this whole "industry" burns down in lawsuits very soon.
Adobe can't even do humans in Firefly yet, so I won't hold my breath on how good Luma is.
We'll see!
O no not me participating in Gen:48 ! I’ll have to try Adobe next.
Can't wait to see what you create!
There’s a typo in the prompt at 8:23
We appreciate the note!
Adobe was impressive but Runway is still the champ for me. I guess I'll have to test it out myself.
Definitely worth testing it all!
Thank you very much! I have been researching this space, looking at smaller vendors. I would have ignored Adobe assuming a heavy handed "solution", but this actually looks worth paying for. (Adobe stock at the next dip?)
We'll see!
You just can't bring coffee girl down; that cup is at least half full! :)
lol
LOL
😂😂
Runway should add "Extend with frame and control with video". Then it's showtime!
We're getting close!
A lot of people are complaining about Adobe's subscription pricing. Do you know Adobe spends around 10 billion dollars a year on development and marketing? If they did not charge what they charge, Adobe as a business would fail.
Interesting point!
...and yesterday the announcement that Runway Gen-3 is now able to generate video-to-video... things move faster than the news. Btw, thanks for the Meshy reminder; I have to check it directly. 🙃
Have to keep checking :)
Fix it in post is about to go to another level
ahah, very true!
Thanks for the fresh information! The AI generator race continues)
Glad you enjoyed it!
11:47 Why not do an end frame when testing the camera movement though? I would want as much control as possible, so I would definitely do an end frame. I’m curious to see what the results look like when you have both a start frame and an end frame, and you change the camera movement at the same time.
Good point, we'd need to test that next time.
Bonkers. Thanks for the excellent overview.
Our pleasure!
ooh runway extend nice!
Yes! Quite a good feature :)
Great stuff, thanks :)
My pleasure!
Off-topic question- but I really like your glasses. What is the model and brand?
Good question - we'll ask and get back to you!
@@curiousrefuge haha appreciate you 🫡
It will be everywhere soon. Blackwell chips at work.
the next 6 months will be crazy!
That lava shot is actually an FPV drone pilot's footage. I remember it from his YouTube vlog where he flew his FPV drone into a volcano, and the lava destroyed his propellers! I wonder if he submitted his clip for AI training, or did Adobe just snatch up content creators' clips the same way Udio does with their audio generations?
It's absolutely not the drone pilot's footage.
Perhaps there was *some* training, but not a single generation is the result of a single video.
Since they announced "tokens" for AI use, you'll be paying for every frame, whether they are usable or not.
It can certainly add up!
Thanks!! Great video
thanks for watching!
The bear animation… you put the first point in the wrong spot! 😮
Did you see that the dot says groin when you click on it?
Woops! Nice catch!
You've obviously never lived in Canada - snow does blow up sometimes lol.
Haha...we have so much to learn!
Great video. Thank you. I am still laughing at the bear. If anyone starts doing meet ups in the Dallas/Fort Worth area, I would love to join.
Jump in our discord and we can help organize it!
The AI video sector is getting hot AF. I'm using like 10 different video generator websites in my workflow to make videos. It's honestly getting out of hand. I also find the California pushback on AI is due to the fact that Hollywood is there, and they don't like the idea of the common man competing for their market share.
Do you know which video generators are uncensored (violence, guns, gore, horror, blood, celebrity and politician likenesses, etc.)?
If one uploads an image of Trump, for example, to Runway Gen-3, will it animate it?
@@High-Tech-Geek I've made some Trump image-to-video with Kling. I think if you just upload the picture and call it "Fat orange idiot…" instead of "Donald Trump," then it won't ID it.
Definitely makes finding a workflow difficult. However, we wouldn't be surprised if one year from now most things are consolidated.
Brilliant video. Thanks
Glad you enjoyed it
Hailuo has image to video now, and it's fantastic.
We agree!
Montage sequence cinematographers for National Geo channel are quaking in their boots right about now
We appreciate you watching
😂😂 Isn't National Geographic literally about *actual* geography, my boi
I hope everyone understands that Adobe is a highly advanced tech company that has been around for over 40 years....
True!
There was a guy who flew his drone through a volcano... This looks EXACTLY LIKE THAT..... It's copying his work.
There are several videos on YouTube of people flying drones through lava. It's not copying anyone's work. It's using it as a reference, just like every other generation. CTFD.
Certainly possible to have trained on that one video but a generation is compiled of far more data than a single vid.
Will this just be part of the normal sub we pay? Probably not is what my gut tells me.
We'll see!
Dude, if you dragged the ankle point to the bear's toes, I can imagine how precise you were with the rest of them. No wonder the bear animation looks wonky.
Good point!
Thanks, I just uploaded 2 videos that look 90% realistic, done with MiniMax. I have to try Adobe too.
You can do it!
Content writers become the heroes.
Certainly the best stories will rise to the top!
I am not convinced by a comparison of a few random generations from models that have been trained on different data sets and for a different range of topics. It's a bit like taking a Formula One car and a golf cart and comparing their off-road capability.
True! It's difficult to test, but we try our best :)
That's not surprising; they waited and have something better. Thanks!
Thanks for watching!
A werewolf you say? 🐺
::howls at the moon!::
No need to extend that way; just reuse the same photo and prompt something else, and the results will be better.
Good point!
Will Adobe Firefly only interface with other Adobe software?
Very likely!
Thank you :)
Our pleasure!
Regulating the use of AI-version contracts in this way is not providing "special treatment" for union members. It's definitely not a "major problem" (video, 20:53). The legislation protects ALL workers from being exploited in this context and encourages membership of unions, which is a good thing. For reinforcement of things like decent minimum wages and workers' rights (and, in this instance, the right to be paid properly for use of your AI version), unions are of great benefit, especially for those working in creative industries where the monetary value of work is ambiguous.
We appreciate your perspective.
very interesting episode
Is there a link to the full mouse video somewhere?
check our gallery!
@@curiousrefuge When you say check your gallery, do you mean check the videos you have uploaded to your YouTube channel? If so, I'm not finding a standalone video of the hamsters controlling robots.
Is that Snoop Dogg eating a burger...?
Hahah could be?
I'd love to see Adobe release this stuff, but my fear is that they'll begin charging extra for generations. Wouldn't surprise me either, as they are pretty greedy with their stock footage after you're already paying big money for the suite.
We would probably bet that there will be some kind of charge.
The dab and explosion 😂😂😂
Glad you liked that!
I wonder if the AI model will be a plugin in their desktop video software.
Perhaps one day!
Adobe better catch up or I'm done. They could've seen this coming for years. It's THEIR BUSINESS!
It's a very competitive landscape for sure!
What I don't like right now is the pricing relative to the very small amount of output you get. I know it's early days, but the pricing is crazy.
True, it's quite pricey!
Many productions have moved out of CA and are now in Georgia.
Yes, a mass exodus of traditional studio locations!
I love how this AI video tech, which is literally generating scenes from text, is talked about like "it's OK, not bad"... when 5 years ago everyone would have been losing their minds.
It all moves so fast!
Amazing Stuff!
Glad you think so!
In the Runway vs Adobe comparisons, Adobe's actually seem just as janky tbh, and I wouldn't use either in real-world applications.
1. Look at the reindeer's back leg as it turns to face the camera
2. Drone flying through lava... cause sure, that's totally a thing drones can do
3. The puppets, sure, whatever, both are cursed
4. Look at the ripples on the sand change over time
We appreciate the feedback on this!
I don't know why everyone goes crazy about it already; it's still in the baby stages. Videos last like 5 seconds at best and don't even have audio; you can't possibly make a movie or TV show. You're better off imagining something; at least then you can think of outcomes.
It's more about adding it as one thing to a toolset, rather than making a movie with it entirely.
Wonder where Adobe trained those models
On their own assets.
Can we select a lower frame rate to get more than six seconds of video, and use the output in DaVinci Resolve Studio to fill in the missing frames?
You can certainly use other AI tools and try DaVinci Resolve to smooth out the missing frames.
Nothing short of awesomely amazing...Thanks for bringing the value-content. Greatly appreciated.
Our pleasure!
That's the same way I drink coffee
It's the BEST way :)
I want still camera shots; it seems they are always moving around no matter what I input.
locked shot, stationary shot, tripod mounted camera, etc. none of these work for you?
That's a good tip. I'd say even with those prompts you'll still get movement in 50% of your shots unfortunately.
@@High-Tech-Geek I'll try those terms In my prompts thanks 🙏
This whole AI video thing is really getting real. I mean, it will be too real to differentiate soon. A bit sceptical though... if it's used for bad purposes.
Like all types of art, we encourage everyone to use it for good!
I am learning a lot.
thanks for watching!
"We will see!!" ;)
We shall!
Where is the link to MiniMax?
It should be the first result on Google.
Is the video generator free for a few uses?
MiniMax currently is, but better get started now before it's too late :)
Adobe's video extension, to extend a rush... does it need to be connected online?
Oh great question! We assume you need some internet access for the initial setup!
Isn't that Sora? They're just not letting on that's what it is.
bingo
which part?
@@curiousrefuge Adobe's video gen upgrade.
It's all good, but only 8-bit, so for professionals it's not very useful at the moment. Sending stuff to grading is always a struggle; these people are super technical and very specific. I don't want to be the one sending them AI-generated clips 🙂
Definitely only works for certain projects !
Is anyone still waiting for Sora?
We're enjoying all the other tools currently!
Your comparison between Runway and Firefly seems biased, as you are using prompts specifically optimized for Firefly. For a fairer result, please conduct the same comparison using the commercial prompts designed for Runway.
We appreciate the feedback!
Feels like a free ad for Adobe. If you're going to run comparisons with Runway, at least show us the prompt so we can make our own assessments. The "trust me bro" approach makes people wonder what you're hiding and who is paying you to hide it.
Sorry you felt this way! We'll try and be more clear next time :)
It isn't available in South America. I have an Adobe subscription, but text-to-video in Firefly still has the "coming soon" label.
Awww, that's an interesting limitation! Hopefully that changes soon!
And it's all cloud-based, so Adobe gets to see every image you create, call it theirs, and train new AI with it. No thanks.
We understand your perspective
I’m confused. So you have adobe create an image for you that you have no hand in, but the problem is that they’ll get to call the image they created theirs?
Sora without real users is just a claimed capability.
Sora definitely has users, but unfortunately access is kept pretty tight.
So there's no value for the artist now, huh?
Not at all. Artists use AI tools.
Is the robot the hamster's extension or is the hamster the robot's pet?
The world may never know!
@@curiousrefuge It's a lot like the question "Is AI the human's extension or are the humans AI's pets?"
Where is the link to the Chinese generator? There are tons and I can't find it, thanks
Check our description :)
I'm really excited for Premiere to start introducing some cool things!
We are too!
When people misunderstand everything about filmmaking and art itself
How so?
This looks amazing
Glad you enjoyed this!
Thank YOU for updating my antiquated mind.
Our pleasure!
Anyone else still find AI video unsettling? Like it hits directly in the uncanny valley for me no matter how detailed. Idk I think I will stick with man-made films.
There is certainly an uncanny valley but the tech is getting better and better :)