Did We Just Change Animation Forever... Again?
- Published May 19, 2024
- Our Exclusive Tutorials will teach you how to BUILD YOUR OWN ANIME! Join CorridorDigital with a 14-Day Free Trial ► corridordigital.com/
Discover how our small team was able to take cutting-edge tools and apply them in just a few months, making a creative leap in our Anime Rock Paper Scissors series!
Limited Edition Anime Rock Paper Scissor 2 Merch ► Available only until August 20th, get yours today in either t-shirt or longsleeve and celebrate this release. corridordigital.store/
The Anime Rock Paper Scissors show is entirely made possible by Members of CorridorDigital, our INDEPENDENT STREAMING PLATFORM. Try a 14-Day Free Trial, and bring Episode 3 to life! ► corridordigital.com/
Written & Directed by Niko Pueringer and Sam Gorski
Artists ►
Dean Hughes: Animator, Lead Warp Artist, Neuromancer, Prince Jules - / sdeanhughes
Josh Newland: Character & Style Design, Lead Artist - / lv1_artmage
Kenson Lee: Animator Extraordinaire - / rikognition
Mattias Alegro Marasigan: Post-Production Editor, Compositor & Keeper of Timelines - www.mattiasalegro.com/
Kytra Selca: Warp Artist - / @maketherobotdoit
Eric Solorio: Warp Artist - / @enigmatic_e
Jan Losada: Warp Artist, Neuromancer - instagram.com/artificial_inte...
Sound & Music ►
Sound Design by Kevin Senzaki - senzaki.com/
Theme Song by David Maxim Micic - open.spotify.com/artist/0wQa1...
Theme Song Vocals & Translation by Shihori - open.spotify.com/artist/07vlE...
Music by Sam Gorski - open.spotify.com/artist/7sWkn...
Production & Additional Talent ►
Christian Fergerstrom: Producer, AD, Script Supervisor - / c_fergerstrom
Jordan Coleman: Costume Designer, Associate Producer - / jordan_coleman
Jordan Allen: Soldiers & Peasants - / vfxwithjordan
Merch Design by the incredible Kendrra Thoms kendrrathoms.com/
Creative Tools ►
Created on Puget Systems Computers - bit.ly/Puget_Systems
Warp Fusion created by Sxela - / sxela
Composited in DaVinci Resolve and After Effects
Chapters ►
00:00 A New Way of Animation?
01:55 Room for Improvement
03:40 The Beard Issue
05:29 Improving the Style of Episode 2
09:36 Close, but no Warp Fusion
14:10 Warp Fusion Results blow Sam's Mind
16:12 A Long Way from Home
19:48 Incredible Stories need Incredible Music
21:42 Finishing the Project
Twitter is going to hate this so much
I love that for them
What's Twitter?
They do now
For good reason yeah.
for good reason
The behind the scenes of this series continues to have dramatic arcs of its own
I like how they make it look a little like a parody episode of a TV drama; it makes it more fun to watch.
It would look fun on like a tv show but it would get boring real quick
As cool and groundbreaking as this project is, it really calls for another name rather than animation. Motion capture films don't qualify for animation Oscars, and rotoscoping isn't considered animation, so a distinction needs to be made; it would clear up a lot of the frustration people feel about this.
Rotoscoping is actually considered animation in some circles, but I do see how this can be seen as not animation. The thing is, animation is so broad; it never meant only drawing frame by frame. That is just one type.
well said
I think it's considered animation. If The Adventures of Tintin can win many "Best Animated Picture" awards (not Oscars however) and is heavily motion captured, then I would consider this animation too.
That being said, I still prefer hand-drawn animation by a longshot, and I wouldn't like to see this replace hand-drawn animation.
As a compromise, we can call it "Animation-esque."
Like calling something made in an anime style "Anime-esque."
It's more like a "style filter" genre, definitely not animation.
@@Hapasan808 The difference is that the characters of Tintin still required heavy manual animation. The mocap simply provides a more human frame to work with. The animators still need to add much of the expression themselves and adapt the human movement. If you didn't need animation on top of mocap, there wouldn't be mocap animators.
The thing I'm terrified of is companies hiring artists only to get training data, firing them, and then just using the data they got. No need to worry about a union or paying a fair wage when you can cheaply produce it using a machine.
Worse yet…
Bring in raw, new kids offering their own "company theme" or style, have them pay for the "schooling," then keep the raw data, all of it paid for by the most gullible.
But it would take a lot less time for the artist if you're just getting a few drawings for the model to learn from, meaning they could do other stuff. The lower earnings would likely match the reduced time sink.
The industry MUST adapt. Because this is coming whether we like it or not. The easiest and cheapest methods will always be sought out. It doesn't matter if we think it unfair.
If you look at it, artists are less present in the process, but a lot of other jobs are involved that weren't much before.
While this is concerning, when it comes to entertainment the desire for new and exciting things will never go away. An animation or production company is not going to be able to just get a commission off of an artist and use that data for all of their media thereafter. It will likely create a new dynamic or approach to creating these styles to fuel AI assisted processes. People and artists will still be needed at every step of the way to generate new styles/themes, and properly implement those themes into a cohesive work. Yeah a lot of menial art in corporations will likely be cut out and replaced by mass produced AI work, but it's not like there's a downward slippery slope from the current starting point of corporate clip art.
Hiring artists for reference material for the AI was the way to go. It looks way more solid now, as well as being more ethically done now.
having to rely on other people is a bad thing generally speaking, you don't want a person to have a capability nobody else has, this is bad! That's why replacing man with machine (AI) is the greatest undertaking in human history.
Nah wild west. Artists getting dong slapped by the inevitability
@@Danuxsy But these image generatoes are fundamentally working only thanks to all the people that made art for these to work, and it's still impossible to make something actually new for these AI. No one got "replaced" when it comes to creativity, just used, blendered and regurgitated the result.
@@WwZa7 What's human creativity if not just previous creations "used, blendered and regurgitated"? Synthesizing what comes before is the essence of any creativity. Plenty of AI images are novel, in the sense that there has never been one created like it before. How similar or different it is from previous examples it learned from is a matter of opinion, not fact.
@@Danuxsy Everyone relies on everyone, how do you think movies are made? Producers are also artists, directors and foleys now? When someone has a capability you need but don't have, you hire them, welcome to life. This is one of the dumbest thing I've heard this year.
I love Sam's barbarian and his "don't look at the circle" fighting style. It was sooo funny every time he "Got em"
same. that brought back classic memories from my high school days. 🥹
Sam's barbarian wizard bit was my favorite bit on ep2! Those got'em moments were hilarious yet the hits sounded great.
spinoff
Outfit was cold too
I cannot express how much I love the fact that you guys hired real artists and emphasised the importance of that fact
Corridor assured they hired their own artist to train the Al, but remember that industry discourse like this is interconnected. They may not be stealing art, but any studio that sees this and goes 'wow, it's that easy' will. Corridor's also boasting about Al democratizing animation-making. Now anyone can make animation in their bedroom with nothing but a camera and a free software! Except, anyone could already make animation in their bedroom with nothing but a camera and a free software. I made animation in my bedroom with nothing but a camera and a free stop-motion software when I was 10 years old.
@@blackwillow7314 it’s like making a tutorial on making bombs and then saying “but don’t worry guys, we’re good people and since we’re the ones making the bombs, it’s ok because we’re not going to blow anything up.” their intentions don’t matter when the end result is still, y’know, a BOMB. they’re literally handing studio execs a way to get rid of almost every one one of their employees but it’s okay because “we’re just showing the capabilities of this new technology!!!1!!”
Well yeah. Things evolve. Lots of film developers lost jobs when digital movie cameras were a thing. Most film rolls became obsolete. Camera men and directors were still a thing, weren't replaced. They just got new equipment. Same thing.
Artists apparently lack self-esteem and need validation from others
@@TrueAfricanHero yeah, artists need lots of validation and praise for the thing they dedicated many years, decades of hard practice. Drawing for ourselves is satisfying on its own but it gets boring really quick. We are our own worst critics so hearing someone praise can lower our expectations of how good enough our art should be.
The technology behind this is cool and all, but there is a reason animators are yet to incorporate this into their workflow. Noodle did a great video on why.
We do use it a little in the concept art phase.
But it is mostly just to see which designs are worth having a designer expand on
@@RusticRonnieAs for me I just use it to see what my character concept can look like. Than Maybe exapand further.
and professional anime artists jumped on it immediately. Those guys are just overworked paid 4 bucks per in-betweens that can take hours. I'm not sure why people are mad about that. Are they protecting anime artists from themselves?
When I saw the episode and saw the child version of Niko I could still see Niko's face _in the child face,_ so I was impressed not only the de-aging process, but obviously the Defacial-Hair'ing...
Warner Bros WISH they had that tech a few years ago.
@@BroadFieldGamingworks super, man.
The sped up voices though. That’s so uncanny.
@@victorwidell9751 Pitched the voices up, not sped them up. The pitch up happens during a speed up if you don't compensate for it but they can be done separately. For Sam's character they pitched his voice down to deepen it.
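The coupling this reply describes is easy to demonstrate. Below is a minimal numpy sketch (function name mine) of naive resampling, where playing samples back faster shortens the clip and raises every frequency by the same factor; real pitch-preserving speed changes need something like a phase vocoder to decouple the two.

```python
import numpy as np

def naive_speedup(samples, factor):
    """Play back every `factor`-th sample.

    The clip gets `factor` times shorter AND every frequency rises by
    the same factor: the chipmunk effect you hear when audio is sped
    up without pitch compensation.
    """
    idx = np.round(np.arange(0, len(samples), factor)).astype(int)
    return samples[np.clip(idx, 0, len(samples) - 1)]
```

Pitching a voice down while keeping its duration, as described for Sam's character, is the same trick run in reverse plus a time-stretch to restore the original length.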
I always believe that AI should be a tool alongside the work of artists, NOT to replace them. I appreciate Corridor hiring an actual artist for this, that’s absolutely the way to go
It doesn't really seem like the artist used it particularly as a tool for themselves, more like they drew reference images which got fed into the model, which then spliced it with the footage Corridor shot.
I still think that this is a replacement of an artist.
Couldn't agree more
This. In the end, all this AI image stuff is just a tool like Photoshop. I think anyone that thinks that this is "just a replacement for actual artists since you just type it in and generate it" should actually give it a shot. You still pre/post-process images to work at all, lots of testing/prototyping, need to combine or remove stuff from the image etc. There is no "type a magic prompt and replace a job" kind of stuff going on. Same as text AI will not replace developers or writers.
@ThatFoxxoLeo Absolutely, it's the same as Hollywood studios scanning actors. The talent is no longer being used, but being abused
@@ThatFoxxoLeobut the whole problem with the start was that the style was being stolen from an artist, not the rotoscoping job? Yes, this could have been done will a full animation team, but corridor would not have the funds to create that in this situation. Most people's problems are with copyright and an authors decision to keep their style theirs, so them deciding to take the style from someone worth full knowledge of what they are doing is very good
There's a few techniques I wish you guys explored, ebsynth to interplate frames and then just rendering keyframes. Using controlnet openpose, making contact sheets which multiple frames on a single image (if you have the VRAM this really helps consistency).
Warpfusion does basically the same thing as Ebsynth: warping a texture based on optical flow. Openpose doesn't offer much benefit since they're already doing img2img with the desired poses. At this point to reduce the jank you have to raise the resolution (it looks like their model is still only 768px)
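For readers wondering what "warping a texture based on optical flow" means concretely, here is a minimal numpy sketch of the backward-warp step (the function name and nearest-neighbour sampling are my own simplifications; WarpFusion itself uses learned flow estimation and softer blending):

```python
import numpy as np

def warp_by_flow(prev_stylized, flow):
    """Backward-warp the previous stylized frame.

    prev_stylized: (H, W) or (H, W, C) image array.
    flow: (H, W, 2) array; flow[y, x] = (dx, dy) tells output pixel
          (y, x) where to sample from in the previous frame.
    Out-of-bounds samples are clamped to the image edge.
    """
    h, w = prev_stylized.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return prev_stylized[src_y, src_x]
```

The warped frame is then blended into the diffusion init for the next frame, which is what carries texture forward from frame to frame and reduces flicker.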
I love how attentive Corridor is to giving credit where credit is due, it shows that they are a community driven group with intent to grow as artists not individuals
Exactly! One of my favourite things about this group!
The line" The only thing that stops changing is us when, we decide to stand still" is such a cold line.
Yeah, except it's being done unethically. Just because something sounds inspirational due to your confirmation bias doesn't make it right. That's how Hitler got all his cheers during his white supremacy speeches and started a world war. And the problem with AI is the input. If they deleted the datasets and replaced them with royalty free images then it wouldn't be a problem to anyone. That's all.
I assumed that was a line from the anime or something.. Niko spitting fire lines like he IS the anime.
Cold line for sure.. Line may not be true but still cold nonetheless
Idea comes from Philosophy, - but i love "Ghost in the Shell" (1995) version
"All things change in a dynamic system. You effort to remain who you are is what limits you"
The concept of evolution. Basically "you can't evolve - if you don't change"
@@mr.aNdErsOn88 its saying that everything is always changing, including us (so long as we keep moving, learning, and growing). If we stand still, the world will continue to change and leave us behind
At the start, you say, "Normally, you need a team of people and powerful computers to make an animation." I think the next line should have been, "and here's why!" You still have a pretty big team working on it along with a crate of 4090's! Pretty damned cool though!
true, but still much smaller.. probably less than 10% of the size team and render farm a big studio needs
Stolen comment?
@@ThatNerdGuyjudging by the profile it seems legit, the other one you saw was prolly a bot
Maybe, if so sorry.
Right, but also, they're creating these processes from scratch, it takes a lot of work to do that. Now they've got it more or less figured out, a lot of the grunt work is gonna be eliminated for anyone else in the future. Also as more people experiment and refine it, it will get even more accessible.
during a strike huh
if animation studio use this method, then i will lose my job
It's the same with every job in the world. It's your responsibility to improve yourself or be pushed by something better. The world isn't gonna wait for you to catch up
@@UniqueMAXPlay yea off course i can draw 100 of frames by one day. totally possible.
the only way to fight this is, i have to use ai to but spice up a little bit by using my own skill to improve the frame.
@@UniqueMAXPlay
You chose possibly the hardest thing to make a Stable diffusion do, hands, and made that a huge part of the video, mad props to you
I mean it’s pretty much just a filter over actual images so
@@maxitoburritonot really but if that’s what you got from the video then you do you
@@maxitoburritobro got ratioed💀
@@yuyah7413 a ratio on TH-cam hits really hard
@@NFIVE30 fr
When watching this, I immediately thought of the animated full-length film "Loving Vincent" (2017, Poland/UK), inspired by the life and paintings of Vincent van Gogh. The production team hired over 120 classicly trained oil painters who painted over the frames shot with live actors. It took 6 years to make.
That’s where my head goes too. Now imagine if that production had this to help those artist. It take so much less time and they could use the style throughout all the movie. (Because they actually changed styles during flashbacks to a less complex style)
@@thedarkangel613 but that would make it lose all its charm, the movie is so beautiful because literally every frame is a painting that you could hang up, the fact that so many people spent so much time working on it is what gives it meaning.
@@BunkeMonkey They will still have the option to do it, new technology is about giving people more possibilities, not forcing them to use them.
It’s insane that Corridor Crew is releasing this amid the writers strike, it’s almost like they want the studios to win and pay everyone in popsicles.
This only gives studios more reasons to no longer pay actors a fair share for their work when they can have it done like this
Writers and animators aren’t the same, but I can’t help but mostly agree with you 😭
@@stellanovaluna this basically is a domino effect, once studios shift their eyes to animation they will already have an opportunity to cut crew or animators
That's just how the world goes
Every tecnology is used to replace someone in some way as much it can create more jobs
When Disney start to use 3d it was a shock in the industry, but it didn't stop creating jobs nor ending the hand-drawn animation forever. And that's just a art analogy, but is not just the art industry that suffers
The world is for the ones that are adapted to the current situation, always was
The main problem is that artists (in special from hollywood) think they are better and must receive the attention they believe they deserve, which is part they reason most movies nowadays have the worst writing ever
..... they are a youtube channel not a Large studio producing shows and I hate to break it to ya But Ai IS GOING NOWHERE the strike will eventually end when they make a deal but ai will still be used it will probably justb be more controlled but pandoras box is open its here and people are gona use it and of all the f'd up stuff ai has done the funny rock paper scissors vid is what sets everyone off really???
I think it’s very telling that in the middle of a writers and actors strike, of which was partly caused over concerns about AI, you guys decide to put out a video shilling for a technology that is actively being used to replace your jobs
glad there’s some people with critical thinking skills in this comment section
It's going to replace everyone's jobs. They're just freaking out because they thought creative jobs will be safe. Jokes on them they'll be from the first to go.
I didn't see them freaking out over self driving trucks soon taking drivers jobs. Because they expected it.
"The only thing that stops changing is us, when we decide to stand still" Such a good quote
i can almost 100% guarantee that they will put it on a shirt
Some Niko wisdom for ya
An artist never stops learning and evolving if they want to stay relevant. It's what makes their channel.
Totally!
oh wow, David Maxim Micic is an absolute gem. Can't believe y'all got to work together.
I've been blasting his stuff all week, was not expecting him to pop up here!
yooooo hell yeah so glad someone pointed this out, been a fan since Bilo II
I've had his music on my spotify list for many months, there are way too many underappreciated/unknown artists.
my jaw dropped!
Yeah LUN has always been a favourite album for the gym or driving, am so excited for this
I wonder if modelling these characters in 3D and just capture motion tracking and facial tracking separately. That way, you would have more control over each frame and still maintain the anime style.
That already exist and has existed for a long time. That's how most AAA video games are made.
I mean they could do it if they just wanted to make something, but then they wouldn't be "changing" anything.
That would also fix the worst parts of the product. A 3d model couldn't warp features into totally seperate things independently and shaders are seperate from models, so you wouldn't have the flickering effect.
Honestly I think this is the better option compared to running it through the AI.
I think the AI is a neat idea, but the ethics behind it is currently dubious.
@@MasterMordekaiserThe technology itself is no more dubious than any other. How people use it can be morally questionable, however. The way media represents all disruptive technology is often also "morally questionable," insofar as something so subjective works, yet does that seem to stop them?
Just from this video, I could get a sense of how much work got put into making this that I don't think people get. It's just a shift in how we see it, instead of someone drawing everything and directing everything, it's someone coding and directing everything. Good ideas always shows in the end, and that's never going to change no matter what technology comes out
of course a ton of work was put into it, you can put alot of work into something that doesnt turn out to objectively be art xD. My main issue is how they present this as "we changed animation forever and made the past ways of doing it obsolete!" which is total bogus and if it is true is not a good thing
@@ezoni8438 that's like arguing little things, a video title is not really as relevant as the content of the vid and really is just talking about your first perception of click bait rather than if ai should be used in animation, which is the main point they are trying to express
@@MajorTim01 i wasnt only referring to the title, and was also referring to some other things they've said addressing the controversy
this is why people shouldn't be scared of AI, they should be scared of who uses it for their own wrongful gain
I think Plato said that
It's all same with all the tools available in the world for Human, heck even if foods are handled in a wrong way it can kill a person...it will always depend on how people use it.
:D
@@golik133 yup
exactly. and there is one person coming who will use it for evil. get saved now before it's to late.
1Co 15:1 Moreover, brethren, I declare unto you the gospel which I preached unto you, which also ye have received, and wherein ye stand;
1Co 15:2 By which also ye are saved, if ye keep in memory what I preached unto you, unless ye have believed in vain.
1Co 15:3 For I delivered unto you first of all that which I also received, how that Christ died for our sins according to the scriptures;
1Co 15:4 And that he was buried, and that he rose again the third day according to the scriptures:
Mad respect to the crew for listening to the criticism, then stepping up and providing the world an example of what artistic, ethical, and responsible use of this technology looks like. Generative AI doesn't have to be an exploitative tool, it can be part of an amazing future for artistic expression.
They did none of that. Not once have they actually addressed the criticism levied at them and admitted they were wrong. If they had, they would have taken down the original video, or at least made a public statement acknowledging the ethical issues. But they never do that sort of thing. And I don't know about you, but a future where artists are hired just to provide Source material for an algorithm that doesn't involve them any further does not sound like a good future.
AI can absolutely be a tool but this is not it.
I REALLY liked the look of the Warp Fusion output with the high frame rate!
This is great. There's nothing wrong with CC doing this in my opinion. It's just that the AI companies need to delete their datasets and replace them with royalty free images and artist-consented images. That's it.
All these new motors still have the initial motor with the LAION scrappings. This wont go back unfortunately, but they want to sugar coat it with adding personalized artwork on top of it.
I’m glad you’re addressing all the issues people had with the first video and are improving upon them here! It shows that you care about the fans’ response and truly just wanna make the best content you can. 10/10!
The thing is that regardless of using AI or not, this is still roto-animation. If you look at the original Snow White from Disney or the LOtR version of Ralph Bakshi, it has always looked a bit weird (unless that’s exactly the effect you were looking for). It’s because real actors and objects move constantly a bit randomly in real life, and movements don’t follow the « golden rules » of animation, like anticipation, exaggeration or stretch. Also, people seem to think that dividing your frame rate in 2s is sufficient to make real footage anime-like, but it’s a bit more complicated than that. Choices of key frames and in-betweens are much more important, and framerate can be on 1s, 2s or even on 3s during the same sequence.
To make it really anime like, I think you have to consider the frames of your footage like animation cells. Selecting key animation frames and don’t hesitate to play on the speed between them. It’s also possible to treat different parts of the image separately. For example slowing down the movement of hairs from the wind, while keeping eyes and mouth moving at regular speed. Because of the AI technique, it doesn’t mean that you can’t use traditional animation compositing techniques and have the entirety of the picture processed at the same time. Nothing prevents you from using one footage for the body silhouette and another one for the face. You can take different depth of field shots and then combine them together, film probably problematic overlapping stuff separately, etc. Also, you can apply deformations to exaggerate or on the contrary stabilize your footage prior to get it processed by the AI. The result may be totally weird on live action version, but be perfect once « drawn » by the AI.
In my own low amateur level experience with such things, it’s a headache and maybe a lost cause to try and generate every single frame with AI without heavy flickering, while strategically processing hand selected key frames then use EbSynth to create the inbetweens gives a much more pleasant result (at least for relatively steady scenes without too extreme changes from one frame to the other).
Those are all achievable in post, but of course it’s best if it’s already taken into account in the performance part. And there’s also that, I think it’s not easy and takes training to act like an anime character instead of a live action one. Especially with your body, because the momentum of animated characters don’t really follow laws of physics. Maybe it’s something that professional dancers/coregraphers can help with, or people that are accustomed to do slapstick comedy. I have a feeling that stage artists could do well in these kind of exercices.
You have perfectly sunmed up all the points that I was writing in a comment to a first episode. And even more. Second episode didn't come further closer to anime than first one. Animationvise. Great that they hired an artist to work on designs. That is the only major improvement in my opinion. I think that shifting resources from trying to process every frame creating nothing but rotoscoping to more creative approach that you absolutely fulyy described could have given this content true anime feel even with great deal of AI involved
@@myskeletonboyYes, their workflow on this second part is still very much a live-action production one instead of an anime project workflow, which is totally understandable, this is very different from what they usually do. Finally, what the progress of this project shows is that the AI aspect is not so much a game changer because you still need artistic designers, storyboarders, skilled animators, inbetweeners, compositors, etc. to make a very good Anime. There are many more specific skills to acquire than just being able to draw a cartoon. The ones whose job is really at stake are the persons who draw the cells on the production level. Which still represents quite a consequent workforce.
Even so, as of yet, I think the AI technique can only really work for a realistic style. It’s difficult to imagine for now that it could work as well trying to make an anime lin the style of One Piece, My Hero Academia, Porco Rosso or other comic whose characters are drawn in highly unrealistic style.
Totally agree. I think the logical "next step" (if the onjective is to make this sort of thing more like anime, and less like roto) is to pull the keyframes from the AI-converted-video, and run them through a sort of AI "in-betweener", with settings that approximate the anime style they want. Unfortunately, this throws out quite a bit of the current "product", and re-generates it, adding yet another "layer" of production.
Optimizing the pre-production workflow to concentrate on generating the keyframes might help with reducing some of the overhead. I could imagine that, through continued training of the same models, that the AI style converter might get good enough to need less "help" in getting the results they're looking for too. The pre-production would eventually be quicker and less intense.
But it's a lot of time and hard work to get from here to there. What they've produced already is pretty amazing.
@@brentbourgoine5893I think I’d try to first use the live-action frames to have the animation timing right. The thing is that since you will process the frames through AI afterwards, you don’t even need high res live action frames because the transformation into anime drawings will not need that much details (it’s even possible that too much details could confuse the AI more than anything else). You can also make pretty rough alterations to your image sequences before the AI pass that will not mess up the result since it will completely reinterpreted.
On the opposite, creating in between frames automatically, from my experience, works better with live-action as a base than with Anime frames. The displacement of pixels seems easier for softwares to predict with textured areas than with flat colors areas. In both cases though, this process works pretty damn well if both the frames contains exactly the same elements and the movement is smooth. But, understandably so, it sucks when it comes to try to create an intermediate between details that only exist in one of the two frames. For example, if you have a sudden head turn from a side view to a front face view, the system will not be able to process correctly the side of the face that doesn’t exist in the first frame. That’s why it’s important to choose manually your key frames for softwares like EbSynth. I always try to find the one which includes the maximum of details that will be also present in other frames. Like, for a face sequence, it has to be front view with the eyes and the mouth opened, because it can easily close eyelids or lips to hide the eyes or the inside of a mouth, but can’t make up those without having a reference.
All that to say that, yes, it’s still a lot of work that has to be done « manually » and it’s unsure if it’s always faster or easier with the AI method than the traditional one.
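One crude way to pick the hand-selected keyframes this thread describes is change detection: start a new keyframe whenever the footage drifts too far from the last one, then let EbSynth propagate each keyframe's style to the frames after it. The metric and threshold below are illustrative assumptions, not anyone's actual pipeline:

```python
import numpy as np

def select_keyframes(frames, threshold):
    """Return indices of frames worth stylizing by hand or AI.

    A new keyframe starts once the mean absolute pixel difference
    from the last keyframe exceeds `threshold`.
    """
    keys = [0]
    for i in range(1, len(frames)):
        drift = np.mean(np.abs(frames[i].astype(float)
                               - frames[keys[-1]].astype(float)))
        if drift > threshold:
            keys.append(i)
    return keys
```

In practice you would still review the picks manually, for the reason given above: the best keyframe is the one containing the most detail shared with its neighbours, which a pixel-difference metric only approximates.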
No this isn't animation. Something must be animated to be an animation.
Even in rotoscoping, important decisions involving form, weight, and even movement, if done properly, must be made by the artist, which *does indeed* require principles of traditional animation to implement.
Calling this animation is the same as calling a snap chat filter an animation.
What this is, is what corridor crew does best: Video editing, VFX and post work.
Again, calling this animation is extremely offensive to the artists who have dedicated their lives to the craft and have unwittingly made this technology possible
all these ai "artists" always saying ai art will "change" and "revolutionize" animation, they like being so vague because yes its going to change animation for the worse, completely ruin it
Womp womp
It's cool you collaborated with the warpfusion community to help get features added, improving it as a tool for everyone. That's the kind of thing I love seeing.
I love how Corridor shares their "discovery process" with the enigmatic_e clip. Because to me, that is the community of creation: looking at small concepts other people did and using them to inspire a new piece of work, and in your case, actually contributing to the software. It's not about the software, it's about the artists behind it. Beautiful and inspiring as always.
Absolutely. And with open source, "contributing" doesn't always mean submitting code; sometimes it's just the idea of something that can enhance the project.
its not about the software its about the artist that got their works stolen for this kinda shit, yes beautiful and inspiring as fuck
@@mop-kun2381 So you clearly haven't watched the video or you'd know all the art this model got trained on was created for the project, and he was compensated for his time.
You're right that this is a problem, elsewhere, but Corridor have shown how it can be used without infringing on the artistic rights of others.
I love how they highlighted those in the WarpFusion community and specifically said "hire them"
So, after the first episode of Anime RPS, I started thinking about a rudimentary method of cleaning up the motion on the characters. Let's say you have a "style" dataset of 1000 images. You take the first frame of video and have the AI convert it using your style dataset, and each one of those 1000 images has what I call an integration value of 1. Then, for the second frame, you add the first frame to your dataset, but you give it an integration value of, let's say, 500. So the AI would be using 1000 images of style, and 500 of the same image (the previous frame) to generate an output frame. If you see jank in a frame, all you'd have to do is adjust the i-value of the previous frame up or down. Or you could cut it to, say, 250, and add in the image of the frame before that, also with a value of 250. Then I watched this video and realized that's probably exactly what WarpFusion is doing. Lol.
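For what it's worth, the i-value scheme this comment describes can be sketched as a simple weighted blend. This is purely an illustration of the comment's idea, not how WarpFusion actually works (WarpFusion warps the previous frame with optical flow and blends during diffusion); `blend_previous`, its weights, and the number-list "pixels" are all made up for the example.

```python
# Toy version of the commenter's "integration value" idea: before stylizing
# frame N, blend in the previous output with a weight playing the role of
# the i-value relative to the style dataset's total weight. Pixels here are
# just numbers in a list, not real image data.

def blend_previous(current, previous, i_value, style_weight=1000):
    """Weighted average of the current frame and the previous output.

    i_value / (i_value + style_weight) is how strongly the previous
    frame pulls the result toward temporal consistency; i_value=0
    ignores the previous frame entirely.
    """
    w = i_value / (i_value + style_weight)
    return [w * p + (1 - w) * c for c, p in zip(current, previous)]

prev_output = [100.0, 100.0]
new_frame = [0.0, 200.0]
# i_value 1000 against style_weight 1000 gives a 50/50 blend
print(blend_previous(new_frame, prev_output, 1000))  # -> [50.0, 150.0]
```

Raising the i-value smooths jank at the cost of "ghosting" toward the previous frame, which mirrors the trade-off the comment describes.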
I don't understand the nitty gritty of it, but even if that is what WarpFusion is doing already, it's really impressive that you came to the same method on your own!
You don't retrain the model at inference. After the model is finetuned on the style dataset, it doesn't see the dataset anymore; the new style info is embedded into the weights of the network, which don't change when you use the AI. Your scheme would only work if you were willing to finetune the model on each frame, which would take hours to compute.
I don't understand the technical lingo, but I'm glad people are throwing their ideas for improvements out there; this is how progress is made! It's much like the creative everyday people who mod video games such as Skyrim: get enough people together, driven by passion for their craft unlike soulless companies, and you get innovation! I think people will see this new method of AI-generated content and create better, more improved tools that anyone can use. This is a step up from image-to-video, and the next improvement is capitalizing on this method and creating a specially tailored tool/program people can use. I love the creativity and innovation we all bring to the table!
One thing I love about Anime Rock Paper Scissors is that it has two stories to tell. One about two twin princes fighting against each other. And one story about new tech, and how it can be both scary AND great at the same time. It just depends on how it's used.
Holy shit, David Maxim Micic! Unexpected crossover. I've been listening to his music for many years, he has some insanely good stuff.
Overall it seems like eyes and mouths are still the biggest issue (apart from costume consistency). May be worth redrawing those by hand, just to actually get the gaze direction and expressions you want. Hiring an artist is a great move, although I'd recommend getting someone who studied the anime style for longer than a couple weeks.
Another thing I noticed is that you have a very western approach to this, seemingly working in the classic "animating on twos" style. Anime doesn't do that: they time each frame to support the motion as well as possible with as few images as possible, sometimes going as low as 3 frames per second. This would also help tremendously with warping, since you'd drastically reduce the frame count, allowing for much more controlled direction.
They definitely need a clean-up artist
I think lots of the issues with eyes and mouths are that the filmed video doesn't have the eye-lines correct to start with, and they are not nailing the correct mouth shapes for the lines the characters are saying in the video either. So I think a big improvement could be gained from tightening up the video part.
That said, AI still has issues with getting eyes correctly. I remember from the behind the scenes of the first Anime Rock Paper Scissors for this reason they had "lazy eye" listed in the field for what the AI should *avoid* drawing. So it's very possible that there could still be big tech improvements that are still in the future.
The biggest issue is the art theft
@@jojomarshall6465 They literally hired their own artist for this job
If you think that's still theft, then you might need to rethink about downloading or using art in general
@@gabrielbuenodossantos5203 I believe the confusion here comes from not knowing if Stable Diffusion is utilizing only Josh Newland's art, or if it's utilizing its pre-existing database of stolen art.
> Just a couple of buddies working together
> You will see stories from small creators like us
Yeah, you're a team of full-time VFX artists working in a giant LA studio with a perfect green screen and cameras and rigs that probably cost more than fifty thousand dollars. Very indie! Small creator, huh? Two channels with 6 million and almost 10 million subs, plus a combined 3 billion views. Yeah, you're a very small creator.
Yep, a team of people doing this all themselves, and where they aren't, they're paying the people with the skills they need. And to top it off, all the information and techniques they used with the software are open to the public. Oh, and they have made a behind-the-scenes for both episodes, just to make sure you get some ideas.
Yeah I'm not seeing a problem here...
All this just sounds like: get a green screen, a couple of cameras, someone who knows how to draw, someone who knows how to write, learn to act, and learn how to use the software. So with that being said, technically I could do something similar to this with my friend group right now. But that's just me personally.
@@Tech_D3mon You could! But you're still here typing in a comment section instead of doing that. I guess you couldn't.
@@nathanhollow0 Using that "logic", Corridor has paid and promoted a bunch of artists and creators for this project and you haven't
@@Tech_D3mon one of my friends does animation and the things I've seen her do using just a MacBook and an iPad are absolutely fkn impressive. Animation was never expensive if you really wanted to do it. You don't even need a computer to do animation, paper flip books are still a thing. Their reasoning is absolutely fkn bullshit.
@@nomickike2165 not really. Cencoroll and Voices of a Distant Star (anime movies) were both one-man projects, or at least very close to it. All you need is a somewhat working computer, a drawing tablet, and the dedication to make something. Their reasoning that making anime is out of the hands of the general public is fkn bullshit. They're just a lazy, profit-driven, soulless corporation (their love for NFTs should've given it away to everyone).
I still don't see how slapping a really advanced filter on live-action footage is changing animation. This isn't animation; it just aims to look somewhat like it but isn't animation.
I'm not going to lie, I think that I solidly enjoyed the flickering version of the anime too. There is just some type of charm around the constantly switching lines.
That flickering effect would have been harder to do if it was hand drawn
Like Squigglevision from Dr. Katz or Home Videos but different.
It's a nice bonus when you have to stop watching and do something, and then you notice that each frame is great
Yeah, focusing that on the right places could be a real style. Ensure the faces read, and the rest can get pretty wild and be fine.
Yeah, right? A bit like how old footage got that grain and those imperfections
As a 2D artist I'll always have some kind of issue with AI, so I'm really glad you guys are using an actual artist to provide the work for you to animate.
People should start understanding AI as a tool to improve the final product instead of a replacement for artists.
Exactly. The more we get this in the hands of everyday people instead of big heartless companies, the better. Corridor are a team I trust to do the right thing, you only need to look at where they came from. Massive props to them for involving other artists traditionally from the space. As Niko said, this should be a tool that allows creatives to make more, not one that does all the work for them.
@@XavierXonora It's already in the hands of everyday people. Anyone can use Stable diffusion (there are community sourced solutions for people who don't have a strong enough computer to run it by themselves).
If anything, I would argue that the biggest obstacle to more artists adopting this technology and figuring out ways in which it could help them is the enormous reactionary backlash against it and how the discussion gets flattened out to "AI Bad" in so many places.
After watching the second chapter of the "Rock Paper Scissors" anime, I remembered a game called "Pistolero". You say "pistolero" out loud and then you can do one of three things: load your gun (as many times as you want), shoot (if you have your pistol loaded), or protect yourself. Would be amazing if you put it on the show.
“Small creators like us”
Proceeds to show a shot with a fully stacked RED camera.
I can appreciate this way of animation being its own thing, like other/new creators making works in this method while traditional studios still exist. In an ideal world.
We don't live in an ideal world, sadly, though there is always room for old technology to be used. Most artists love the process more than the final product, so they don't always take the path of least resistance. That's the one thing that gives me a bit of hope for art in the future.
Yeah, but we live in this reality, and if there's a way for people to steal other people's work, it's gonna happen and it's gonna be rampant
If there's a noticeable quality difference and people like the classic way, traditional studios will exist.
If customers prefer the new way, or can't tell the difference, all studios will have to adapt, or cease to exist. This is the way of technological progress- it's always been this way, though normally the transitions aren't as publicly visible and widely discussed as they are now
@@karenreddy Yeah, because companies give customers exactly what they want, and don't cut corners or give you an inferior product because it'll be more expensive to do it properly
"While traditional studios still existing"
You mean the studios that have greedy execs that will try to replace the artists with AI to do the work for them?
Doesn't sound Ideal to me.
Seeing the little audience they assembled for their viewing party at the end made me so happy. As successful as they are, the joy of sharing their art with others is so relatable, and I can see it on their faces.
Coming soon…
Corridor Animations Studio
I still worry that this is only accelerating a path for Hollywood executives to cut out all artistry and humanity, at a terrible cost to the lives of many. But I can at least give the Corridor team credit for approaching this as ethically as they can. I think it would've been great to find an artist for the background style too, whether pre-existing or hired solely for this project, and compensate them, but overall this is a far more reassuring approach.
Corridor are using and promoting open source tools. If anything, it is the opposite of what execs want. They don't want this tech to reach masses.
I think the next step for this would be re-evaluating the movement of actors.
If you notice, in animation the characters aren't always in motion every time they talk or do an action.
Sometimes the hand moves with the eyes slightly but everything else is still the same frozen frame.
I think you can implement a similar approach to episode 3, and this could help prevent even more glitches and flickering.
They don't move much in animation because it's a lot of work to do for the creators...
Now that the job is much easier they should definitely move more! Future iterations will reduce flickering.
It may not look like "original" anime, but they should move more to show off the capability of this tech
@@notalanjoseph I don't know. Moving more than the reference material makes it look a bit like a parody to me.
I enjoy this so much. The first episode was so good; the second one is less "impressive" for me, as I wasn't surprised by how the technology has advanced (fool me once). BUT in both of these I kinda feel like the mouth movement looks a bit too much like a parody. Anime artists have sheets for each vowel showing how the mouth should look; it's never random mouth movement, at least in Japanese. I would feel better if it was treated a little less like a parody and more like a serious anime. BECAUSE it is freaking insane and it is so good and I want to watch hours of it.
And 12 fps. This episode was too smooth for the anime style. It was Disney-smooth.
It feels like the AI isn't capable of interpreting small lip movements and instead renders them as not speaking. The same goes for small facial movements. In regular video we have the resolution to see the movement and in animation they would need to draw that.
I'd say they might want to completely reinvent the pipeline for most of the shots that don't include some sort of action. Maybe AI-generate only a single frame and then puppet it in Blender or something.
I know it's a controversial topic but you improved so much and having your own artist this time is a huge difference!
There are still inconsistencies with things morphing around in the hair and face. I imagine they left it there intentionally and they'll continue to do a series of these tech demos
Why did you have to do this during the strike though...
I loved that instead of just using the free software, they hired several artists much more specialised in using the software. I believe that's one of the directions the industry should take going forward
The industry is gonna do whatever's cheapest my guy.
@@TheFezIsAwesome unfortunately you are right. That's why, unions!
I think this is a viable future for AI. It's not replacing artists, but it's a tool that makes them exponentially more productive. I hope to see more of this, and less single-prompt generations crowding out professionals.
Like, yeah, that's what it should be. People have been improving their tools for ages now; it's like switching your shovel for a new tractor. I don't get why people get mad at AI when they should instead get mad at people who use others' art with that AI. I see a huge benefit of AI in animation if people figure out all the problems with shadows and lighting. If, for example, Pixar or Disney invested in AI animation technology, they could make the process of creating cartoons way easier and faster.
@@SimplCup yup, your opinion is irrelevant if you're pro big companies replacing artists with machines
Ye, like, it would still be a large team of people needed for this to be an actual viable option. It's not like a few people could come along and pump out cartoons using AI, they could, but it would take months and months for a single episode. They would still need actors, camera crew, editors, people creating environments, sound designers, artists to train the AI and add effects to the converted scenes, writers, composers. Like, it's just a tool, that if used correctly and used well, can produce really cool products. In 5 years, I'd imagine this technology would advance enough to take out all of the jank. And then it's just up to the teams to execute their visions with it.
@@SimplCup Disney could actually go back to making 2D animation. We could actually get a good Disney movie?! And not just live action rehashes?!
@@aprophetofrng9821 Yes, exactly. Even now we have that impressive technology that was shown in the video, WarpFusion already can turn video into cartoon without flickering faces and objects. I see AI as a new era of animation.
my favourite bit of this video is at 17:13 when Niko basically did the team montage from every heist movie ever made
23:28 Niko's so proud 🥺. He deserves it. Great job!
Re: Title, There can't be an 'again' since you didn't do it the first time
They did, they're at the forefront of this technology. It's gonna be super exciting to see what fully professional studios can do with this tech.
I don’t think it’s changing animation as much as a type of animation, specifically rotoscoping. It’s definitely impressive, but it’s less a better kind of animation as opposed to a new one. It’s a good thing the techniques here are being used by actual creatives because I can see someone very easily looking at this tech and thinking “Oh, well I don’t need animators anymore”
By definition it's not even animation. Animation isn't a style, it's a process, and this isn't it
David Maxim making the Intro song is just the cherry on top
hey!! cool content but like please don't be doing shit like this in the middle of a strike!! this is just speedrunning the destruction of people's livelihoods!!
this has nothing to do with the strike lol
I'm very grateful for the format of this video, such a perspective, making me excited for AI rather than worried for the future. Thanks to Niko and the rest of the Corridor crew for channeling their inner directors.
I'm so mad. Corridor Crew literally explained this new tool needs professional artists and needs weeks to create a decent animation, that it improves the process in some areas, but THERE ARE STILL PEOPLE WHO IGNORE THE ENTIRE EXPLANATION AND CLAIM AI WILL REPLACE ARTISTS EASILY.
Exactly, like, idk, 3D simulation software: instead of blowing up a building, you do it on a PC and take care to make it look legit
The people trying to ban it on that basis are even more annoying. And a little scary with how far they have gone.
As a very new artist (like, less than 50 hours of drawing), the first video discouraged me greatly, but this helped: hey, what I'm learning can still be applied and used, and won't become obsolete
Props to corridor crew showing that side of things
I'm glad you not only shouted out all the extra artists who helped on this project but also _paid_ them. Can tell y'all are doing your best to do this the most ethical way!
I wonder if integrating a face landmark detection AI into the training process would be useful for keeping it more consistent.
There is, it's called "After Detailer", simply an extension. Although I'm guessing this video was recorded before then
@@tenacious6052 after detailer relies on landmark detection models to add details, it currently doesn't help with temporal issues.
Watching comments like these is probably my favorite part
To actually hire an artist to create a style, and to have a discussion about the ethical usage of this as an ongoing conversation, is a good way to go with this: showing it evolving with the discussion as a more positive and fair use instead of stealing, and telling every side of what's goin' on
Wow, as a classical animator I could just clean up the wobbliness by hand and it would look so fucking good. Please Corridor, teach me your AI ways, I've loved you since I was a kid
Watch at 22:55, they say it there
It's fascinating seeing the inside of a production. Especially with all the cool brand new vfx techniques!
I really liked the gritty feel of the first Anime Rock Paper Scissors. The second one felt a little more plastic in design. This is just my personal opinion, and I'm no expert in CGI. I also know there's so much more that was in the second, and it's a feat that I say good work for.
I know exactly what you mean. It may be the character design change. I like it, but using Vampire Hunter D as a reference style on the first one was spot on. At least they won't catch any legal heat from now on.
The minor characters are less detailed yep. They addressed it in their podcast. I think it's a good direction overall to at least diverge from the initial Vampire Hunter D inspo in ep1. Hopefully we see more later on.
100% agree. Comparing the 2nd one to the 1st, the 1st just feels so much... fuller? It has character and personality to it and it works with their facial expressions much more, but alas, it was morally and legally a better decision to source their own art in the public's eye.
@@lolziz Exactly. I suppose the issue with the 2nd one was that they had to deviate just enough to not be identified with VHD but still got constrained with the 1st ep's overall feel for cohesion. Maybe in the future they'd do a another series with an entirely different style from the get go?
@@RaiOkami Honestly, this may be their next move, as it would pose a fun new "challenge" of "What styles can and can't this AI model do?" The expensive part of this, whether money- or time-wise, would be getting the references for these new styles, which would theoretically be self-sourced. Doing this would also let them flex a little: "Our AI method can replicate/works with all of these different types of styles." Thinking about it, it opens up an opportunity to do a series that's 'Love Death + Robots'-esque, where a relatively short story idea is done in a specific style and every episode is in a different style. As nice an idea as that is, it would be a lot of work.
Y'all really put this out in the middle of the strikes, huh?
You can't just upload this video to YouTube, think about how Hollywood would feel 🤓
Bottom line, this isn't animation. That's honestly what irks me the most about this
Let people have fun, it's not like they're telling people to animate with AI
@@DamnRoadkill Calling this animation is incredibly offensive to people who dedicated their lives to the craft. It's not animation and cannot be animation if it wasn't animated
@MelloCello7 What I meant to say is to let people have fun with AI, not animate with it. You're making it seem like I want AI to replace artists, which I don't.
@@DamnRoadkill It's okay to have fun with AI, but let's not kid ourselves. It absolutely will replace artists, and lots of them. It will replace a lot of work as a whole
@@MelloCello7 True
The fact that you guys were able to reach out to the community and your peers for assistance, and that those people willingly and lovingly contributed, truly means a lot as a fan and audience member. You guys have become a true force of nature in the industry and genuinely deserve a seat amongst the few who have created renowned works of cinematographic art.
I'm glad you were able to hire an artist to take away the issue of AI using art without permission. good work!
It's not using other artists' work, it learns from it, similarly to what humans do. The whole ethical debate about AI art is unsubstantiated and silly.
@@Gavri1945 Humans do not algorithmically copy every single bit of someone's art style. Humans have inherent creativity and differences; when a human tries to recreate an artist's style, there will be differences. Humans are also able to fully credit an artist and choose not to profit off another person's style. If a human artist was fully copying a currently working artist's style and selling the results for large amounts of money, people WOULD be upset. The only reason you don't see a difference is because you've never done an ounce of creative work in your entire life.
I mean, I don’t think it satisfies it entirely… the AI was still trained to create based on other work without permission. It’s definitely better and more ethical though.
@@olafforkbeard4782 every artist trained on other work without permission.
@@Shadowgaming105 "Humans have inherent creativity and differences" — AIs do too
I feel like this will be "now I have become death" moment in the future
This is nothing like that, artists have a crazy ass god complex if they think the world revolves around them.
Upvote this man to the kingdom of top comment
Not sure if you guys have considered this but why not reduce the number of frames? You shoot your videos at somewhere around 60 fps, while traditional anime is drawn at less than half of that. Try cutting down the # of frames, while using all the new tech & accounting for the interpolation between each frame. See if that helps improve the quality. The interpolation aspect might require more hand animation/editing or some coding if you have the software skills.
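The "cut down the # of frames" suggestion amounts to animating on twos or threes: hold each drawing for several frames so far fewer frames need AI processing or cleanup. A toy sketch of that decimation, with frames as plain labels rather than images (a real pipeline would drop or hold actual image files, e.g. with ffmpeg):

```python
# Sketch of "animating on Ns": hold every n-th frame, keeping the total
# frame count (and therefore the runtime) unchanged. Only the held frames
# would actually need to be generated or cleaned up.

def animate_on_ns(frames, n):
    """Replace each frame with the most recent frame on the n-grid."""
    return [frames[(i // n) * n] for i in range(len(frames))]

frames = ["f0", "f1", "f2", "f3", "f4", "f5"]
print(animate_on_ns(frames, 2))  # -> ['f0', 'f0', 'f2', 'f2', 'f4', 'f4']
```

On twos, only half the frames go through the stylization step, which is exactly the warping-workload reduction the comment is after; the trade-off is choppier motion unless the held frames are timed deliberately, anime-style.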
You know at this point its not even AI that is doing the work anymore. So many artists are coming together to make this, from the art style, shots, compositions, the technology, the story and voice acting, and even the sounds and music compositions. It is a work of art, not a work from AI. Hats off to the Corridor team and all the artists that worked on it. Bravo 👏
I agree. If anything, AI could be seen as the employee who did the least amount of specialized work, lol. Just goes to show how uniquely talented every human at Corridor really is.
"Well, The Only Thing That Stops Changing, Is Us, When We Decide To Stand Still"
What a quote! 15:21
The lip sync is good enough. Fantastic work guys I think you've cracked it.
Seeing David writing the theme was the most random thing I've seen on this channel.
I've followed the guy for years and suddenly he appears here.
I really wanted the opening theme to be subtitled in Japanese and English. Could you release a video of just the opening with those added?
Corridor setting the blueprint for how AI-assisted creations should be done. Well done, guys!
I need a version of this where all the BTS footage is in the anime style but everything on their screens is the raw footage (the anime episode without the processing). So the story is flipped: anime characters making a photorealistic YouTube video.
That'd actually be a good idea. So that real artists can actually prove once and for all that AI isn't the future.
tbh all i saw when watching rps 2 was Joel Haver's style on the characters, but the backgrounds do have a recognizable style
13:58 As a programmer, I like when the crew shows their programmer side
Yeah it would be neat if they somehow incorporated coding into their react series. Maybe highlight some of the programmers behind tech in the industry.
Probably not enough interest from the public, but if anyone can make it interesting it would be Corridor.
@@someguy8443 yeah totally! programmers rise up!
:skull: using premade generative ai workflows is considered programming now
Never thought I'd see David Micic in a Corridor Crew video. Huge fan of his music and albums. Awesome selection for your soundtrack.
They really thought putting a filter over footage is animation lmao
The latent space isn't just a filter, the possibilities are huge.
This isn't really animation, and it's not even rotoscoping. It's something else... more of a performance-capture technology from a filmed/photographed source that allows for a stylized 2D result instead of the usual 3D realism like Avatar.
It's got an "animated" style, but it's not really animation.
Agreed, I honestly have no problem with this video, but calling it animation is a little disingenuous
So so proud of David Maxim for the theme track and of course for the Corridor guys for pushing the boundaries!!
One thing to say is that a lot of the time anime can have vastly different styles instead of being consistent throughout. A really good example of this is One Punch Man and how Saitama is drawn. Sometimes it's super simplistic, other times it's extremely detailed, and most of the time he's drawn in the same style as every other character.
Yeah. Also for huge battle sequences or comedic effect; for big fights they generally get veterans on the project. During the Tournament of Power, the artwork slowly morphed into DBZ-style art because they got the key frame animators and the DBZ director back on board, I think.
But that is not the problem; the problem is the style changing on every frame rather than between scenes. One Punch Man is also a comedic parody action show that requires that change of style to sell the comedic effect. But RPS is a much more serious and grounded animation that relies on a specific tone, environment, and style to sell the serious plot, except for the part with Sam, but that is still a serious scene with a comedic and sarcastic concept
Okay guys, I'm not sure if the Corridor crew is even going to see this, but I was watching another vid earlier and an ad popped up for these military-grade lights; they used at least one clip from Wren's lightsaber video. I'm not sure if they are aware of this. If the guys are being exploited again, we need to let them know. Plz help!
Unless you made Into the Spider-Verse, you never changed anything in the first place
??????????????
But they did tho, they added extra effects to rotoscoping 😂😂
I think this is really how to do things right with animation - you still need real artists but they're not handling _every single frame._ Makes things much more manageable.
Looks much better that way, everything is intentional. plus, doing in-betweens is important for young animators to gain experience
I'm really wondering why you didn't go with a stylistic change using a "pencil mileage" route that animators use. Such as animating on 2's, or in-between 2 frames for mouth movement, or even still frames. Even with the new tech, I'm curious if you would take it a step further into becoming a new art form while keeping certain aspects of what makes japanese animation iconic. Love this new software and the work you put into every frame!
Because they don't actually care about animation, if they did, they'd have hired animators to work on this.
@@theconqueringenigma you say that like it's the easiest, cheapest thing on the planet to do.
@@theconqueringenigma Explain?
@theconqueringenigma7733 don't encourage it. I would just humbly have to disagree with the idea that they don't care
Considering they paid other people to aid in their process and they are creating something new with their original drawings, it's pretty elitist to say this isn't art. Even if you don't like how it's done
@@GinjaNinja1985 Well, they seem to be mostly focused on the look of the whole thing and not on the actual animation itself; otherwise they wouldn't have used their own reference footage to begin with. They are more concerned with fixing the jitter/garble look that most of the "free" tools/filters have. Just like with their Son of a Dungeon series, for them it's a relatively easy way to generate content. It has more to do with compositing work than art, imo. Which makes sense, as that is what they do. The one who did the actual art was the character designer they hired.
6:30 - Close! That’s his model sheet from “Justice League” (2001)! Still love seeing the DCAU pop up here from time to time.
I was hoping you guys would explain how you did that giant undead scene, it totally blew my mind!!!
And the MUSIC, OMG! I'm a metalhead and it's the best anime intro I've seen ever! It has the right shredding, melody variance and cheesy girl voice in it, it's awesome!
How can people gatekeep this? I believe you guys are paving the way towards more accessible artistic freedom for everyone who may want to tell a story but may not necessarily have the professional skill sets
So surprised to see "enigmatic e" channel mentioned here. Always enjoy his channel especially for AI video tricks. This new version of Rock paper scissors looks 🔥
I’d rather call this putting a filter over live action. Cuz that is technically what it is.
Custom cosplayers are going to have a field day with this one
The issue people had with using AI goes out the window when you feed the AI images that you hired others to make. So this is actually pretty cool.
sorry guys, no you didn't.