Did We Just Change Animation Forever... Again?

  • Published: 27 May 2024
  • Our Exclusive Tutorials will teach you how to BUILD YOUR OWN ANIME! Join CorridorDigital with a 14-Day Free Trial ► corridordigital.com/
    Discover how our small team was able to take cutting-edge tools and apply them in just a few months, making a creative leap in our Anime Rock Paper Scissors series!
    Limited Edition Anime Rock Paper Scissors 2 Merch ► Available only until August 20th; get yours today as either a t-shirt or a longsleeve and celebrate this release. corridordigital.store/
    The Anime Rock Paper Scissors show is entirely made possible by Members of CorridorDigital, our INDEPENDENT STREAMING PLATFORM. Try a 14-Day Free Trial, and bring Episode 3 to life! ► corridordigital.com/
    Written & Directed by Niko Pueringer and Sam Gorski
    Artists ►
    Dean Hughes: Animator, Lead Warp Artist, Neuromancer, Prince Jules - / sdeanhughes
    Josh Newland: Character & Style Design, Lead Artist - / lv1_artmage
    Kenson Lee: Animator Extraordinaire - / rikognition
    Mattias Alegro Marasigan: Post-Production Editor, Compositor & Keeper of Timelines - www.mattiasalegro.com/
    Kytra Selca: Warp Artist - / @maketherobotdoit
    Eric Solorio: Warp Artist - / @enigmatic_e
    Jan Losada: Warp Artist, Neuromancer - instagram.com/artificial_inte...
    Sound & Music ►
    Sound Design by Kevin Senzaki - senzaki.com/
    Theme Song by David Maxim Micic - open.spotify.com/artist/0wQa1...
    Theme Song Vocals & Translation by Shihori - open.spotify.com/artist/07vlE...
    Music by Sam Gorski - open.spotify.com/artist/7sWkn...
    Production & Additional Talent ►
    Christian Fergerstrom: Producer, AD, Script Supervisor - / c_fergerstrom
    Jordan Coleman: Costume Designer, Associate Producer - / jordan_coleman
    Jordan Allen: Soldiers & Peasants - / vfxwithjordan
    Merch Design by the incredible Kendrra Thoms kendrrathoms.com/
    Creative Tools ►
    Created on Puget Systems Computers - bit.ly/Puget_Systems
    Warp Fusion created by Sxela - / sxela
    Composited in DaVinci Resolve and After Effects
    Chapters ►
    00:00 A New Way of Animation?
    01:55 Room for Improvement
    03:40 The Beard Issue
    05:29 Improving the Style of Episode 2
    09:36 Close, but no Warp Fusion
    14:10 Warp Fusion Results blow Sam's Mind
    16:12 A Long Way from Home
    19:48 Incredible Stories need Incredible Music
    21:42 Finishing the Project
  • Entertainment

Comments • 3.2K

  • @5MadMovieMakers
    @5MadMovieMakers 9 месяцев назад +874

    The behind the scenes of this series continues to have dramatic arcs of its own

    • @kamilnurkowski
      @kamilnurkowski 9 месяцев назад +11

      I like how they make it look a little like a parody episode of a TV drama; it makes it more fun to watch.

    • @hybridvenom9
      @hybridvenom9 5 месяцев назад

      It would look fun on like a tv show but it would get boring real quick

  • @CYGNIUS
    @CYGNIUS 9 месяцев назад +2475

    Hiring artists for reference material for the AI was the way to go. It looks way more solid now, as well as being more ethically done.

    • @Danuxsy
      @Danuxsy 9 месяцев назад +42

      Having to rely on other people is a bad thing, generally speaking; you don't want a person to have a capability nobody else has. This is bad! That's why replacing man with machine (AI) is the greatest undertaking in human history.

    • @underattack14
      @underattack14 9 месяцев назад +51

      Nah wild west. Artists getting dong slapped by the inevitability

    • @WwZa7
      @WwZa7 9 месяцев назад +231

      @@Danuxsy But these image generators are fundamentally working only thanks to all the people that made art for them to work, and it's still impossible for these AIs to make something actually new. No one got "replaced" when it comes to creativity, just used, blendered and regurgitated the result.

    • @zxbc1
      @zxbc1 9 месяцев назад +70

      @@WwZa7 What's human creativity if not just previous creations "used, blendered and regurgitated"? Synthesizing what comes before is the essence of any creativity. Plenty of AI images are novel, in the sense that there has never been one created like it before. How similar or different it is from previous examples it learned from is a matter of opinion, not fact.

    • @ThePizza28
      @ThePizza28 9 месяцев назад +122

      @@Danuxsy Everyone relies on everyone; how do you think movies are made? Are producers also artists, directors and foley artists now? When someone has a capability you need but don't have, you hire them. Welcome to life. This is one of the dumbest things I've heard this year.

  • @smallsam52
    @smallsam52 9 месяцев назад +152

    As cool and groundbreaking as this project is, it really calls for another name rather than animation. Motion capture films don't qualify for animation Oscars, and rotoscoping isn't considered animation, so a distinction needs to be made; it would clear up a lot of the frustration people feel about this.

    • @thedarkangel613
      @thedarkangel613 5 месяцев назад +14

      Rotoscoping is actually considered animation in some circles, but I do see how this can be seen as not animation. The thing is, animation is so broad; it never meant only drawing frame by frame. That is just one type.

    • @Shin_Kouhai
      @Shin_Kouhai 5 месяцев назад

      well said

    • @Hapasan808
      @Hapasan808 4 месяца назад +6

      I think it's considered animation. If The Adventures of Tintin can win many "Best Animated Picture" awards (not Oscars however) and is heavily motion captured, then I would consider this animation too.
      That being said, I still prefer hand-drawn animation by a longshot, and I wouldn't like to see this replace hand-drawn animation.
      As a compromise, we can call it "Animation-esque."
      Like calling something made in an anime style "Anime-esque."

    • @TheLumberjack1987
      @TheLumberjack1987 4 месяца назад +2

      It's more like a "style filter" genre, definitely not animation.

    • @PANDORAZTOYBOKZ
      @PANDORAZTOYBOKZ 3 месяца назад +1

      @@Hapasan808 The difference is that the characters of Tintin still required heavy manual animation. The mocap simply provides a more human frame to work with. The animators still need to add much of the expression themselves and adapt the human movement. If you didn't need animation on top of mocap, there wouldn't be mocap animators.

  • @learningtodrawstudios4773
    @learningtodrawstudios4773 9 месяцев назад +329

    The thing I'm terrified of is companies hiring artists only to get training data, firing them, and then just using the data they got. No need to worry about a union or paying a fair wage when you can cheaply produce it with a machine.

    • @RusSEAL
      @RusSEAL 9 месяцев назад +44

      Worse yet…
      Bring in raw new kids offering their own "company theme" or style, have them pay for the "schooling", and end up with all the raw data, paid for by the most gullible.

    • @WarpSonic
      @WarpSonic 9 месяцев назад +6

      But it would take a lot less time for the artist if you are just getting a few drawings for the model to learn from, meaning they could do other stuff. The lower earnings would likely match the reduced time sink.

    • @gamingking1
      @gamingking1 9 месяцев назад +30

      The industry MUST adapt. Because this is coming whether we like it or not. The easiest and cheapest methods will always be sought out. It doesn't matter if we think it unfair.

    • @NFIVE30
      @NFIVE30 9 месяцев назад +3

      If you look at it, artists are less present in the process, but a lot of other jobs are involved that weren't much before.

    • @jockeyawesomekid8906
      @jockeyawesomekid8906 9 месяцев назад +5

      While this is concerning, when it comes to entertainment the desire for new and exciting things will never go away. An animation or production company is not going to be able to just get a commission off of an artist and use that data for all of their media thereafter. It will likely create a new dynamic or approach to creating these styles to fuel AI assisted processes. People and artists will still be needed at every step of the way to generate new styles/themes, and properly implement those themes into a cohesive work. Yeah a lot of menial art in corporations will likely be cut out and replaced by mass produced AI work, but it's not like there's a downward slippery slope from the current starting point of corporate clip art.

  • @ForlornCreature
    @ForlornCreature 9 месяцев назад +3347

    Twitter is going to hate this so much

    • @jamessderby
      @jamessderby 9 месяцев назад +303

      I love that for them

    • @Desasteroid
      @Desasteroid 9 месяцев назад +428

      What's Twitter?

    • @Walker07620
      @Walker07620 9 месяцев назад +24

      They do now

    • @dodogamusghaul
      @dodogamusghaul 9 месяцев назад +380

      For good reason yeah.

    • @grucho101
      @grucho101 9 месяцев назад +268

      for good reason

  • @aSinisterKiid
    @aSinisterKiid 9 месяцев назад +1223

    I love Sam's barbarian and his "don't look at the circle" fighting style. It was sooo funny every time he "Got em"

    • @The-Middleman
      @The-Middleman 9 месяцев назад +14

      same. that brought back classic memories from my high school days. 🥹

    • @RaiOkami
      @RaiOkami 9 месяцев назад +16

      Sam's barbarian wizard bit was my favorite bit on ep2! Those got'em moments were hilarious yet the hits sounded great.

    • @adriandominguez6379
      @adriandominguez6379 9 месяцев назад +2

      spinoff

    • @yori_sounai
      @yori_sounai 9 месяцев назад +3

      Outfit was cold too

  • @KillerTacos54
    @KillerTacos54 9 месяцев назад +707

    I cannot express how much I love the fact that you guys hired real artists and emphasised the importance of that fact

    • @blackwillow7314
      @blackwillow7314 9 месяцев назад +87

      Corridor assured they hired their own artist to train the AI, but remember that industry discourse like this is interconnected. They may not be stealing art, but any studio that sees this and goes 'wow, it's that easy' will. Corridor's also boasting about AI democratizing animation-making. Now anyone can make animation in their bedroom with nothing but a camera and free software! Except anyone could already make animation in their bedroom with nothing but a camera and free software. I made animation in my bedroom with nothing but a camera and free stop-motion software when I was 10 years old.

    • @seechwing2058
      @seechwing2058 9 месяцев назад +64

      @@blackwillow7314 it’s like making a tutorial on making bombs and then saying “but don’t worry guys, we’re good people and since we’re the ones making the bombs, it’s ok because we’re not going to blow anything up.” their intentions don’t matter when the end result is still, y’know, a BOMB. they’re literally handing studio execs a way to get rid of almost every one one of their employees but it’s okay because “we’re just showing the capabilities of this new technology!!!1!!”

    • @rensaudade
      @rensaudade 9 месяцев назад +42

      Well yeah. Things evolve. Lots of film developers lost jobs when digital movie cameras were a thing. Most film rolls became obsolete. Camera men and directors were still a thing, weren't replaced. They just got new equipment. Same thing.

    • @TrueAfricanHero
      @TrueAfricanHero 9 месяцев назад +17

      Artists apparently lack self-esteem and need validation from others

    • @SirusStarTV
      @SirusStarTV 9 месяцев назад +31

      @@TrueAfricanHero yeah, artists need lots of validation and praise for the thing they dedicated many years, decades of hard practice. Drawing for ourselves is satisfying on its own but it gets boring really quick. We are our own worst critics so hearing someone praise can lower our expectations of how good enough our art should be.

  • @friendofphi
    @friendofphi 9 месяцев назад +17

    There are a few techniques I wish you guys had explored: EbSynth to interpolate frames while rendering only keyframes, using ControlNet OpenPose, and making contact sheets with multiple frames on a single image (if you have the VRAM, this really helps consistency).
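
    A minimal sketch of the contact-sheet idea, assuming PIL and equally sized frames; the frame paths and the stylize() step are placeholders for whatever img2img tool is used, not Corridor's actual pipeline:

```python
# Tile several video frames into one grid so a single img2img pass sees them
# together (which tends to help cross-frame consistency), then cut the stylized
# grid back into individual frames.
from PIL import Image

def make_contact_sheet(frames, cols):
    """Tile equally sized PIL frames into a cols-wide grid."""
    w, h = frames[0].size
    rows = (len(frames) + cols - 1) // cols
    sheet = Image.new("RGB", (cols * w, rows * h))
    for i, frame in enumerate(frames):
        sheet.paste(frame, ((i % cols) * w, (i // cols) * h))
    return sheet

def split_contact_sheet(sheet, cols, rows, w, h):
    """Cut a (stylized) grid back into individual frames, row-major order."""
    return [sheet.crop((c * w, r * h, (c + 1) * w, (r + 1) * h))
            for r in range(rows) for c in range(cols)]

frames = [Image.open(f"frames/{i:04d}.png") for i in range(8)]  # hypothetical paths
sheet = make_contact_sheet(frames, cols=4)
# stylized = stylize(sheet)  # placeholder for the img2img pass of your choice
# out_frames = split_contact_sheet(stylized, cols=4, rows=2,
#                                  w=frames[0].width, h=frames[0].height)
```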

    • @feffy380
      @feffy380 9 месяцев назад +3

      WarpFusion does basically the same thing as EbSynth: warping a texture based on optical flow. OpenPose doesn't offer much benefit since they're already doing img2img with the desired poses. At this point, to reduce the jank you have to raise the resolution (it looks like their model is still only 768 px).
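
      A rough sketch of that flow-based warping idea, assuming OpenCV; this shows the general technique, not WarpFusion's or EbSynth's actual implementation, and the Farneback parameters are just common example values:

```python
# Estimate backward optical flow between two source frames, then warp the
# previous *stylized* frame along that flow so the next stylized frame can
# start from something temporally consistent.
import cv2
import numpy as np

def warp_prev_stylized(next_gray, prev_gray, prev_stylized):
    # Backward flow: for each pixel in the next frame, where it came from in prev.
    flow = cv2.calcOpticalFlowFarneback(next_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Sample the previous stylized frame at those source positions.
    return cv2.remap(prev_stylized, map_x, map_y, cv2.INTER_LINEAR)
```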

  • @Dartheomus
    @Dartheomus 9 месяцев назад +426

    At the start, you say, "Normally, you need a team of people and powerful computers to make an animation." I think the next line should have been, "and here's why!" You still have a pretty big team working on it along with a crate of 4090's! Pretty damned cool though!

    • @shadowproductions969
      @shadowproductions969 9 месяцев назад +64

      True, but still much smaller; probably less than 10% of the team size and render farm a big studio needs.

    • @ThatNerdGuy
      @ThatNerdGuy 9 месяцев назад +1

      Stolen comment?

    • @satratic127
      @satratic127 9 месяцев назад +15

      @@ThatNerdGuy Judging by the profile it seems legit; the other one you saw was prolly a bot.

    • @ThatNerdGuy
      @ThatNerdGuy 9 месяцев назад +2

      Maybe, if so sorry.

    • @Grooveworthy
      @Grooveworthy 9 месяцев назад +15

      Right, but also, they're creating these processes from scratch, it takes a lot of work to do that. Now they've got it more or less figured out, a lot of the grunt work is gonna be eliminated for anyone else in the future. Also as more people experiment and refine it, it will get even more accessible.

  • @realbadger
    @realbadger 9 месяцев назад +771

    When I saw the episode and saw the child version of Niko, I could still see Niko's face _in the child face,_ so I was impressed not only by the de-aging process, but obviously by the Defacial-Hair'ing too...

    • @BroadFieldGaming
      @BroadFieldGaming 9 месяцев назад +27

      Warner Bros WISH they had that tech a few years ago.

    • @cpt_nordbart
      @cpt_nordbart 9 месяцев назад +11

      @@BroadFieldGaming Works super, man.

    • @victorwidell9751
      @victorwidell9751 9 месяцев назад +2

      The sped up voices though. That’s so uncanny.

    • @YurlynPlays
      @YurlynPlays 9 месяцев назад +4

      @@victorwidell9751 Pitched the voices up, not sped them up. The pitch up happens during a speed up if you don't compensate for it but they can be done separately. For Sam's character they pitched his voice down to deepen it.
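
      A small sketch of that point (pitch and speed as independent operations), assuming librosa and soundfile; the file names and semitone amounts are illustrative, not what Corridor actually did:

```python
# pitch_shift changes pitch without changing duration; time_stretch changes
# speed without changing pitch. They can be applied independently.
import librosa
import soundfile as sf

y, sr = librosa.load("voice.wav", sr=None)  # hypothetical input file

child_voice = librosa.effects.pitch_shift(y, sr=sr, n_steps=+4)  # up 4 semitones
deep_voice  = librosa.effects.pitch_shift(y, sr=sr, n_steps=-3)  # down 3 semitones
faster      = librosa.effects.time_stretch(y, rate=1.25)         # 25% faster, same pitch

sf.write("child_voice.wav", child_voice, sr)
```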

  • @corporalsilver6981
    @corporalsilver6981 8 месяцев назад +102

    The technology behind this is cool and all, but there is a reason animators have yet to incorporate this into their workflow. Noodle did a great video on why.

    • @RusticRonnie
      @RusticRonnie 7 месяцев назад +5

      We do use it a little in the concept art phase.
      But it is mostly just to see which designs are worth having a designer expand on

    • @arnowisp6244
      @arnowisp6244 7 месяцев назад +4

      @@RusticRonnie As for me, I just use it to see what my character concept could look like, then maybe expand further.

    • @mycollegeshirt
      @mycollegeshirt 6 месяцев назад +2

      And professional anime artists jumped on it immediately. Those guys are overworked and paid 4 bucks per in-between frame that can take hours. I'm not sure why people are mad about that. Are they protecting anime artists from themselves?

  • @papabaddad
    @papabaddad 9 месяцев назад +6

    during a strike huh

  • @catkilled
    @catkilled 9 месяцев назад +900

    I always believe that AI should be a tool alongside the work of artists, NOT to replace them. I appreciate Corridor hiring an actual artist for this, that’s absolutely the way to go

    • @ThatFoxxoLeo
      @ThatFoxxoLeo 9 месяцев назад +70

      It doesn't really seem like the artist used it particularly as a tool for themselves, more like they drew reference images which got fed into the model, which then spliced it with the footage Corridor shot.
      I still think that this is a replacement of an artist.

    • @ayon...
      @ayon... 9 месяцев назад +2

      Couldn't agree more

    • @tr7zw
      @tr7zw 9 месяцев назад +48

      This. In the end, all this AI image stuff is just a tool like Photoshop. I think anyone who thinks this is "just a replacement for actual artists since you just type it in and generate it" should actually give it a shot. You still have to pre/post-process images for it to work at all, do lots of testing/prototyping, combine or remove stuff from the image, etc. There is no "type a magic prompt and replace a job" kind of thing going on. Same as how text AI will not replace developers or writers.

    • @StoopsyDaisy
      @StoopsyDaisy 9 месяцев назад +23

      ​@ThatFoxxoLeo Absolutely, it's the same as Hollywood studios scanning actors. The talent is no longer being used, but being abused

    • @deddrz2549
      @deddrz2549 9 месяцев назад +19

      @@ThatFoxxoLeo But the whole problem at the start was that the style was being stolen from an artist, not the rotoscoping job. Yes, this could have been done with a full animation team, but Corridor would not have the funds to create that in this situation. Most people's problems are with copyright and an author's decision to keep their style theirs, so them taking the style from someone with full knowledge of what they are doing is very good.

  • @insu_na
    @insu_na 9 месяцев назад +178

    oh wow, David Maxim Micic is an absolute gem. Can't believe y'all got to work together.

    • @SlackWi
      @SlackWi 9 месяцев назад +3

      I've been blasting his stuff all week, was not expecting him to pop up here!

    • @andyleemeacham
      @andyleemeacham 9 месяцев назад +1

      yooooo hell yeah so glad someone pointed this out, been a fan since Bilo II

    • @mrmoose6765
      @mrmoose6765 9 месяцев назад +2

      I've had his music on my spotify list for many months, there are way too many underappreciated/unknown artists.

    • @dudeskinus
      @dudeskinus 9 месяцев назад

      my jaw dropped!

    • @Turnaround_
      @Turnaround_ 9 месяцев назад +1

      Yeah LUN has always been a favourite album for the gym or driving, am so excited for this

  • @guynamedtoast
    @guynamedtoast 9 месяцев назад +72

    I love how attentive Corridor is to giving credit where credit is due, it shows that they are a community driven group with intent to grow as artists not individuals

    • @alecduvenage2001
      @alecduvenage2001 9 месяцев назад +2

      Exactly! One of my favourite things about this group!

  • @TURBONERD
    @TURBONERD 9 месяцев назад +10

    Why did you have to do this during the strike though...

  • @PlutoniumBoss
    @PlutoniumBoss 9 месяцев назад +12

    Mad respect to the crew for listening to the criticism, then stepping up and providing the world an example of what artistic, ethical, and responsible use of this technology looks like. Generative AI doesn't have to be an exploitative tool, it can be part of an amazing future for artistic expression.

    • @godofthecripples1237
      @godofthecripples1237 2 месяца назад +2

      They did none of that. Not once have they actually addressed the criticism levied at them and admitted they were wrong. If they had, they would have taken down the original video, or at least made a public statement acknowledging the ethical issues. But they never do that sort of thing. And I don't know about you, but a future where artists are hired just to provide Source material for an algorithm that doesn't involve them any further does not sound like a good future.
      AI can absolutely be a tool but this is not it.

  • @PeterVonDanczk
    @PeterVonDanczk 9 месяцев назад +16

    When watching this, I immediately thought of the animated full-length film "Loving Vincent" (2017, Poland/UK), inspired by the life and paintings of Vincent van Gogh. The production team hired over 120 classically trained oil painters who painted over the frames shot with live actors. It took 6 years to make.

    • @thedarkangel613
      @thedarkangel613 9 месяцев назад +5

      That's where my head goes too. Now imagine if that production had had this to help those artists. It would take so much less time, and they could use the style throughout the whole movie. (They actually changed styles during flashbacks to a less complex style.)

    • @BunkeMonkey
      @BunkeMonkey 9 месяцев назад +14

      @@thedarkangel613 But that would make it lose all its charm; the movie is so beautiful because literally every frame is a painting that you could hang up, and the fact that so many people spent so much time working on it is what gives it meaning.

    • @NFIVE30
      @NFIVE30 9 месяцев назад +5

      @@BunkeMonkey They will still have the option to do it, new technology is about giving people more possibilities, not forcing them to use them.

  • @jakezoom178
    @jakezoom178 9 месяцев назад +36

    I wonder about modelling these characters in 3D and just capturing motion tracking and facial tracking separately. That way, you would have more control over each frame and still maintain the anime style.

    • @b.a.m.4135
      @b.a.m.4135 9 месяцев назад +5

      That already exists and has existed for a long time. That's how most AAA video games are made.
      I mean, they could do it if they just wanted to make something, but then they wouldn't be "changing" anything.
      That would also fix the worst parts of the product. A 3D model couldn't warp features into totally separate things independently, and shaders are separate from models, so you wouldn't have the flickering effect.

    • @MasterMordekaiser
      @MasterMordekaiser 9 месяцев назад +2

      Honestly I think this is the better option compared to running it through the AI.
      I think the AI is a neat idea, but the ethics behind it is currently dubious.

    • @TheKrister2
      @TheKrister2 9 месяцев назад +7

      ​@@MasterMordekaiserThe technology itself is no more dubious than any other. How people use it can be morally questionable, however. The way media represents all disruptive technology is often also "morally questionable," insofar as something so subjective works, yet does that seem to stop them?

  • @TrickyZ33
    @TrickyZ33 9 месяцев назад +20

    I REALLY liked the look of the Warp Fusion output with the high frame rate!

  • @sleepingArtist
    @sleepingArtist 9 месяцев назад +956

    The line" The only thing that stops changing is us when, we decide to stand still" is such a cold line.

    • @Cellardoor_
      @Cellardoor_ 9 месяцев назад

      Yeah, except it's being done unethically. Just because something sounds inspirational due to your confirmation bias doesn't make it right. That's how Hitler got all his cheers during his white supremacy speeches and started a world war. And the problem with AI is the input. If they deleted the datasets and replaced them with royalty free images then it wouldn't be a problem to anyone. That's all.

    • @WhartWhart
      @WhartWhart 9 месяцев назад +34

      I assumed that was a line from the anime or something.. Niko spitting fire lines like he IS the anime.

    • @mr.aNdErsOn88
      @mr.aNdErsOn88 9 месяцев назад +1

      Cold line for sure.. Line may not be true but still cold nonetheless

    • @ScorpyX
      @ScorpyX 9 месяцев назад +6

      The idea comes from philosophy, but I love the "Ghost in the Shell" (1995) version:
      "All things change in a dynamic system. Your effort to remain who you are is what limits you."
      The concept of evolution. Basically, "you can't evolve if you don't change."

    • @JnJGaming
      @JnJGaming 9 месяцев назад +4

      @@mr.aNdErsOn88 its saying that everything is always changing, including us (so long as we keep moving, learning, and growing). If we stand still, the world will continue to change and leave us behind

  • @yannsalmon2988
    @yannsalmon2988 9 месяцев назад +79

    The thing is that regardless of using AI or not, this is still roto-animation. If you look at the original Snow White from Disney or the LotR version of Ralph Bakshi, it has always looked a bit weird (unless that's exactly the effect you were looking for). It's because real actors and objects constantly move a bit randomly in real life, and the movements don't follow the « golden rules » of animation, like anticipation, exaggeration or stretch. Also, people seem to think that putting your footage on 2s is sufficient to make real footage anime-like, but it's a bit more complicated than that. The choice of key frames and in-betweens is much more important, and the timing can be on 1s, 2s or even 3s within the same sequence.
    To make it really anime-like, I think you have to treat the frames of your footage like animation cels: select key animation frames and don't hesitate to play with the speed between them. It's also possible to treat different parts of the image separately, for example slowing down the movement of hair in the wind while keeping the eyes and mouth moving at regular speed. Using the AI technique doesn't mean you can't use traditional animation compositing techniques rather than having the entire picture processed at the same time. Nothing prevents you from using one piece of footage for the body silhouette and another one for the face. You can take shots at different depths of field and then combine them, film probably-problematic overlapping elements separately, etc. Also, you can apply deformations to exaggerate or, on the contrary, stabilize your footage before it gets processed by the AI. The result may look totally weird in the live-action version, but be perfect once « drawn » by the AI.
    In my own low-level amateur experience with such things, it's a headache and maybe a lost cause to try to generate every single frame with AI without heavy flickering, while strategically processing hand-selected key frames and then using EbSynth to create the in-betweens gives a much more pleasant result (at least for relatively steady scenes without too-extreme changes from one frame to the other).
    Those are all achievable in post, but of course it's best if it's already taken into account in the performance. And there's also this: I think it's not easy and takes training to act like an anime character instead of a live-action one, especially with your body, because the momentum of animated characters doesn't really follow the laws of physics. Maybe it's something that professional dancers/choreographers can help with, or people who are accustomed to slapstick comedy. I have a feeling that stage performers could do well in this kind of exercise.
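
    A tiny sketch of the key-frame timing idea above: an exposure sheet that holds hand-picked keys on 2s or 3s, so only the keys need an AI or EbSynth pass; the frame indices and hold lengths are purely illustrative:

```python
# Expand an exposure sheet of (source_frame_index, hold_in_output_frames) pairs
# into a flat per-output-frame timeline. Only the key frames get stylized; the
# timing chart is what gives the result an anime feel rather than uniform "on 2s".

exposure_sheet = [
    (0, 3), (7, 2), (12, 2), (18, 3), (30, 2),
]

def expand_exposure_sheet(sheet):
    """Turn (key, hold) pairs into one source index per output frame."""
    output = []
    for key_index, hold in sheet:
        output.extend([key_index] * hold)
    return output

timeline = expand_exposure_sheet(exposure_sheet)
print(timeline)  # [0, 0, 0, 7, 7, 12, 12, 18, 18, 18, 30, 30]
```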

    • @myskeletonboy
      @myskeletonboy 9 месяцев назад +4

      You have perfectly summed up all the points that I was writing in a comment on the first episode, and more. The second episode didn't come any closer to anime than the first one, animation-wise. It's great that they hired an artist to work on designs; that is the only major improvement in my opinion. I think that shifting resources from trying to process every frame, which creates nothing but rotoscoping, to the more creative approach you described could have given this content a true anime feel even with a great deal of AI involved.

    • @yannsalmon2988
      @yannsalmon2988 9 месяцев назад +1

      @@myskeletonboy Yes, their workflow on this second part is still very much a live-action production workflow instead of an anime project workflow, which is totally understandable; this is very different from what they usually do. Finally, what the progress of this project shows is that the AI aspect is not so much a game changer, because you still need artistic designers, storyboarders, skilled animators, in-betweeners, compositors, etc. to make a very good anime. There are many more specific skills to acquire than just being able to draw a cartoon. The ones whose job is really at stake are the people who draw the cels at the production level, which still represents quite a considerable workforce.
      Even so, as of yet, I think the AI technique can only really work for a realistic style. It's difficult to imagine for now that it could work as well for an anime in the style of One Piece, My Hero Academia, Porco Rosso or other shows whose characters are drawn in a highly unrealistic style.

    • @brentbourgoine5893
      @brentbourgoine5893 9 месяцев назад +1

      Totally agree. I think the logical "next step" (if the objective is to make this sort of thing more like anime and less like roto) is to pull the keyframes from the AI-converted video and run them through a sort of AI "in-betweener", with settings that approximate the anime style they want. Unfortunately, this throws out quite a bit of the current "product" and re-generates it, adding yet another "layer" of production.
      Optimizing the pre-production workflow to concentrate on generating the keyframes might help reduce some of the overhead. I could imagine that, through continued training of the same models, the AI style converter might get good enough to need less "help" in getting the results they're looking for too. The pre-production would eventually be quicker and less intense.
      But it's a lot of time and hard work to get from here to there. What they've produced already is pretty amazing.

    • @yannsalmon2988
      @yannsalmon2988 9 месяцев назад +1

      @@brentbourgoine5893 I think I'd first try to use the live-action frames to get the animation timing right. The thing is that since you will process the frames through AI afterwards, you don't even need high-res live-action frames, because the transformation into anime drawings will not need that much detail (it's even possible that too much detail could confuse the AI more than anything else). You can also make pretty rough alterations to your image sequences before the AI pass that will not mess up the result, since it will be completely reinterpreted.
      On the other hand, creating in-between frames automatically, from my experience, works better with live action as a base than with anime frames. The displacement of pixels seems easier for software to predict with textured areas than with flat color areas. In both cases though, this process works pretty damn well if both frames contain exactly the same elements and the movement is smooth. But, understandably, it struggles when it has to create an intermediate between details that only exist in one of the two frames. For example, if you have a sudden head turn from a side view to a front view, the system will not be able to correctly process the side of the face that doesn't exist in the first frame. That's why it's important to choose your key frames manually for software like EbSynth. I always try to find the one which includes the maximum of details that will also be present in the other frames. For a face sequence, for example, it has to be a front view with the eyes and the mouth open, because the software can easily close eyelids or lips to hide the eyes or the inside of a mouth, but it can't make those up without a reference.
      All that to say that, yes, a lot of work still has to be done « manually », and it's unclear whether it's always faster or easier with the AI method than with the traditional one.

    • @MelloCello7
      @MelloCello7 2 месяца назад

      No, this isn't animation. Something must be animated to be an animation.
      Even in rotoscoping, important decisions involving form, weight, and even movement, if done properly, must be made by the artist, which *does indeed* require principles of traditional animation to implement.
      Calling this animation is the same as calling a Snapchat filter an animation.
      What this is, is what Corridor Crew does best: video editing, VFX and post work.
      Again, calling this animation is extremely offensive to the artists who have dedicated their lives to the craft and have unwittingly made this technology possible.

  • @BearFOXThirty
    @BearFOXThirty 9 месяцев назад +86

    It's cool you collaborated with the warpfusion community to help get features added, improving it as a tool for everyone. That's the kind of thing I love seeing.

  • @Kalepsis
    @Kalepsis 9 месяцев назад +12

    So, after the first episode of Anime RPS, I started thinking about a rudimentary method of cleaning up the motion on the characters. Let's say you have a "style" dataset of 1000 images. You take the first frame of video and have the AI convert it using your style dataset, and each one of those 1000 images has what I call an integration value of 1. Then, for the second frame, you add the first frame to your dataset, but you give it an integration value of, let's say, 500. So the AI would be using 1000 images of style, and 500 of the same image (the previous frame) to generate an output frame. If you see jank in a frame, all you'd have to do is adjust the i-value of the previous frame up or down. Or you could cut it to, say, 250, and add in the image of the frame before that, also with a value of 250. Then I watched this video and realized that's probably exactly what WarpFusion is doing. Lol.
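
    For what it's worth, the "integration value" idea can be approximated as a per-frame blend weight: mix the previous stylized output into the next frame's init image before img2img. A rough sketch of that approximation (not WarpFusion's actual implementation); stylize() stands in for whatever img2img step is used, and the weights are illustrative:

```python
# Blend the previous stylized frame into the next init image so the model is
# nudged toward the last result; raising the weight increases temporal
# stickiness, lowering it follows the new footage more closely.
import numpy as np
from PIL import Image

def blended_init(current_frame, prev_stylized, weight=0.5):
    """weight=0 uses only the new footage frame; weight=1 reuses the last output."""
    cur = np.asarray(current_frame, dtype=np.float32)
    prev = np.asarray(prev_stylized, dtype=np.float32)
    mix = (1.0 - weight) * cur + weight * prev
    return Image.fromarray(mix.clip(0, 255).astype(np.uint8))

# prev_out = stylize(blended_init(frame, prev_out, weight=0.33))  # per-frame loop body
```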

    • @BeyondTrash-xe1vs
      @BeyondTrash-xe1vs 9 месяцев назад +1

      I don't understand the nitty gritty of it, but even if that is what WarpFusion is doing already, it's really impressive that you came to the same method on your own!

    • @bzqp2
      @bzqp2 9 месяцев назад +1

      You don't retrain the model at inference time. After the model has been fine-tuned on the style dataset, it doesn't see the dataset anymore. The new style info is embedded into the weights of the network, which don't change when you use the AI. It would only work if you were willing to fine-tune the model on each frame, which would take hours to compute.
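
      A tiny sketch of that point, assuming the diffusers library: at inference only the fine-tuned weights are loaded, and the training images are never read again. The checkpoint path, frame file, and prompt are hypothetical:

```python
# Inference consults only the checkpoint; the style dataset is not touched.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "path/to/finetuned-style-checkpoint",  # weights already carry the learned style
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("frame_0001.png").convert("RGB")
out = pipe(prompt="anime style character", image=frame,
           strength=0.5, guidance_scale=7.5).images[0]
```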

    • @wesley6442
      @wesley6442 8 месяцев назад +1

      I don't understand the technical lingo, but I'm glad people are throwing their ideas for improvements out there; this is how progress is made! It's much like the creative everyday people who mod video games such as Skyrim: get enough people together, driven by passion for their craft unlike soulless companies, and you get innovation! I think what people will do is see this new method of AI-generated content and create better, more improved tools that anyone can use. This is a step up from image-to-video, and the next improvement is capitalizing on this method and creating a specially tailored tool/program people can use. I love the creativity and innovation we all bring to the table!

  • @acheronhades1747
    @acheronhades1747 9 месяцев назад +140

    You chose possibly the hardest thing to make Stable Diffusion do, hands, and made that a huge part of the video. Mad props to you.

    • @maxitoburrito
      @maxitoburrito 9 месяцев назад +7

      I mean it’s pretty much just a filter over actual images so

    • @andrew4446
      @andrew4446 9 месяцев назад +17

      @@maxitoburrito Not really, but if that's what you got from the video then you do you.

    • @yuyah7413
      @yuyah7413 9 месяцев назад +3

      @@maxitoburrito Bro got ratioed 💀

    • @NFIVE30
      @NFIVE30 9 месяцев назад

      @@yuyah7413 A ratio on YouTube hits really hard

    • @yuyah7413
      @yuyah7413 9 месяцев назад

      @@NFIVE30 fr

  • @nuave
    @nuave 9 месяцев назад +454

    I love how Corridor shares their "discovery process" with the enigmatic_e clip. Because to me, that is the community of creation: looking at small concepts other people did and using them to inspire a new piece of work, and in your case, you actually contributed to the software. It's not about the software, it's about the artists behind it. Beautiful and inspiring as always.

    • @johnberkers434
      @johnberkers434 9 месяцев назад +7

      Absolutely. And with Open Source, "contribute" does not always mean submitting code, sometimes it's just the "idea" of something that can enhance the project

    • @mop-kun2381
      @mop-kun2381 9 месяцев назад +4

      It's not about the software, it's about the artists that got their work stolen for this kind of shit. Yes, beautiful and inspiring as fuck.

    • @XavierXonora
      @XavierXonora 9 месяцев назад +9

      @@mop-kun2381 So you clearly haven't watched the video, or you'd know all the art this model got trained on was created for the project, and he was compensated for his time.
      You're right that this is a problem elsewhere, but Corridor have shown how it can be used without infringing on the artistic rights of others.

  • @aldybarrack6522
    @aldybarrack6522 9 месяцев назад +8

    they really thought putting a filter over a footage is animation lmao

    • @NFIVE30
      @NFIVE30 9 месяцев назад +3

      The latent space isn't just a filter, the possibilities are huge.

  • @marvinvogtde
    @marvinvogtde 9 месяцев назад +17

    I still don't see how slapping a really advanced filter on live-action footage is changing animation. This isn't animation; it just aims to look somewhat like it.

  • @Darknight4141
    @Darknight4141 9 месяцев назад +330

    I love how they highlighted those in the WarpFusion community and specifically said "hire them"

  • @senzubeanAI
    @senzubeanAI 9 месяцев назад +172

    "The only thing that stops changing is us, when we decide to stand still" Such a good quote

    • @MrMaxymon
      @MrMaxymon 9 месяцев назад +4

      i can almost 100% guarantee that they will put it on a shirt

    • @TeenPerspektiva
      @TeenPerspektiva 9 месяцев назад +5

      Some Niko wisdom for ya

    • @TheRealAlpha2
      @TheRealAlpha2 9 месяцев назад +9

      An artist never stops learning and evolving if they want to stay relevant. It's what makes their channel.

    • @ms0824
      @ms0824 9 месяцев назад +1

      Totally!

  • @Respectable_Username
    @Respectable_Username 8 месяцев назад +9

    I'm glad you not only shouted out all the extra artists who helped on this project but also _paid_ them. Can tell y'all are doing your best to do this the most ethical way!

  • @Cellardoor_
    @Cellardoor_ 9 месяцев назад +27

    This is great. There's nothing wrong with CC doing this in my opinion. It's just that the AI companies need to delete their datasets and replace them with royalty free images and artist-consented images. That's it.

    • @dolookecki3084
      @dolookecki3084 9 месяцев назад

      All these new models are still built on the initial model with the LAION scrapings. This won't go back, unfortunately, but they want to sugar-coat it by adding personalized artwork on top of it.

  • @PandaJackProductions
    @PandaJackProductions 9 месяцев назад +230

    I’m glad you’re addressing all the issues people had with the first video and are improving upon them here! It shows that you care about the fans’ response and truly just wanna make the best content you can. 10/10!

  • @dxaviorsith5603
    @dxaviorsith5603 9 месяцев назад +78

    Seeing the little audience they assembled for their viewing party at the end made me so happy. As successful as they are, the joy of sharing their art with others is so relatable- and I can see it on their faces.

  • @Wubster649
    @Wubster649 9 месяцев назад +7

    That’s a filter not animation

  • @dread7531
    @dread7531 9 месяцев назад +13

    As a very new artist (less than 50 hours of drawing), the first video discouraged me greatly, but this helped show that, hey, what I'm learning can still be applied and used and won't become obsolete.
    Props to Corridor Crew for showing that side of things.

  • @nightsabre6425
    @nightsabre6425 9 месяцев назад +185

    I can appreciate this way of animation being its own thing, like other/new creators making works with this method while traditional studios still exist. In an ideal world.

    • @doodlegame8704
      @doodlegame8704 9 месяцев назад +3

      We don't live in an ideal world, sadly, though there is always room for old technology to be used. Most artists love the process more than the final product, so they don't always take the path of least resistance. That's the one thing that gives me a bit of hope for art in the future.

    • @NullPointer
      @NullPointer 9 месяцев назад +5

      Yeah, but we live in this reality, and if there's a way for people to steal other people's work, it's gonna happen and it's gonna be rampant.

    • @karenreddy
      @karenreddy 9 месяцев назад +2

      If there's a noticeable quality difference and people like the classic way, traditional studios will exist.
      If customers prefer the new way, or can't tell the difference, all studios will have to adapt, or cease to exist. This is the way of technological progress- it's always been this way, though normally the transitions aren't as publicly visible and widely discussed as they are now

    • @NullPointer
      @NullPointer 9 месяцев назад +6

      @@karenreddy Yeah, because companies give customers exactly what they want, and don't cut corners or give you an inferior product because it'll be more expensive to do it properly

    • @lucylu3342
      @lucylu3342 9 месяцев назад +1

      "While traditional studios still existing"
      You mean the studios that have greedy execs that will try to replace the artists with AI to do the work for them?
      Doesn't sound Ideal to me.

  • @tacklemcclean
    @tacklemcclean 9 месяцев назад +28

    Holy shit, David Maxim Micic! Unexpected crossover. I've been listening to his music for many years; he has some insanely good stuff.

  • @mohammadmanhar8839
    @mohammadmanhar8839 9 месяцев назад +3

    23:28 Nico's so proud 🥺. He deserves it. Great job!

  • @crackysr2961
    @crackysr2961 9 месяцев назад +1

    my favourite bit of this video is at 17:13 when niko basically did the team montage from every heist movie ever made

  • @grimmreaper6681
    @grimmreaper6681 9 месяцев назад +137

    I'm not going to lie, I think that I solidly enjoyed the flickering version of the anime too. There is just some type of charm around the constantly switching lines.

    • @IVMZR
      @IVMZR 9 месяцев назад

      That flickering effect would have been harder to do if it was hand drawn

    • @paulyguitary7651
      @paulyguitary7651 9 месяцев назад +1

      Like Squigglevision from Dr. Katz or Home Videos but different.

    • @WatcherWithNoEyes
      @WatcherWithNoEyes 9 месяцев назад

      It's a nice bonus when you have to stop watching and do something, and then you notice that each frame is great

    • @thekwoka4707
      @thekwoka4707 9 месяцев назад

      Yeah, focusing that to the right places could be a real style. Ensure the faces read, and the rest can get pretty wild and be fine.

    • @giorgiomaggioni3646
      @giorgiomaggioni3646 9 месяцев назад

      Yeah, right? A bit like how old footage has that grain and those imperfections.

  • @ArchyAJLS
    @ArchyAJLS 9 месяцев назад +193

    Overall it seems like eyes and mouths are still the biggest issue (apart from costume consistency). It may be worth redrawing those by hand, just to actually get the gaze direction and expressions you want. Hiring an artist is a great move, although I'd recommend getting someone who has studied the anime style for longer than a couple of weeks.
    Another thing I noticed is that you have a very Western approach to this, seemingly working in the classic 'animating on twos' style. Anime doesn't do that; they time each frame to support the motion as well as possible with as few images as possible, sometimes going as low as 3 frames per second. This would also help tremendously with warping, since you'd drastically reduce the frame count, allowing for much more controlled direction.
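
    A small sketch of one way to move toward that kind of timing: pick key frames adaptively from the footage (a new drawing only when the image changes enough), so fast motion gets more frames and slow moments get held. This illustrates the comment's idea, not Corridor's pipeline, and the OpenCV difference threshold is arbitrary:

```python
# Keep a new source frame only when it differs enough from the last kept one;
# frames that are not kept are simply held, drastically reducing how many
# frames need an AI/art pass.
import cv2
import numpy as np

def select_key_frames(gray_frames, threshold=12.0):
    kept = [0]
    for i in range(1, len(gray_frames)):
        diff = np.mean(cv2.absdiff(gray_frames[kept[-1]], gray_frames[i]))
        if diff > threshold:
            kept.append(i)
    return kept  # indices of frames worth stylizing; the rest are holds
```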

    • @tawoorie
      @tawoorie 9 месяцев назад +20

      They definitely need a clean-up artist

    • @furbyfubar
      @furbyfubar 9 месяцев назад +9

      I think a lot of the issues with eyes and mouths are that the filmed video doesn't have the eye-lines correct to start with, and they are not nailing the correct mouth shapes for the lines the characters are saying either. So I think a big improvement could be gained from tightening up the video part.
      That said, AI still has issues with getting eyes right. I remember from the behind-the-scenes of the first Anime Rock Paper Scissors that, for this reason, they had "lazy eye" listed in the field for what the AI should *avoid* drawing. So it's very possible that there are still big tech improvements to come.
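
      That "avoid" field corresponds to a negative prompt in Stable Diffusion tooling. A minimal sketch with the diffusers img2img pipeline; the model path, frame file, and prompt text are illustrative, not Corridor's actual settings:

```python
# Negative prompts list things the sampler should steer away from.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "path/to/anime-style-checkpoint", torch_dtype=torch.float16).to("cuda")
frame = Image.open("frame_0001.png").convert("RGB")
out = pipe(prompt="anime style, clean line art, detailed eyes",
           negative_prompt="lazy eye, crossed eyes, blurry",  # the "avoid" field
           image=frame, strength=0.45, guidance_scale=7.0).images[0]
```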

    • @jojomarshall6465
      @jojomarshall6465 9 месяцев назад +2

      The biggest issue is the art theft

    • @gabrielbuenodossantos5203
      @gabrielbuenodossantos5203 9 месяцев назад +23

      @@jojomarshall6465 They literally hired their own artist for this job
      If you think that's still theft, then you might need to rethink about downloading or using art in general

    • @MasterMordekaiser
      @MasterMordekaiser 9 месяцев назад +2

      @@gabrielbuenodossantos5203 I believe the confusion here comes from not knowing whether Stable Diffusion is utilizing only Josh Newland's art, or whether it's utilizing its pre-existing database of stolen art.

  • @matansunshine5812
    @matansunshine5812 9 месяцев назад +7

    Wow, as a classical animator I could just clean up the wobbliness by hand and it would look so fucking good. Please, Corridor, teach me your AI ways; I've loved you since I was a kid.

    • @nos4me
      @nos4me 7 месяцев назад +2

      Watch at 22:55; they say it there.

  • @josiahsmith7250
    @josiahsmith7250 9 месяцев назад

    Niko’s just building the Avengers of animation.

  • @AimbyFrame
    @AimbyFrame 9 месяцев назад +124

    I think the next step for this would be re-evaluating the movement of actors.
    If you notice in animation the characters aren't always in motion every time they talk or do an action.
    Sometimes the hand moves with the eyes slightly but everything else is still the same frozen frame.
    I think you can implement a similar approach to episode 3, and this could help prevent even more glitches and flickering.

    • @notalanjoseph
      @notalanjoseph 9 месяцев назад +5

      They don't move much in animation because it's a lot of work to do for the creators...
      Now that the job is much easier they should definitely move more! Future iterations will reduce flickering.
      It may not look "original" anime, but they should move more to show off the capability of this tech

    • @romanpivec7346
      @romanpivec7346 9 месяцев назад +3

      @@notalanjoseph I don't know. Moving more than the reference material makes it look a bit like a parody to me.
      I enjoy this so much, first episode was so good, second one is less "impressive" for me as I did not get surprised how the technology has advanced (fool me once). BUT in both of these I kinda feel like the "mouth opening" looks a bit too much like a parody. Anime artists have sheets for each vowel, how the mouth should look like. It's never random mouth movement. At least in Japanese. I would feel better if it was taken little less like a parody and more like a serious anime. BECAUSE, it is freaking insane and it is so good and I want to watch hours of it.

    • @Takenyao123
      @Takenyao123 9 месяцев назад +2

      And 12 fps. This episode was too smooth for anime style. It was like Disney smooth

    • @biggestouf
      @biggestouf 9 месяцев назад

      It feels like the AI isn't capable of interpreting small lip movements and instead renders them as not speaking. The same goes for small facial movements. In regular video we have the resolution to see the movement and in animation they would need to draw that.

    • @DarthBiomech
      @DarthBiomech 9 месяцев назад

      I'd say they might want to completely reinvent the pipeline for most of the shots that don't include some sort of action. Maybe AI only a single frame and then puppet it in Blender or something.

  • @func_e
    @func_e 9 месяцев назад +466

    this is why people shouldn't be scared of AI, they should be scared of who uses it for their own wrongful gain

    • @WereSquatch
      @WereSquatch 9 месяцев назад +17

      I think Plato said that

    • @golik133
      @golik133 9 месяцев назад +26

      It's the same with all the tools available to humans; heck, even food handled the wrong way can kill a person... it will always depend on how people use it.

    • @func_e
      @func_e 9 месяцев назад +1

      :D

    • @func_e
      @func_e 9 месяцев назад +1

      @@golik133 yup

    • @robberbaron1431
      @robberbaron1431 9 месяцев назад +3

      Exactly. And there is one person coming who will use it for evil. Get saved now before it's too late.
      1Co 15:1 Moreover, brethren, I declare unto you the gospel which I preached unto you, which also ye have received, and wherein ye stand;
      1Co 15:2 By which also ye are saved, if ye keep in memory what I preached unto you, unless ye have believed in vain.
      1Co 15:3 For I delivered unto you first of all that which I also received, how that Christ died for our sins according to the scriptures;
      1Co 15:4 And that he was buried, and that he rose again the third day according to the scriptures:

  • @imranrakin17
    @imranrakin17 9 месяцев назад +4

    If animation studios use this method, then I will lose my job.

    • @UniqueMAXPlay
      @UniqueMAXPlay 9 месяцев назад

      It's the same with every job in the world. It's your responsibility to improve yourself or be pushed by something better. The world isn't gonna wait for you to catch up

    • @imranrakin17
      @imranrakin17 9 месяцев назад +1

      @@UniqueMAXPlay Yeah, of course I can draw hundreds of frames in one day. Totally possible.

    • @imranrakin17
      @imranrakin17 9 месяцев назад

      The only way to fight this is that I have to use AI too, but spice it up a little bit by using my own skill to improve the frames.
      @@UniqueMAXPlay

  • @RockinRogGaming
    @RockinRogGaming 9 месяцев назад +7

    “Changing animation” this looks like a massive step backward….

    • @KleptomaniacJames
      @KleptomaniacJames 15 дней назад

      For animators? Yeah. For animation? It is anything but.

  • @JulesBox
    @JulesBox 9 месяцев назад +22

    As a 2D artist I'll always have some kind of issue with AIs, so I'm really glad you guys are using an actual artist to provide the work you animate from.
    People should start understanding AIs as tools to improve the final product instead of as a replacement for artists.

    • @XavierXonora
      @XavierXonora 9 месяцев назад +2

      Exactly. The more we get this into the hands of everyday people instead of big heartless companies, the better. Corridor are a team I trust to do the right thing; you only need to look at where they came from. Massive props to them for involving other artists traditionally from the space. As Niko said, this should be a tool that allows creatives to make more, not one that does all the work for them.

    • @lilowhitney8614
      @lilowhitney8614 9 месяцев назад +1

      ​@@XavierXonora It's already in the hands of everyday people. Anyone can use Stable diffusion (there are community sourced solutions for people who don't have a strong enough computer to run it by themselves).
      If anything, I would argue that the biggest obstacle to more artists adopting this technology and figuring out ways in which it could help them is the enormous reactionary backlash against it and how the discussion gets flattened out to "AI Bad" in so many places.

  • @SlandersPete
    @SlandersPete 9 месяцев назад +83

    I really liked the gritty feel of the first Anime Rock Paper Scissors. The second one felt a little more plastic in design. This is just my personal opinion, and I'm no expert in CGI. I also know there's so much more that was in the second, and it's a feat that I say good work for.

    • @namco003
      @namco003 9 месяцев назад +34

      I know exactly what you mean. It may be the character design change. I like it, but using Vampire Hunter D as a reference style on the first one was spot on. At least they won't catch any legal heat from now on.

    • @RaiOkami
      @RaiOkami 9 месяцев назад +4

      The minor characters are less detailed yep. They addressed it in their podcast. I think it's a good direction overall to at least diverge from the initial Vampire Hunter D inspo in ep1. Hopefully we see more later on.

    • @lolziz
      @lolziz 9 месяцев назад +1

      100% agree. Comparing the 2nd one to the 1st, the 1st just feels so much... fuller? It has character and personality to it and it works with their facial expressions much more, but alas, it was morally and legally a better decision to source their own art in the public's eye.

    • @RaiOkami
      @RaiOkami 9 месяцев назад +2

      @@lolziz Exactly. I suppose the issue with the 2nd one was that they had to deviate just enough to not be identified with VHD but still got constrained with the 1st ep's overall feel for cohesion. Maybe in the future they'd do a another series with an entirely different style from the get go?

    • @lolziz
      @lolziz 9 месяцев назад +1

      @@RaiOkami Honestly, this may be their next move as it would pose a fun new "challenge" of "What styles can and can't this AI model do?" The expensive part of this, whether it be money or time wise, would be getting the references for these new styles which would theoretically be self sourced. Doing this would also allow them to have a little "Our AI method can replicate/works with all of these different types of styles." flex. Thinking about it, it opens up an opportunity to do a series that's 'Love Death + Robots'-esque where it's a relatively short story idea done in a specific style and every episode is in a different style. As nice as an idea as that is, it would be a lot of work.

  • @willmfrank
    @willmfrank 7 месяцев назад +1

    Nico: "Free, open-source software..."
    Me, who can't afford to purchase or subscribe to paid software of any kind: "Free stuff! Cool! I'M IN!"
    Nico: "Anyone with a decent gaming PC can do this..."
    Me, whose thirteen year old computer freezes solid when I attempt to render even ten seconds of animation in Krita: "Well...Crap. I'm out."

  • @camilomartinez3925
    @camilomartinez3925 9 месяцев назад +2

    After watching the second chapter of the "Rock Paper Scissors" anime, I remembered a game called "Pistolero". You say "pistolero" out loud and then you can do one of three things: load your gun (as many times as you want), shoot (if your pistol is loaded), or protect yourself. It would be amazing if you put it in the show.

  • @foxxify1
    @foxxify1 9 месяцев назад +7

    It's fascinating seeing the inside of a production. Especially with all the cool brand new vfx techniques!

  • @cptmacbernick
    @cptmacbernick 9 месяцев назад +120

    I know it's a controversial topic but you improved so much and having your own artist this time is a huge difference!

    • @pepsico815
      @pepsico815 9 месяцев назад +1

      There are still inconsistencies with things morphing around in the hair and face. I imagine they left it there intentionally and they'll continue to do a series of these tech demos

  • @mfcfbro
    @mfcfbro 9 месяцев назад

    Really excited to see David did the music. Been listening to his music for years.

  • @Celestial-Idiot
    @Celestial-Idiot 9 месяцев назад +4

    Unless you made Into the Spider-Verse, you never changed anything in the first place.

  • @victorlasater7125
    @victorlasater7125 9 месяцев назад +48

    I'm very grateful for the format of this video; it gives a perspective that makes me excited for AI rather than worried about the future. Thanks to Niko and the rest of Corridor Crew for becoming their inner directors.

  • @artor9175
    @artor9175 9 месяцев назад +183

    I think this is a viable future for AI. It's not replacing artists, but it's a tool that makes them exponentially more productive. I hope to see more of this, and less single-prompt generations crowding out professionals.

    • @SimplCup
      @SimplCup 9 месяцев назад +12

      Like, yeah, that's what it should be. People have been improving their tools for ages now; it's like switching from your shovel to a new tractor. I don't get why people get mad at AI when they should instead get mad at the people who use others' art with that AI. I see a huge benefit of AI in animation if people figure out all the problems with shadows and lighting. If, for example, Pixar or Disney invested in AI animation technology, they could make the process of creating cartoons way easier and faster.

    • @Bakamatsu-GojiFanArchive
      @Bakamatsu-GojiFanArchive 9 месяцев назад +2

      @@SimplCup yup, your opinion is irrelevant if you're in favor of big companies replacing artists with machines

    • @aprophetofrng9821
      @aprophetofrng9821 9 месяцев назад +2

      Yeah, like, a large team of people would still be needed for this to be an actually viable option. It's not like a few people could come along and pump out cartoons using AI; well, they could, but it would take months and months for a single episode. They would still need actors, a camera crew, editors, people creating environments, sound designers, artists to train the AI and add effects to the converted scenes, writers, composers. Like, it's just a tool that, if used correctly and used well, can produce really cool products. In 5 years, I'd imagine this technology will have advanced enough to take out all of the jank. And then it's just up to the teams to execute their visions with it.

    • @aprophetofrng9821
      @aprophetofrng9821 9 месяцев назад +2

      @@SimplCup Disney could actually go back to making 2D animation. We could actually get a good Disney movie?! And not just live action rehashes?!

    • @SimplCup
      @SimplCup 9 месяцев назад

      @@aprophetofrng9821 Yes, exactly. Even now we have the impressive technology that was shown in the video; WarpFusion can already turn video into a cartoon without flickering faces and objects. I see AI as a new era of animation.

  • @fsbgaming1588
    @fsbgaming1588 9 месяцев назад +3

    You guys did it. Awesome work, keep it up!

  • @anne-tione4286
    @anne-tione4286 9 месяцев назад +13

    I haven’t watched the video yet, but one of the biggest appeals of animation is the freedom. You’re not bound by the laws of gravity or the constraints of reality. I don’t see how real people acting with a filter over it is going to capture that same creativity or beauty tbh. ***IMO***

    • @moocowp4970
      @moocowp4970 9 месяцев назад +2

      Well that's just creative directing though. You could run up a wall fairly easily by just having the wall on the ground and then changing the perspective for example. Falling from a great height could be achieved with a bit more effort (some slight rigging) and some stylistic animation effects behind the character.
      Then, of course, if you have a shot that isn't possible in this style (e.g. imagine you want someone's arm to split open into a bunch of tentacles and the tentacles rush forward and rip someone's face off... and also imagine that you can't rip someone's face off in real life, or that you don't have a friend who has alien tentacle arms :P), well then in those cases you can just animate those shots the old way, with a lot of money and time (well, even more money and time; this still seemed super time-consuming). But at least it's only a few shots, rather than the whole thing.

    • @JamEngulfer
      @JamEngulfer 9 месяцев назад +2

      You’re not constrained by gravity or reality when you do regular VFX either. Sure, if you just film yourself and put a filter over it, the result may be static and uninteresting, but the same applies to regular animation. If you just animate two characters standing still, it’s also going to be flat and boring. The difference in both comes from the creativity and artistry behind it. It comes from the intent and skill that goes into what you’re making.

    • @1faithchick7
      @1faithchick7 9 месяцев назад +1

      A lot of animation in the 80s was real people acting, and then they animated over it. The original He-Man and She-Ra were done that way. This is the same thing.

    • @Dwedit
      @Dwedit 9 месяцев назад

      For things that aren't physically possible to film, you can do 3D animation then put the same anime filter over it.

    • @avokka
      @avokka 8 месяцев назад

      Idk, the limitations of the laws of gravity and physicality have their place; it's why we have live action

  • @FireJach
    @FireJach 9 месяцев назад +12

    I'm so mad. Corridor Crew literally explained that this new tool needs professional artists, takes weeks to create a decent animation, and only improves the process in some areas, but THERE ARE STILL PEOPLE WHO IGNORE THE ENTIRE EXPLANATION AND CLAIM AI WILL REPLACE ARTISTS EASILY.
    Exactly like, idk, 3D simulation software: instead of blowing up a real building, you do it on a PC and take care to make it look legit.

    • @jaredgreen2363
      @jaredgreen2363 9 месяцев назад

      The people trying to ban it on that basis are even more annoying. And a little scary with how far they have gone.

  • @PGJVids
    @PGJVids 9 месяцев назад +39

    I wonder if integrating a face landmark detection AI into the training process would be useful for keeping it more consistent.

    • @tenacious6052
      @tenacious6052 9 месяцев назад +1

      There is; it's called "After Detailer", simply an extension. Although I'm guessing this video was recorded before that existed.

    • @lexmitchell4402
      @lexmitchell4402 9 месяцев назад

      @@tenacious6052 After Detailer relies on landmark detection models to add details; it currently doesn't help with temporal issues.
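For anyone curious what per-frame face landmark detection looks like in practice, here is a minimal sketch. It assumes MediaPipe's legacy FaceMesh solution and OpenCV (not a claim about Corridor's or After Detailer's actual pipeline), and it only extracts normalized landmark coordinates that a stylization pipeline could use to crop and re-process the face region consistently:

```python
import cv2
import mediapipe as mp

def face_landmarks_per_frame(video_path):
    """Return one entry per frame: a list of normalized (x, y) face landmarks,
    or None for frames where no face was detected."""
    cap = cv2.VideoCapture(video_path)
    per_frame = []
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                         max_num_faces=1,
                                         refine_landmarks=True) as face_mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                landmarks = result.multi_face_landmarks[0].landmark
                per_frame.append([(p.x, p.y) for p in landmarks])
            else:
                per_frame.append(None)
    cap.release()
    return per_frame
```

A real pipeline would still need its own temporal logic (for example, smoothing the landmark positions across frames) to address the consistency problem raised in this thread.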

    • @Adi-oc7vu
      @Adi-oc7vu 9 месяцев назад

      Watching comments like these is probably my favorite part

  • @CloneMalone
    @CloneMalone 9 месяцев назад +3

    I will say, I think this isn't and never will be a replacement for 2D animation, and it shouldn't be treated as such. The tech is interesting and could probably assist full 2D animation to alleviate tedium, but at the moment at least, it's still just making a 3D reference LOOK 2D. Like cel shading but on steroids.

  • @SteveBearscemi
    @SteveBearscemi 9 месяцев назад +1

    David Maxim making the Intro song is just the cherry on top

  • @dolenir
    @dolenir 9 месяцев назад +28

    I loved that instead of just using the free software themselves, they hired several much more specialised artists to use the software. I believe that's one of the directions the industry should take going forward

    • @TheFezIsAwesome
      @TheFezIsAwesome 9 месяцев назад +2

      The industry is gonna do whatever's cheapest my guy.

    • @dolenir
      @dolenir 9 месяцев назад

      @@TheFezIsAwesome unfortunately you are right. That's why, unions!

  • @arthuraguiar5382
    @arthuraguiar5382 9 месяцев назад +31

    13:58 As a programmer, I like it when the crew shows this programmer side of themselves

    • @someguy8443
      @someguy8443 9 месяцев назад +3

      Yeah it would be neat if they somehow incorporated coding into their react series. Maybe highlight some of the programmers behind tech in the industry.
      Probably not enough interest from the public, but if anyone can make it interesting it would be corridor.

    • @Muzzinstar
      @Muzzinstar 9 месяцев назад

      @@someguy8443 yeah totally! Programmers rise up!

    • @bobsmithy3103
      @bobsmithy3103 9 месяцев назад

      :skull: using premade generative ai workflows is considered programming now

  • @brad_hensil
    @brad_hensil 9 месяцев назад +2

    After watching this, I’m glad RackaRacka found success in the film industry

  • @RABDS
    @RABDS 9 месяцев назад

    You know we want a full series !!!

  • @cameron5802
    @cameron5802 9 месяцев назад +31

    The fact that you guys were able to reach out to the community and your peers for assistance, and that they willingly and lovingly contributed, truly means a lot as a fan and audience member. You guys have become a true force of nature in the industry and genuinely deserve a seat amongst the few who have created renowned works of cinematic art.

  • @moonlitmortician6694
    @moonlitmortician6694 9 месяцев назад +72

    I don't think it's changing animation as much as a type of animation, specifically rotoscoping. It's definitely impressive, but it's less a better kind of animation than a new one. It's a good thing the techniques here are being used by actual creatives, because I can see someone very easily looking at this tech and thinking "Oh, well I don't need animators anymore"

    • @swims11torches
      @swims11torches 9 месяцев назад +4

      By definition it's not even animation. Animation isn't a style, it's a process, and this isn't it

  • @nothingtheshow
    @nothingtheshow 9 месяцев назад +1

    Omg FUCKING DAVID MAXIM MICIC IS THE FUCKING BEEEEEEESSSSSSTTTTTT. you guys have absolutely no idea how happy this has made me. He deserves recognition. As do y’all.

  • @superdoodjj
    @superdoodjj 9 месяцев назад

    I'm so pumped that you got David Maxim Micic to do the theme! I'm a big fan of his music.

  • @Koushakur
    @Koushakur 9 месяцев назад +4

    Re: the title, there can't be an 'again' since you didn't do it the first time

    • @NorthgateLP
      @NorthgateLP 9 месяцев назад

      They did, they're at the forefront of this technology. It's gonna be super exciting to see what fully professional studios can do with this tech.

  • @hollowedboi5937
    @hollowedboi5937 9 месяцев назад +7

    Actually hiring an artist to create a style, and treating the ethical usage of this as an ongoing conversation, is a good way to go with this: it shows the work evolving with the discussion toward a more positive and fair use instead of stealing, and it tells every side of what's goin' on

  • @funeralgothatoo5814
    @funeralgothatoo5814 9 месяцев назад +6

    This isn't really animation, and it's not even rotoscoping. It's something else... more of a performance-capture technology from a filmed/photographed source that allows for a stylized 2D result instead of the usual 3D realism like Avatar.
    It's got an "animated" style, but it's not really animation.

    • @ClueIess
      @ClueIess 9 месяцев назад +1

      Agreed, I honestly have no problem with this video, but calling it animation is a little disingenuous

  • @ZabZabZabie
    @ZabZabZabie 9 месяцев назад +4

    This is some hot doo doo 🤨

  • @foamyrocks665
    @foamyrocks665 9 месяцев назад +23

    Never thought I'd see David Micic in a Corridor Crew video. Huge fan of his music and albums. Awesome selection for your soundtrack.

  • @weebz3409
    @weebz3409 9 месяцев назад +21

    "Well, The Only Thing That Stops Changing, Is Us, When We Decide To Stand Still"
    What a quote! 15:21

  • @jmlightning8045
    @jmlightning8045 2 месяца назад +1

    Don't know how I missed this coming out, but I'm glad to finally see it. It's also encouraging to see that my thoughts on the first episode ended up coming true: I had predicted that you would still need artists to establish style and consistent thematic choices, so it's nice to see that borne out. One thing I still believe is that you could take the artist's role even further, and likely get a higher-quality result:
    Instead of the artist simply drawing style guides for the characters, you could get them to make a manga/comic of the intended film. This way, each of the most impactful, most thematically important, and most detailed shots would be hand-crafted. Then all the AI would need to do is use the motion-capture data and style guide to fill in the gaps between the key frames. In fact, as I understand it, this wouldn't even be that far from the way old hand-drawn anime was made: head designers would draw the key frames, and the lower-level animators would go in and draw the in-between frames.
    From here, the question is how many frames you can afford to have drawn by an artist (a rough version of this trade-off is sketched after this comment). Ideally you would find a middle ground where enough is drawn that most of the stylistic and thematic elements carry through, along with interesting background elements and easter eggs, while letting the motion capture and AI handle the creation of most of the frames. The goal would be to get as many of the benefits of being hand-crafted as possible (complex and intricate thematic and stylistic design, detailed background and character design, etc.) while also getting the benefits of motion capture (cheaper per frame, smooth and realistic movement, more complicated and intricate choreography, etc.).
    In this way, there would still be a category of film that requires a good number of talented artists to establish those elements, while also freeing artists from the tedious work of drawing all the frames between each powerful moment. Additionally, you would see a spectrum of films, from those that rely more heavily on motion capture to those that attempt to reach that optimal balance. In essence, there would still be a divide between expensive productions and those of a couple of friends, but the fact that that group of friends can make a competent anime would introduce competition and diversity into the market. Plus, it would mean artists have even more opportunities to make artwork, and more of those opportunities would be to craft proper masterpieces that capture all the feeling and emotion of a scene, rather than drawing countless images so they can all run together smoothly. Furthermore, if this ends up working like I think, there would actually be demand for artists who are more skilled at developing a powerful and meaningful scene, creating an environment that wants artists to develop their styles and skills so they can offer their unique talents to set the style of one show apart from the rest. This also means that more hobbyist productions would be able to hire newer artists, offering them an opportunity to further develop their skills and style and make themselves more attractive to higher-value productions.
    In summary, assuming I am right that further inclusion of artists in the production will have notable and valuable effects on the final product, the development of this technology could lead to a better market for artists, a better pipeline for newer artists to develop, and a better mindset around the development of unique styles. Between the increase in opportunities available, the increase in the number of opportunities reliant on a specific artist's unique way of developing their piece, and the increase in the number of different groups trying to hire those artists, I can only see this ending with happier artists who are more able to actually make a livable wage off their skills and passion.
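The budgeting question in the comment above ("how many frames can you afford to have drawn by an artist") can be made concrete with a back-of-the-envelope sketch. The numbers, costs, and even-spacing rule below are purely illustrative assumptions, not anything from the video:

```python
def plan_keyframes(total_frames, budget, cost_hand_drawn, cost_ai_frame):
    """Pick which frame indices get a hand-drawn key frame, given a budget."""
    # Every frame costs at least the AI rate; a key frame costs the difference extra.
    baseline = total_frames * cost_ai_frame
    extra_per_key = cost_hand_drawn - cost_ai_frame
    affordable_keys = int((budget - baseline) // extra_per_key)
    affordable_keys = max(2, min(affordable_keys, total_frames))  # at least start/end
    # Spread the affordable key frames evenly across the shot.
    step = (total_frames - 1) / (affordable_keys - 1)
    return sorted({round(i * step) for i in range(affordable_keys)})

# e.g. a 240-frame shot, a $2,000 budget, $40 per hand-drawn frame, $2 per AI frame
print(plan_keyframes(240, 2000, 40, 2))  # 40 evenly spaced key frames
```

A real production would place the key frames on story beats rather than evenly, but the cost trade-off works the same way.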

  • @Cogit-
    @Cogit- 8 месяцев назад +2

    0:29 "small creators" he says with 6.2 million subscribers

  • @wickedwookie
    @wickedwookie 9 месяцев назад +47

    Corridor setting the blueprint for how AI-assisted creations should be done. Well done, guys!

  • @Hojeun
    @Hojeun 9 месяцев назад +138

    I'm glad you were able to hire an artist to take away the issue of AI using art without permission. Good work!

    • @Gavri1945
      @Gavri1945 9 месяцев назад +24

      It's not using other artists' work; it learns from it, similarly to what humans do. The whole ethical debate about AI art is unsubstantiated and silly.

    • @Shadowgaming105
      @Shadowgaming105 9 месяцев назад +24

      @@Gavri1945 Humans do not algorithmically copy every single bit of someone's art style. Humans have inherent creativity and differences. When a human tries to recreate an artist's style, there will be differences. Humans are also able to fully credit an artist and choose not to profit off another person's style. If a human artist were fully copying a currently working artist's style and selling the results for large amounts of money, people WOULD be upset. The only reason you don't see a difference is because you've never done an ounce of creative work in your entire life.

    • @olafforkbeard4782
      @olafforkbeard4782 9 месяцев назад +2

      I mean, I don’t think it satisfies it entirely… the AI was still trained to create based on other work without permission. It’s definitely better and more ethical though.

    • @pacoquita8219
      @pacoquita8219 9 месяцев назад

      @@olafforkbeard4782 every artist trained on other work without permission.

    • @pacoquita8219
      @pacoquita8219 9 месяцев назад +4

      @@Shadowgaming105 "Humans have inherent creativity and differences" AIs do too

  • @AnArtistInAVoid
    @AnArtistInAVoid 9 месяцев назад +2

    I’d rather call this putting a filter over live action. Cuz that is technically what it is.

  • @johanwodzynski8637
    @johanwodzynski8637 9 месяцев назад

    Incredible in so many ways. Thanks for the inspiration!

  • @dreamzdziner8484
    @dreamzdziner8484 9 месяцев назад +6

    So surprised to see the "enigmatic e" channel mentioned here. Always enjoy his channel, especially for AI video tricks. This new version of Rock Paper Scissors looks 🔥

  • @FuzzySamurai
    @FuzzySamurai 9 месяцев назад +43

    This is the perfect display of how AI is an amazing tool. It doesn't replace anything; it improves everything by making it possible for people to access new forms of creativity that otherwise would've been impossible before. And that's how it should be. It shouldn't be taking anyone's artistry without their consent, and it shouldn't be replacing ANY professional career. It should only serve as a wonderful tool for people in the digital industry to use.

    • @doodlegame8704
      @doodlegame8704 9 месяцев назад +2

      It all depends on how humans choose to use it.

    • @temet0nosce
      @temet0nosce 9 месяцев назад

      Too late now. The more the world advances in AI and tech, the less need there is for human jobs, and we begin to lose our humanity.

  • @gerritgenis1685
    @gerritgenis1685 9 месяцев назад

    Oh man I love David's music! Been a big fan for a while, and I'm very much looking forward to hearing this music in full 🎉

  • @vistar_kj
    @vistar_kj 9 месяцев назад +1

    Mother's Basement gonna shit his pants watching this video

  • @acikacika
    @acikacika 9 месяцев назад +93

    So so proud of David Maxim for the theme track and of course for the Corridor guys for pushing the boundaries!!

  • @astarianaira6968
    @astarianaira6968 9 месяцев назад +5

    I really wanted the opening theme to be subtitled in Japanese and English. Could you release a video of just the opening with those added?

  • @kungfujesus9251
    @kungfujesus9251 7 месяцев назад

    You guys are awesome! Getting very inspired with what you're doing!

  • @theReidGarrett
    @theReidGarrett 9 месяцев назад

    This is fuckin' awesome gentlemen. Props to y'all. Just out of curiosity, what hardware are y'all using to get all of this done?

  • @DJLCBrown
    @DJLCBrown 9 месяцев назад +25

    Probably an unpopular opinion, but I think the first one worked better. It was more entertaining and the flickering/choppiness stuff was less distracting, because it was everywhere. In the new one, there was less flicker/choppiness, but that made it stand out more. Like how Sam's character's abs kept fluctuating.

    • @juanQuedo
      @juanQuedo 9 месяцев назад +12

      But the point of these is to push the tech and see how far it goes and how it can be used to make something "original" and/or "new", so less flickering is a step forward; maybe in six months the technology will allow for zero flickering.
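One crude way to attack the residual flicker discussed in this thread is plain temporal smoothing of the stylized frames. This is a generic sketch, not WarpFusion's approach; the blending rule and the `alpha` value are assumptions, and heavy smoothing will smear fast motion:

```python
import numpy as np

def temporal_smooth(frames, alpha=0.6):
    """Blend each stylized frame with an exponential moving average of the
    previous output to reduce frame-to-frame 'boiling' (at the cost of some
    motion smear). frames: iterable of uint8 images; alpha: weight of the
    newest frame."""
    smoothed, running = [], None
    for frame in frames:
        f = frame.astype(np.float32)
        running = f if running is None else alpha * f + (1.0 - alpha) * running
        smoothed.append(np.clip(running, 0, 255).astype(np.uint8))
    return smoothed
```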

    • @TheXavierfull
      @TheXavierfull 9 месяцев назад +5

      Unpopular opinions are like that bc they are mostly wrong.

    • @BrainRobo
      @BrainRobo 9 месяцев назад

      I would say that for a short or concept episode it is fine. But if you try to extend to a long episode format or a multi-part series it would get tiring due to inconsistencies

    • @yagelbar
      @yagelbar 9 месяцев назад +2

      These are the famous Gorski's fluctuating abs. They are real and they are fabulous!

  • @BasicallyBaconSandvichIV
    @BasicallyBaconSandvichIV 9 месяцев назад +10

    One thing I love about Anime Rock Paper Scissors is that it has two stories to tell. One about two twin princes fighting against each other. And one story about new tech, and how it can be both scary AND great at the same time. It just depends on how it's used.

  • @Thisthat1234
    @Thisthat1234 9 месяцев назад +1

    The most ideal outcome is that AI only improves productivity.

  • @samoanmo
    @samoanmo 9 месяцев назад

    The moment Niko figured it out. "WOW... WOW.. WOW"

  • @brysonmcbee
    @brysonmcbee 9 месяцев назад +7

    The style on the main/live action characters looks so solid, but all the extras really start to look generic and seem to slip into styles from other models. Would love to know if this style mismatch is on your radar and if you guys have any insight on why the style might not transfer over as well on the background characters.

    • @TheRealAlpha2
      @TheRealAlpha2 9 месяцев назад

      It's the amount of training data. The main cast could have something like 20 images of various emotions and positions, but the background animated characters might only have 5 or 10, and the rest of the training for them has to be filled in with regularization data, which is just a general understanding of the style and might include a few of the other main cast images.
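To illustrate the imbalance described in the reply above, here is a tiny sketch of one common mitigation: repeating underrepresented characters so each contributes a similar number of training steps. The character names and counts are hypothetical, and this is not a claim about how Corridor actually configured their training:

```python
from math import ceil

# Hypothetical per-character reference image counts.
image_counts = {"prince_a": 20, "prince_b": 20, "background_villager": 6}
target_steps_per_character = 200

# Characters with fewer images get more repeats; anything still too sparse
# would lean on regularization images for the general style.
repeats = {name: ceil(target_steps_per_character / count)
           for name, count in image_counts.items()}
print(repeats)  # {'prince_a': 10, 'prince_b': 10, 'background_villager': 34}
```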

  • @ElivinMendez
    @ElivinMendez 9 месяцев назад +5

    This was awesome!!! One thing I've also been interested in is how you guys schedule tasks, and your full setup with spreadsheets and such (and other scheduling tools, etc.). I totally wouldn't mind seeing a video on that aspect of what you guys do as well.

  • @Le_Forke
    @Le_Forke 9 месяцев назад

    They're freakin' starting a VFX cinematic universe. One day we're gonna get a VFX-artist Endgame. Gonna be so cool

  • @dosillsiamang3005
    @dosillsiamang3005 9 месяцев назад +1

    Where is the music that starts at 12:43 from? They use it in a lot of their videos.