AI VIDEO GENERATION One Year Later - Runway GEN-3

  • Published 10 Sep 2024
  • AI video generation is getting very, very good, so with the release of Runway's Gen-3 AI model we're going to take a look at their AI video results and try it out for ourselves.
    😁 Subscribe to Joshua M Kerr
    🔗bit.ly/3l4U9wo
    ---------------------------------------------------------------
    🎦 JOIN THE DISCORD COMMUNITY
    🔗bit.ly/3kUzqvj

Comments • 41

  • @usuallydopesvsc · 2 months ago +7

    It's gonna be so funny watching people try to art direct AI video, changing one word of a prompt and watching the entire video change lmao

    • @JoshuaMKerr · 2 months ago +1

      Haha...yeah not yet at least. I tried and ...no

    • @TheNepos · 13 days ago

      I'm working on that now; it's doable and fairly stable, it just takes a few extra steps.

  • @ROBOTRIX_eu · 2 months ago +4

    For someone, the perfect output may be exactly what came out, but I'd guess more detailed input is needed for what you'd actually want. I would try a different input, something like:
    --Ambient: space, in orbit between the Moon and Mars
    --Action: battle between spaceships like those in "Star Wars" and space battleships like those in "Star Trek"
    --Shot using 3 cameras with Panavision 70 lenses, with random close-ups, editing, cinematic (and other stuff you know better than I do, for sure)
    --Main actor: a "Brad Pitt" lookalike inside a cockpit, piloting and fighting with one of the smaller battleships of "Darth Vader", with 2 dedicated cameras with Panavision 70 lenses (one inside the cockpit filming his face and body, and one exterior medium close-up filming his intense, scared facial expressions)
    --Individual shot duration per camera before cutting to another camera: 3 seconds
    --Total video duration: 30 seconds
    --Keywords and sentences to be used as the global movie theme/story guide:
    --cinematic,
    --teal and orange predominant colorization,
    --lasers used as spaceship weapons,
    --blue lasers used by the "Star Trek" warships,
    --red lasers used by the "Star Wars" battleships,
    --green lasers used by the main character/actor in the "Darth Vader" space battleship,
    --freedom for the AI model to generate its own artistic ideas, like effects, anamorphic lenses, and surprises showcasing Runway Gen-3's most used and least explored potential, during the last 10 seconds of the movie, with free range but using the inputs given before
    --in the last 10 seconds of the movie, the AI model is allowed to go crazy, exploring new ideas and concepts for the movie ending
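A field-style prompt like the one above is essentially a structured config. As an illustration only, the labelled fields could be assembled into a single prompt string in Python; the field names and values here are made up for the sketch and are not part of any real Runway API or documented prompt schema:

```python
# Sketch: joining labelled prompt fields (as in the comment above) into
# one text-to-video prompt string. Field names/values are illustrative.
fields = {
    "Ambient": "space, in orbit between the Moon and Mars",
    "Action": "battle between Star Wars-style fighters and Star Trek-style battleships",
    "Camera": "Panavision 70 lenses, random close-ups, cinematic editing",
    "Look": "teal and orange predominant colorization",
}

def build_prompt(fields: dict) -> str:
    """Join labelled fields into a single comma-separated prompt line."""
    return ", ".join(f"{key}: {value}" for key, value in fields.items())

print(build_prompt(fields))
```

Whether a model actually honors labelled fields like this is an open question; as other commenters note, changing one field can still alter the whole clip.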

  • @MiDaY_i · 2 months ago +3

    yea the forest and bear part took me out lmfao

    • @JoshuaMKerr · 2 months ago

      Haha yeah...seeing that was an actual treat

  • @IkeWaghSF · 2 months ago +5

    Simply put, it's a completely different artistic experience. As someone who enjoys making short films with generative AI, I genuinely appreciate your perspective and traditional skills. Even though you joked about being a novice, your knowledge and skillset position you to create truly amazing and innovative work in the GenAI space. It's really just the current limitations of the tools that are holding you back. For example, once you have control over angle, lighting, lens, and camera, anything you dream up will be possible, even if it's just for previz purposes and not the final cut. Text-to-Video tools are fun, but most people making GenAI videos start with Image-2-Video (and then use tools to animate from there).
    I love your channel! I hope to see some videos in the future where you mix GenAI with Unreal to create some really cool stuff :)

    • @JoshuaMKerr · 2 months ago +2

      I'm quite interested in the hybrid video-to-video idea from Gen-1: filming some shots in real life or using renders from Unreal, then using gen AI to either add new elements or bring extra realism to the renders...default cube the movie. Glad you're enjoying the videos.

  • @High-Tech-Geek · 2 months ago

    Most current AI films (from festivals, ads, top rankings, etc) are generated using scenes that were cherry picked from 100 rolls and then pieced together into some kind of story. Usually there's a lot of post production with traditional tools too. It's still very difficult (almost impossible) and time consuming to create a story and then try to generate video to fit. Character consistency is a huge problem. And directing action within a scene is simply a crap shoot.
    Many use horror, dark scenes, fog, rain, etc. to mask the inconsistencies. Or they use generic characters, like a man with a balloon head or an animal that generally looks similar from shot to shot (if you don't look too close). They are all working on it though... I still think it's a few years away. And the costs are already moving well beyond any kind of indie artist, solo, hobbyist, student, etc. being able to generate enough quantity to get anything useful.

  • @Ai-Art-42 · 2 months ago

    I tried the tool straight away because the examples look fantastic. The results were more than disappointing. At first I thought it was the prompts. Then I tried a few original prompts from the website. Unfortunately, they rarely produced the same result. They really need to improve that.

  • @marcfruchtman9473 · 2 months ago

    Yeah, I really don't know how this is going to pan out for the world of video entertainment. It will certainly open a LOT of doors for storytelling, but it may also completely disrupt the industry as a whole.

  • @stickfigureanimations7530 · 1 month ago

    There's another new feature in Runway Gen-3 Alpha: image to video.

  • @matcoop · 2 months ago

    Take VP, especially at the lower end of the budget scale: green screen, digital video on iPhones, Unreal Engine, cheap mocap, free editing software like DaVinci. Add in AI audio/music generation and you have a full suite of tools that gives anyone access to make anything. YouTube even distributes it for you and pays you if the results are popular. You still need a good script and good storytelling skills, and that's something AI can't do yet, but scriptwriting has been devalued for years anyway, especially by Hollywood. It's still the most vital component. But I think by the end of this year (2024), maybe early next year, this stuff will be at a level (especially Runway) where anyone with the desire will be able to match Hollywood on production quality. Dune was mentioned earlier; I'm sure it will be good enough to generate that level of imagery with consistency. And that's one person hitting a prompt, and eventually enough prompts to create a movie that will visually rival the Hollywood version. You still need the storytelling skills. It's an elite business; the tech might change, but the people who are best at it are the best at delivering story.

  • @IamSH1VA · 2 months ago

    I think these current tools are really good for previs/storyboarding, stock footage, background footage, etc.
    The major issues with these current AI models are their random nature (more like hallucination or dreaming) and *lack of control over the output; even making minor changes is very hard*.
    Sure, you can use inpainting, modify the prompt, or use a multi-model approach to improve control over the output, but it is still far too limiting.
    This will certainly improve with time...

    • @JoshuaMKerr · 2 months ago

      Certainly agree. Although the dreamy aesthetic might lend itself to very specific visions. But otherwise, it's going to be a waiting game for production quality. It's great for idea generation, but it's still a bit difficult for testing ideas because prompts are so wildly difficult to control.

  • @brettcameratraveler · 2 months ago

    AI videos show an understanding of the laws of physics (usually), which suggests a 3D model exists implicitly in the algorithm. OTOY is working on a way of extracting that data to give us the 3D modeled scene.

    • @JoshuaMKerr · 2 months ago

      That's very interesting. Good to know OTOY are on the case.

  • @voodoochild420ai · 2 months ago

    cool vid, the videos keep getting better by the month

  • @gu9838 · 2 months ago

    I'm glad more YouTubers are showcasing it; it's an amazing tool!

  • @CGToonStudio · 9 days ago

    Hi Owen sir
    I'm working on a simple animation project in Unreal Engine 5.4. It's a 10-15 minute documentary-style video for YouTube in 1080p, featuring MetaHumans and city life scenes. I'm sourcing assets from Sketchfab and will do color grading in DaVinci Resolve 19.
    Since I'm not creating games, I'm focused on keeping Unreal's size small and optimizing performance for a smoother viewport and faster rendering. Could you make a video covering all possible ways to optimize Unreal Engine for animation projects like mine? It would be incredibly helpful to learn how to manage MetaHumans, streamline rendering, and keep the project lightweight.
    Thanks so much in advance.

  • @metternich05 · 2 months ago

    I'll check back in a couple years. It might be promising technology but has a long way to go to be production ready. Certainly not worth the price right now, I'd say it's pre-alpha.

  • @davidnappyhoose204 · 2 months ago

    Not too impressed with the results yet except for that bear clip. I don't know how it is where you live but here, we see flying bears all the time.

    • @JoshuaMKerr · 2 months ago

      Oh that was just real footage haha

  • @AlessandroShizo · 2 months ago +1

    Try image to video on Gen-3... different results.

    • @JoshuaMKerr · 2 months ago

      I thought that was only available on Gen1...Consider me interested

    • @High-Tech-Geek · 2 months ago

      @@JoshuaMKerr That's video to video, Joshua. Image to video Gen 3 is not a thing... yet. You can do it for Runway Gen 2, but it's not great. There are better tools available for image to video, like Kling (from China) or Luma Dream Machine.

  • @trekull- · 2 months ago

    Seems like AI tools will significantly lower the entry point into filmmaking, at least on the indie side. They will give the ability to actually focus on the substance, on the art of storytelling, rather than figuring out the technical implementations. So maybe we could see some interesting projects from books and manga come to life. But I'm still skeptical about whether it will be possible to shoot a traditional-looking film or animation using just AI text generation. Can't imagine a Dune or Lord of the Rings level movie being made with this.

    • @JoshuaMKerr · 2 months ago

      Maybe not yet, but as I say, the only factor is time. Generative AI is big business right now, so there's no shortage of funding and a lot of financial motivation to keep improving the models. Also, 10 seconds is a pretty good length to generate, especially given the decreasing average shot length of a movie; we just need a bit more consistency in the results and we're off to the races.
      I love your point about lowering the barrier to entry, totally agree and I'm here for it.

  • @lukascowey1423 · 1 month ago

    Copyright strike from the Cocaine Bear....

  • @HansJrgenFurfjord · 2 months ago

    I estimate that within a maximum of five years, we're gonna have the first good Star Wars movie for adults in over forty years, and the one who'll make it is a guy nobody's heard of. He won't be able to charge anyone for it, but he'll never have to work a day again in his life with all the money he'll make from Patreon. Disney will then have to decide if they should hire him or if they should instead go on not making money on ideological cringe bombs. They'll of course choose the latter, and CNN will make up a story that the Star Wars guy and all his fans hate women.

  • @Citizen_one · 2 months ago

    It's sad when you can see past the ghosting and artifacts to where the model is ripping its reference from. Like your climber prompt: it was clearly Spider-Man for about half a frame or so. And that spaceship couldn't have looked more similar to the majority of open-world spaceship games out there.

  • @plotcoalition · 2 months ago

    It's this sort of thing that continues to convince me that generative AI is just the R&D for how AI will influence technology and software tools in the future.

  • @bloohaus8670 · 2 months ago +1

    It all looks the same.

    • @JoshuaMKerr · 2 months ago

      In what sense the same?

    • @bloohaus8670 · 2 months ago +1

      @@JoshuaMKerr Something about the way it's being lit; everything has rim lighting on it and high contrast, etc. I'm just a hater though, don't mind me :)

  • @kyhxx · 2 months ago

    . hm- usfl vid. aware .f ths nw-

    • @JoshuaMKerr · 2 months ago

      i. luf- ths cmnt