AI Generated Videos Are Getting Out of Hand

  • Published Dec 21, 2024

Comments • 209

  • @bycloudAI
    @bycloudAI  1 year ago +64

    This video took too many revisions, hope you enjoy it, lmk if I missed anything too!
    PS: Break up with basic browsers! Get Opera GX here: operagx.gg/bycloud3

    • @ikcikor3670
      @ikcikor3670 1 year ago

      I think you kept using "predecessor" as "successor"

    • @kusog3
      @kusog3 1 year ago +3

      let me add regarding Roop, SimSwap, etc... all of them kinda work using the same underlying model, which is from the insightface project. While the successor of Roop might be able to continue the project, sadly the only version actually available to the public is the 128-resolution model. There is a higher-resolution model, but the developers refused to release it for a variety of reasons.

    • @tobilpcraft1486
      @tobilpcraft1486 1 year ago

      @@kusog3 tbh there's no real point in releasing the higher-quality model since you already get better results with face upscalers like CodeFormer

    • @csehszlovakze
      @csehszlovakze 1 year ago +2

      Shitty Chinese browser; the spiritual successor is Vivaldi.

    • @DudeSoWin
      @DudeSoWin 1 year ago +1

      Why can't you guide it with text? Insert a line break or pipe in a 2nd prompt.

  • @anywallsocket
    @anywallsocket 1 year ago +179

    imagine falling in love with a celebrity of the future only for the system to glitch for one second and reveal the true monstrosity they are LOL

    • @DeniSaputta
      @DeniSaputta 1 year ago +10

      fake vtuber

    • @beansbeans96
      @beansbeans96 1 year ago +61

      imagine falling in love with a celebrity.

    • @RiskyDramaUploads
      @RiskyDramaUploads 1 year ago +9

      Shrek

    • @maheshraju2044
      @maheshraju2044 1 year ago +2

      Ostrich

    • @RiskyDramaUploads
      @RiskyDramaUploads 1 year ago +1

      @@maheshraju2044 What is ostrich?
      For those for whom looks are everything, things like this have happened before. Even young, good-looking Chinese streamers sometimes use software that changes their face shape, which glitches when their face moves off screen. And then there's this: "Chinese Vlogger Gets Exposed As A 58-Year-Old Woman After Her Beauty Filter Turns Off Mid-Stream"
      "It was all revealed when a beautifying video filter glitched mid-stream, exposing her real face. After the incident, it came to light that the streamer is actually a 58-year-old lady who just really enjoys playing Apex Legends."

  • @luciengrondin5802
    @luciengrondin5802 1 year ago +47

    It seems to me that a new representation of video, one that would imply temporal consistency, is needed. That's why I think that, of all these methods, the one based on "content deformation fields", CoDeF, is the most promising.

    • @anywallsocket
      @anywallsocket 1 year ago

      Naw you need to scrap basic NNs and use liquid nets instead

    • @shiccup
      @shiccup 1 year ago

      Lol i think i just figured out a great workflow for temporal consistency

    • @shiccup
      @shiccup 1 year ago +2

      For vid2vid I might make a tutorial, but essentially you just use the EbSynth utility + img2img, then use the reference ControlNet and TemporalNet, then put it into EbSynth, and after that throw it into Flowframes (the order of operations is sketched below).
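A rough sketch of that recipe's order of operations as Python pseudocode. Every helper below is hypothetical: EbSynth, the reference ControlNet + TemporalNet img2img pass, and Flowframes are separate tools (mostly GUIs), so the stubs only show what feeds into what, not real library calls.

```python
# Hypothetical sketch of the vid2vid recipe above. None of these helpers are
# real library calls; each one stands in for a separate tool in the pipeline.

def extract_keyframes(frames, every_n=20):
    """EbSynth-utility step: pick sparse keyframes to restyle."""
    return frames[::every_n]

def stylize_keyframe(frame, reference=None, temporal_hint=None):
    """img2img step (stub). A reference ControlNet would keep the style close
    to the previous stylized keyframe; TemporalNet would condition on the
    previous output frame to cut flicker."""
    return frame

def ebsynth_propagate(frames, keyframes, styled_keyframes):
    """EbSynth step (stub): propagate each stylized keyframe across the
    in-between frames with patch-based matching."""
    return frames

def flowframes_interpolate(frames, target_fps=60):
    """Flowframes step (stub): optical-flow frame interpolation to smooth
    the final sequence."""
    return frames

def run_vid2vid(frames):
    keyframes = extract_keyframes(frames)
    styled, prev = [], None
    for kf in keyframes:
        prev = stylize_keyframe(kf, reference=prev, temporal_hint=prev)
        styled.append(prev)
    full = ebsynth_propagate(frames, keyframes, styled)
    return flowframes_interpolate(full, target_fps=60)
```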

  • @keenheat3335
    @keenheat3335 1 year ago +39

    glad you feel better and are making AI videos again. Saw that post and thought maybe you had given up on AI entirely. I don't think the content is the issue but the packaging (thumbnail, title, etc.). The sales and marketing parts of a video matter a lot for views, sometimes even more than the video content itself.

  • @Wooraah
    @Wooraah 1 year ago +74

    Great overview, thanks. I think we all need to bear in mind that so many of these techniques are at their very earliest stage before writing them off as terrible. Like dialup internet in 1998, the leaps being made are truly astounding, and it won't be long before these tools and techniques are being used consistently for commercial applications, for good or ill.

    • @EduRGB
      @EduRGB 1 year ago +8

      That profile picture tho, I have to re-install it again..."Your appointment to FEMA should be finalized within the week..."

    • @WrldOfVon
      @WrldOfVon 1 year ago +8

      Completely agreed, but for the most part, I'm seeing people praise these models more than hate them. The fact that they can produce such high quality outputs now compared to a year ago goes to show you how quickly things can change even in the next 6 months. I'm so excited! 😆

    • @3zzzTyle
      @3zzzTyle 1 year ago

      Deus Ex kept being right for 20 years and it'll be right about AI as well.

    • @thedementiapodcast
      @thedementiapodcast 1 year ago

      The problem is much bigger than 'it's brand new'. Think about how these models create ANYTHING, such as a coffee machine: they aren't 'selecting a coffee machine' from a list of what they understand a coffee machine to be, they Frankenstein their way from noise into what generally appears to be a coffee machine. Humans know what coffee machines generally look like, down to the brand, or what the soles of Converse shoes look like, or that a windbreaker has a zip going from top to bottom on the garment. AIs DO NOT. The solution is chaining LoRAs, but this is not just extraordinarily time consuming, it's impractical in the context of a scene containing as many objects as the average kitchen. Runway is working on partnering with brands right now to try to address this, but I think it will just turn into mandatory product placement.
      Then you have camera angles: anything 'extreme' (think David Lynch) cannot be done without LoRAs, even on single frames. But beyond that, if you wanted to make a frame of 'a man dressed elegantly at a coffee table', imagine how many thousands of tiny details you'd have to prompt vs grabbing a GH5 and filming. So people use Blender to create depth maps and attempt to generate based on that 'template' so at least things look coherent (a sketch of that depth-map workflow follows this thread).
      99.999999% of the stuff you see that looks 'impressive' is either vid2vid or pure sheer luck that 4-second sequences feel somehow connected to each other.
      TLDR: We're a very, very long way away from 'taking what's in your mind' and putting it in images. Right now it's a giant random-seed lottery, or the same old 'cyberpunk' look where people forgive that the 'futuristic helmet' looks nothing like what they understand a helmet to be.
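For readers curious what the Blender depth-map "template" workflow mentioned above looks like in practice, here is a minimal sketch using Hugging Face diffusers with a depth ControlNet. The checkpoint IDs are the commonly used public ones, and depth.png is assumed to be a depth pass rendered out of Blender beforehand; this is one plausible setup, not the commenter's exact pipeline.

```python
# Sketch of the Blender-depth-map -> diffusion "template" workflow mentioned
# above, using Hugging Face diffusers. Assumes a depth map rendered from a
# Blender scene has been saved as depth.png.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth = Image.open("depth.png").convert("RGB")  # the Blender-rendered "template"

# The depth map pins down layout and camera geometry, so the prompt only has
# to carry style and content; repeated runs keep the same composition.
image = pipe(
    "a man dressed elegantly at a coffee table, photorealistic",
    image=depth,
    num_inference_steps=30,
).images[0]
image.save("frame.png")
```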

  • @Xezem
    @Xezem 1 year ago +27

    There should be better AI for frame interpolation software like Flowframes and Topaz; this would be a huge help tbh

  • @aiadvantage
    @aiadvantage 1 year ago +18

    Absolutely loved this one. Great job!

  • @Gaven7r
    @Gaven7r 1 year ago +64

    I can't imagine how hard it must be to keep up with so much stuff going on lately lol

    • @David.L291
      @David.L291 1 year ago +1

      So what's going on then of late? LOL

    • @quarterpounderwithcheese3178
      @quarterpounderwithcheese3178 1 year ago

      A bunch of proprietary techno babble every AI startup has trademarked and pretty much *nobody* understands

  • @deemon710
    @deemon710 1 year ago +7

    Hey btw, thanks so much for these latest-in-AI videos. It really helps to stay informed on what's out there and to be able to spot fake stuff.

  • @MrJohnnyseven
    @MrJohnnyseven 1 year ago +3

    Wow after 30 years online what do we have...people watching crap AI videos that all look the same....

  • @rookandpawn
    @rookandpawn 1 year ago +7

    the amount of research and knowledge in this video is off the charts. ty for your efforts! subscribed, amazing content.

  • @passwordyeah729
    @passwordyeah729 1 year ago +4

    It's insane how fast AI is developing. This video is already slightly outdated... scary times we live in.

    • @danlock1
      @danlock1 11 months ago

      Why must you always reference sanity?

  • @Aizunyan
    @Aizunyan 1 year ago +1

    13:41 it's called rotoscoping, from the 1880s

  • @thedementiapodcast
    @thedementiapodcast 1 year ago +1

    What I've learned from using these tools almost daily is that the human brain becomes very quickly attuned to picking up the little details that betray AI generation.
    1. Start by looking at clothing. Coherence in clothing is currently near impossible. Jackets don't have zippers, have 10 buttons where there should be none, that kind of thing.
    2. Check objects in the background. I don't know anyone who bothers to create a LoRA for every single bg object, especially in bg scenes, so the coffee maker, the fridge, etc. are all going to look 'off brand'.
    3. Elements these tools aren't trained on are evidently missing: if you're a sneakerhead, you'll quickly spot that the Jordans have Converse soles, etc.
    4. Camera angles are all very boring. Anything dramatic with massive differences in proximity to the lens is going to need a LoRA.
    It's therefore no surprise Runway's current commercial strategy is to partner with brands to push specific objects in scenes, so the coffee machine will be a 'Nespresso' machine, but expect this to be abused to the max (it's basically mandatory product placement).
    I think we'll see 'AI videos', but this is the Precambrian stage of it. We need tools to create scenes that aren't random but reflect the artist's vision (currently light maps in Blender pushed to an img2img-type process are the best option).

  • @AjSmit1
    @AjSmit1 1 year ago +2

    the first @bycloud video i saw was about 'is AI gon steal our jerbs? prob not' and i've been watching ever since. i appreciate the work you do to keep the rest of us in the loop

  • @owencmyk
    @owencmyk 1 year ago +4

    What if you trained a diffusion model on videos by having it interpret them as, like... 3-dimensional pixel arrays, so you're basically extending its ability to generate images into the temporal dimension
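That is roughly how several research systems extend image diffusion to video: treat a clip as a single tensor with a time axis and give the denoiser layers that mix across frames. A minimal PyTorch sketch; the TemporalBlock module and its sizes are invented for illustration, not taken from any of the models in the video.

```python
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """Toy denoiser block that mixes information across frames.

    Video is handled as a 5-D tensor (batch, channels, time, height, width),
    so a 3-D convolution whose kernel spans the time axis literally extends
    an image model's 2-D receptive field into the temporal dimension.
    """
    def __init__(self, channels: int = 64):
        super().__init__()
        self.spatial = nn.Conv3d(channels, channels,
                                 kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(channels, channels,
                                  kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.spatial(x))      # per-frame spatial mixing
        return self.act(self.temporal(x))  # cross-frame mixing for consistency

# 2 clips, 64 channels, 16 frames, 32x32 latents
video = torch.randn(2, 64, 16, 32, 32)
out = TemporalBlock()(video)
print(out.shape)  # torch.Size([2, 64, 16, 32, 32])
```

Factored (1, 3, 3) / (3, 1, 1) kernels like these are cheaper than a full 3x3x3 convolution and are the usual way pretrained image models are retrofitted with temporal layers.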

  • @Entropy67
    @Entropy67 1 year ago +9

    What the hell? Combine a couple of these with generative AI playing as a DM for D&D and you might have something crazy. Anyone starting a project, let me know. Might be a very fun and open-ended game.

  • @TAREEBITHETERRIBLE
    @TAREEBITHETERRIBLE 1 year ago +2

    *_please keep watching_*

  • @AthenaKeatingThomas
    @AthenaKeatingThomas 1 year ago +1

    Wow, this was far more thorough than I expected it would be. Thanks for the information about HOW video generation works as well as the examples of some of the current tools!

  • @Chuck8541
    @Chuck8541 1 year ago +13

    Things are moving SO fast.
    Also, hey dude. Can you put together a playlist, or even a paid Udemy course, to get those of us who are noobs up to speed with how to use this stuff... maybe from a creator/consumer standpoint? I'd love to get into AI and create things, but it seems there are dozens of models and methods. I don't know where to start. I only understand like 30% of the technical words you use. haha

    • @David.L291
      @David.L291 1 year ago +1

      maybe you could see what works best for you

  • @alexcrowder1673
    @alexcrowder1673 10 months ago

    I like how at 3:08 he says "This one has the best generation quality" and then proceeds to show us the derpiest CGI lion I have EVER seen.

  • @shakaama
    @shakaama 1 year ago +1

    so which do i use?

  • @zrakonthekrakon494
    @zrakonthekrakon494 1 year ago +1

    So many options with so much nuance and customizability; I hope the best methods continue to evolve into the widely used tech of the future instead of being phased out.

  • @V_Robot
    @V_Robot 1 year ago +7

    "All these techniques are just here to assist ai video generation or just become a thirst trap"

  • @canyouread
    @canyouread 11 months ago

    I'm so glad I didn't skip the sponsor segment LOL

  • @Spamkromite
    @Spamkromite 1 year ago +1

    Not only out of hand. Most of the ecosystem of their databases was made from stolen footage and pictures from all over the internet across 5 years, even from private videos and anything you sent through messengers. Once that is found out, the owners of those sites will be sued into nothingness, especially when frames from movies and other copyrighted animated films are discovered. Like, we can't use those frames when we make our videos and upload them to YouTube, and we are banned from getting a single monetization. Why can these sites make cash with the same footage and double down by even using full movies? But that's just me thinking too hard 🤔

  • @Marcytheeditor
    @Marcytheeditor 3 months ago

    4:58 what is this video from? Why are there guys fighting? Can someone tell me?

  • @Donxzy
    @Donxzy 1 year ago +1

    As a former hobbyist with SD and Photoshop, this video cracks me up, and it's accurate indeed

  • @francesco9703
    @francesco9703 1 year ago +1

    I need the sauce for the Kita gif at 6:14, it looks so clean

  • @ceticx
    @ceticx 1 year ago +2

    Thought I didn't care about this at all, but you kept me for all 20 minutes

  • @sotasearcher
    @sotasearcher 1 year ago +2

    you’re the MVP of this, keep it up 👏👏

  • @DG123z
    @DG123z 1 year ago +3

    Once it uses 3D modeling instead of videos built solely from images, the movement will become a lot more realistic.

    • @robertceron9056
      @robertceron9056 1 year ago +2

      CSM and Imagine 3D do it, but Nvidia's Picasso AI will have a better version

  • @walterhugolopezpinaya5641
    @walterhugolopezpinaya5641 1 year ago

    Thanks for the great video on the current landscape of generative video methods! ^^

  • @ted_van_loon
    @ted_van_loon 1 year ago

    you should also add how heavy they are to run and how well they work with different types of hardware, for example whether you can run them on a normal CPU or need a specific type of GPU or a specific type of dedicated hardware, since it affects a lot of how and who can use it.
    Many models at least used to only work with CUDA, requiring specific, hard-to-get, and insanely expensive scalper Nvidia GPUs; a model which can also work on normal consumer CPUs makes it available to basically everyone.
    It should also be known how well they are optimized for certain things, so how fast they are with a common type of CPU, GPU, DPU, TPU, FPGA, or whatever is commonly used or should be used. FPGAs are becoming the future, judging by how companies have started rapidly price-fixing them and greatly increasing even the prices of the old, very cheap models, despite FPGAs still being quite actively developed but no longer shown to the public, which means there is a decent or big chance they plan to release them as the big next step sometime soon.
    Also, in AI, keeping the weights and pre-generating the main objects of interest and their info (so kind of like generating them as a simple 3D model to reference, or like a detailed model reference prompt) would mean all frames can easily, efficiently, and effectively be kept similar, or even the same when detailed enough.

  • @darii3523
    @darii3523 1 year ago +15

    AI is growing bigger and bigger

  • @angamaitesangahyando685
    @angamaitesangahyando685 1 year ago +2

    AI waifus in 2025 is my only cope in life.
    - Adûnâi

  • @chynabad9804
    @chynabad9804 1 year ago +1

    Thank you, nice snapshot of the current capabilities.

  • @Arewethereyet69
    @Arewethereyet69 1 year ago

    thanks for the video. great channel by the way. just subscribed

  • @JDST
    @JDST 1 year ago +2

    "thank you, ice cream so good. yes yes yes gang gang gang"
    Such inspiring words. 😢😢😢

  • @e8error600
    @e8error600 1 year ago +4

    It was cool shit at first, now it's getting scary...

    • @patrickfoxchild2608
      @patrickfoxchild2608 1 year ago +5

      It's already scary. This is just what the public has produced.

    • @tylerwalker492
      @tylerwalker492 1 year ago +4

      @@patrickfoxchild2608 And we'll never find out exactly what governments will produce!

  • @raspberrymann
    @raspberrymann 1 year ago +1

    I want the source of this part 4:57

  • @yalmeme
    @yalmeme 1 year ago +1

    Hi guys. Can any of the currently existing tools do img2img on real-time video, so I can use it in streaming?

  • @Kisai_Yuki
    @Kisai_Yuki 1 year ago +2

    IMO, a lot of these techniques are... poor, but not for the reason you'd expect. The reason is that the underlying hardware needed to get a good result is out of reach. So starting with Stable Diffusion itself: yes, it can run on a smaller GPU, but the input training data was already low-resolution (512x512) and it is incapable of generating anything else. That size was picked so it would fit on existing hardware. As soon as you tell SD to generate something bigger, the result is not a "higher resolution" result, but rather a different image with more chaotic data in the same palette.
    What is needed for good results is datasets that start out at the resolution of the output. So 2K, 4K, 8K, and that means the GPU video memory has a substantial increase each time. To get an 8K image, you need 256 times as much VRAM as you would for 512x512, so if you needed 8GB for 512, you need 2TB for 8K (the arithmetic is sketched after this thread). That's not even possible on an Nvidia DGX (which has 320GB). So given present hardware, a 1K image would need a 32GB device.
    What I think is going to have to come out is a tile-based model that renders 512x512 portions of each image and stitches them together, which means figuring out how to tell the AI that it's part of the same frame.

    • @phizc
      @phizc 1 year ago +2

      Stable Diffusion XL is trained on 1-megapixel images, and the default generation size is 1024x1024. It can run on 8GB GPUs, AFAIK. It's trained on 1024x1024 images and some other specific resolutions.
      For SD 1.5 models you can upscale with ControlNet tile. It splits the image into a grid and adds details to each tile, then combines them back into a seamless image.
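The scaling claim in the parent comment is easy to check. A back-of-envelope sketch, assuming (as the commenter does) that VRAM grows linearly with pixel count; the tile counts also show why a tiled 512x512 approach is attractive.

```python
# Back-of-envelope check of the VRAM scaling claim above. The linear-in-
# pixel-count memory model is the commenter's assumption, not a measurement.
BASE_RES, BASE_VRAM_GB = 512, 8

for res in (1024, 2048, 4096, 8192):      # 1K, 2K, 4K, 8K
    scale = (res / BASE_RES) ** 2         # pixel-count ratio vs 512x512
    tiles = int(scale)                    # 512x512 tiles needed to cover the frame
    print(f"{res}px: {scale:.0f}x pixels -> ~{BASE_VRAM_GB * scale:.0f} GB, "
          f"or {tiles} tiles of 512x512")
```

Running it gives 4x and ~32 GB at 1K, and 256x and ~2048 GB (2TB) at 8K, matching the figures in the comment.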

  • @nefwaenre
    @nefwaenre 1 year ago +2

    i have been using SimSwap for 2 years now. i use it mainly to animate my characters in real life. i am waiting for the day when i can inpaint a consistent video. E.g., change the shirt colour of my subject in the video.

    • @finallyhaveausername5080
      @finallyhaveausername5080 1 year ago +3

      If you're just looking to animate a character based on real-life movements then you could try something like EbSynth? You inpaint one or two keyframes per type of shot and it generates the rest.

    • @nefwaenre
      @nefwaenre 1 year ago

      @@finallyhaveausername5080 Thanks for the info. It's not just a character; i have these faces that i've created (there's only a few) from my paintings and dolls, and i only need their faces to be there, which is why i use SimSwap. Cuz i can have tons of videos with just these few characters this way.
      But i can't really post the videos online cuz then people might say it's stolen content, even though this is an absolutely personal project and i have no intention of sharing it on a money-based platform.
      So i just want to change the shirt colours and maybe the bg in these SimSwap videos so that i can post them online.

    • @SatriaTheFlash
      @SatriaTheFlash 1 year ago +1

      You should change to FaceFusion right now, it has better results than SimSwap

  • @guy_withglasses
    @guy_withglasses 1 year ago +3

    bro didn't link neuron activation

  • @b.delacroix7592
    @b.delacroix7592 1 year ago +2

    No way any of this will be used for evil. Nope.

    • @theyoten1613
      @theyoten1613 1 year ago

      Every technology was used for evil. That's a non-argument.

  • @obboAR
    @obboAR 1 year ago +1

    You're my go-to AI image-to-image-to-video, video-style, text-to-GAN-video, multi-frame-to-image generator YouTuber.

  • @SianaGearz
    @SianaGearz 1 year ago

    I have recovered a "lost" music video, as in uploads exist but they're so low quality that you can't even tell what's happening due to a deinterlacing error; they're all from like 2006, and from what i know all the official data masters etc. have been lost to a fire. The data i have recovered is a slightly blocky and noisy 4 Mbit MPEG-2 made from what looks like a painfully well-worn Betacam at a TV studio when they were changing equipment. I'm trying to make it presentable, but so far upscaling has generated some creepy facial frames. Is there an AI workflow that i can feed a handful of high-resolution images of the performer's face to have it restore them? Maybe first a pass that makes faces less bad even if inconsistent, and then reintroduce frame-to-frame coherence with SimSwap or the like?

  • @PrintThatThing
    @PrintThatThing 1 year ago

    Great video!!! Very helpful and fun to watch. 🎉😊

  • @VJP8464
    @VJP8464 1 year ago +2

    We’re living in a truly unique age in human history: there's the time of human history spanning a great many years before advanced technology, and in the future will be the time of the virtual being indistinguishable from reality, with everything digital and supremely unnatural, which will span the time from several years from now to the end of humanity.
    We’re the only humans who will ever get to experience the transitional phase between those two time periods, which will span only 100-200 years of the tens of thousands we have been/will be on this earth.
    We’re the guinea pigs of the technological future.
    What we do these days is going to be critically important to the fate of people in the future. I sincerely hope we use our ‘trial run’ position for the good of everyone, especially since it’s too easy to use technology for malice.

  • @aspergale9836
    @aspergale9836 1 year ago +1

    2:50 - This shot is also AI-generated?

  • @zikwin
    @zikwin 1 year ago

    I tested almost all the mentioned techniques over the past few months, and nothing is missing as far as I know. Great video, it sums everything up, I like it

  • @tja9212
    @tja9212 1 year ago

    might the fuzziness (3:40) result from the model being trained on classic movie material, which normally has some amount of grain?
    photographs normally don't have visible grain, but in movies you pretty much couldn't avoid it until recent digital developments...

  • @TDMIdaho
    @TDMIdaho 1 year ago

    How did you not include Deforum? It's the best.

  • @eloujtimereaver4504
    @eloujtimereaver4504 1 year ago

    Can we have links to some of your examples?
    I have not seen all of them, and cannot find some of them.

  • @luciusblackheart
    @luciusblackheart 1 year ago

    Thank you so much for this video. So informative! Keep up the great work!

  • @dan323609
    @dan323609 1 year ago +1

    That day will come when I try using Nuke CopyCat with SD. Btw I made some tests and it was really not bad

  • @jakelionlight3936
    @jakelionlight3936 1 year ago

    the introduction of quantum computing solves any divergence; if all possible routes are taken at the same time, one route will be as close to perfect as possible... it will be indistinguishable from reality with zero lag... i imagine this is already being done if you're in the mile high club... we are definitely in the dark about a lot of things imo.

    • @obsidianjane4413
      @obsidianjane4413 1 year ago

      Someone handwaved "quantum something". Take a drink.

  • @raphaelbussiere
    @raphaelbussiere 1 year ago +1

    Great video! Perfect to share with friends :)

  • @Rscapeextreme447
    @Rscapeextreme447 1 year ago

    I think we should call category 3 “corridor video creation”

  • @Chuck8541
    @Chuck8541 1 year ago

    lmao at the guy standing backwards on the surfboard.

  • @kamillatocha
    @kamillatocha 1 year ago +1

    it all boils down to who will make the first AI video porn
    and soon porn stars will go on strike too

  • @Benwager12
    @Benwager12 1 year ago

    3:15 I had to watch the video 3 times to hear "artistic"

  • @darkezowsky
    @darkezowsky 1 year ago +1

    roop is dead, but roop-unleashed is alive and even better ;)

  • @dochotwheels2021
    @dochotwheels2021 1 year ago +1

    I am trying to find an image-to-image generator that can turn my still pictures into cartoons/watercolors/low-poly/etc. Do you know of any program that does a good job? I use Midjourney but it never does it correctly; it's usually something totally different.

    • @finallyhaveausername5080
      @finallyhaveausername5080 1 year ago

      Try searching for style transfer programs rather than image2image generation; they tend to be closer to lossless.

  • @MihajloVEnnovation
    @MihajloVEnnovation 1 year ago

    What are your opinions on Kaiber?

  • @lorenzoiotti
    @lorenzoiotti 1 year ago

    Is there something like SadTalker for videos? Wav2Lip worked on videos too, but from what I’ve seen SadTalker only does images

  • @DigitalForest0
    @DigitalForest0 1 year ago

    Thank you! I personally got the message by 0:38 that this video is not for me, so I didn't waste my time. THANK YOU!

  • @mat_deuh
    @mat_deuh 1 year ago

    Thank you for this review :)

  • @johnjohansson
    @johnjohansson 1 year ago

    What about zeroscope v3?

  • @QuoVadistis
    @QuoVadistis 1 year ago

    I want to generate a video scene with 5 characters in a room who speak, using photos to faceswap, and I want to provide the text for the speech. It is only short clips for fun and learning, but I want good quality. I think FullJourney can do it all (not sure), but what are the best tools to use right now? Many thanks.

  • @Tarbard
    @Tarbard 1 year ago +1

    This video really activated my neurons.

  • @issay2594
    @issay2594 1 year ago

    this thing doesn't progress well because it goes the wrong way. it's like pushing on the wrong side of a lever. to make an analogy, it's like teaching a human to dream with no hallucinations, to see a totally coherent movie in your sleep with no strange things happening. once there is step-by-step reasoning + a firm understanding of physical reality (what is possible and what is not), these things will start making hyperrealistic movies all of a sudden, just the way they do pics now. the same approach that was used with pics won't work with movies, simply because the blind associations that image-generating neural networks use to good effect after a crazy amount of training would require magnitudes more training for movies, simply because way more can happen over each fraction of a second. it's like adding several more dimensions to the task's complexity. just wait for the reasoning and it will happen overnight.

  • @SylvesterAshcroft88
    @SylvesterAshcroft88 1 year ago

    The face morphing is so freaky in AI-generated videos; also, that isn't Margot Robbie! ;)

  • @Shajirr_
    @Shajirr_ 1 year ago

    The towel guy is the future of vtubing

  • @moahammad1mohammad
    @moahammad1mohammad 1 year ago

    Slightly disappointed by how many people fake the results of these AIs to make it seem like it was done entirely with simple first-pass prompting

  • @ellen4956
    @ellen4956 1 year ago

    I wondered if a YouTube channel called "Curious Being" was using AI for the presenter, because she doesn't look natural to me. My daughter said she thinks it's a real person. I don't. Can someone check it out and let me know? It's always about history and pre-history, but a young woman stands in a room with either a blank wall or a wall with a painting next to her.

  • @Uthael_Kileanea
    @Uthael_Kileanea 1 year ago

    10:48 - I could hear:
    Dame da ne
    Dame yo
    Dame na no yo

  • @humanharddrive1
    @humanharddrive1 1 year ago

    the ice cream so good part gave me whiplash

  • @joaodecarvalho7012
    @joaodecarvalho7012 1 year ago

    Things are about to get weird.

  • @blackterminal
    @blackterminal 1 year ago

    Would like AI avatars to not loop hand movements but do more random movements.

  • @alonsomartinez9588
    @alonsomartinez9588 1 year ago

    There is also Phenaki!

  • @NeXaSLvL
    @NeXaSLvL 1 year ago +1

    it's funny: technology used to help us create art, now we're using tools to assist the AI's video generation

    • @obsidianjane4413
      @obsidianjane4413 1 year ago +2

      The AI is still just the tool. For now.

  • @andrewdunbar828
    @andrewdunbar828 1 year ago

    The prompts are interesting too, aren't they?

  • @gaker19sc
    @gaker19sc 1 year ago

    7:18 Dude is NOT fine

    • @Iqbal1808
      @Iqbal1808 1 year ago

      bro ragdolled in the source engine

  • @GENKI_INU
    @GENKI_INU 1 year ago

    How is EbSynth still relevant these days, in this landscape?
    Hasn't it been out forever now?

  • @GameSmilexD
    @GameSmilexD 1 year ago

    "that is not a legitimate hoverboard (it's got wheeeeels)"

  • @rem7502
    @rem7502 1 year ago

    1:34 bro wtf was that sponsorship lmao😵

  • @Dex_1M
    @Dex_1M 1 year ago

    wait, can't you use Pika Labs to generate the woman with hair blowing, then use deepfakes to fix the face? and then loop it, and with some manual basic video editing you've got an animation to put on your song?

  • @JazevoAudiosurf
    @JazevoAudiosurf 1 year ago +1

    wow i actually learned something

  • @ummerfarooq5383
    @ummerfarooq5383 1 year ago

    That anime one is where we would like to see a Ni no Kuni anime remaster

  • @patrickfoxchild2608
    @patrickfoxchild2608 1 year ago +1

    hold the eff up, did anyone notice the Bud Light commercial it made had only women drinking it?

    • @tylerwalker492
      @tylerwalker492 1 year ago +2

      Bud Light knows its new target demographic lmao

  • @pointandshootvideo
    @pointandshootvideo 1 year ago +1

    Thanks for this video! The current state of the art is very disappointing. I'm wondering if creating a 3D ControlNet skeleton and then generating 30 fps images using Reallusion would move the technology forward. Thoughts?

  • @efxnews4776
    @efxnews4776 1 year ago

    AI-generated vids have something off about them; if you focus on some specific areas, you will notice the patterns of movement are all wrong.

  • @csmit195-1
    @csmit195-1 1 year ago

    Hi, AI expert here. The difference between the 3 videos:
    1. This is done with a text-to-video diffusion model, most likely Gen-2 by Runway ML.
    2. Seems to be a face-swap AI; they've gotten really good in the last 6 months.
    3. The last video is an EbSynth frame-by-frame text2image generation. These have been showing up more and more over the last 6 months, primarily on TikTok. New tools come out daily to improve flickering; it's getting pretty cool. Although the first video (Gen-2) is the most important one of the three and will eventually have long-lasting implications for how we consume content.
    Aight, that's enough pausing for now, time to watch.

    • @csmit195-1
      @csmit195-1 1 year ago

      I only know because about 12 months ago I got really into AI and haven't fallen off the train yet; I keep up with near-daily news and updates. Played with most of them.

  • @The3kproduction
    @The3kproduction 1 year ago

    that's next-level catfish lol

  • @brunnokamei9623
    @brunnokamei9623 1 year ago

    I give it two years at best for AI-generated content to flood the Internet, with only very few people making some profit out of it.

  • @pameliaSofiA
    @pameliaSofiA 1 year ago

    This world is so fascinating and new to me. Can anyone tell me the easiest way (for a newbie) to get great results animating myself talking (starting with a real video)? I don't care about it being an accurate representation of me, as I will put myself in different settings; I just want the face to look good and move as a face would. Thanks in advance for your direction!!

  • @sneedtube
    @sneedtube 1 year ago

    I didn't quite get if there's a method to deepfake a live stream but I'm kinda re tar ded so I should probably give the video another rewatch

  • @MegaDixen
    @MegaDixen 1 year ago

    can't wait to get a new graphics card to play with this