AI Generated Videos Are Getting Out of Hand

  • Published on 15 Jun 2024
  • 🕹️ Your browser is holding you back. Level up here: operagx.gg/bycloud3 Sponsored by Opera GX!
    This video is a mega-compilation of all the current best AI tech related to video generation and manipulation. From deepfakes to Stable Diffusion, it's a pretty wild ride showing how far AI tech has come.
    All Research/Apps Sources
    [CogVideo] github.com/THUDM/CogVideo
    [Zeroscope V2] huggingface.co/spaces/hysts/z...
    [Modelscope] modelscope.cn/models/damo/tex...
    [Runway Gen-1 & Gen-2] app.runwayml.com/
    [Pika Labs] www.pika.art/
    [AnimateDiff] animatediff.github.io/
    [DeepFaceLab] www.deepfakevfx.com/downloads...
    [DeepFaceLive] github.com/iperov/DeepFaceLive
    [SimSwap] github.com/neuralchen/SimSwap
    [Roop] github.com/s0md3v/roop
    [Face Fusion] github.com/facefusion/facefusion
    [FOM Model] aliaksandrsiarohin.github.io/...
    [DaGAN] harlanhong.github.io/publicat...
    [Wav2Lip] github.com/Rudrabha/Wav2Lip
    [SadTalker] github.com/OpenTalker/SadTalk...
    [HeyGen] www.heygen.com/
    [Ebsynth] ebsynth.com/
    [TemporalNet] huggingface.co/CiaraRowles/Te...
    [Tokyo Jab Method] / tips_for_temporal_stab...
    [CoDeF] qiuyu96.github.io/CoDeF/
    [TokenFlow] diffusion-tokenflow.github.io/
    [Warp Diffusion] / sxela
    [DeForum] deforum.github.io/
    All Result Related Sources (in order of appearance)
    [Google Doc] docs.google.com/document/d/1e...
    This video is supported by the kind Patrons & YouTube Members:
    🙏Andrew Lescelius, alex j, Chris LeDoux, Alex Maurice, Miguilim, Deagan, FiFaŁ, Daddy Wen, Tony Jimenez, Panther Modern, Jake Disco, Demilson Quintao, Shuhong Chen, Hongbo Men, happi nyuu nyaa, Carol Lo, Mose Sakashita, Miguel, Bandera, Gennaro Schiano, gunwoo, Ravid Freedman, Mert Seftali, Mrityunjay, Richárd Nagyfi, Timo Steiner, Henrik G Sundt, projectAnthony, Brigham Hall, Kyle Hudson, Kalila, Jef Come, Jvari Williams, Tien Tien, BIll Mangrum, owned, Janne Kytölä
    [Discord] / discord
    [Twitter] / bycloudai
    [Patreon] / bycloud
    [Music 1] How Convenient on SlipStream
    [Music 2] massobeats - breeze
    [Music 3] massobeats - bloom
    [Music 4] massobeats - lotus
    [Profile & Banner Art] / pygm7
    [Video Editor] @askejm
    0:00 Intro
    0:35 Sponsor
    2:13 Category 1: text-to-video
    8:44 Category 2: Media Manipulation
    12:27 Category 3: Img2Img Videos
    20:01 Outro
  • Science & Technology

Comments • 209

  • @bycloudAI  9 months ago +64

    This video took too many revisions, hope you enjoy it, lmk if I missed anything too!
    PS: Break up with basic browsers! Get Opera GX here: operagx.gg/bycloud3

    • @ikcikor3670  9 months ago

      I think you kept using "predecessor" when you meant "successor".

    • @kusog3  9 months ago +3

      Let me add, regarding Roop, SimSwap, etc.: they all kind of work using the same underlying model, which comes from the InsightFace project. While the successor of Roop might be able to continue the project, sadly the only version actually available to the public is the 128-resolution model. There is a higher-resolution model, but the developers refused to release it for a variety of reasons. (See the sketch after this thread.)

    • @tobilpcraft1486  9 months ago

      @@kusog3 tbh there's no real point in releasing the higher-quality model, since you already get better results with face upscalers like CodeFormer

    • @csehszlovakze  9 months ago +2

      shitty chinese browser, the spiritual successor is Vivaldi.

    • @DudeSoWin  9 months ago +1

      Why can't you guide it with text? Insert a line break or pipe in a 2nd prompt.
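
A minimal sketch of the shared InsightFace pipeline mentioned in the reply above (the one Roop, FaceFusion and similar tools wrap), assuming the `insightface` and `opencv-python` packages are installed and that a local copy of the publicly released `inswapper_128.onnx` checkpoint is available; file names and paths here are illustrative assumptions, not the tools' actual internals.

```python
# Hedged sketch: single-image face swap with InsightFace's 128x128 inswapper,
# the same publicly available model the one-click tools build on.
import cv2
import insightface
from insightface.app import FaceAnalysis

# Detector/recognizer bundle used to locate and embed faces.
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))  # ctx_id=0 -> first GPU, -1 -> CPU

# The only swapper released to the public is the 128-resolution ONNX model.
swapper = insightface.model_zoo.get_model("inswapper_128.onnx")  # local file (assumed path)

source = cv2.imread("source_face.jpg")   # identity to paste in
target = cv2.imread("target_frame.jpg")  # frame to modify

src_face = app.get(source)[0]            # assumes at least one face is detected
result = target.copy()
for face in app.get(target):             # swap every detected face in the frame
    result = swapper.get(result, face, src_face, paste_back=True)

cv2.imwrite("swapped_frame.jpg", result)
```

Running this per frame, optionally followed by a face enhancer such as CodeFormer as suggested in the reply below it, is roughly what the one-click tools automate.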

  • @anywallsocket  9 months ago +179

    imagine falling in love with a celebrity of the future only for the system to glitch for one second and reveal the true monstrosity they are LOL

    • @DeniSaputta  9 months ago +10

      fake vtuber

    • @beansbeans96  9 months ago +61

      imagine falling in love with a celebrity.

    • @RiskyDramaUploads  9 months ago +9

      Shrek

    • @maheshraju2044  8 months ago +2

      Ostrich

    • @RiskyDramaUploads  8 months ago +1

      @@maheshraju2044 What is ostrich?
      For those for whom looks are everything, things like this have happened before. Even young, good-looking Chinese streamers sometimes use software that changes their face shape, which glitches when their face moves off screen. And then there's this: "Chinese Vlogger Gets Exposed As A 58-Year-Old Woman After Her Beauty Filter Turns Off Mid-Stream"
      "It was all revealed when a beautifying video filter glitched mid-stream, exposing her real face. After the incident, it came to light that the streamer is actually a 58-year-old lady who just really enjoys playing Apex Legends."

  • @keenheat3335  9 months ago +39

    Glad you feel better and are making AI videos again. Saw that post and thought maybe you had given up on AI entirely. I don't think the content is the issue, but the packaging (thumbnail, title, etc.). The sales and marketing side of a video matters a lot for views, sometimes even more than the video content itself.

  • @luciengrondin5802  9 months ago +47

    It seems to me that a new representation of video is needed, one that would imply temporal consistency. That's why I think that of all these methods, the one about "content deformation field", CoDeF, is the most promising.

    • @anywallsocket  9 months ago

      Naw you need to scrap basic NNs and use liquid nets instead

    • @shiccup  9 months ago

      Lol i think i just figured out a great workflow for temporal consistency

    • @shiccup  9 months ago +2

      For vid2vid I might make a tutorial, but essentially you just use the EbSynth utility + img2img, then use the reference ControlNet and TemporalNet, then put it into EbSynth and after that throw it into Flowframes.
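
For readers who want to try the workflow sketched above, here is a hedged, minimal outline of only the keyframe-stylization step using Hugging Face `diffusers`. The TemporalNet hub id, prompt, paths, and parameters are assumptions for illustration (the checkpoint may need converting to diffusers format); the EbSynth and Flowframes steps are GUI tools and are only indicated in comments.

```python
# Hedged sketch of the img2img + ControlNet keyframe pass described above.
# Assumes: diffusers, torch, pillow installed; a CUDA GPU; and a TemporalNet
# checkpoint available in diffusers format (the hub id below is an assumption).
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "CiaraRowles/TemporalNet", torch_dtype=torch.float16)  # assumed id/format
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

keyframes = ["kf_000.png", "kf_012.png", "kf_024.png"]   # extracted beforehand (e.g. with ffmpeg)
prev_stylized = Image.open(keyframes[0]).convert("RGB")  # seed the conditioning with frame 0

for i, path in enumerate(keyframes):
    frame = Image.open(path).convert("RGB")
    out = pipe(
        prompt="anime style portrait, clean lineart",  # example prompt
        image=frame,                  # img2img input keyframe
        control_image=prev_stylized,  # TemporalNet-style conditioning on the previous output
        strength=0.55,
        num_inference_steps=30,
    ).images[0]
    out.save(f"styled_{i:03d}.png")
    prev_stylized = out

# Next (outside Python): feed the styled keyframes plus the original footage to
# EbSynth to propagate the style, then smooth/interpolate with Flowframes.
```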

  • @Xezem  9 months ago +27

    There should be better AI for frame-interpolation software like Flowframes and Topaz; that would be a huge help tbh

  • @Gaven7r  9 months ago +64

    I can't imagine how hard it must be to keep up with so much stuff going on lately lol

    • @David.L291  9 months ago +1

      So what's going on then of late? LOL

    • @quarterpounderwithcheese3178  8 months ago

      A bunch of proprietary techno babble every AI startup has trademarked and pretty much *nobody* understands

  • @Wooraah  9 months ago +74

    Great overview, thanks. I think we all need to bear in mind that so many of these techniques are at their very earliest stage before writing them off as terrible. Like dial-up internet in 1998, the leaps being made are truly astounding, and it won't be long before these tools and techniques are being used consistently for commercial applications, for good or ill.

    • @EduRGB  9 months ago +8

      That profile picture tho, I have to re-install it again..."Your appointment to FEMA should be finalized within the week..."

    • @WrldOfVon  9 months ago +8

      Completely agreed, but for the most part, I'm seeing people praise these models more than hate them. The fact that they can produce such high quality outputs now compared to a year ago goes to show you how quickly things can change even in the next 6 months. I'm so excited! 😆

    • @3zzzTyle  6 months ago

      Deus Ex kept being right for 20 years and it'll be right about AI as well.

    • @thedementiapodcast  5 months ago

      The problem is much bigger than "it's brand new". Think about how these models create ANYTHING, such as a coffee machine: they aren't "selecting a coffee machine" from a list of what they understand to be a coffee machine, they Frankenstein their way from noise into what generally appears to be a coffee machine. Humans know what coffee machines generally look like, down to the brand, or what the soles of Converse shoes look like, or that a windbreaker has a zip going from top to bottom on the garment. AIs DO NOT. The solution is chaining LoRAs, but that is not just extraordinarily time-consuming, it's impractical in the context of a scene containing as many objects as the average kitchen. Runway is working on partnering with brands right now to try to address this, but I think it will just turn into mandatory product placement.
      Then you have camera angles: anything "extreme" (think David Lynch) cannot be done without LoRAs, even on single frames. And beyond that, if you wanted to make a frame of "a man dressed elegantly at a coffee table", imagine how many thousands of tiny details you'd have to prompt vs grabbing a GH5 and filming. So people use Blender to create depth maps and attempt to generate based on that "template" so at least things look coherent.
      99.999999% of the stuff you see that looks "impressive" is either vid-to-vid or pure sheer luck that 4-second sequences feel somehow connected to each other.
      TLDR: we're a very, very long way from "taking what's in your mind" and putting it into images. Right now it's a giant random-seed lottery, or the same old "cyberpunk" look where people forgive that the "futuristic helmet" looks nothing like what they understand a helmet to be.

  • @aiadvantage  9 months ago +18

    Absolutely loved this one. Great job!

  • @deemon710  9 months ago +7

    Hey btw, thanks so much for these latest-in-ai videos. It really helps to stay informed on what's out there and helps us be able to spot fake stuff.

  • @passwordyeah729  7 months ago +4

    It's insane how fast AI is developing. This video is already slightly outdated... scary times we live in.

    • @danlock1  4 months ago

      Why must you always reference sanity?

  • @rookandpawn  9 months ago +7

    the amount of research and knowledge in this video is off the charts. ty for your efforts! subscribed, amazing content.

  • @thedementiapodcast  5 months ago +1

    What I've learned from using these tools almost daily is that the human brain very quickly becomes attuned to picking up the little details that betray AI generation.
    1. Start by looking at clothing. Coherence in clothing is currently near impossible: jackets don't have zippers, have 10 buttons where there should be none, that kind of thing.
    2. Check objects in the background. I don't know anyone who bothers to create a LoRA for every single background object, especially in background scenes, so the coffee maker, the fridge, etc. are all going to look "off brand".
    3. Elements these tools aren't trained on are evidently missing: if you're a sneakerhead, you'll quickly spot that the Jordans have Converse soles, etc.
    4. Camera angles are all very boring. Anything dramatic, with massive differences in proximity to the lens, is going to need a LoRA.
    It's therefore no surprise that Runway's current commercial strategy is to partner with brands to push specific objects into scenes, so the coffee machine will be a "Nespresso" machine, but expect this to be abused to the max (it's basically mandatory product placement).
    I think we'll see "AI videos", but this is the Precambrian stage of it. We need tools to create scenes that aren't random but reflect the artist's vision (currently light maps in Blender pushed into an img2img-type process are the best option).

  • @Entropy67  9 months ago +9

    What the hell? Combine a couple of these with generative AI playing as a DM for D&D and you might have something crazy. Anyone starting a project let me know. Might be a very fun and open ended game.

  • @AjSmit1  9 months ago +2

    the first @bycloud video i saw was about 'is AI gon steal our jerbs? prob not' and i've been watching ever since. i appreciate the work you do to keep the rest of us in the loop

  • @owencmyk  8 months ago +4

    What if you trained a diffusion model on videos by having it interpret them as, like... 3-dimensional pixel arrays, so you're basically extending its ability to generate images into the temporal dimension?
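
That idea, treating a clip as a single spatiotemporal volume rather than as independent frames, is roughly what "3D" video diffusion architectures do. As a toy illustration only (not any particular paper's architecture), here is a hedged PyTorch sketch in which a denoiser built from 3D convolutions sees a video as one (channels, frames, height, width) array, so the temporal axis gets convolved just like the spatial ones. All names and sizes are made up for the example.

```python
# Toy sketch: a video as a 3D "pixel array" that a Conv3d-based denoiser
# processes jointly across time and space. Illustrative only.
import torch
import torch.nn as nn

class TinyVideoDenoiser(nn.Module):
    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        # kernel (3, 3, 3) = (time, height, width): mixes neighbouring frames too
        self.net = nn.Sequential(
            nn.Conv3d(channels, hidden, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv3d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, noisy_video: torch.Tensor) -> torch.Tensor:
        # noisy_video: (batch, channels, frames, height, width)
        return self.net(noisy_video)  # e.g. predicts a noise residual

video = torch.randn(1, 3, 16, 64, 64)  # 16 frames of 64x64 RGB, pure noise here
model = TinyVideoDenoiser()
print(model(video).shape)               # torch.Size([1, 3, 16, 64, 64])
```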

  • @canyouread  5 months ago

    I'm so glad I didn't skip the sponsor segment LOL

  • @shakaama  9 months ago +1

    so which do i use?

  • @alexcrowder1673  4 months ago

    I like how at 3:08 he says "This one has the best generation quality" and then proceeds to show us the derpiest CGI lion I have EVER seen.

  • @ted_van_loon  8 months ago

    You should also cover how heavy they are to run and how well they work with different types of hardware: for example, whether you can run them on a normal CPU, or need a specific type of GPU or a specific type of dedicated hardware. That affects a lot of how, and by whom, they can be used, since many models at least used to only work with CUDA, requiring specific, hard-to-get and insanely expensive scalper-priced Nvidia GPUs. A model which can also run on normal consumer CPUs is available to basically everyone.
    It should also be known how well they are optimized for certain things, i.e. how fast they are on a common CPU, GPU, DPU, TPU, FPGA, or whatever is commonly used or should be used. FPGAs are becoming the future, judging by how companies have started to price-fix them rapidly and greatly increase even the prices of the old, very cheap models, despite FPGAs still being quite actively developed but just no longer shown to the public, which means there is a decent or big chance they plan to release them as the next big step sometime soon.
    Also, in AI, keeping the weights and pre-generating the main objects of interest and their info (generating them like a simple 3D model to reference, or like a detailed model reference prompt) would let all frames be kept similar, or even the same when detailed enough, easily, efficiently and effectively.

  • @walterhugolopezpinaya5641  9 months ago

    Thanks for the great video on the current landscape of generative video methods! ^^

  • @viratponugoti7735  9 months ago +7

    "All these techniques are just here to assist ai video generation or just become a thirst trap"

  • @francesco9703  9 months ago +1

    I need the sauce for the Kita gif at 6:14, it looks so clean

  • @yalmeme  9 months ago +1

    Hi guys. Can any of the currently existing tools do img2img on real-time video, so I can use it for streaming?

  • @AthenaKeatingThomas  8 months ago +1

    Wow, this was far more thorough than I expected it would be. Thanks for the information about HOW video generation works as well as the examples of some of the current tools!

  • @Aizunyan  8 months ago +1

    13:41 it's called rotoscoping, from the 1880s

  • @Chuck8541  9 months ago +13

    Things are moving SO fast.
    Also, hey dude. Can you put together a playlist, or even a paid Udemy course, to get those of us who are noobs up to speed with how to use this stuff... maybe from a creator/consumer standpoint? I'd love to get into AI and create things, but it seems there are dozens of models and methods. I don't know where to start. I only understand like 30% of the technical words you use. haha

    • @David.L291  9 months ago +1

      Maybe see what works best for you

  • @Spamkromite  8 months ago +1

    Not only out of hand. Most of the ecosystem of their databases was built from stolen footage and pictures from all over the internet across 5 years, even from private videos and anything you sent through messengers. Once that is found out, the owners of those sites will be sued into nothingness, especially when stills from movies and other copyrighted animated films are discovered. Like, we can't use those stills when we make our videos and upload them to YouTube, and we are banned from getting a single monetization. Why can these sites make cash with the same footage and double down by even using full movies? But that's just me thinking too hard 🤔

  • @QuoVadistis  9 months ago

    I want to generate a video scene with 5 characters in a room who speak, use photos to faceswap, and provide the text for the speech. It is only short clips for fun and learning, but I want good quality. I think FullJourney can do it all (not sure), but what are the best tools to use right now? Many thanks.

  • @SianaGearz  9 months ago

    I have recovered a "lost" music video, as in uploads exist but they're so low quality that you can't even tell what's happening due to a deinterlacing error, they're all from like 2006, and from what i know all the official data masters etc have been lost to a fire. The data i have recovered is a slightly blocky and noisy 4mbit MPEG2 made from what looks like a painfully well-worn Betacam at a TV studio when they were changing equipment. I'm trying to make it presentable, but so far upscaling has generated some creepy facial frames. Is there an AI workflow that i can feed a handful of high resolution images of the performer's face to have it restore it? Maybe first a pass that makes faces less bad even if inconsistent, and then reintroduce frame to frame coherence with simswap or the like?

  • @dochotwheels2021  9 months ago +1

    I am trying to find an image-to-image generator that can turn my still pictures into cartoons/watercolors/low-poly/etc. Do you know of any program that does a good job? I use Midjourney but it never does it correctly; it's usually something totally different.

    • @finallyhaveausername5080  9 months ago

      Try searching for style transfer programs rather than image2image generation; they tend to be closer to lossless.

  • @lorenzoiotti  9 months ago

    Is there something like SadTalker for videos? Wav2Lip worked on videos too, but from what I've seen SadTalker only does images

  • @tja9212  9 months ago

    Could the fuzziness (3:40) result from the model being trained on classic movie material, which normally has some amount of grain?
    Photographs normally don't have visible grain, but in movies you pretty much couldn't avoid it until recent digital developments...

  • @TraciMartin  9 months ago

    How do you not include deforum? Which is the best.

  • @chynabad9804  9 months ago +1

    Thank you, nice snapshot of the current capabilities.

  • @eloujtimereaver4504  5 months ago

    Can we have links to some of your examples?
    I have not seen all of them, and cannot find some of them.

  • @e8error600  9 months ago +4

    It was cool shit at first, now it's getting scary...

    • @patrickfoxchild2608  9 months ago +5

      It's already scary. This is just what the public has produced.

    • @tylerwalker492  9 months ago +4

      @@patrickfoxchild2608 And we'll never find out exactly what governments will produce!

  • @MrJohnnyseven  9 months ago +3

    Wow after 30 years online what do we have...people watching crap AI videos that all look the same....

  • @zrakonthekrakon494  9 months ago +1

    So many options with so much nuance and customizability, I hope the best methods continue to evolve into the widely used tech of the future instead of being phased out.

  • @MihajloVEnnovation  9 months ago

    What are your opinions on Kaiber?

  • @TAREEBITHETERRIBLE  9 months ago +2

    *_please keep watching_*

  • @Donxzy  9 months ago +1

    As a former hobbyist with SD and photoshop, this video cracks me up and it's accurate indeed

  • @angamaitesangahyando685  9 months ago +2

    AI waifus in 2025 is my only cope in life.
    - Adûnâi

  • @PrintThatThing  9 months ago

    Great video!!! Very helpful and fun to watch. 🎉😊

  • @darii3523  9 months ago +15

    AI is growing bigger and bigger

    • @boudescotch  9 months ago

      Ai is my pp

    • @TheOddBugg  9 months ago

      fo sho

  • @Arewethereyet69  9 months ago

    thanks for the video. great channel by the way. just subscribed

  • @Rscapeextreme447  9 months ago

    I think we should call category 3 “corridor video creation”

  • @DG123z  9 months ago +3

    Once it gets 3D modeling instead of just being videos made solely of images, the motion will become a lot more realistic.

    • @robertceron9056  9 months ago +2

      CSM and Imagine 3D do it, but Nvidia's Picasso AI will have a better version

  • @humanharddrive1  9 months ago

    the ice cream so good part gave me whiplash

  • @alonsomartinez9588  9 months ago

    There is also Phenaki!

  • @raspberrymann  9 months ago +1

    I want the source of this part 4:57

  • @dan323609  9 months ago +1

    That day will come, when I try using nuke copycat with SD. Btw I made some tests and it was very not bad

  • @JDST  9 months ago +2

    "thank you, ice cream so good. yes yes yes gang gang gang"
    Such inspiring words. 😢😢😢

  • @nefwaenre  9 months ago +2

    I have been using SimSwap for 2 years now. I use it mainly to animate my characters in real life. I am waiting for the day when I can inpaint a consistent video, e.g. change the shirt colour of my subject in the video.

    • @finallyhaveausername5080  9 months ago +3

      If you're just looking to animate a character based on real-life movements then you could try something like EbSynth? You inpaint one or two keyframes per type of shot and it generates the rest.

    • @nefwaenre  9 months ago

      @@finallyhaveausername5080 Thanks for the info. It's not just a character; I have these faces that I've created (there are only a few) from my paintings and dolls, and I only need their faces to be there, which is why I use SimSwap. That way I can have tons of videos with just these few characters.
      But I can't really post the videos online, because people might say it's stolen content, even though this is an absolutely personal project and I have no intention of sharing it on a money-based platform.
      So I just want to change the shirt colours and maybe the background in these SimSwap videos so that I can post them online.

    • @SatriaTheFlash  9 months ago +1

      You should switch to FaceFusion right now, it has better results than SimSwap

  • @Benwager12  9 months ago

    3:15 I had to watch the video 3 times to hear "artistic"

  • @SylvesterAshcroft88  8 months ago

    The face morphing is so freaky on ai generated videos, also that isn't Margot Robbie! ;)

  • @obboAR  9 months ago +1

    You're my go-to AI image-to-image-to-video video-style text-to-GAN video multi-frame-to-image generator YouTuber.

  • @johnjohansson  9 months ago

    What about zeroscope v3?

  • @Shajirr_  8 months ago

    The towel guy is the future of vtubing

  • @sneedtube  9 months ago

    I didn't quite get if there's a method to deepfake a live stream but I'm kinda re tar ded so I should probably give the video another rewatch

  • @Chuck8541  9 months ago

    lmao at the guy standing backwards on the surfboard.

  • @joaodecarvalho7012  9 months ago

    Things are about to get weird.

  • @blackterminal  9 months ago

    Would like AI avatars to not loop hand movements but do more random movements.

  • @Kisai_Yuki  9 months ago +2

    IMO, a lot of these techniques are... poor, but not for the reason you'd expect. The reason is that the underlying hardware needed to get a good result is out of reach. Starting with Stable Diffusion itself: yes, it can run on a smaller GPU, but the input training data was already low resolution (512x512) and it is incapable of generating anything else. That size was picked so it would fit on existing hardware. As soon as you tell SD to generate something bigger, the result is not a "higher resolution" result, but rather a different image with more chaotic data in the same palette.
    What is needed for good results is datasets that start out at the resolution of the output, so 2K, 4K, 8K, and that means the GPU video memory increases substantially each time: to get an 8K image, you need 256 times as much VRAM as you would for 512x512, so if you needed 8GB for 512, you need 2TB for 8K. That's not even possible on an Nvidia DGX (which has 320GB). So given present hardware, a 1K image would need a 32GB device.
    What I think is going to have to come out is a tile-based model that renders 512x512 portions of each image and stitches them together, which means figuring out how to tell the AI that each portion is part of the same frame. (See the sketch after this thread.)

    • @phizc  9 months ago +2

      Stable Diffusion XL is trained on roughly 1-megapixel images (1024x1024 and some other specific resolutions), and the default generation size is 1024x1024. It can run on 8GB GPUs, AFAIK.
      For SD 1.5 models you can upscale with ControlNet tile. It splits the image into a grid and adds details to each tile, then combines them back into a seamless image.
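
A rough, hedged illustration of both points in this thread: the quadratic memory estimate from the comment above (taking its own assumption that VRAM scales roughly with pixel count) and the grid-split-and-recombine idea behind tiled upscaling. The stitching here is plain crop-and-paste with overlap; in a real pipeline each tile would go through an SD img2img / ControlNet-tile pass before being blended back. Function names and sizes are made up for the example.

```python
# Hedged sketch: (1) naive VRAM scaling estimate, (2) splitting an image into
# overlapping 512x512 tiles and stitching them back together. Illustrative only.
from PIL import Image

def naive_vram_estimate(base_gb: float, base_res: int, target_res: int) -> float:
    # Assumes memory grows linearly with pixel count (the comment's assumption):
    # 512 -> 8 GB gives 1024 -> 32 GB and 8192 -> ~2 TB.
    return base_gb * (target_res / base_res) ** 2

for res in (1024, 2048, 8192):
    print(res, f"{naive_vram_estimate(8, 512, res):.0f} GB")

def tile_and_stitch(img: Image.Image, tile: int = 512, overlap: int = 64) -> Image.Image:
    out = Image.new("RGB", img.size)
    step = tile - overlap
    for top in range(0, img.height, step):
        for left in range(0, img.width, step):
            box = (left, top, min(left + tile, img.width), min(top + tile, img.height))
            patch = img.crop(box)
            # In a real tiled upscale, `patch` would be refined by an SD img2img /
            # ControlNet-tile pass here before being pasted back.
            out.paste(patch, box[:2])
    return out

stitched = tile_and_stitch(Image.new("RGB", (2048, 1536), "gray"))
print(stitched.size)  # (2048, 1536)
```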

  • @mat_deuh  9 months ago

    Thank you for this review :)

  • @ceticx  9 months ago +2

    Thought I didn't care about this at all, but you kept me for all 20 minutes

  • @aspergale9836  9 months ago +1

    2:50 - This shot is also AI-generated?

  • @andrewdunbar828  9 months ago

    The prompt are interesting too are it?

  • @sotasearcher  9 months ago +2

    you’re the MVP of this, keep it up 👏👏

  • @GENKI_INU  9 months ago

    How is ebsynth still relevant these days, in this landscape?
    Hasn't it been out forever now?

  • @Uthael_Kileanea  8 months ago

    10:48 - I could hear:
    Dame da ne
    Dame yo
    Dame na no yo

  • @rem7502  9 months ago

    1:34 bro wtf was that sponsorship lmao😵

  • @raphaelbussiere  9 months ago +1

    Great video! Perfect to share with friends :)

    • @miuki2721  9 months ago

      É

  • @guy_withglasses  9 months ago +3

    bro didn't link neuron activation

  • @MegaDixen  9 months ago

    Can't wait to get a new graphics card to play with this

  • @Tarbard  9 months ago +1

    This video really activated my neurons.

  • @zikwin  9 months ago

    I tested almost all the mentioned techniques over the past few months, and nothing is missing as far as I know. Great video, it sums up everything, I like it

  • @JazevoAudiosurf  9 months ago +1

    wow i actually learned something

  • @ummerfarooq5383  9 months ago

    That anime one is where we would like to see a Ni no Kuni anime remaster

  • @GameSmilexD  9 months ago

    "that is not a legitimate hoverboard (it's got wheeeeels)"

  • @Dex_1M  9 months ago

    Wait, can't you use Pika Labs to generate the woman with hair blowing, then use deepfakes to fix the face? And then loop it, and with some minor basic video editing you've got an animation to put on your song?

  • @allenraysales  7 months ago

    Thank you so much for this video. So informative! Keep up the great work!

  • @jakelionlight3936  9 months ago

    The introduction of quantum computing solves any divergence: if all possible routes are taken at the same time, one route will be as close to perfect as possible... it will be indistinguishable from reality with zero lag... I imagine this is already being done if you're in the mile high club... we are definitely in the dark about a lot of things imo.

    • @obsidianjane4413  9 months ago

      Someone handwaved "quantum something". Take a drink.

  • @moahammad1mohammad  8 months ago

    Slightly disappointed how many people fake the results of these AI's to make it seem it was entirely done with simple first-passthrough prompting only

  • @The3kproduction  8 months ago

    thats next level catfish lol

  • @VJP8464  9 months ago +2

    We're living in a truly unique age in human history; there's the span of human history before advanced technology, and in the future there will be the time of the virtual being indistinguishable from reality, with everything digital and supremely unnatural, which will span from several years from now to the end of humanity.
    We're the only humans who will ever get to experience the transitional phase between those two periods, which will span only 100-200 years of the tens of thousands we have been/will be on this earth.
    We're the guinea pigs of the technological future.
    What we do these days is going to be critically important to the fate of people in the future; I sincerely hope we use our 'trial run' position for the good of everyone, especially since it's too easy to use technology for malice.

  • @Daniel_F_RJM  8 months ago

    Great Video. thanks

  • @pameliaSofiA  8 months ago

    This world is so fascinating and new to me. Can anyone tell me the easiest way (for a newbie) to get great results animating myself talking (starting with a real video)? I don't care about it being an accurate representation of me as I will put myself in different settings I just want the face to look good and move as a face would move. Thanks in advance for your direction!!

  • @OWMANez  9 months ago

    That’s insane…

  • @pointandshootvideo  9 months ago +1

    Thanks for this video! The current state of the art is very disappointing. I'm wondering if creating a 3D controlnet skeleton and then generating 30 fps images using Reallusion would move the technology forward. Thoughts?

  • @DigitalForest0  9 months ago

    Thank you! I personally got the message by 0:38 that this video is not for me, so I didn't waste my time. THANK YOU!

  • @NeXaSLvL  9 months ago +1

    it's funny, technology used to help us create art, now we're using tools to assist the AI's video generation

    • @obsidianjane4413  9 months ago +2

      The AI is still just the tool. For now.

  • @b.delacroix7592  9 months ago +2

    No way any of this will be used for evil. Nope.

    • @theyoten1613  6 months ago

      Every technology was used for evil. That's a non-argument.

  • @andrewdunbar828  9 months ago

    makes it looks

  • @darkezowsky  9 months ago +1

    roop is dead, but roop-unleashed is alive and even better ;)

  • @gaker19sc  9 months ago

    7:18 Dude is NOT fine

    • @Iqbal1808  8 months ago

      bro ragdolled in the source engine

  • @kamillatocha  9 months ago +1

    it all boils down to who will make the first AI Video porn
    and soon porn stars will go on strike too

  • @angloland4539  9 months ago

  • @Baraborn  9 months ago

    Wow great video.

  • @deemon710  9 months ago

    This is all jaw-dropping sh*t. I worry about our fake news future 😢

  • @patrickfoxchild2608  9 months ago +1

    hold the eff up, did anyone notice the Bud Light commercial it made had only women drinking it?

    • @tylerwalker492  9 months ago +2

      Bud Light knows its new target demographic lmao