Scientists warn of AI collapse

  • Published May 9, 2024
  • Learn more about AI, math, and physics with courses such as Neural Networks on Brilliant! First 200 to use our link brilliant.org/sabine will get 20% off the annual premium subscription.
    We’ve all become used to AI-generated art in the form of text, images, audio, and even videos. Despite its prevalence, scientists are warning that AI creativity may soon die. Why is that? What does this mean for the future of AI? And will human creativity be in demand after all? Let’s have a look.
    🤓 Check out our new quiz app ➜ quizwithit.com/
    💌 Support us on Donatebox ➜ donorbox.org/swtg
    📝 Transcripts and written news on Substack ➜ sciencewtg.substack.com/
    👉 Transcript with links to references on Patreon ➜ / sabine
    📩 Free weekly science newsletter ➜ sabinehossenfelder.com/newsle...
    👂 Audio only podcast ➜ open.spotify.com/show/0MkNfXl...
    🔗 Join this channel to get access to perks ➜
    / @sabinehossenfelder
    🖼️ On instagram ➜ / sciencewtg
    #science #technews #ai #tech #sciencenews
  • Science & Technology

Comments • 6K

  • @femkeligtvoet8896
    @femkeligtvoet8896 2 months ago +4353

    This reminds me of when I was playing with play-doh when I was young. You start out with many different colors, and somehow always end up with a big brown ball.

    • @Marc83Aus
      @Marc83Aus 2 months ago +242

      Great analogy.

    • @JasonW.
      @JasonW. 2 months ago +97

      Same happens with corn, radishes, and carrots.

    • @carlag9888
      @carlag9888 2 months ago +172

      The LLMs are becoming inbred lol

    • @bleeckerstblues
      @bleeckerstblues 2 months ago +34

      Agreed, excellent analogy.

    • @willisbarth
      @willisbarth 2 months ago +15

      Did it smell the same either way?

  • @jalvrus
    @jalvrus 2 months ago +1018

    "The more it eats its own output, the less variety the output has"... sounds exactly like the YouTube recommendation algorithm.

    • @kenboydart
      @kenboydart 2 months ago +25

      Good grief... I think you're right!

    • @n.v.9000
      @n.v.9000 2 months ago +14

      It sounds like 99.99% of humans... YouTube is already created output that we use as input. AI has learned that behavior from us.

    • @SuliXbr
      @SuliXbr 2 months ago +9

      The YT algo is a reflection of your searches; my recommendation feed is always changing as I type in new searches for different content. But if you only live in the recommendations, clicking away and never doing your own new searches... sure, it will get stale and samey over time.

    • @Dr.JustIsWrong
      @Dr.JustIsWrong 2 months ago +5

      Which now only recommends from my own 'watch later' and 'watched history' lists. 🙄
      Oh, and utterly random music videos which I've never watched, ever.

    • @FRACTUREDVISIONmusic
      @FRACTUREDVISIONmusic 2 months ago

      Sounds like the problem with CBS Star Treks.

  • @Koperviking
    @Koperviking 1 month ago +60

    I remember as a kid I used to record my voice and play it back on my speakers, which I then proceeded to record once more. By repeating it, I could hear how it slowly degraded until it was nothing more than a weird, synth-like sound.

  • @organfairy
    @organfairy 1 month ago +58

    My biggest problem with AI is that it needs to get its information from somewhere, and sometimes these sources can be slightly dodgy. I did an experiment where I asked ChatGPT about some very narrow subjects: the Danish organ player Peter Erling, the trio Klyderne, and the artist Jørgen Fonemy. These are subjects that I have some knowledge about and have actually written Wikipedia articles about. I could see that most of the answers I got from ChatGPT were based on the exact Wikipedia articles that I wrote! I have tried to write the truth in those articles, but if I didn't care whether things were correct - or worse, if I deliberately wanted to mislead people - then AI would base its answers on wrong data if there weren't multiple sources available. The problem I see with AI is that we trust it too much. Already there are people who believe that it is an omniscient, trustworthy source of all answers and that it will always be more correct than human knowledge, or knowledge that we have googled or looked up in an old-fashioned book.

    • @illarionbykov7401
      @illarionbykov7401 1 month ago +2

      Thank you for posting that. I suspected something like that is true.
      I like to test LLMs with riddles and verbal puzzles. The first impression I got was that the best of the LLMs were brilliant, as they could solve some of the toughest puzzles correctly, puzzles famous for being difficult even for the sharpest humans, and they even had good answers to pointed follow up questions. Then I tried novel puzzles based on famous ones, but with the questions reworded (by me) in subtle ways which changed the correct answer, and then the LLMs usually defaulted to "pattern matching" and giving me answers which were correct answers to the original "pattern" puzzles, but wrong answers to the novel reworded versions of the puzzles they were answering at the moment. They are good at answering known questions with known answers which are already published or posted on the Internet, but have trouble with novel variations which have never been published before. They are not figuring out the answer, but giving their best guess based on what they've already seen in their dataset.
      OTOH, the best LLMs keep getting better at adapting to novel variations, month to month, so it's wrong to generalize based on results from more than a few months ago. Their abilities are progressing rapidly at the moment.

    • @sportsentertained
      @sportsentertained 29 days ago +6

      It's also bad at interpreting articles. It told me something related to tech that I knew to be false and it provided links to articles "proving" it was correct. I read the articles and ChatGPT misinterpreted the text of every single cited source. Complete garbage as a research tool.

    • @illarionbykov7401
      @illarionbykov7401 29 days ago

      @@sportsentertained which version did you use? I've read that they keep dumbing down ChatGPT to save on backend resources. The paid version is better than the free version, but still not as good as it was at the beginning (before it got flooded with new users)

    • @robertagren9360
      @robertagren9360 26 days ago

      Then start printing books.

    • @katsmiles6734
      @katsmiles6734 16 days ago +2

      In other words, it's scraping data and possibly reorganising it slightly or cutting and pasting and not attributing where the data is from. Very sneaky.

  • @alieninmybeverage
    @alieninmybeverage 2 months ago +2142

    3rd possibility: AI learns how to gaslight us, and we forget how many legs elephants have.

    • @arctic_haze
      @arctic_haze 2 months ago +57

      This is a real possibility. I have already noticed that my brain accepts AI-generated images as real even though I know what problems they have.

    • @alieninmybeverage
      @alieninmybeverage 2 months ago +34

      @@arctic_haze agreed. While it was said tongue in cheek, there are many kinds of peripheral knowledge about which we are impressionable.

    • @arctic_haze
      @arctic_haze 2 months ago +1

      @@alieninmybeverage I think it already happens on Instagram. People are using filters aimed at making them look like AI generated photos (smooth and symmetrical faces).

    • @markdowning7959
      @markdowning7959 2 months ago +16

      Sabine is a particularly good AI avatar. 🤖

    • @allenshafer1768
      @allenshafer1768 2 months ago +1

      Oh no

  • @realpdm
    @realpdm 2 months ago +568

    This could lead to a .. Nightmare on LLM Street....

    • @naparcasc
      @naparcasc 2 months ago +24

      That’s way funnier than it has any right to be 😂😂😂

    • @aromaticsnail
      @aromaticsnail 2 months ago +16

      Dad??? Did you get the milk?

    • @enriquea.fonolla4495
      @enriquea.fonolla4495 2 months ago +6

      THAT WAS GENIUS!

    • @pvanukoff
      @pvanukoff 2 months ago +3

      Win.

    • @deltaxcd
      @deltaxcd 2 months ago +1

      LLMs are fine; that's what is known as training on synthetic data. It is done deliberately, and it is the reason why LLMs are getting better.

  • @dunmatta2670
    @dunmatta2670 2 months ago +64

    That plastic analogy is probably the most succinct depiction of AI generated content contaminating the environment and why I always thought that human intervention in the use of computers is always necessary. We can fake human thinking to a degree, but getting the full complexity is still a pipe dream.

    • @LukaMagda1
      @LukaMagda1 23 days ago +6

      I don't understand why we would want machines thinking for us in the first place.

    • @magonus195
      @magonus195 11 days ago

      @@LukaMagda1 Sloth, indolence, and eventually totalitarian control

    • @trip_t2122
      @trip_t2122 10 days ago

      @@LukaMagda1 I think we can be dumb as a species, just the same way we develop bombs that can completely wipe us out. But maybe we do it for the sake of it, or because we're just curious 🤷

    • @MensHominis
      @MensHominis 10 days ago +2

      @@LukaMagda1 There's this rather grim meme (I can't remember the source): "Years back we were thrilled about AI taking over all of our annoying work so we could all focus on self-improvement and self-fulfilment, all become artists and the like. What has happened instead is that AI is now creating our art and our writing while we're still cleaning toilets for a living."

    • @primus0348
      @primus0348 9 days ago +4

      @@MensHominis Instead of doing what we imagined it to do, it does the exact opposite. How did we as a species fuck up the simplest idea of what AI is supposed to be? We had one job, and we made that concept into the worst thing possible.

  • @sheshotjfk8375
    @sheshotjfk8375 2 months ago +9

    I recognized this as a possible problem when I learned that they were training AIs by allowing them to converse with people on Reddit. AI developers can now apparently pay a fee to be allowed to plug their AI into Reddit and have it learn by having conversations there. It occurred to me: "Wait a minute, won't the AIs then end up conversing with each other and training each other? Won't this cause problems?"

  • @markvoelker6620
    @markvoelker6620 2 months ago +2464

    Apparently in the original Matrix movies storyline, the reason why the machines needed to keep those troublesome humans around was not as an energy source (“batteries”) but as a source of creativity. But the writers thought that this idea was too complex so they substituted the battery idea instead.

    • @GabrielLeni
      @GabrielLeni 2 months ago +70

      It's also in 'The Machine Stops'

    • @firecat6666
      @firecat6666 2 months ago +235

      Too bad, that's a much better idea. Although with all this talk about creativity, and AI putting an end to creativity and whatnot, I've never seen anyone mention that creating doesn't only mean creating good stuff, it also means creating crap. It seems to me that the people behind all these AI programs usually want them to create good stuff and not crap, so to me it's no wonder that they tend to end up converging (to creating good stuff, I'd hope) if they're trained on their own creations. Even if the original idea for The Matrix was better, I'd find it hard to believe that after a while the machines would still need humans at all, after they had learned enough about how to have ideas, good and crap, from us humans (and obviously, over time their thought processes would converge in the direction of having better and better ideas).
      EDIT: forgot a comma

    • @rumination2399
      @rumination2399 2 months ago +330

      Lol. The battery idea was the dumbest thing in the movie

    • @sh4dow666
      @sh4dow666 2 months ago +43

      I agree that the creativity idea would have been much better than the battery one, but ... all our knowledge about physics comes from inside the matrix, so maybe they just fabricated a different "physics engine" for it, so anyone escaping would be sufficiently confused to be easily captured?

    • @Rapscallion2009
      @Rapscallion2009 2 months ago +34

      Same in the Terminator universe. Well, almost. Skynet keeps useful people around to develop terminators and so on. In the early stages it actually preserves workers until they have built automated factories.

  • @robertruffo2134
    @robertruffo2134 2 months ago +1575

    As someone who used to play with photocopiers as a kid... A copy of a copy of a copy is always much worse and weirder than you might think. Small flaws amplify until all you get is a smudged blur.

    • @phattjohnson
      @phattjohnson 2 months ago +50

      That's if you're using so-called "AI" exclusively. Using it sporadically, as merely another software tool in your creative arsenal, will give you the edge over those who flatly refuse to use it on principle.
      Anyway, there are financial incentives for big tech companies to ensure their AI is more accurate, faster, easier to access etc. than the competition. They're not just going to press the red button and let their AI run loose... It's all still a service that needs 24/7 support by HUMANS behind the scenes.

    • @justafriend5361
      @justafriend5361 2 months ago +10

      Especially if the original was the nth copy of a blueprint.
      Had this in high school...

    • @QIKUGAMES-QIKU
      @QIKUGAMES-QIKU 2 months ago

      Especially if that photocopy is of your butt😂

    • @QIKUGAMES-QIKU
      @QIKUGAMES-QIKU 2 months ago +17

      @@phattjohnson Bot 😂

    • @Also_sprach_Zarathustra.
      @Also_sprach_Zarathustra. 2 months ago +5

      You truly don't know how AI systems (& AGI) work. Real AI systems aren't photocopiers.

  • @diegocrusius
    @diegocrusius 27 days ago +5

    To me the scariest thing is how quickly people advocated against themselves the moment they realized the potential of AI

  • @Rosie-uf5ox
    @Rosie-uf5ox 1 month ago +156

    I love that this underscores how complex human intelligence really is.

    • @squamish4244
      @squamish4244 1 month ago +3

      It doesn’t seem to be that complex, however, given how quickly AI went from stupid to smart.

    • @martakrasuska2483
      @martakrasuska2483 1 month ago

      Or perhaps we just fell deep into the trap of believing that as a society and civilisation we have already learnt everything there ever was to learn about ourselves and our human consciousness. @@squamish4244

    • @A.waffle
      @A.waffle 1 month ago

      Yes, we will believe anything 🤣

    • @man.horror
      @man.horror 1 month ago +32

      @@squamish4244 No, it's not smart at all in reality. It's not even actual AI; it is an algorithmic system. Give it more data and it will get sharper. That's how it's programmed. It has no ability to think or comprehend what it's outputting. A true AI that simulates the human mind by digital means would likely use algorithms as part of its system, but not as the entire basis.
      Today's "AI" is nothing but a generation system. It's not able to think and uniquely create anything truly new, based on the limitlessness of the human mind. It can mash and mutate things due to its flaws of understanding, but it is not truly and willingly making something new. It copies and makes mistakes, which could be claimed to be creativity, which these algorithms have no actual ability to harness.

    • @squamish4244
      @squamish4244 1 month ago +3

      @@man.horror Yes, the expert swoops in. Whatever.
      It's not that AI is that smart; it's that humans are not as smart as we thought we were.
      I'll take Max Tegmark's books over your two paragraphs here, thank you very much.
      Copium over 9000.

  • @MichaelDembinski
    @MichaelDembinski 2 months ago +589

    A friend in the UK is a graphic designer; he says that over the past few months, more and more clients have been saying 'NO!' to AI-generated artwork - "it's too samey". They'd rather pay more for something original. Trouble is, AI has pushed down the rates; so while designers and artists are noting an uptick in requests for proposals, the money is much worse.

    • @typograf62
      @typograf62 2 months ago +49

      Yes, AI images look horrible. Too many details that make no sense, glittering stuff, imposing backgrounds, flaming skies, opulent clothing... Often I do not want to read the text; it just feels like candy all day.

    • @MCRuCr
      @MCRuCr 2 months ago +48

      AI will teach us what truly matters... Human connection and true emotions are what we should care about. Spending time with your loved ones, (com-)passion etc.

    • @jackmiddleton2080
      @jackmiddleton2080 2 months ago +4

      It just seems like there is so much competition in anything creative that whoever is paying can have people jump through whatever hoops they want. And why wouldn't you ask for original art instead of AI generated art if you have the leverage.

    • @zperdek
      @zperdek 2 months ago +1

      @@typograf62 The only way out of it is that designers have to use AI and start to manage it.

    • @ghasttastic1912
      @ghasttastic1912 2 months ago +2

      AI can't get what a Roblox game thumbnail looks like. It can generate one, but it's not convincing at all. Even the other styles of Roblox thumbnails don't fit what AI generates.

  • @creatingwithlove
    @creatingwithlove 2 months ago +880

    This is exactly what I was telling people the other day. Our greatest danger with AI isn't that it'll take over, but that at the moment we begin relying on it most, it will collapse, because it's going to end up cannibalizing itself.

    • @moose9211
      @moose9211 2 months ago +7

      Wouldn't there be backups for the AI to be set back to if this were to happen?

    • @goodlookinouthomie1757
      @goodlookinouthomie1757 2 months ago +41

      @moose9211 Ideally you'd think so, but from what I understand nobody even knows how these things think any more, so it's hard.

    • @AparnaGurudiwan
      @AparnaGurudiwan 2 months ago +4

      Why would it cannibalize itself? I don't get it

    • @moose9211
      @moose9211 2 months ago +1

      @@goodlookinouthomie1757 guess we’ll have to wait and find out

    • @Szpagin
      @Szpagin 2 months ago +55

      ​@@AparnaGurudiwan AI being trained with AI-generated content.

  • @arifchagla8752
    @arifchagla8752 1 month ago +7

    That's really interesting... because comparing it to 'bad cinema', most bad cinema is bad in the same way, if that makes sense. Overused tropes, predictable storylines, cliché characterisation. Is there someone who can expand on this thought?

    • @rogierb5945
      @rogierb5945 9 days ago

      'Overused tropes, predictable storylines, cliché characterisation.' They exist for a reason: most people like them. All the bad stories of the past have been shed and only the really good ones remain. They inspire new generations of storytellers. Some new ideas might be added, but most new ideas will be shed because they aren't liked by the audience. Everything you see today is a 'tried and tested' formula with a proven track record throughout human history. Most people aren't particularly interested in originality; they want what they like, and storytelling history has already filtered down to the ideas which people like.

    • @arifchagla8752
      @arifchagla8752 9 days ago

      @@rogierb5945 Most people might like them, but there was a time when they didn't. Production companies try to 'play it safe' and by doing so release stuff that leaves audiences feeling empty/unfulfilled. I recently watched a film, Challengers; it was not what I expected. Not that film exactly, but maybe the answer lies in taking risks and creating something truly engaging and unique; then the trope cycle repeats. Is it self-cleansing? Right now it really needs a cleanse, I feel.

  • @drachimera
    @drachimera 28 days ago +3

    Sabine, as a professional in the application of machine learning in medicine, I would like to thank you for making this video! It's understandable and it reaches a lot of people! There is the AI hype (which people should not believe, because it comes from executives and rookies) and there is the machine learning reality that veterans understand. This technology will be useful for automating some drudgery and common simple tasks... it's dogshit at doing anything truly valuable. What's most worrying is the very real threat, without laws, that this nonsense will create such a firehose of bullshit that we can't get through our email, find what we need on the web, tell the difference between fact and fiction, or generally think for ourselves!

  • @nerdexproject
    @nerdexproject 2 months ago +903

    It's true, when you've worked enough with ChatGPT you can immediately recognize a ChatGPT text. It just always has a certain vibe that makes it distinguishable from human text.

    • @manutosis598
      @manutosis598 2 months ago +47

      I saw a guy using ChatGPT in YouTube comments and I can confirm

    • @NoFeckingNamesLeft
      @NoFeckingNamesLeft 2 months ago

      Corporate-lobotomy vernacular English

    • @destructionman1
      @destructionman1 2 months ago +9

      Is your comment chatGPT? Is this?

    • @gmenezesdea
      @gmenezesdea 2 months ago +48

      I fear it will get so good that we can't even pick up on those little flaws and quirks any more, especially in videos. When those Sora videos were released, the only one I could tell was AI was the woman walking on the street (her hand and face had weird details). I imagine the next ones will fool me better.

    • @TCakes
      @TCakes 2 months ago +47

      The key is for a human to utilize and modify AI-generated content, not just copy/paste. Also realizing that some AI is better than others at specific tasks (Gemini for emails, ChatGPT for code, etc.)

  • @tandt7694
    @tandt7694 2 months ago +267

    My experience with chat GPT is that you can ask it 2 or 3 questions, get it to contradict itself, and when you point out the contradiction, it starts to ask if you are angry, and/or says IT'S taking a break from YOU to let you relax...😊😮😂

    • @l.w.paradis2108
      @l.w.paradis2108 1 month ago +70

      So it does have gaslighting down pat. 🤣🤣🤣

    • @tandt7694
      @tandt7694 1 month ago +12

      @@l.w.paradis2108 That's exactly what happened. 💯

    • @officialpennsyjoe
      @officialpennsyjoe 1 month ago +20

      Makes one wonder if the AI engineers had a lack of qualifications or a lack of critical thinking skills.

    • @JesseDLiv
      @JesseDLiv 1 month ago +10

      The gaslighting has begun

    • @derrickmcadoo3804
      @derrickmcadoo3804 1 month ago +5

      Don't stare into the Dark Crystal. Has no one watched the movie?

  • @lg2971
    @lg2971 19 days ago +1

    Thank you for clearly articulating what many have been trying to point out.

  • @GaryJust
    @GaryJust 1 month ago +4

    Short, to the point, informative. Thank you, Sabine.

  • @joanlopez8769
    @joanlopez8769 2 months ago +434

    You reminded me of Pandora, the music recommender that provided you with music according to the 👍 and 👎 that you gave to the songs it proposed. No matter if you started with Black Sabbath, Chopin or Yunchen Lahmo, eventually, after a couple dozen songs, you always ended up in a Coldplay loop.

    • @Lazarus1095
      @Lazarus1095 2 months ago +93

      What dystopian hellscape is this?!

    • @shrimpkins
      @shrimpkins 2 months ago +109

      All roads lead to Coldplay.

    • @mipmipmipmipmip
      @mipmipmipmipmip 2 months ago +35

      This seems to be the algorithm YouTube Music uses for their "song radio" playlists 😢

    • @xizar0rg
      @xizar0rg 2 months ago +10

      This sounds like user error; I've had a sub to Pandora basically since it was still just the Music Genome Project and I've never heard Coldplay on my stations.
      Coldplay: Not even once.

    • @user-mz6iy5ip9o
      @user-mz6iy5ip9o 2 months ago +45

      Damn, this reminds me of YouTube. I've had to start making new accounts all the time because the algorithm quickly devolves into recommending quite literally the same videos I've already watched, over and over and over, and I can't find anything new or exciting. Music is by far the worst; I decide I want to go outside my usual taste and listen to nostalgic dirty-pleasure pop from my youth, and YouTube wants me to listen to my usual stuff again... It's absolutely gotten worse than it used to be, without a doubt.

  • @icls9129
    @icls9129 2 months ago +208

    Adding random variation probably isn't as easy as it may sound because the randomness still has to follow certain rules. For example, no one is going to believe that elephant with two trunks.

    • @a_kazakis
      @a_kazakis 2 months ago +19

      I think you are mistaking randomness for imperfections. She is not saying images need to have faults in them. Diversity here means, for example, that some elephants are young, some adult; some are eating, some are sleeping, some are drinking; some are photographed at night; some are walking on grass, some on rock, etc. If you look at the AI samples provided, they all look exactly the same. Zero diversity.

    • @lolbajset
      @lolbajset 2 months ago +36

      @@a_kazakis But that's his point... how will the AI know what's appropriate and what's not? How can it know to add diversity in lighting and background, and not in the number of trunks or skin color?

    • @drno87
      @drno87 2 months ago +6

      AI models usually have some way of computing how likely they think different outputs are. A model that turns a written prompt into an image has some notion of how "close" an image is to the prompt. Instead of taking the closest image to the prompt, you might instead take another nearby image determined by some random number.
      Unfortunately, there isn't a good rule for defining the precise details of the randomization scheme. There's a lot of ad-hoc methods that work well for one group of prompts but fail for others.

    • @lip3gate
      @lip3gate 1 month ago +10

      @@a_kazakis It makes no sense. There are already millions of photos of elephants from different angles, carrying out different activities in different scenarios. If ALL the photos available on the internet (copyrighted or not) are not enough for the model to generate convincing photos, the problem is not a lack of diversity in the dataset.

    • @audreylin3466
      @audreylin3466 1 month ago +5

      @@a_kazakis It reminds me of a children's art class. One kid will draw a house, a car and a tree, and a dozen other kids will copy them. There may be variations, like an apple tree or a dog, but they're all relatively alike.
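The randomized sampling idea from the thread above (take a nearby output chosen by a random number rather than always the single most likely one) is essentially temperature sampling. A toy sketch, not from the video, with made-up scores standing in for a model's logits:

```python
import math
import random

def sample(logits, temperature, rng):
    """Draw one index from softmax(logits / temperature)."""
    scaled = [x / temperature for x in logits]
    top = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - top) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

rng = random.Random(0)
logits = [2.0, 1.0, 0.5, 0.1]  # hypothetical scores for four candidate outputs

# Low temperature: the sampler nearly always returns the top-scored output.
cold = {sample(logits, 0.05, rng) for _ in range(1000)}
# Higher temperature: the same scores yield a much more varied set of outputs.
warm = {sample(logits, 1.5, rng) for _ in range(1000)}
```

Raising the temperature adds diversity, but, as the thread notes, there is no principled rule for how much randomness is appropriate for a given prompt: too little and everything looks the same, too much and you get the two-trunked elephant.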

  • @denisematteau
    @denisematteau 7 days ago +1

    I was using an AI image generator to come up with rug designs. The first few were great, but it soon deteriorated into regurgitating images similar to what it had already produced.

  • @Ibhenriksen
    @Ibhenriksen 1 month ago +2

    It's kinda like the Google search engine. It started out sucking, then there was a time when it was pretty good for finding stuff. Now it sucks again...

  • @Muxxyy
    @Muxxyy 2 months ago +497

    There's a third option: there may soon be a deliberate attempt to poison content to make it unreadable for AI. There are already tools out there that scramble images just enough to make them confusing for AI to use as a training set.

    • @phattjohnson
      @phattjohnson 2 months ago +37

      Given how much these systems have already been trained, any 'poisoned' images would now likely be ignored as the noise they probably amount to.

    • @darkthunder301
      @darkthunder301 2 months ago +88

      @@phattjohnson If there's enough poison, then statistically it will reach the sample set of plenty of AI systems, which will lock themselves into garbage. If the poison is ignored, then that's a smaller sample space the AI has access to, and it becomes boring and derivative.

    • @ArcanePath360
      @ArcanePath360 2 months ago +12

      The only way I know of is by setting the metadata of your images to be erroneous. How can you scramble an image and still have it viewable to humans? Doesn't the AI access it the same way we would?

    • @rmidifferent8906
      @rmidifferent8906 2 months ago +34

      @@ArcanePath360 You can change a lot of pixels slightly without humans noticing any changes. The AI will see it, though, and learn accordingly.

    • @ArcanePath360
      @ArcanePath360 2 months ago +8

      @@rmidifferent8906 But if it's unnoticeable, what's the point?
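The "change a lot of pixels slightly" idea in this thread can be illustrated with a toy sketch. Real poisoning tools compute carefully optimized adversarial perturbations against specific models; the random nudge below (the `perturb` function and its budget are my own invention) only shows why a change can stay below the visual threshold while still altering every value a model would train on:

```python
import random

def perturb(pixels, budget=2, seed=0):
    """Nudge each 8-bit pixel value by at most ±budget, clipped to [0, 255].

    A ±2 shift out of 255 is invisible to a human viewer, yet every
    training value the model ingests has changed.
    """
    rng = random.Random(seed)
    return [max(0, min(255, p + rng.randint(-budget, budget)))
            for p in pixels]

row = [0, 17, 128, 200, 255]  # one row of a hypothetical grayscale image
poisoned = perturb(row)
```

Whether such tiny shifts actually degrade training (or, as the first reply suggests, get averaged away as noise) depends on how many poisoned images reach the training set.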

  • @johnelmer1556
    @johnelmer1556 2 months ago +519

    My experience with ChatGPT shows it to be a regurgitator; the test questions were in an area of X-ray physics that I know well, and it spewed out all the usual stuff with no insight, no deep understanding, no creativity, nothing that would indicate any form of curiosity.

    • @lukeskyvader3217
      @lukeskyvader3217 2 months ago +59

      Still enough to replace 98% of the current jobs ;)

    • @othercryptoaccount
      @othercryptoaccount 2 months ago +5

      3.5 or 4?

    • @Threemore650
      @Threemore650 2 months ago +9

      I think Meghan Markle gets it to write her speeches.
      It's all word soup.

    • @Glacierlune
      @Glacierlune 2 months ago +20

      @@lukeskyvader3217
      I like how you said that as if it had actually happened, but there isn't any evidence beyond some idiot repeating marketing material that can't be proven to be lying, even though everyone knows they are making shit up.

    • @user-ni2rh4ci5e
      @user-ni2rh4ci5e 2 months ago +16

      Garbage in, garbage out. Put in the extremely usual stuff and expect something novel? GPT is basically bound to what you ask, mirroring the original input.

  • @tygorton
    @tygorton 1 month ago +34

    Glad you started this conversation. There is also the theft component of generative AI. A YouTuber like yourself will get automatically copyright-struck for using 4 seconds of a clip in a 20-minute original video. Yet these generative AI companies can use entire social media platforms, with content painstakingly created by individuals across decades, to create their data sets. This is peak hypocrisy in which, as per usual, corporate "big money" is protected while the individual is left with no means of defending their content. Generative AI is 100% theft, in my opinion.

    • @barbi111
      @barbi111 17 days ago +3

      I agree

    • @darkushippotoxotai9536
      @darkushippotoxotai9536 9 days ago +1

      No art is truly "original". Artists are inspired by previous artists, who were inspired by their surroundings, and they modify reality slightly based on their mental conception of what they want to highlight. Art is inherently derivative by the nature of human learning. Generative AI follows similar processes. It doesn't 'copy and paste' as people have claimed; it has a distinct concept, albeit less defined than a human's, of what it is asked to portray. AI art is inspired by, not directly copying, actual works. If we start copyright-striking AI, it should follow that we strike virtually every other art piece.

    • @tygorton
      @tygorton 9 days ago +2

      @@darkushippotoxotai9536 You keep believing that. It's a tired and completely flawed argument. First off, there is a human TIME factor involved. A human artist must first put in hundreds of hours of work to achieve some level of mastery over their craft before they can even THINK about mimicking another artist's style. That process produces mutual respect. This entire component is lost with "AI" slop. There is so much more at play here, but it's just not worth getting into in a comment section on YouTube with someone who has no actual desire to objectively weigh new perspectives. You want the AI future. Well, it's coming. Nothing will stop it. The tech overlords are investing trillions, so you'll get your wish. I hope it is everything you want it to be.

    • @darkushippotoxotai9536
      @darkushippotoxotai9536 9 days ago

      @@tygorton So, simply requiring more time and being less efficient, and sometimes even of a lower quality, is better because a human made it? Sidenote, I didn't really say mimicry, but rather drawing inspiration. Sure, AI can do that as well, but I was more so talking about inspiration, or to put it simply, pointers or definitions or illustrations of art. Humans do not make unprecedented or completely unique art. It's subconsciously drawing on other works and the surroundings of the artist. Almost the same as an AI, just very inefficient. As for intent, it's a human writing a prompt. An AI doesn't simply mash things into an image. How many artists do you know of who have drawn a Celtic man chasing a dog through a world made up of needles?

    • @tygorton
      @tygorton 9 days ago +1

      @@darkushippotoxotai9536 Enjoy the "efficiency". Like I said, your AI future is coming. It will be a world of emptiness filled with people who lack wisdom; the evidence of this is already permeating every aspect of our culture and it hasn't even started yet. Enjoy.

  • @notBeggingMattandLissy2PlayRE4
    @notBeggingMattandLissy2PlayRE4 a month ago +1

    This is already true for me when writing. In the first few rounds it appears as if the AI is very creative but soon after some repetitions it becomes clear that the AI keeps repeating itself over and over again. This is one of the reasons I am not concerned too much. It appears that the human still has to input A LOT of guidance to make sure it doesn't repeat itself and actually gives you more interesting "mixes" instead of repetition.

  • @lundsweden
    @lundsweden 2 months ago +183

    So basically, if you keep feeding the output back into the input, you could get a feedback loop.

    • @Greenmachine305
      @Greenmachine305 2 months ago +2

      Not exactly in this case but I certainly get your point, in that the result is undesirable if one values health or positivity. To your point, I think a better description of Sabine's observation about the failing of AI would be "garbage in, garbage grows". Perhaps the creators should take heed of this and develop systems that augment the process to manage the generated information in a way that aligns with what is in humanity's best interest.
      Less garbage is in everyone's best interest.

    • @SandersMacLane
      @SandersMacLane 2 months ago +2

      This would make an interesting experiment. Begin with a discrete distribution of objects which is peaked, like a Gaussian. Sample the entire distribution, gauging similarity as a dot product. Exclude one most-dissimilar object each time the entire distribution is sampled. Ultimately you should sharpen the distribution until you get a spike at the most probable/identical objects.

    • @Greenmachine305
      @Greenmachine305 2 months ago

      @@SandersMacLane What field do you work in?

    • @fromfareast3070
      @fromfareast3070 2 months ago

      Sounds like Systems theory

    • @g0d182
      @g0d182 2 months ago +1

      😮😮 ...and/or a prompting problem: using basic prompts and expecting deep answers
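SandersMacLane's proposed experiment a few replies up can be simulated in a handful of lines. All the concrete details below (a tiny one-dimensional "population" instead of dot products, the exact pruning rule) are my own choices, not anything from the papers Sabine discusses:

```python
import statistics

# A peaked discrete "population" of one-dimensional feature values.
population = [1, 2, 2, 3, 3, 3, 3, 4, 4, 5]

# Repeatedly drop the object most dissimilar from the population mean,
# mimicking a generator that keeps only its most typical outputs.
while len(population) > 3:
    mean = statistics.fmean(population)
    outlier = max(population, key=lambda x: abs(x - mean))
    population.remove(outlier)

print(population)  # → [3, 3, 3]
```

Exactly as the comment predicts, the distribution sharpens into a spike at its most probable value.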

  • @tamlynburleigh9267
    @tamlynburleigh9267 2 months ago +259

    You make a good point. Already I can usually pick the ‘style’ of AI generated images. They have a certain ‘style’ because they are in a sense too perfect, too smooth, too balanced. It is not something one could define in some cases, but the human brain is good at recognising patterns.

    • @TheMillionDollarDropout
      @TheMillionDollarDropout 2 months ago +10

      Tell that to the tons of people coming at a genuine human artist because they all thought he was lying and using an image generator

    • @RustOnWheels
      @RustOnWheels 2 months ago +12

      Too smooth..? I find there is this blurriness that makes them so easily recognizable.

    • @MyAmpWamp
      @MyAmpWamp 2 months ago +27

      They often have this fractal-like composition. Many generated images with people have a bit of a painterly feeling, because most of the database was artists' pages like Artstation. You can easily see Artgerm's style in many of the pretty-girl pictures. And why are all the images of young women? Because a young woman is the most popular subject on these pages when it comes to people. In photography, a young woman is, I believe, also one of the most popular subjects in the human category.

    • @RustOnWheels
      @RustOnWheels 2 months ago +6

      @@MyAmpWamp this calls for Erik swamping AI with shirtless old men

    • @capnbarky2682
      @capnbarky2682 2 months ago +13

      There is no composition to AI art. Human artists will be selective about rendering in order to focus the viewer on certain things.

  • @RobertMarzullo
    @RobertMarzullo 4 days ago

    Unfortunately it is only a matter of time until they either get better with the parameters that introduce random variables that simulate creativity, or they achieve singularity, which they are moving extremely fast towards. I believe we just have to keep creating original works of art. AI can never replace that. Draw and paint more originals and let AI exist for the people that don't want the joy and fulfillment that true creativity brings them.

  • @Jumptownwore
    @Jumptownwore a month ago +2

    Makes me think of the 3 body problem issue/chaos theory. The more variables, the faster chaos erupts.

  • @ZackLee
    @ZackLee 2 months ago +520

    As an artist, this is a known issue in HUMANS
    That's why the art solution is to look at the "old masters" as mentors before learning how to draw from more modern artists

    • @jeanclaudethedarklord6205
      @jeanclaudethedarklord6205 2 months ago +32

      Really "love" how a tool for HUMAN expression is now replaceable by a fucking machine

    • @asdu4412
      @asdu4412 2 months ago +124

      Call me a snob, but I'm even more pessimistic about the decline of human taste than I am about the technical shortcomings of AI, which is a problem that reliance on AI for the production of images, text, music, etc. will likely exacerbate, but certainly didn't create.
      From my point of view, even before it started to become obvious how bad and samey AI art really was, it was already quite obvious how the stuff people wanted AI to create was junk in the first place: pop culture fanart and stuff that mimicked stereotypical pop culture tropes, done in a glossy, quasi-realistic style. The only "interesting" AI art occurred early, when AIs tended to fail at their task and produced bizarre unintentional surrealism.
      There was a famous image of a collection of completely unrecognizable objects that made the rounds a few years ago and which was (incorrectly) described as an attempt at reproducing the visual experience of someone having a stroke (whereas it was just AI image generation still being too primitive to successfully reproduce its models): that might well have been the aesthetic peak of AI art.

    • @DarkFox2232
      @DarkFox2232 2 months ago +11

      Or adopt a creative mentality. Next time you create, take a piece of paper. Crumple it. And use it as a stippling tool.
      For following projects, paint paper with some thin color, let it dry. Put on a layer of transparent soap or similar material. Dry again. A layer of another color, followed by a different color. Repeat a few more times. The final layer should be black or white paint. Then use a scratching tool to "draw" with different pressure.
      Even the lid of some jar can be used as an artistic tool for painting. Or the plastic body of an old pen as a spraying tool.
      The same applies to sculpting, dancing, music, ... Just let your mind free itself from the cage of mundane existence.

    • @truck6859
      @truck6859 2 months ago +9

      And then the true output comes from the human soul, which AI doesn't have.

    • @FragmentOfInfinity
      @FragmentOfInfinity 2 months ago +14

      @@truck6859 Correct. Eventually with enough training and data purification, AI will have more soul than humanity.

  • @PhilMoskowitz
    @PhilMoskowitz 2 months ago +39

    Garbage In/Garbage Out. I've been saying the same thing about both AI and Analytics for the past decade and a half. People only want to look at processes, algorithms, ease of use, speediness, raw power, TCO, design and pretty UI with both AI and Analytics. You rarely hear people talk about things like bias, data integrity and context. Those three things only come into conversation when AI and Analytics produce horribly incorrect results.

    • @KevinOlsen-cd9ez
      @KevinOlsen-cd9ez a month ago +6

      But understanding bias, data integrity and context would require...uh...you know, uh...like...thinking. We can't have that.

  • @Youngmichaelthekid
    @Youngmichaelthekid a month ago +4

    I really like this take. Thank you for all the information.

  • @istvanpraha
    @istvanpraha a month ago +1

    In my industry, AI gives vague answers when you ask about laws. It keeps saying that things vary. They don't vary, there are just more than two answers. That's not the same as varying. Some customers fall under option A and others under option B. You can't just tell them it varies.

  • @gsvenddal728
    @gsvenddal728 2 months ago +107

    Wow... this is like ultra-high-speed "Groupthink"

    • @jovetj
      @jovetj 2 months ago +3

      Yup. And people fear this! LOL! (Not that herd mentality and groupthink aren't bad things, among humans...)

    • @DJ_POOP_IT_OUT_FEAT_LIL_WiiWii
      @DJ_POOP_IT_OUT_FEAT_LIL_WiiWii 2 months ago +8

      This is not surprising. It's like trying to compress the same file again and again: eventually it will inflate.

    • @PrivateSi
      @PrivateSi 2 months ago +1

      Soon with Forced Diversity Quotas too no doubt...

    • @Utoko
      @Utoko 2 months ago

      This is such nonsense. In terms of LLMs it is the desired outcome, because you predict the most likely next token. You want the best answer by default, not just any answer.
      And yes, all models already have a "temperature" parameter, which regulates the unpredictability and range of the possible tokens that can be chosen.
      For images it's the same. The example in the paper is really bad: they use the same prompt and don't inject random noise. Yes, Midjourney as a consumer product has the issue, but the underlying models don't. You can have as much randomness, creativity and variance as you want.
      This video presents the increased accuracy, which they aim for, as an issue, which it is not. Set temperature=0.6 or higher and you get your creative storytelling back.
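For what it's worth, the "temperature" knob mentioned in the reply above is just a rescaling of the model's token scores before they are turned into sampling probabilities. A minimal illustration with made-up logits (the numbers are my own, not from any particular model):

```python
import math

def sample_probs(logits, temperature):
    """Softmax over logits / temperature: low T sharpens, high T flattens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]           # hypothetical next-token scores
cold = sample_probs(logits, 0.1)   # near-greedy: top token dominates
hot = sample_probs(logits, 2.0)    # flatter: more varied output
print(round(cold[0], 3), round(hot[0], 3))  # → 1.0 0.481
```

At low temperature the top token is picked almost every time, which is precisely the loss of variety the video describes; raising it trades repetition for more frequent mistakes.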

  • @danre64
    @danre64 2 months ago +451

    Every email in the future will start with: "i hope this email finds you well" 😂

    • @edt6488
      @edt6488 2 months ago +36

      No, it has found me unwell! Please call an ambulance for me!

    • @spvillano
      @spvillano 2 months ago +5

      An excellent filter phrase... ;)

    • @sebastiankorner5604
      @sebastiankorner5604 2 months ago +15

      They even translate the phrase in German, where it makes even less sense. Ich hoffe meine Nachricht erreicht Sie gut... Lastly erreicht Sie bei bester Gesundheit. Both are phrases not used in German.

    • @ashroskell
      @ashroskell 2 months ago

      As these AI errors flood the net, will they become more and more of the training data for other AI’s? Until images get increasingly mutated and standard emails all start with, “I hope this emu fondles your willy.”

    • @ashroskell
      @ashroskell 2 months ago

      @@edt6488: To which ChatGPT responds, “You’re an ambulance . . . Oh, wait. That didn’t work, did it?”

  • @Clarkillustrations
    @Clarkillustrations 29 days ago

    Thank you for covering this! I've been saying this since I saw the first interest in AI art

  • @sarah-janelambert8962
    @sarah-janelambert8962 16 days ago +1

    I've been drawing attention to iterative decay of information in these systems for some time. Eventually we will just end up with a 'grey goo' situation similar to that suggested for uncontrolled nanobot replication.

  • @ExploringAI42
    @ExploringAI42 2 months ago +91

    The one thing people should know about machine learning is: a machine-learning model will only be as good as its training data. It's just learning (in theory) the pattern behind the data, which leads to a host of problems.
    The main issue is that it doesn't actually reason about the data. Let's say I train a model where I have several examples where I have pi as 3.14 and then one where it's 4. The model doesn't reason "you know.... this one example seems to be wrong" but rather it updates the model to make it slightly more likely it will give the wrong answer.
    So how do you prevent models training on information generated by another machine learning model? The current approach is to stick to information from before generative AI became dominant, but most of that information (for better or worse) is probably already part of the training dataset.
    The main problem is that there's a popular opinion in machine learning (and sadly AI) that, as an AI researcher, I have had to deal with. This opinion is that the key to all AI problems is that we just need to use larger models, with more training data, and train them in the "correct" way. "Look how far LLMs have come. Just imagine how much better they will be in a couple years". But you run into the 90-10 principle: 10% of the effort for 90% of the results and vice versa. It's why self-driving cars are taking a long time: there is a nearly infinite number of extremely rare cases in which the car needs to make the right decision. As such, it should be expected that the current LLMs will plateau performance-wise unless new, smarter methods are found.

    • @JordanCorkins
      @JordanCorkins 2 months ago +2

      Thank you for your insight, I think I agree with this. In the case of LLMs, they clearly have a use case already that will not go away, but I don't think they can deliver on the promises being made. I do not see how to make them reliable enough to work in most business situations. I feel that many companies are looking for a way to implement them, and almost making their engineers find a way to make them useful, even if it makes no sense.
      The scaling already seems unsustainable, and while the "emergent" behaviors are very cool, nobody really understands how they relate to scaling (i.e. it's not a defined ratio of x amount of compute/data for x more emergent behaviors)

    • @phattjohnson
      @phattjohnson 2 months ago +10

      It's not even machine 'learning'. It's 'just' scripted data consolidation, procedural compression and re-generation, and some other mumbo-jumbo that honestly has all been around since the conception of PCs. Just now we've got several modules all running simultaneously in one disjointed codeblock.

    • @octavioavila6548
      @octavioavila6548 2 months ago +11

      I'll do you one better. We will never solve this issue. It's a fundamental impossibility. We will never have self-driving cars. There is no exponential curve, no singularity. Forget it. We are very close to the best AIs will ever be

    • @JordanCorkins
      @JordanCorkins 2 months ago +4

      @@octavioavila6548 You base this on what exactly? Claiming AGI will never happen, and self driving will never happen is the same as the people who think we will have AGI in 2 years because of the hype. Nobody knows the limits or timeline, but I don't see why it would be impossible.

    • @goodlookinouthomie1757
      @goodlookinouthomie1757 2 months ago +8

      "Hold up. Something's wrong here. Not sure what it is but I feel like we should take a step back and go through it again"
      Said no AI ever, past, present or probably future.
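The pi example in the top comment of this thread can be made concrete: a model fit by minimizing squared error doesn't reason that one label is wrong, it averages the error in, so every prediction shifts. A toy version (the labels are my own invention):

```python
# Nine correct labels for "pi" and one wrong one.
labels = [3.14] * 9 + [4.0]

# For a constant predictor, the least-squares fit is just the mean,
# so the single bad example drags the answer away from 3.14.
prediction = sum(labels) / len(labels)
print(round(prediction, 3))  # → 3.226
```

No amount of extra averaging removes the bad example; only filtering the data (or a model that can reason about it) would.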

  • @hunteralderman4867
    @hunteralderman4867 2 months ago +164

    I think a big part of the convergence is that people often are attracted to certain tropes and conventions when it comes to what they like, so AI produced images are actively being 'pruned and purified' by our preference of our existing cultural paradigms.
    What I think is really interesting is the feedback, where people's tastes of which conventions they like are in turn influenced by AI art.

    • @juanausensi499
      @juanausensi499 2 months ago +7

      Yep. It is easy to see AI images tend to be standardized. What is not that easy to see is if that is really a problem. People like standards. Just look at how actors and actresses look.

    • @thenonexistinghero
      @thenonexistinghero 2 months ago

      You couldn't be more wrong. The woke crap is purposefully programmed into it. Same for the censorship. Has nothing to do with preference and cultural paradigms.

    • @engelbertgruber
      @engelbertgruber 2 months ago +2

      means it is a problem of biological intelligence too ? 😂

    • @Marquis-Sade
      @Marquis-Sade 2 months ago

      @@juanausensi499 They don't

    • @Marquis-Sade
      @Marquis-Sade 2 months ago

      @@engelbertgruber Why?

  • @ABH565
    @ABH565 a month ago +5

    So basically AI needs a human brain. Take note, Matrix.

  • @grosvenorclub
    @grosvenorclub 2 months ago +43

    A friend of mine who has been a musician since the late 1950s explained to me a few years back how there is very little originality in music these days, as much is dependent on preset rhythms, chords, etc., largely due to "electronic" devices. It can only get worse with so-called AI

    • @la6136
      @la6136 a month ago +3

      Music will get worse in the future. Music production programs are all adding AI now. In the future it will be record labels using AI to write lyrics, produce beats, and then they will get a hologram to perform instead of a human so they don't have to pay artists.

    • @pigcatapult
      @pigcatapult a month ago +3

      @@la6136 hopefully there will always be indie bands

    • @patientzerobeat
      @patientzerobeat a month ago +8

      It's not that music is getting worse. It's that WAY more people do it now, and a growing percentage of them are indeed cranking out unoriginal stuff because of the cheap tools that allow that to happen. There's still the same amount if not MORE original unique music being made now, but it can get lost in an ocean of cookie cutter creations. There is also the bias that happens when only the best stuff from the past has staying power; there is so much crap music from decades ago that is forgotten and/or unavailable. Whereas everything current is available right now obviously, and crap music from just a few years ago is way less likely to be forgotten and/or unavailable. And in this digital age, nothing ever really gets "erased". Who's going to digitize and put online some garbage unoriginal music from 1982?

    • @grosvenorclub
      @grosvenorclub a month ago +1

      @@patientzerobeat Yes, I agree, but there were actually far more amateur musicians back in the '50s and '60s, as those were probably the last generations who relied on home entertainment, i.e. some or a number of members of a family played some sort of musical instrument. (Radio initially, and then TV, started to kill that off.)

    • @SpiceFox
      @SpiceFox a month ago +3

      People have always used similar chord progressions. You use the same base stuff to make something new. If you can’t find good music nowadays, you aren’t looking hard enough

  • @thebooksthelibrarian8530
    @thebooksthelibrarian8530 2 months ago +109

    2) More randomness in AI output might do away with the problem of repetitive AI output, but it might increase the mistakes. Instead of elephants with big heads or two heads, we might get elephants with two big heads.

    • @red.aries1444
      @red.aries1444 2 months ago +1

      Or we get more pink elephants or other colors or with red instead of green grass...

    • @robadkerson
      @robadkerson 2 months ago

      @@red.aries1444 that wouldn't be so bad if we can get AI to help us create real pink elephant and red grass DNA

    • @Rich-Oh
      @Rich-Oh 2 months ago +12

      Downside: elephants with two big heads
      Upside: two big headed elephants are all young and good looking.

    • @thebooksthelibrarian8530
      @thebooksthelibrarian8530 2 months ago +4

      @@red.aries1444 Actually, I would prefer green elephants. That's more environmentally friendly.

    • @matheussanthiago9685
      @matheussanthiago9685 2 months ago

      ​@@Rich-Oh and white

  • @davemottern4196
    @davemottern4196 2 months ago +279

    This is exactly what I've been thinking since all of this exploded into popular awareness.
    It's like a giant ouroboros eating its own tail. I'm glad to see that people are talking about this.
    Editing to add: Will you critics please lighten up? I'm not anti-AI. I'm just agreeing with Sabine that this is a potential problem that should be studied. All new technologies have potential problems that need to be studied and understood. Pointing this out does not make me some kind of neo-luddite.

    • @2ndfloorsongs
      @2ndfloorsongs 2 months ago +9

      People have been eating their own tales since there were people, I'm not sure why AI is expected to be different. Most people aren't that creative, but a few are; most AIs won't be creative, but a few will. Same old, same old.

    • @kikijuju4809
      @kikijuju4809 2 months ago

      @@2ndfloorsongs Most AI will be X times better than the best human at creativity; you can't compete with machines

    • @mr_pigman1013
      @mr_pigman1013 2 months ago

      AI inbreeding is real

    • @HarryNicNicholas
      @HarryNicNicholas 2 months ago +3

      remember when photography was going to destroy art?

    • @milferdjones2573
      @milferdjones2573 2 months ago

      On appearances, the science will show that the AI photos shown are popular worldwide. But of course it becomes too much of the same, causing a desire for diversity.
      It's important to point out there actually is a science in the area of attraction, both in humans and other species, and we need to start shutting down those with non-scientific opinions, especially the claim that it is just one culture imposing its values, and the effort to make every appearance beautiful, which is impossible since our brains demand an ugly. Example: make overweight attractive, and healthy becomes ugly.
      Better to push the traditional view that attraction is only skin deep and accept your appearance, great to bad, as unimportant to one's value as a human being.
      And of course set beauty at the weights that are actually healthy and live longer. Note some studies show a tad underweight might live longest.

  • @andrewhall7176
    @andrewhall7176 17 days ago +1

    This actually is not that surprising, when you think about it: these AIs are basically using huge amounts of data to approximate averages of various things, and with more iterations they extract more and more core features until they just have the same set of features they are using all the time. It's like taking data scores and continually averaging them until you are left with one value.
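The "continually averaging" picture above is easy to demonstrate: if each generation of data is built from averages of the previous one, the spread collapses toward a single value. A small sketch where all the specifics (sample sizes, generation count, seed) are my own arbitrary choices:

```python
import random
import statistics

random.seed(0)
scores = [random.gauss(0.0, 1.0) for _ in range(1000)]
start_spread = statistics.pstdev(scores)  # roughly 1.0

for generation in range(10):
    # Each generation consists of averages of small samples of the last one,
    # a stand-in for a model reproducing the typical features of its inputs.
    scores = [statistics.fmean(random.sample(scores, 5)) for _ in range(1000)]

# The spread shrinks every generation, heading toward a single value.
print(round(start_spread, 2), round(statistics.pstdev(scores), 4))
```

Averaging five values cuts the standard deviation by roughly a factor of sqrt(5), and the reduction compounds each generation.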

  • @FixTechStuff
    @FixTechStuff 5 days ago

    I called this a few months ago, good to see I'm not the only one who can see where this is heading.

  • @theprogram863
    @theprogram863 2 months ago +224

    Consider what generative AI actually is. It's designed to produce data which most closely resembles its training data. So distinctive and idiosyncratic ideas are actively selected against.

    • @sluggo206
      @sluggo206 2 months ago +6

      It's right there in the name: "GENERATIVE" AI.

    • @WeirdWizardDave
      @WeirdWizardDave 2 months ago +9

      The caveat being "unless you ask it to be distinctive and idiosyncratic". AI-generated content isn't random, it's the result of prompting. Short generic prompts will elicit generic content.

    • @SandersMacLane
      @SandersMacLane 2 months ago +3

      yes, a form of clustering and entropy reduction!!

    • @Nat-oj2uc
      @Nat-oj2uc 2 months ago +14

      @@WeirdWizardDave except it still won't produce an original idea that is distinguishable from gibberish unless it's trained on sufficient data, which is impossible in the case of original ideas

    • @Chek94
      @Chek94 2 months ago +10

      @@WeirdWizardDave It will produce distinct and idiosyncratic content -- in a way that matches its training data.

  • @ZappyOh
    @ZappyOh 2 months ago +325

    1) AI is trained on data from the Internet.
    2) AI outputs data to the Internet.
    3) Goto 1
    ... hasn't anybody acquainted themselves with the topic of "inbreeding"?

    • @user-kw8kh8dg3h
      @user-kw8kh8dg3h 2 months ago +6

      Yep...After all, AI is a tool...
      It's like eating soup with a mesh strainer....

    • @user-kh7kx9en9l
      @user-kh7kx9en9l 2 months ago +6

      Solve that problem by using an A.I. classifier to detect whether data is synthetic or not.
      Diversity isn't going down; it's just laziness when it comes to creating datasets.

    • @LtFoodstamp
      @LtFoodstamp 2 months ago +5

      There are solutions to this though.
      1) AI scientists run AI through quality data.
      2) AI scientists run AI through a comparison between quality data and its outputs to provide corrective comparison.
      3) Give AI real vision (robotic eyes) so it can observe real life examples from the real world.
      4) Humans keep involvement in the process of determining what gets posted to the internet. If AI produces garbage it's less likely to be selected. If it produces something accurate, it's more likely to be accepted. Survival-of-the-fittest response.

    • @alieninmybeverage
      @alieninmybeverage 2 months ago +3

      I took your advice and asked AI what "inbreeding" is. It replied:
      "SUAVE WHARRRRRGARBLE"
      and knowing is half the battle.

    • @SebSenseGreen
      @SebSenseGreen 2 months ago +2

      Never, ever, use a Goto statement!
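The 1-2-3 loop at the top of this thread is essentially the "model collapse" recipe, and a toy version shows why "inbreeding" is the right word: fit a model to data, sample synthetic data from the fit, refit on the samples, and repeat. The toy model (a Gaussian with two parameters), the sample size, and the seed are all my own choices:

```python
import random
import statistics

random.seed(42)
# Generation 0: "human" data with a healthy spread.
data = [random.gauss(0.0, 1.0) for _ in range(20)]

spread = []
for generation in range(200):
    # 1) Train on whatever is currently on "the Internet"...
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    spread.append(sigma)
    # 2) ...publish synthetic output, 3) goto 1: retrain on your own output.
    data = [random.gauss(mu, sigma) for _ in range(20)]

# The fitted spread decays generation by generation.
print(round(spread[0], 2), round(spread[-1], 6))
```

Each refit loses a little variance to estimation error, and the loss compounds; nothing in the loop can ever put the lost diversity back.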

  • @TheBigdog868
    @TheBigdog868 a month ago +2

    Doctor Frankenstein used snippets from a whole bunch of people to make his monster. I was told the experiment didn't turn out well for him either. 😂

  • @simplicity4904
    @simplicity4904 a month ago +2

    I’m not surprised by the finding, and I dare say that it is obvious. I dare say that because such a thought has crossed my mind, and I have often tried to convince others who are willing to listen beyond the hype: AI is “artificial” but not “intelligent”. There are different ways to assess and critique AI - philosophically, linguistically, psychologically, biologically, etc. - through which many thoughtful experts, beyond the industry, have challenged the claims of AI, in particular AGI. But, for me, it comes down to creativity and novelty; the latter AI lacks completely and the former AI can only mimic. If you want to be impressed, see a human child.

  • @fwiffo
    @fwiffo 2 months ago +92

    The popular image generation models prior to Stable Diffusion were GANs (generative adversarial networks). The way they worked was to have two different networks - one trained to generate images, and the other trained to classify images as real or fake. This forced the generator to learn to avoid the most identifiable characteristics and to generate a diverse set of images.
    Stable Diffusion was more effective and scalable for higher resolution images, keeping the whole image globally coherent. But it's likely that reviving some adversarial techniques could help with the diversity issue.

    • @Coach-Solar_Hound
      @Coach-Solar_Hound 2 months ago +15

      Actually one of the biggest issues with GANs that they were very prone to "Mode Collapse". During mode collapse rather than producing a diverse set of images, the adversarial network would hone in on specific features which were not recognized by the discriminator network. The result: a lower diversity in images which get produced.
      The reason why diffusion took off in the first place is that due to noise being used as a base, the diversity was higher, as the initial noise served as a "random seed" for the generation in a sense. Mode collapse can be avoided, but takes a lot more effort to avoid, and can lead to problems in many architectures.
      (Note: I'm not a researcher.) This is mostly from scant reading I've done here and there.

    • @andersonfaria8949
      @andersonfaria8949 2 months ago +6

      @@Coach-Solar_Hound you're absolutely right, but I'd like to add another point here: it's not just about mode collapse, the reason why GANs end up losing degrees of freedom is overfitting. The ultimate trick to beat the discriminator is to draw exact copies of the dataset, and that's why you need to save "backups" and move back in training time when you see important details are being left out.
      Now, regarding diffusion vs. GANs, that's a broader discussion: GANs theoretically should excel in image generation, but the investment towards diffusion (especially prompt-to-image) is way higher, so while GANs seem to be lacking, they should actually be a better solution overall.
      What you said about taking a "random seed" is also true for GANs: the generator will always take a random number and try to draw what it knows about the dataset from there.
      There's a really interesting video explaining all the details in computerphile channel: th-cam.com/video/Sw9r8CL98N0/w-d-xo.html

    • @andersonfaria8949
      @andersonfaria8949 2 months ago +1

      Controlling images in GANs is still an active area of research; what we do today to influence latent-space results is to move in specific directions in latent space. To know where to move, you can use dimensionality-reduction techniques to find specific vectors controlling relevant image attributes (check the GANSpace paper).
      Another option is to do img2img, transferring style or mixing with prompting information

    • @8888Rik
      @8888Rik 2 months ago

      Your comment and the replies are extremely interesting.

    • @fwiffo
      @fwiffo 2 months ago

      @@Coach-Solar_Hound Yes, that's true, although there were a lot of developments going on to fix that. The biggest problem was either the generator or the discriminator getting too far ahead of the other, and the whole thing getting stuck. So the rate of learning of the two parts had to be balanced. There was another issue where the set of produced images was not representative of the training data because the generator favored generating "easy" images. For instance, if it was generating faces, it would avoid producing details like glasses or beards, or prefer to generate less angular faces (i.e. the output would overrepresent women).
      There are lots of types of regularization to be done, and techniques to help with those things. Adversarial learning, generally, is a really useful technique. So I think it's time to bring it back to diffusion.
      (I have done work on GANs personally, although it's been a few years).
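For readers curious what the adversarial setup described at the top of this thread looks like in code, here is a deliberately tiny one-dimensional version. The linear generator, logistic discriminator, toy data, and learning rate are all my own simplifications, not any production architecture:

```python
import math
import random

random.seed(1)

# "Real" data: scalars near 4. Generator g(z) = a*z + b turns noise into
# fakes. Discriminator D(x) = sigmoid(w*x + c) scores how "real" x looks.
a, b, w, c = 1.0, 0.0, 0.0, 0.0
lr = 0.05

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

for step in range(5000):
    real = random.gauss(4.0, 0.5)
    z = random.gauss(0.0, 1.0)
    fake = a * z + b

    # Discriminator: one ascent step on log D(real) + log(1 - D(fake)).
    w += lr * ((1 - sigmoid(w * real + c)) * real - sigmoid(w * fake + c) * fake)
    c += lr * ((1 - sigmoid(w * real + c)) - sigmoid(w * fake + c))

    # Generator: one ascent step on log D(fake), i.e. move fakes toward
    # whatever the discriminator currently calls "real".
    d_fake = sigmoid(w * fake + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

print(round(b, 2))  # b tends to chase the real mean of 4
```

Real networks are vastly larger, but the pressure is the same: the generator is rewarded for covering whatever the discriminator can check, which is why adversarial training encourages diversity, and why mode collapse, discussed in the replies above, is its classic failure.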

  • @morenofranco9235
    @morenofranco9235 2 months ago +133

    Great presentation, Sabine. I have always maintained that AI is like students cribbing exam answers. One student just has to copy one thing wrong, once. From then on it is a done disaster. When scientists hypothesised robots making copies of themselves, they never saw this far into the mess.

    • @bami2
      @bami2 a month ago +9

      You are 100% correct and it's already happening. I noticed it first when I was looking up a certain niche question that had a bunch of AI-generated garbage in the search results, which somehow kept repeating a nonsensical "fact". I pinned it down to a single forum post that was made 10 years ago where somebody made a typo or something that made no sense, but this post was ingested by the machine-learning dataset, and that dataset was being used to generate a bunch of blogposts/websites which, because of the way LLMs write (long dense sentences with very specific subjects), shot up high in search engine rankings.
      So now there's 20+ different sites all parroting this garbage information, which was then used in other datasets and ingested by most LLMs. Now, if I ask that specific question to any LLM, it will parrot out the same garbage, because there are now 20+ "sources" all saying the same thing, but all based on some stupid forum post made a long time ago by a real person who made a typo or didn't fully understand the English language.

    • @sjonnieplayfull5859 · a month ago · +1

      An old comic saw this coming: Storm. In the album 'The Von Neumann Machine' they are sent to intercept a planet on a collision course with Pandarve, only to find out it is a conglomerate of small von Neumann machines that search for resources and then reproduce themselves - but the code got corrupted, because small flaws were reproduced millionfold and grew larger over time.
      Guess AI programmers are not nerdy enough to read comics

  • @CogitoBcn · 25 days ago · +1

    The problem is older than your video suggests. Automatic translation and even grammar correctors have been distorting human language (and reducing language variance) for decades, and we have incorporated their quirks into our day-to-day language.

  • @boredscientist5756 · a month ago · +1

    Very good video! It makes total sense!

  • @maphezdlin · 2 months ago · +87

    Look at how much people have come to hate CGI in movies, to the point that some refuse to use any. If you have ever read anything written by AI, you know it has the ability to make the most exciting subjects boring.

    • @cara-setun · 2 months ago · +3

      Can you name any of these movies?

    • @icyjaam · 2 months ago · +1

      Even Nolan uses very heavy CGI

    • @maphezdlin · 2 months ago · +5

      @@cara-setun,
      Oppenheimer (2023), Skyfall (2012), Inception (2010), Mission Impossible: Ghost Protocol (2011), Mad Max: Fury Road (2015), The Dark Knight (2008), Casino Royale (2006), 1917 (2019), Top Gun: Maverick (2022)

    • @Felixr2 · a month ago · +4

      @@maphezdlin All of those movies used CGI. All of them. Many of the stunt scenes are mostly real footage, sure, but a lot of them are edited beyond recognition.
      Oppenheimer only lists 49 vfx artists on IMDB, but that's mostly because 80% of them weren't credited.
      Skyfall lists 578 vfx artists.
      Inception had 295.
      Mission Impossible: Ghost Protocol had 347.
      Mad Max: Fury Road had a whopping 742.
      The Dark Knight had 468.
      Casino Royale had only 161, which is in fact impressively low, but still not 0.
      1917 had 422.
      Top Gun: Maverick had 455.
      For reference, Avatar: The Way of Water (2022), a movie we can hopefully all agree had immense amounts of CGI, credits 1113 vfx artists.
      The Hobbit: The Desolation of Smaug (2013) had 915.
      Most of the movies you mentioned had close to if not more than half of that. What did all these people do if there's no CGI?

    • @maphezdlin · a month ago · +4

      @@Felixr2, OK, VFX and CGI are different.
      But you are right, the links I saw that said NO CGI lied. They should have said minimized CGI. Thanks for catching it.

  • @BoogieBoogsForever · 2 months ago · +14

    I think that if we program in randomness, they'll introduce wacky, impossible, and obviously problematic elements.
    The problem is in explaining how to adjust and add randomness to a program which doesn't understand the original state and how it has simplified and made things uniform.
    It doesn't understand what it does, so how can it introduce some oomph? How can it know when it's introduced too much?
    There are way too many parameters which can be tweaked.

  • @mishalzeera8172 · 28 days ago · +1

    After the autotune-as-an-effect started, with Cher singing "Believe", you have multiple generations of singers who mimic the sound of autotune artifacts quite naturally and spontaneously. Also, young people having plastic surgery to mimic the look of phone camera filters. We are also trainable, it turns out. I think that is an important element to consider when predicting the future of this stuff.

  • @jdmac44 · 2 months ago · +2

    Worst-case scenario, it'll be like Walmart moving into a community: it destroys the Main Street businesses, everyone takes Wal-jobs, and then the Walmart closes because the local economy is crap, leaving a ghost town of people who don't have the capital, business acumen, or consumer base to reboot Main Street.

  • @johnatyoutube · 2 months ago · +60

    As an AI scientist, we've been talking about this for years. Once the AI starts eating its own tail it will quickly optimize to a singularity of stupidity in its own echo chamber. The only way for AI to continue to work is to automatically label all AI output and ignore it for training. Or to manually post label it by humans. Humans are necessary for AI success in any case. It would be interesting for you to discuss both the labeling servant culture and its injustices as well as the impossibility of AGI if AI depends on human labeling.
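The labelling idea above is mechanically trivial once a provenance label exists; the hard part is getting a trustworthy label onto the data in the first place. A minimal sketch, where the `source` field and the sample texts are made up for illustration (no real dataset schema is implied):

```python
# Toy dataset with a provenance label on every sample.
samples = [
    {"text": "A hand-written product review.",        "source": "human"},
    {"text": "Auto-generated summary of the review.", "source": "ai"},
    {"text": "Forum post from 2009.",                 "source": "human"},
    {"text": "LLM-expanded version of that post.",    "source": "ai"},
]

def training_set(samples):
    # Keep only human-labelled samples for the next training run.
    return [s for s in samples if s["source"] == "human"]

clean = training_set(samples)
print(len(clean), "of", len(samples), "samples kept")  # prints: 2 of 4 samples kept
```

The filter itself is one line; the open problem the comment points at is who applies the labels, and what happens when they are missing or wrong.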

    • @DKNguyen3.1415 · 2 months ago · +13

      Reminds me of the trend of compensating CEOs with stock.

    • @johnatyoutube · 2 months ago · +4

      @@DKNguyen3.1415 Especially if the company is losing money and laying off workers.

    • @peznino1 · 2 months ago · +2

      "...it will quickly optimize to a singularity of stupidity..." Think you just optimized for word salad.

    • @DKNguyen3.1415 · 2 months ago · +4

      @@johnatyoutube Well, it's basically optimizing the short-term stock price at the expense of everything else so the CEO can cash out. Long-term viability, product quality, worker productivity, accurate book-keeping and finances, even the best interests of shareholders and real profits and revenue don't matter if sacrificing them can produce a stock payout before the consequences hit.

    • @anywallsocket · 2 months ago · +3

      As a layman I disagree. You’re right if you don’t think outside the box, but we can use AI to sample evolutionary algorithms to generate networks for more AI models. This space is practically limitless.

  • @tullochgorum6323 · 2 months ago · +22

    AI can learn from itself when there is an objective outcome to measure. For example Chess, Go and Poker AI engines can improve by playing against themselves (though they also benefit from historical game records and playing against humans). Where there is no objective measure, such as art or creative writing, it's difficult to see how AIs can improve without human input.
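The distinction the comment draws can be shown in miniature: when there is an objective score to optimize, a blind propose-and-keep loop improves with no training examples at all. A toy hill-climbing sketch, where the objective function is an arbitrary made-up choice (its maximum sits at x = 3.7):

```python
import random

def score(x):
    # The "rules of the game": an objective measure of how good a candidate is.
    return -(x - 3.7) ** 2

random.seed(3)
best = 0.0
for _ in range(5000):
    # Propose a random mutation of the current champion; keep it only if it
    # scores better. No human examples are involved at any point.
    candidate = best + random.gauss(0, 0.1)
    if score(candidate) > score(best):
        best = candidate

print(f"converged to {best:.2f}")  # close to 3.7
```

With no score function there is nothing to climb, which is the comment's point about art and creative writing.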

    • @salvadoran_uwu · 2 months ago · +4

      Exactly, human input, that's why experts say one job that may rise after AI is "human trainer." I've seen many voice bots need human input to improve their accents and pronunciation.

    • @hovertank307 · a month ago · +1

      Art is intended to please humans. If we want AI to train on AI generated art, the set must first be curated by humans to contain images we find pleasing. If you let AI train on all images generated by AI, it will keep getting worse (unless some programmer figures out a trick around this)

    • @tullochgorum6323 · a month ago · +5

      @@hovertank307 As a coder myself, I'd hate to be given the task of developing an algo to rank the quality of visual art!
      Music may be more doable. Interestingly, the very first computer scientist, Ada Lovelace, predicted way back in 1843 that computers could generate music.
      Because it's based on relatively predictable patterns, there are generative music AIs that produce interesting results or that interact with human players.
      They may soon have commercial applications for less demanding fields like advertising jingles, where originality is not the aim. Hack commercial composers must be fearing for their jobs...

    • @hovertank307 · a month ago

      @@tullochgorum6323 yes, I would not even try it. I meant a trick to sidestep the need to write such an algorithm.

  • @WillyWP · 2 months ago · +1

    I agree. I also know that the amount of input required to create accurate images on par with illustrations, photos, or graphic design - especially if they need to fit within a pre-existing brand visual language and be visually descriptive enough to achieve a specific goal - is ridiculous. Try entering meeting notes and text describing a brand visual language into an AI generator to visually convey a concept, and you won't get anything that works anytime soon. Give a 250-word creative brief to a qualified professional and you will get something back right away. In this respect, AI is not close to outpacing the human brain.

  • @pauljs75 · a month ago · +1

    This is probably a reflection of the filtering used to keep AI presentable to the public. Filtering limits the bandwidth of the input data and successive runs with a filter would narrow the scope even further.
    It's like running noise through a feedback loop that also has a filter on it. Eventually you may get a sinewave tone with enough passes through the filter. There's also something about the limiting rule sets that act a lot like quantization with generative sound design when used with random input values. If you change the rules to the point of quantizing down to one note, that one note is all you're going to get regardless of the input values and how random they are.
    Weird analogy, but I swear there's some very basic AI behavior that can be observed with something like generative music - and certain rules like picking scales or rhythm patterns will make a melody fit into a genre just by letting a computer do its thing. Sure images or language are more complicated, but it seems that similar nuances in emergent behavior are there.
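The feedback-loop analogy above is easy to demonstrate numerically. This is only a toy signal-processing sketch, not a model of any actual AI pipeline: push noise through the same mild low-pass filter over and over, and nearly all of the original variation dies away, leaving only the smoothest, slowest components.

```python
import random

def smooth(x):
    # One pass of a simple low-pass filter: 3-point moving average, wrap-around.
    n = len(x)
    return [(x[i - 1] + x[i] + x[(i + 1) % n]) / 3 for i in range(n)]

def spread(x):
    # Standard deviation: how much variation the signal still has.
    m = sum(x) / len(x)
    return (sum((v - m) ** 2 for v in x) / len(x)) ** 0.5

random.seed(1)
signal = [random.uniform(-1, 1) for _ in range(256)]  # rich, noisy input

before = spread(signal)
for _ in range(500):  # keep feeding the output back through the same filter
    signal = smooth(signal)
after = spread(signal)

print(f"variation before: {before:.3f}, after 500 passes: {after:.3f}")
```

Each pass is harmless on its own; it is the repetition through the loop that drains the variety, which is the parallel the comment is drawing.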

  • @keithdafox2257 · 2 months ago · +103

    A third possibility is that we decide to move on from brute-forcing LLMs and develop more efficient or different learning models. A human does not need to look at a billion images to learn how to draw. Even if we don't have AI capable of what we can do, that does demonstrate that there are better ways to design AI. Right now it's kind of brute force, and incredibly inefficient

    • @Coach-Solar_Hound · 2 months ago · +36

      Except we perceive images for the entirety of our lives, every waking moment. The number of frames we see in a day is disputed; however, you can quickly imagine how these pile up. Reaching a billion in a lifetime may be possible - even at 20 years we may be approaching around a billion images seen in our waking days, if not more. Small moments of perception (not necessarily visual) may leave an impact (emotional or otherwise). This then results in creativity.

    • @keithdafox2257 · 2 months ago · +7

      @@Coach-Solar_Hound I never thought of that, but you do have a point there. Still, an AI can sift through many more frames on a specific topic than we can, yet it still takes a lot. But we also do have an understanding of the world.
      I read somewhere about an AI system that first learned, via simulations, how physics works - understanding 3D objects and whatnot - and was then able to learn a topic much more efficiently than one that hadn't. But I don't recall the article, so who knows. I do feel like LLMs are kind of a brute-force method of training, but I also definitely don't understand how they work well enough, so who knows. It will be interesting

    • @defaulted9485 · 2 months ago · +7

      @@Coach-Solar_Hound Correction: you perceive images when your brain isn't dozing off. Your conscious brain only learns one thing at a time and dumps the rest of the noise.
      AI eats everything up because it's a server farm. It processes 100 images per CPU per second, in a server made out of hundreds of CPUs.
      If you processed every piece of data like AI does, your brain would have a seizure and dump the rest of the information. And that's not counting tunnel vision, the importance of peripheral vision, spectrum perception, the object of focus, and the other ways your brain dumps information from the visible field of view to save memory.
      It's far different.

    • @Coach-Solar_Hound · 2 months ago

      @@defaulted9485 That's fair, but our subconscious brain and perception are still filtering, categorizing, and receiving all of this data.
      It's just that our system for cataloging and interpreting visual data has had so many years of evolution that it has become this advanced and efficient.
      There's definitely a big difference in retention between active processing by the conscious brain and simply perceiving. But I was arguing more that the number of images we perceive through our lifetime is quite high. There are definitely layers to this, and the importance of the abstract representations we're able to make and share is not to be understated.
      Furthermore, I don't really know how much our unconscious brain influences the conscious brain, but there is definitely a non-negligible impact.
      The advanced filtering and cataloging is what makes us so special as a species anyway. The lack of semantic understanding is the largest thing that currently sets NNs apart from us.
      In my interpretation, current image-based systems are really just advanced enough to mimic the following: encode visual data in some lower-level (compact) representation, and recall from this representation into visual data. Much akin to a memory.

    • @user-ks3gz2bs5e · 2 months ago · +2

      @@defaulted9485 A computer learns one bit at a time; our brains learn multiple × multiple things at a time, instantaneously.
      Our brains do not actually dump noise; they turn it down but keep working on everything received, from our senses to our memories to imagination - which is of course how we create.

  • @SKLightenUpNow · 2 months ago · +57

    Please, would you put under the video the references of the sources you use? A study from Japan, another from France - please, give us the links! Thank you.

    • @ThehakPlay · a month ago · +4

      They are in the video, bro. Right under both of those studies are arXiv citations that you can easily Google. If you aren't motivated enough to Google them, you were not motivated enough to read and learn from an academic paper anyway

  • @rubenmahrla9800 · 29 days ago

    I have been using Gemini's free version since its release, and I have noticed a sharp decline in reliability and a massive uptick in refusals to answer questions, i.e. ignoring questions about some very public information such as laws, etc.

  • @chrishoyt7548 · 17 days ago · +1

    Indeed, thank you.
    Chris

  • @BraydonAttoe-xs4yg · 2 months ago · +115

    Surprised we aren't already forcing watermarks on AI content. Actually blown away. Like giving a kid a straw house and fireworks and not expecting a fire 😊

    • @adamshinbrot · 2 months ago · +13

      Who would force it? Who would enforce it? How?

    • @esbensloth · 2 months ago · +11

      How would you even watermark plain UTF-8 text like what LLMs produce and I am typing now?
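For what it's worth, there are published proposals for watermarking plain text statistically, e.g. "green list" schemes: the generator biases its word choices toward a pseudo-random subset of the vocabulary keyed on the preceding word, and a detector simply counts how often that subset was hit. A toy sketch of the idea, where the made-up vocabulary and the random candidate sampling stand in for a real model's token choices:

```python
import hashlib
import random

def is_green(prev, word):
    # Pseudo-randomly mark ~half the vocabulary "green" for each context word.
    return hashlib.sha256(f"{prev}|{word}".encode()).digest()[0] % 2 == 0

def green_fraction(words):
    # Detector: fraction of consecutive word pairs that landed on the green list.
    pairs = list(zip(words, words[1:]))
    return sum(is_green(a, b) for a, b in pairs) / len(pairs)

vocab = [f"w{i}" for i in range(1000)]
random.seed(7)

def generate(n, watermark):
    words = ["start"]
    for _ in range(n):
        candidates = random.sample(vocab, 8)  # stand-in for a model's top choices
        if watermark:
            greens = [w for w in candidates if is_green(words[-1], w)]
            words.append(greens[0] if greens else candidates[0])
        else:
            words.append(candidates[0])
    return words

plain = generate(500, watermark=False)
marked = generate(500, watermark=True)
print(f"green fraction, unmarked: {green_fraction(plain):.2f}")  # near 0.5
print(f"green fraction, marked:   {green_fraction(marked):.2f}")  # far above 0.5
```

The detector needs only the text and the hashing key, so no record of the generation has to be kept - though paraphrasing or retyping can wash the signal out.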

    • @BraydonAttoe-xs4yg · 2 months ago · +4

      @esbensloth use those intellectual problem solving skills we humans have and deduce that I'm referring to the concept of a watermark. Or at least I figured those reading would have assumed that. My bad

    • @BraydonAttoe-xs4yg · 2 months ago · +5

      @@adamshinbrot people said that same thing before we had firefighters, roads, schools... etc

    • @BaddeJimme · 2 months ago · +6

      If the real beneficiaries of mandatory watermarking turn out to be people that train AIs, then I'm against it.

  • @Mihi967 · 2 months ago · +14

    Very interesting findings there; they suggest that while initial models create biases, refined models may also create an averaging bias.

  • @theEisbergmann · 26 days ago · +1

    I hope it gives universities a kick to start bringing individuality back into academia. The number of students I've heard say "whatever, I'll just ChatGPT it and work over the bumps" is staggering.

  • @nithinrao7191 · a month ago · +1

    It seems like it's only a matter of scale and fixing those issues with current models.

  • @Mike__G · 2 months ago · +15

    This issue has occurred to me for quite a while. I have worked with Big Data extensively and had brief real world experience with AI development. AI’s reuse of AI-generated data seems highly likely to result in a “creativity asymptote.”
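The "creativity asymptote" is easy to reproduce in a toy experiment (my own illustration, not taken from the studies in the video): fit a trivial statistical model to some data, sample a new dataset from it while slightly under-representing the tails, retrain on that output, and repeat. The spread of the data collapses generation by generation:

```python
import random
import statistics

def retrain_on_own_output(data, n=2000):
    # Fit a toy "model" (just mean and spread) to the data, then sample a fresh
    # dataset from it, dropping rare/extreme samples -- mimicking the tendency
    # of generative models to under-represent the tails of what they saw.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    out = []
    while len(out) < n:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= 2 * sigma:
            out.append(x)
    return out

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]  # "human-made" data, spread ~1.0

spreads = [statistics.pstdev(data)]
for generation in range(10):  # each generation trains only on the previous output
    data = retrain_on_own_output(data)
    spreads.append(statistics.pstdev(data))

print(f"spread of original data:     {spreads[0]:.2f}")
print(f"spread after 10 generations: {spreads[-1]:.2f}")  # much smaller: variety collapses
```

The tail-clipping step stands in for the documented bias of generative models toward typical samples; without some such bias the collapse is slower, but the direction of the feedback is the same.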

    • @PanduPoluan · 2 months ago

      The issue is that "creativity" is totally the wrong word to describe what AI does.
      An AI is currently a glorified summarisation machine with weighted forecasting ability. It has no capacity to become creative. It can only extrapolate, with zero understanding of what it is extrapolating.
      AI bros will defend AI tooth and nail to pull in more funding before they bail out. Just like crypto and NFTs. GenAI is the "scam du jour".

  • @calimon00 · 2 months ago · +26

    I’ve been saying this to friends for about a year now. I’m glad I’ve finally run into an expert identifying and addressing this potential problem.

    • @covalentbond7933 · 2 months ago · +9

      I hope your intuition leads to wealth and happiness bro, make sure to use it well

    • @11lvr11 · a month ago · +1

      Same

  • @billdavis5483 · 5 days ago · +2

    I think Frank Herbert might have already told us the eventual solution in Dune.

  • @zacheray · 7 days ago

    There’s a chaos setting in Midjourney. I imagine that modifier will be available in everything. It works well for randomness although the weirdness is a bit much

  • @rishyrish6508 · 2 months ago · +11

    It's already happening on YouTube: the same videos with different thumbnails, usually with one or two words changed

  • @2bfrank657 · 2 months ago · +69

    I kind of wonder if this problem actually started with the widespread use of the internet. We went from communicating with books, which had to meet a certain standard before the expense of publishing could be justified, to zero-cost sharing of opinions on the internet, to having machines lap up these opinions and feed them back to us. Each of the above steps involving less rigour than that which precedes it.

    • @user-iv5gy3rc2b · 2 months ago · +9

      You're on to something. Everybody is an expert on the internet, even 10-year-olds and meth heads. Used to require some credentials to publish and teach others or at least experience and actual knowledge as opposed to opinions.

    • @mikemondano3624 · 2 months ago

      Yes, the truth and lies are now on equal footing. The village idiots that we tolerated compassionately now have joined together to form political and social blocs. We might even begin to question Silicon Valley's idea that everything they come up with is purely good.

    • @mikemondano3624 · 2 months ago · +1

      @@user-iv5gy3rc2b Opinions are fine so long as they are correct.

    • @Reach41 · 2 months ago · +7

      Books on flying saucers, ancient space aliens building the pyramids, etc. have been published for at least 70 years... I'll bet one could get their horoscope reading from an online AI today, and perhaps a tarot card reading.

    • @fastestdraw · 2 months ago · +13

      I'd disagree - you only need to open a random Victorian book that isn't a 'classic' to see how little rigour went into the majority of written work.
      It's survivorship and recency bias. Easy to remember the classics, but pulp fiction gets pulped.
      We don't exactly remember the Victorian 'here are detailed descriptions of this week's executions and grisly crimes' newspapers, but 'highly embellished true crime podcasts' are exactly the same thing.
      Ditto with 'news' that was basically made up - to the point that a lot of the British Empire's decisions in India were highly influenced by people claiming the earth was hollow, or writing entirely fictional accounts of the country while claiming to have been there.
      People have made terrible decisions on bad information for a long time. The main change AI is causing is that you can no longer say 'they probably didn't write three thousand pages and provide detailed illustrations of something obviously false'.

  • @Roboartist117 · a month ago · +2

    AI as it currently stands will become a generalization of people. If it doesn’t become a unique individual, or become better at imitating us, we’ll just learn to recognize them as we grow up and live with them.

  • @charlesdelajungle9473 · 10 days ago

    Never heard this before! That's really interesting.
    I'm sure we'll hear more about it; it sounds like a nightmare.
    Thanks

  • @chw1tt · 2 months ago · +13

    Exactly! I've been wondering about the potential for this problem. Thanks for pointing it out.

  • @LaminarRainbow · 2 months ago · +8

    I wonder if the generated elephants look so similar because generated images usually match fixed sample sizes (512x512, 1024x1024), which only leaves so much room for good compositions - and whether with larger models we might see this change a bit.

    • @phattjohnson · 2 months ago

      That example was from 2 years ago too. I've been playing around with "AI" art generation lately.. you do get the odd extra finger or third leg (giggle) but that's half the charm of it :P

  • @IvoryEmbassy · a month ago · +1

    Great stuff! I've highlighted this spiraling feedback loop on my blog and in one of my videos, and I'm happy to see similar thoughts here. Since the launch of ChatGPT, content creators, including science and medical communicators, have feared job loss due to generative AI. I've been more optimistic, saying that communicators with original ideas will thrive in an increasingly pale and monotonous information landscape. Thanks!

  • @stephendedalous4363 · 29 days ago

    Hi, would it be possible to do a round-up of the current peer-reviewed material regarding the spike protein of coronavirus? I find it hugely difficult to approach the material because of the vested interests and the difficulty of appraising academic papers. As an alternative, a round-up of how to approach such an issue in terms of assessing the material, including how to screen papers and thus establish credibility, etc.

  • @fen4554 · 2 months ago · +6

    As an 80s-90s kid, I just wanted to say your thumbnail looks like artwork for a Game Boy game with that left stripe.

    • @EyMannMachHin · 2 months ago

      For some reason I noticed that right away when Sabine started using this picture format, but was afraid to ask. 🤣

    • @GANONdork123 · 2 months ago · +1

      I thought I was the only one lol

  • @TheEVEInspiration · 2 months ago · +4

    1:10 Why so surprised?
    It's well known that in echo chambers all differentiating opinion/perception gets eliminated.
    And when AI follows the input data it is given, it will converge to a consensus in order to establish its rules.
    This is also why "learning the rules" works; the randomness is just there to make it less sensitive to small input variations.

  • @rasmustorkel9568 · a month ago · +2

    Great video. This problem may be a teething problem, though. After the computers started beating the top human Chess players, Go players like myself felt smug. We said things like "Chess is about crunching through possibilities. Go requires real intelligence." For years we annoyed chess players with this sort of talk, citing the inability of computers to beat the top human Go players as proof. This lasted about 18 years until AlphaGo came along in 2016. Important point: AlphaGo does not play like humans, except faster and more accurately. It came up with some genuinely novel moves.
    So, I would not take any problems that AI is now experiencing as an accurate predictor of what AI will be like in another two decades or so.

    • @illarionbykov7401 · a month ago · +1

      Amazingly, your comment is getting ignored. One of the biggest problems with AI is that mainstream reporting on AI has been woefully incomplete and ignorant for decades. Even most AI professionals know only the bits and pieces they work on. Very few people see the big picture, and our news media are largely responsible by failing to keep us informed. Simply catching people up on what's already been achieved in the AI field is a huge task.

    • @rasmustorkel9568 · 29 days ago · +1

      @@illarionbykov7401 Yes. We should work on the assumption that in the long term AI will be limited by what humanity allows and not by what is technically possible. And then we should think about and discuss what we will allow.
      Clearly marking AI generated stuff, as Sabine suggested, is a good start but not nearly enough.

  • @dmitriydanilov6367 · 23 days ago · +1

    Thank you for the video! I am not an expert on AI, but I used to work with big data. It had never occurred to me that AI collapse due to low-quality input might be an issue, but it actually makes sense.
    I would like to point out that adding more randomness might not even be a viable solution to this problem, since all the existing random number generators are in fact pseudo-random (the random result can be predicted if you know the algorithm).
    Guess we will see how it plays out in the future
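The point about pseudo-randomness can be made concrete with a toy linear congruential generator, the classic textbook PRNG family (cryptographic and hardware RNGs are specifically designed to resist this kind of prediction): because the next state depends only on the current one, seeing a single output is enough to predict the entire rest of the stream.

```python
# Constants are the well-known Numerical Recipes LCG parameters.
M, A, C = 2**32, 1664525, 1013904223

def lcg(seed):
    # Classic linear congruential generator: next = (A * state + C) mod M.
    state = seed
    while True:
        state = (A * state + C) % M
        yield state

alice = lcg(42)         # the "random" source
observed = next(alice)  # an attacker sees a single output...

eve = lcg(observed)     # ...and can clone the stream from that point on
assert [next(alice) for _ in range(5)] == [next(eve) for _ in range(5)]
print("every subsequent value predicted")
```

Because each yielded value is itself the full internal state, reseeding with one observation puts the clone in perfect sync with the original generator.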

  • @Fido-vm9zi · 2 months ago · +7

    I absolutely love reading comments & knowledge shared by people. Seems like a computer or program doesn't really know the world or have discernment. Still pretty interesting & useful.

    • @aktchungrabanio6467 · 2 months ago

      Comments are bullshit though

    • @Fido-vm9zi · 2 months ago

      @@aktchungrabanio6467 some

  • @tobykelsey4459 · 2 months ago · +25

    One potentially positive side-effect of this "averaging effect" of AI output - if it continues - is that creative people who want to distinguish their output from the common generative stuff will be forced to be more individualistic and idiosyncratic to be distinct and valuable. Of course if generative output is then trained on their later output this becomes an "arms race".

    • @manutosis598 · 2 months ago · +1

      Best outcome, we get to laugh at obama pissing at mr beast skibidi sigma toilet and it doesn't steal jobs

  • @Chlocean · a month ago

    Didn't even think of AI art cannibalism. That is such an interesting thought experiment. Thanks for the video and I'll be back for more!

  • @james-kh7oi · 2 months ago

    Nice cover, Sabine. Thanks

  • @mhayato3 · 2 months ago · +31

    This reminds me of something that happened in the bicycle industry: sales were declining, so they "creatively" "invented" the 29" wheel and stopped producing 26".

    • @petter9078 · 2 months ago · +7

      Sounds like something Apple could do.

    • @seraph4581 · 2 months ago · +10

      29" wheels are better though, especially for climbing, due to physics: bigger lever = less effort needed. 29" wheels are actually just wider 700c wheels, which road bikes had been using for decades at that point.

    • @gedeonducloitre-delavarenn8106 · 2 months ago · +2

      How does the size of the wheel improve the efficiency? Why not then go up to 35", 40", or even 50" wheels? Why didn't we stick with penny-farthings?

    • @p60091 · 28 days ago

      @@gedeonducloitre-delavarenn8106 More momentum, lower speed, better for going further - with diminishing returns. Penny-farthings were fixed-gear, difficult to ride, difficult to balance, and easier to break, among other issues.

  • @murex0909 · 2 months ago · +15

    I love listening to your channel; you explain the most complex subjects in a clear and simple way.
    Thank you and keep up the great channel
    Love it

  • @jameslindsay7846 · a month ago

    Very neat. I intuitively assumed it would go the divergent route... but convergent?? That's a surprise. 😮

  • @freecat1278 · a month ago · +1

    I was just called by an AI telemarketer. I am sick & this was reflected in my voice when I answered the phone. The AI tried to relate to me by matching the quality of my voice. It sounded like a drill sergeant or concentration camp guard mocking me.

  • @phattjohnson · 2 months ago · +10

    3:20 - this would've been the PERFECT time to touch on how Google's Gemini went the completely OPPOSITE direction when prompted :P
    Perhaps this illustrates how much the human hand is still in control of how these AI models operate, as opposed to datasets becoming derivative.

    • @BobCat0 · 2 months ago

      The White Germans would prefer you to think that Nazis were not White and German.

  • @Sshodan · 2 months ago · +9

    I think top companies will eventually start using commercial data sets - libraries of images they DO own the rights to and that are guaranteed to have human authors. A lot of people who work on stock images today will be working on expanding those data sets. That is what Adobe is already doing, and it is going to be a huge new business and employment opportunity for creative people.

  • @sueelliott4793 · a month ago · +2

    I ♥ all your videos, thanks for sharing your knowledge.

  • @brichan1851 · 7 days ago

    This is something touched on in Halo regarding Cortana and other "smart" A.I.s. In the Halo universe, this is known as "rampancy."