Has Generative AI Already Peaked? - Computerphile

  • Published 2 Dec 2024

Comments • 3.6K

  • @Computerphile
    @Computerphile  6 months ago +105

    Bug Byte puzzle here - bit.ly/4bnlcb9 - and apply to Jane Street programs here - bit.ly/3JdtFBZ (episode sponsor)

    • @worldofgoblins
      @worldofgoblins 6 months ago +22

      Could you explain what “There exists a non-self-intersecting path starting from this node where N is the sum of the weights of the edges on that path” means? Is the end node for the “path” one of the purple nodes?

    • @MacGuffin1
      @MacGuffin1 6 months ago +4

      Humans do fine with less data; volume of data is clearly not the issue. 'Has generative AI already peaked?': Not even close...

    • @mr_easy
      @mr_easy 6 months ago +3

      @@worldofgoblins Yeah, same doubt

    • @squirlmy
      @squirlmy 6 months ago +6

      But things like LLMs don't work at all like human intelligence. That's like saying "people make ethical decisions all the time, so my pocket calculator should have no problem with issues of morality."

    • @paulmichaelfreedman8334
      @paulmichaelfreedman8334 6 months ago +1

      @@MacGuffin1 There's still a ways to go, but I do think there is an asymptote. Not before this type of AI has become as useful and intelligent as the droids in Star Wars, but not much beyond that either. It remains an input-output system, defeating any chance of it ever evolving human-like emotions, for example. As for human-like AGI, I think Ben Goertzel is much in line with that, and he says it is still quite some time away, as such an AI is radically different from transformer and generative AIs.

  • @miroslavhoudek7085
    @miroslavhoudek7085 6 months ago +6842

    As a sort of large trained model myself, running on an efficient biological computer, I can attest to the fact that I've been very expensive over the decades and I certainly plateaued quite some time ago. That is all.

    • @ExecutionSommaire
      @ExecutionSommaire 6 months ago +141

      Haha, I totally relate

    • @avi7278
      @avi7278 6 months ago +55

      fr fr

    • @SteinGauslaaStrindhaug
      @SteinGauslaaStrindhaug 6 months ago +170

      @@avi7278 Apparently YouTube thinks "fr fr" isn't English, so it offered to translate it… it translates to "fr fr", apparently 🤣

    • @AnimeUniverseDE
      @AnimeUniverseDE 6 months ago +36

      I get that you were just making a joke, but the current version of AI could not be further from humans.

    • @salasart
      @salasart 6 months ago +1

      XD This was hilarious!

  • @marcusmoonstein242
    @marcusmoonstein242 6 months ago +4432

    You've just described the problem being experienced by Tesla with their self-driving software. They call it "tail-end events", which are very uncommon but critical driving events that are under-represented in their training data because they're so rare.
    Tesla has millions of hours of driving data from their cars, so the software is better than humans in situations that are well-represented in the data such as normal freeway driving. But because the software doesn't actually understand what it's doing, any event that is very uncommon (such as an overturned truck blocking a lane) can lead to the software catastrophically misreading the situation and killing people.
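The under-representation argument above is easy to put numbers on. A back-of-the-envelope sketch in Python; the event rate and corpus size are invented for illustration, not real fleet statistics:

```python
import math

# Illustrative assumptions (not real fleet statistics): one "overturned
# truck" style event per 10 million driving hours, and a training corpus
# of 5 million hours of video.
event_rate_per_hour = 1e-7
corpus_hours = 5e6

expected_examples = event_rate_per_hour * corpus_hours  # Poisson mean
p_none_at_all = math.exp(-expected_examples)            # P(corpus contains zero)

print(f"expected examples in corpus: {expected_examples:.2f}")  # 0.50
print(f"chance the corpus has none:  {p_none_at_all:.1%}")      # ~60.7%
```

Even with millions of hours of data, the rarest events can plausibly never appear in training at all, which is the comment's point.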

    • @bened22
      @bened22 6 months ago +298

      "Better than humans" (X)

    • @andrej2375
      @andrej2375 6 months ago +84

      It's better than humans AND we'll keep working on it

    • @James2210
      @James2210 6 months ago +430

      @@bened22 It's like you commented without actually reading what it says

    • @bened22
      @bened22 6 months ago +197

      @@James2210 I read it but I don't even believe the softened claim.

    • @ids1024
      @ids1024 6 months ago +136

      In theory, an advantage of self-driving cars *could* be that the software has "experience" with many of these uncommon situations that few human drivers would, which could save lives when a quick reaction is needed, or the best response is something most people wouldn't think to do. But that may be decades away still.

  • @Rolox01
    @Rolox01 6 months ago +2643

    So refreshing to hear grounded academics talk about these sorts of things and take a realistic look at what's happening. Feels like everyone else is willing to say just about anything about generative AI.

    • @Dino1845
      @Dino1845 6 months ago +129

      It's only now that we're past the initial hype that I feel we're learning of the monstrous cost and technical limitations of this technology.

    • @harrylane4
      @harrylane4 6 months ago +186

      @@Dino1845 I mean… people have been talking about that since the start, you just weren't listening.

    • @the2theonly672
      @the2theonly672 6 months ago +82

      @@harrylane4 Exactly, you can't clickbait that like you can with "this crazy new AI will take your job".

    • @snickerdoooodle
      @snickerdoooodle 6 months ago +30

      People can still talk about the ethics and ramifications that AI has on the human element without your permission, just saying.

    • @the_mad_fool
      @the_mad_fool 6 months ago

      It's because all the crypto grifters jumped onto the AI bandwagon, so there's just a ton of scammers and liars flooding the air with their phony claims. Reminds me of back when stem cell research was the next big medical thing, and suddenly people were coming out with "stem cell anti-aging cream" made from "bovine stem cells."

  • @tommihommi1
    @tommihommi1 6 months ago +8010

    generative AI has destroyed internet search results forever

    • @vincei4252
      @vincei4252 6 months ago +1087

      Nah, Google did that because of their greed.

    • @priapulida
      @priapulida 6 months ago +9

      @@vincei4252 .. and because they are woke
      (edit: I thought this is obvious, but apparently not, use the simple prompt "Google and DEI" to get a summary)

    • @Alice_Fumo
      @Alice_Fumo 6 months ago +214

      Come on, any 100-IQ+ human with half an hour of time could figure out how Google or whoever could largely fix those issues if they really wanted to.
      Also, the grapevine says that OpenAI search gets announced next Monday, so maybe there'll finally be some competition. Take buckets of salt with this though; I don't know where I heard it, but I'm quite sure it wasn't a trustworthy source.

    • @tommihommi1
      @tommihommi1 6 months ago +1250

      @@yensteel the point is that trash generated by AI has flooded the results

    • @no_mnom
      @no_mnom 6 months ago +258

      Add before:2023 to the search

  • @Posiman
    @Posiman 6 months ago +1073

    This is the computational side of the argument for an AI peak.
    The practical side is that the amount of existing high-quality data in the world is limited. The AI companies are already running out.
    They theorize about using synthetic data, i.e. using model-generated data to train the model. But this leads to model collapse, or "Habsburg AI", where the output quality quickly starts deteriorating.
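The collapse mechanism is easy to demonstrate in miniature. A toy sketch: "training" is memorizing an empirical distribution, "generating" is resampling it, and each generation trains only on the previous generation's output. The Gaussian data and the resampling stand-in are assumptions for illustration; real models are far richer, but the loss of tails is the same qualitative effect reported in the model-collapse literature.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)  # generation 0: "human" data

for gen in range(1, 8):
    # "train" = memorize the empirical distribution; "generate" = resample it
    data = rng.choice(data, size=10_000, replace=True)
    # Once a rare extreme value drops out of a generation, it can never return,
    # so the observed tails only ever shrink.
    print(f"gen {gen}: max |x| seen = {np.abs(data).max():.2f}")
```

Exact numbers depend on the seed, but the maximum is non-increasing by construction: the rare tail of the distribution is what dies first, generation by generation.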

    • @ShankatsuForte
      @ShankatsuForte 6 months ago +25

      This isn't even remotely true. Microsoft has already trained a GPT 3.5 tier model entirely on synthetic data.

    • @chja00
      @chja00 6 months ago +214

      I absolutely love the fact that someone named it Habsburg AI.

    • @ash7324
      @ash7324 6 months ago +82

      GPT-8 is going to have a big weird jaw and a limp

    • @funginimp
      @funginimp 6 months ago +8

      The only reason synthetic data works in practice but not in theory is that it makes up for the transformer algorithm not being optimal and other training data being worse quality.

    • @nathanshaffer3749
      @nathanshaffer3749 6 months ago +68

      Personally, I believe that we have to improve the quality of data. We are nowhere near the quality that humans train on. We perceive the world in 4 dimensions (3 spatial + time) and have additional context layered into it, such as sound and touch. The difference between observing pictures of trees and being able to walk around a tree every day as it grows, watching the way it moves in the wind and seeing the difference in lighting throughout the day, is orders of magnitude in data quality versus static 2D images of trees. And this completely ignores all of the human-to-human interactions involving direct instruction, as well as being able to ask questions about your world to get specific information.
      Imagine living your whole life floating in empty space and being taught through a series of pictures paired with words. No instruction on how to read. Just here's a picture and here is its description; you work it out. Maybe you have continuous access to all of the data, to be able to compare and contrast, so you don't have to rely on human memory to analyze it. Even given an immortal human, I think we would plateau pretty quickly. Any critical thinking that this human acquired would, I feel, be attributable to a base level of biological programming that we are born with. Which is not much, I think.

  • @Spnart
    @Spnart 6 months ago +109

    You don't understand: a bunch of people with finance degrees on Reddit told me general AI is just around the corner, because ChatGPT responded in such a way that it "held a conversation with them, therefore it's alive". It's almost like the less familiar you are with computer science and its fundamentals, the more you treat technology as a form of magic.

    • @mrosskne
      @mrosskne 6 months ago

      If something seems intelligent to you, it is. There is no experiment you can perform to show otherwise.

    • @benjaminloyd6056
      @benjaminloyd6056 4 months ago +11

      "Any sufficiently advanced technology is indistinguishable from magic"

    • @Darca1n
      @Darca1n 3 months ago +8

      @@benjaminloyd6056 And amusingly, the less you understand technology on a basic level, the lower the bar for "that's not technology, it's magic" gets.

    • @jared_bowden
      @jared_bowden 3 months ago +9

      I got a STEM degree not too long ago, and I'm genuinely surprised how, eh, "not technical" a lot of people in finance and business are, even though their jobs often demand technical thinking. However, the news I've heard is that even Wall Street is starting to see through the smoke veil and realize a) generative AI might reach a plateau, and b) even if it doesn't, how exactly are we going to make money off of this? They don't just need to make money, they need to make a Ton of money to recoup costs, and it turns out generating money is harder than generating cat images.

    • @TehIdiotOne
      @TehIdiotOne several months ago

      It's almost as if LLMs are literally designed to give you the illusion that they are intelligent.

  • @leckst3r
    @leckst3r 6 months ago +1822

    10:37 "starts to hallucinate"
    I recently heard it expressed that AI doesn't "sometimes" hallucinate. AI is always hallucinating and most of the time its hallucination matches reality/expectation.

    • @sebastiang7394
      @sebastiang7394 6 months ago +413

      Yeah, but the same could be said of humans to some extent. We always have our own model of the world that is flawed and doesn't match reality perfectly.

    • @fyang1429
      @fyang1429 6 months ago +29

      AI is just a very, very Clever Hans, so that does make sense

    • @drew1564
      @drew1564 6 months ago +119

      If a hallucination matches reality, it's not a hallucination.

    • @fartface8918
      @fartface8918 6 months ago

      @@drew1564 Untrue

    • @Brandon82967
      @Brandon82967 6 months ago +16

      Not true. Someone told GPT-3, which is way worse than GPT-4, that it could call the user out if they asked a nonsense question. It was able to answer sensible questions like "who is the 40th president of the US" and called the user out when they asked nonsense like "when is the spatula frighteningly spinning".

  • @michaelujkim
    @michaelujkim 6 months ago +3136

    even if you took the whole internet as a dataset, the real world is orders of magnitude more complicated.

    • @B.D.E.
      @B.D.E. 6 months ago +362

      A simple but very important point that's easy to forget with all the optimistic ambitions for AI.

    • @mharrisona
      @mharrisona 6 months ago +16

      I appreciate your comment sir

    • @dahahaka
      @dahahaka 6 months ago +110

      Which is part of why people are working on merging robotics and ML.
      However, nobody is trying to let these things train on the real world; quite the opposite. It turns out that training on the "whole internet" is vastly more efficient and transfers zero-shot into the real world without any problems.
      It's funny how you assume humans have perfect models of the world... you just need very, very rough approximations.

    • @Cryptic0013
      @Cryptic0013 6 months ago +127

      Yup. Look at the behaviors and beliefs of human beings who are, as they say, chronically online. Getting 100% of your information from the internet's not a great anchor to reality

    • @freezerain
      @freezerain 6 months ago +42

      But an AI doesn't need to be perfect, just better than an average human. If the dataset contains all image recordings, all books and movies and music, all news and TikToks and comments, that could be enough to be better than a human at some tasks.

  • @LesterBrunt
    @LesterBrunt 6 months ago +1245

    People just completely underestimate how complex human cognition is.

    • @BrotherTau
      @BrotherTau 6 months ago +28

      It's complex because we don't understand it.

    • @Chris-fn4df
      @Chris-fn4df 6 months ago +156

      @@BrotherTau It is complex because it is full of variables. That is what complexity is. There are many things described as "complex" that we have excellent understanding of.

    • @BrotherTau
      @BrotherTau 6 months ago +19

      @@Chris-fn4df Point well taken. However, my point is rather that human cognition and consciousness could have a very simple explanation. We think it's "complex" because we don't understand it.

    • @murathankayhan2312
      @murathankayhan2312 6 months ago +86

      @@BrotherTau Nah. Human cognition and consciousness will never have a "simple" explanation. Even though they will have an explanation in the future, it will not be a "simple" one. You are underestimating our brains.

    • @patrickirwin3662
      @patrickirwin3662 6 months ago +20

      ​@@BrotherTau "We"? The most unexamined term used by the true believers in scientism. What's trying to understand human consciousness and cognition is human consciousness and cognition. When your screw turns itself without a driver, when your cornea sees itself without a mirror, when your tastebuds know what they taste like, you will understand it all.

  • @RobShocks
    @RobShocks 6 months ago +663

    Your ability to articulate complex topics so simply, ad-lib, with very few cuts and edits, is amazing. What a skill.

    • @philippmillanjochum1839
      @philippmillanjochum1839 6 months ago

      Facts

    • @Pabliski577
      @Pabliski577 6 months ago +9

      Yeah it's almost like he talks to real people

    • @bikerchrisukk
      @bikerchrisukk 6 months ago +1

      Yeah, really understandable to the layman 👍

    • @RobShocks
      @RobShocks 6 months ago +2

      @@docdelete Teaching you say?! What a cool concept. Thanks for sharing.

    • @tomsenkus
      @tomsenkus 6 months ago

      He’s obviously AI 😮

  • @ekki1993
    @ekki1993 6 months ago +754

    As a bioinformatician, I will always assume that the exponential growth will plateau sooner rather than later. Sure, new architectures may cause some expected exponential growth for a while, but they will find their ceiling quite fast.

    • @visceralcinema
      @visceralcinema 6 months ago +65

      Thus, the exponential curve becomes logarithmic. 🤭🤓

    • @arthurdefreitaseprecht2648
      @arthurdefreitaseprecht2648 6 months ago +62

      @@visceralcinema The famous logistic curve 😊

    • @typeins
      @typeins 6 months ago

      Imagine: BlackRock created Aladdin, the supercomputer, back in the day, in the desert, protected by the American military in full force. And now we are talking about small companies (compared to that giant monster).

    • @ekki1993
      @ekki1993 6 months ago +29

      @@typeins Were you responding to a different comment there?

    • @jan7356
      @jan7356 6 months ago +5

      The curve he drew and explained was actually logarithmic in the amount of data (sublinear).
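For concreteness, the three curve shapes being debated in this thread can be sketched numerically; the constants below are arbitrary and purely illustrative:

```python
import numpy as np

x = np.logspace(0, 6, 7)                   # data volume: 1 ... 1,000,000 samples
exponential = 0.01 * np.exp(1e-5 * x)      # optimist: performance keeps accelerating
logarithmic = 0.1 * np.log10(x)            # pessimist: each 10x of data buys the
                                           # same fixed gain (diminishing returns)
logistic = 1 / (1 + np.exp(-(np.log10(x) - 3)))  # S-curve: fast rise, hard ceiling

for xi, e, lo, s in zip(x, exponential, logarithmic, logistic):
    print(f"{xi:>9.0f} samples | exp {e:8.2f} | log {lo:.2f} | logistic {s:.2f}")
```

On a log-x plot the logarithmic curve is a straight line and the logistic curve flattens into a plateau; the argument in the video is about which of these the empirical scaling data actually resembles.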

  • @3dartxsi
    @3dartxsi 6 months ago +74

    Years ago, all the talk about AI was that anything resembling proper "strong" AGI was likely decades off, if we ever managed to achieve it at all. This is largely due to A.) limitations imposed on computer hardware (as currently envisioned) by the laws of physics themselves, and B.) the fact that we didn't have a full understanding of how a human brain works, limiting our ability to replicate it in any functional way.
    Suddenly, Silicon Valley is selling AI as if we've reached that level, even though neither of the previous considerations has been addressed.
    This would be like someone you know discussing how they want to take a year off to go to Europe, despite not having the money to pay for the trip or a valid passport, and then suddenly saying they've bought plane tickets and will be leaving the country next week, even though those previous issues were never dealt with.

    • @thiagopinheiromusic
      @thiagopinheiromusic 5 months ago +4

      You make a very compelling point about the current state of AI development versus the hype surrounding it. The sudden surge in AI capabilities and the way it's being marketed can indeed seem premature, given the unresolved foundational challenges.

    • @bluedistortions
      @bluedistortions 3 months ago +6

      I remember, about 12 years ago, the covers of some big tech magazines boasting that computer scientists had modeled a snail's brain.
      Reading for more details:
      1. A snail doesn't have anything we can really call a "brain"; it's a bundle of nerve fibers, very limited in functionality.
      2. It took the full resources of one of the biggest computational supercenters in the world, consuming a town's worth of electricity, to do it.
      3. And it could only run at 1/4 speed.
      It really should make us a bit more humble about our technology, and more in awe of nature.

  • @djdedan
    @djdedan 6 months ago +876

    I'm not a child development specialist, so take this with a grain of salt, but what's interesting is that you can show a child one image of a cat - it doesn't even have to be realistic - and they'll be able to identify most cats from then on. They may mistake a dog for a cat and have to be corrected, but from then on they will be able to discern the two with pretty high accuracy. No billions of images needed.

    • @bened22
      @bened22 6 months ago +237

      Yes. Computer AI has nothing to do with human intelligence.

    • @joaopedrosousa5636
      @joaopedrosousa5636 6 months ago +271

      That brain was in fact trained with a vast amount of data. That baby's brain was created from DNA that guided the development of those nervous structures while it was in the mother's womb. The ancestors of that baby, going millions of years into the past, interacted with the world through visual means, and the genes of those more successful at reproducing encoded those useful visual perception and classification abilities.

    • @dtracers
      @dtracers 6 months ago +100

      What you are missing is the 3-4+ years of "training" on different cats and dogs the human has taken up to that point, plus millions of years of evolution that produced the best-learning subnetworks.
      That second piece is hard, because it's like running Google's AutoML over every possible dataset and every possible network architecture for an LLM.

    • @ParisCarper
      @ParisCarper 6 months ago +85

      Like someone else mentioned, evolution has selected for brains that can learn and adapt to new concepts quickly - especially very tangible concepts like different types of animals and how to easily recognize them.
      For the AI, you have to start from scratch. Not only do you have to teach it cats, but you have to teach it how to understand concepts in general.

    • @mrosskne
      @mrosskne 6 months ago +27

      @@ParisCarper what does it mean to understand a concept?

  • @ohno5559
    @ohno5559 6 months ago +142

    Everyone remembers the people who wrongly said the internet wouldn't transform our lives; no one remembers the people who correctly said the Segway wouldn't transform our lives.

    • @isiahs9312
      @isiahs9312 6 months ago +5

      I can still remember people telling me how social media would fail.

    • @ohno5559
      @ohno5559 6 months ago +72

      @@isiahs9312 Yeah, and you remember because social media succeeded. It's survivorship bias.

    • @morgant.dulaman8733
      @morgant.dulaman8733 6 months ago

      @@ohno5559 My guess: we're looking at the same boom, bust, and normalization we've seen for everything from tulips to cryptocurrency.
      A new product gets introduced -> it's viewed as the next big thing, with infinite possibilities and a money printer attached -> people overinvest -> something goes wrong and the bubble pops -> it collapses -> later, quietly, we see its actual potential... and it's usually less than suspected, but it still has its uses, and it slips into mundane use.

    • @vincemccord8093
      @vincemccord8093 5 months ago +30

      "Victory has a thousand fathers, but defeat is an orphan."

    • @squamish4244
      @squamish4244 3 months ago +1

      @@ohno5559 Clicking on this video itself is confirmation bias.

  • @Benjamin1986980
    @Benjamin1986980 6 months ago +166

    As a chemical engineer, this reminds me of a warning from my numerical methods professor: more data and a more complex model do not mean better prediction. Eventually, you will get to the proverbial curve-fitting of the elephant. This is where the new model fits everything absolutely perfectly but has zero predictive power, because you have now modeled in all the inherent chaos in the system.
    More complicated does not necessarily mean better.
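The professor's warning is the classic overfitting picture, and it fits in a few lines of Python. A minimal sketch with invented data: as polynomial degree grows, training error falls toward zero while test error typically blows up:

```python
import numpy as np

rng = np.random.default_rng(1)
true_f = lambda x: np.sin(2 * np.pi * x)

x_train = np.linspace(0, 1, 10)
y_train = true_f(x_train) + rng.normal(0, 0.2, x_train.size)  # noisy observations
x_test = np.linspace(0, 1, 200)

for degree in (1, 3, 9):  # degree 9 interpolates all 10 points exactly
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - true_f(x_test)) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-9 fit passes through every noisy point (the "modeled-in chaos") and is generally the worst predictor between them, which is the bias-variance trade-off the replies below name.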

    • @adaelasm6467
      @adaelasm6467 6 months ago +13

      That’s just basic bias-variance trade off though. Literally taught in Machine Learning 101. Overfitting is just as bad as underfitting.

    • @balasubramanianah969
      @balasubramanianah969 3 months ago +1

      I don’t understand how what you said is different from basic overfitting.

    • @Benjamin1986980
      @Benjamin1986980 3 months ago +1

      @@balasubramanianah969 It's not. Just two ways of saying the same thing

    • @srikanthan1000
      @srikanthan1000 3 months ago +1

      You just described the problem with string theory, at least according to popular opinion.

  • @peterisawesomeplease
    @peterisawesomeplease 6 months ago +331

    I think a key issue is that we are actually running out of high-quality data. LLMs are already ingesting basically all high-quality public data. They used to get big performance jumps by just using more data, but that isn't really an option anymore. They need to do better with existing data.

    • @jlp6864
      @jlp6864 6 months ago +110

      They're also now "learning" from AI-generated content, which is making them worse.

    • @Alan.livingston
      @Alan.livingston 6 months ago +27

      I worked on a system a while back that used parametric design to convert existing architectural plans and extrapolate them out into the 3D models needed to feed steel-frame rolling machines. The hardest part of what we did was accommodating the absolute garbage architects would put in the plans. Turns out that when a source isn't created with machine readability in mind, it's often difficult to do anything about it.

    • @peterisawesomeplease
      @peterisawesomeplease 6 months ago +69

      @@Alan.livingston Yeah, the problem of garbage, or at least badly structured, data is really clear in LLMs. Probably the most obvious example is that they never say "I don't know", because no one on the internet says "I don't know": people either respond or they say nothing. So the LLMs don't have any notion of uncertainty. Another related issue that comes up constantly is that LLMs will give the popular answer to a popular question when you actually asked a question with a slight twist on the popular one. For example, ask an LLM "what is an invasive species originating on an island and invading a mainland?". They all answer the much more popular reverse question. It's a data problem: the LLMs can't escape the overwhelmingly larger amount of data on the reverse question, because all they see is the text.

    • @quickdudley
      @quickdudley 6 months ago +1

      @@jlp6864 There are machine learning algorithms that aren't really affected by that kind of thing, but adapting them to text and image generation would be pretty tricky to do right.

    • @picketf
      @picketf 6 months ago +9

      @@peterisawesomeplease I asked ChatGPT 4 to calculate the distance to gamma-ray burst 080916C, one of the most violent time-distortion events ever on record, using a trifecta of 1. the cosmological formula for luminosity distance in a flat universe, 2. redshift, specifically to compensate for the time-dilation effect, and 3. emission-energy calculations. It responded three times: the first two answers it concluded, right after filling the page with formulas, were incorrect, and it restarted the answering process, presenting new numbers. I'd say it is a rather peculiar case, but for sure those 2 wrong answers, and the fact that it became aware of their fallacy AFTER formulating everything twice, are a testament to its indecision 😅

  • @supersnail5000
    @supersnail5000 6 months ago +938

    I'm surprised 'degeneracy' wasn't also mentioned in this - basically, as more AI-generated content leaks into the dataset, further training could actually lead to worse results. There are ways of encoding the output to show that the data was generated, but that likely won't hold up if edits were made to the data before it entered the training corpus.

    • @Raccoon5
      @Raccoon5 6 months ago +58

      AI-generated data is frequently used for training AI, and it has pretty decent results.
      I doubt what you say is true, but having the real world in the dataset is always important.
      That's not really a problem, though, since we are taking more and more videos and photos of the real world.

    • @TheManinBlack9054
      @TheManinBlack9054 6 months ago +70

      I do not think that the so-called "model collapse" presents an actual danger to AI advancement, as was shown by the Phi models. Models can be trained on synthetic data and perform well.

    • @existenceisillusion6528
      @existenceisillusion6528 6 months ago +180

      @@TheManinBlack9054 That synthetic data was carefully curated. The problem still exists if someone isn't careful enough.

    • @monad_tcp
      @monad_tcp 6 months ago +45

      @@Raccoon5 no, those are called adversarial models, they don't work that well

    • @pvanukoff
      @pvanukoff 6 months ago +17

      Think about pre-AI, when we just had humans learning. What "dataset" did they learn on? They learned from things created by the humans before them. If humans can learn from humans and generate new, interesting, innovative results, I don't see why AI can't do the same by learning from and building on data generated by other/previous AIs.

  • @OneRedKraken
    @OneRedKraken 6 months ago +213

    This is sort of confirming my suspicions of where this is all heading at the moment.
    When I understood why AIs are confused about human hands and limbs, I understood the biggest flaw with AI: it doesn't 'understand' anything. That's why, even though it's been fed tons of reference images and photos of humans, it still doesn't understand that the human hand has no more than 5 fingers.
    Why is that? Because its training data has pictures of people holding something with two hands, but where one hand is hidden by the angle/perspective. So the AI only sees one hand, but a bunch of extra fingers. Its conclusion: "a human hand can have up to 10 fingers". That's a really big hurdle to climb over.

    • @picketf
      @picketf 6 months ago +15

      The fingers problem has been fixed in DALL-E 3. Also, you can ask it for 3D models now and it will output a Blender script, which means it's being trained to link concepts to shapes.

    • @nbboxhead3866
      @nbboxhead3866 6 months ago +25

      A lot of the problems with limbs and fingers in AI-generated images happen because there isn't any pre-planning, so duplicates of something there's meant to be only one of appear easily. For example, if you have it generate an image of a man waving at you, there are several different positions his arms and hands could be in that would make a coherent image, and because the AI doesn't start out thinking "one arm here, fingers positioned like this..." and instead just generates an image from a function that returns pixel values independently of each other, you get separate parts of the image "thinking" they're the same arm.
      I guess DALL-E 3 must have some sort of pre-planning stage, which is why it doesn't fail at limbs and fingers as badly. (I say "as badly" because it still won't be perfect.)

    • @kirishima638
      @kirishima638 6 months ago +11

      Except this has largely been fixed now.

    • @TheMarcQ
      @TheMarcQ 6 months ago +75

      "Fixed" by programmers adding phrases to your prompts to make sure they include an appropriate number of fingers. That is a hotfix at best.

    • @picketf
      @picketf 6 months ago +2

      @@TheMarcQ Apparently polydactyly is a real condition that is statistically not that rare. Will Smith slurping noodles was not that long ago, and current AI is really leaps better.

  • @PristinePerceptions
    @PristinePerceptions 6 months ago +357

    The data we have is actually incredibly limited. We mostly use only 2D image data. But in the real world, a cat is an animal. We perceive it in a 3D space with all of our senses, observe its behavior over time, compare it all to other animals, and influence its behavior over time. All of that, and more, makes a cat a cat. No AI has that kind of data.

    • @swojnowski8214
      @swojnowski8214 6 months ago +7

      You can't recreate a 3D matrix using a 2D matrix; it is about dimensions. Development is about going from a lower to a higher dimension. You can do it if you are at the higher dimension, but not at the lower, unless you make up some stuff to plug the holes. That's why LLMs hallucinate; that's why we dream...

    • @burnstjamp
      @burnstjamp 6 months ago +56

      However, it's also true that humans can easily and accurately tell animals apart from a young age, even if shown only static images (or even abstract representations) of them. The fact that we have more senses and dimensions with which we can perceive input seems less important than the fact that the human mind simply has far more capacity for pattern recognition.
      I also don't think that introducing more variety of input would solve the issue presented in the video - only delay it. If 2D image data fails to produce exponential or linear improvement in a generalized model over time, I fail to see how 3D data, or sonic data, or temporal data, or combinations thereof would substantially change the reality of the problem.

    • @joelthomastr
      @joelthomastr 6 months ago +6

      Has nobody done the equivalent of plugging a large neural network into a Furby or an Aibo?

    • @celtspeaksgoth7251
      @celtspeaksgoth7251 6 months ago +2

      @@burnstjamp and born instinct

    • @picketf
      @picketf 6 months ago +3

      Well, AI is currently being trained on the billions of minutes being uploaded to YouTube. Imagine being able to say, at any point in 2023 or 2024, that you had watched every single cat video ever uploaded.

  • @tommydowning3481
    @tommydowning3481 6 months ago +381

    I love this content, where we get to delve into white papers with the convenience of a YouTube video, not to mention the genuine enthusiasm Mike always brings to the table.
    Great stuff, thanks!

    • @xX_dash_Xx
      @xX_dash_Xx 6 months ago +16

      Same here, +1 for the paper review. And I appreciate the pessimism - a nice change of pace from the autofellatio that Two Minute Papers does.

    • @skoomaenjoyer9582
      @skoomaenjoyer9582 6 months ago

      @@xX_dash_Xx I've had my fill of the generative AI hype too… "No, I'm not worried that my quality, new work will be replaced by a machine that produces heavily-represented, average work."

    • @blucat4
      @blucat4 6 months ago +2

      Agreed, I love Mike's videos.

    • @brianbagnall3029
      @brianbagnall3029 6 months ago

      @@blucat4 The videos are all right, but you can tell he really wants to pick his nose.

    • @lobstrosity7163
      @lobstrosity7163 6 months ago

      That paper will be displayed in the Museum of Human Civilization. The robotic visitors will shake their headoids at the naïveté of their late creators.

  • @marcbecker1431
    @marcbecker1431 6 months ago +74

    This is totally off-topic, but the quality of the X and Y axes of the graph at 5:43 is just stunning.

    • @johnpienta4200
      @johnpienta4200 4 months ago +5

      I thought this was sarcasm when I read it before getting to that point in the video - I was expecting wonky labels, honestly - but man, I don't think I could draw that well to save my life.

  • @Reydriel
    @Reydriel 6 months ago +1058

    5:35 That was clean af lol

    • @squirlmy
      @squirlmy 6 months ago +82

      I've never seen a better right angle drawn by hand!

    • @NoNameAtAll2
      @NoNameAtAll2 6 months ago +29

      @@squirlmy tbf, it's on gridded paper

    • @dBradbury
      @dBradbury 6 months ago +52

      @@NoNameAtAll2 I certainly couldn't do that that quickly and smoothly even if it were gridded!

    • @drlordbasil
      @drlordbasil 6 months ago +49

      he's an AI

    • @BillAnt
      @BillAnt 6 months ago +5

      AI is more like crypto with diminishing returns. There will be incremental improvements, but less and less significant compared to the original starting point.

  • @TheGbelcher
    @TheGbelcher 6 months ago +597

    “If you show it enough cats and dogs eventually the elephant will be implied.”
    Damn, that was a good analogy. I’m going to use that the next time someone says that AI will take over the Earth as soon as it can solve word problems.

    • @tedmoss
      @tedmoss 6 months ago +10

      We haven't even figured out if we are at the limit of intelligence yet.

    • @WofWca
      @WofWca 6 months ago +3

      1:00

    • @Brandon82967
      @Brandon82967 6 months ago +15

      That's not true but if you show it enough Nintendo games and images of supervillains, it can put Joker in Splatoon.

    • @SimonFrack
      @SimonFrack 6 months ago +4

      @@Brandon82967 What about Darth Vader in Zelda though?

    • @Brandon82967
      @Brandon82967 6 months ago +2

      @@SimonFrack It's possible

  • @natey313
    @natey313 6 months ago +145

    Another factor that I think plays a huge part in this decline of AI art generators specifically is the decline of regular artists... As AI replaces and kills off artistic jobs, you reduce the dataset even further, as generative AI images require other 'look-alike' images to generate something... At some point, the available dataset will be so small that AI art programs won't be able to make anything unique anymore... It will just be the same handful of images generated with slightly different features... At that point, the AI developers will have to make a choice: leave it alone and watch their program get progressively more boring and stagnant, or use other AI-generated images as part of the dataset... But AI art using AI art is similar to inbreeding... If it takes another AI-generated image as 'acceptable data', it will take in its imperfections along with it, which, over time, will pollute and corrupt the dataset and produce increasingly inferior images...

    • @walkingtheline1729
      @walkingtheline1729 6 months ago +23

      You are missing the fact that professional artists are putting invisible squiggles over their art - squiggles that AI will pick up on - to protect their art from being used for learning without their consent. It's actively being led to failure. They have been doing this for years.

    • @童緯強
      @童緯強 6 months ago +4

      Until OpenAI fires their entire Content Management Team, there will always be a job for real artists.

    • @thiagopinheiromusic
      @thiagopinheiromusic 5 months ago

      You've highlighted a critical issue in the long-term sustainability of AI art generation. The potential decline of human artists and the reliance on AI-generated images for training could indeed lead to a degradation in the quality and uniqueness of AI-generated art. Here’s a closer look at the implications and challenges:
      Decline of Regular Artists
      Reduced Diversity in Training Data:
      As fewer human artists produce original work, the pool of diverse, high-quality training data shrinks. This limits the variety and richness of inputs available for training AI models.
      Homogenization of Style:
      With a smaller dataset, AI art generators might start producing images that are increasingly similar, leading to a homogenization of artistic styles. The unique, personal touch that human artists bring to their work becomes harder to replicate.
      Challenges of Using AI-Generated Data
      Data Quality Degradation:
      Relying on AI-generated images for training can introduce errors and imperfections that compound over time. This can lead to a decline in the overall quality of the generated art, akin to the genetic defects seen in inbreeding.
      Loss of Innovation:
      Human artists innovate and push boundaries, creating new styles and techniques. AI models, however, are primarily imitators. Without fresh human input, the innovation in AI-generated art stagnates, leading to repetitive and uninspired creations.
      Potential Solutions
      Hybrid Approaches:
      Combining AI with human creativity could be a way forward. AI tools can assist artists, enhancing their productivity and providing new avenues for creativity, rather than replacing them entirely. This synergy can help maintain a diverse and high-quality dataset.
      Regular Infusion of Human Art:
      To prevent data degradation, it’s essential to continuously infuse the training datasets with new human-created art. This can ensure that AI models remain exposed to novel and diverse artistic expressions.
      Improved Feedback Mechanisms:
      Developing better feedback mechanisms where AI can learn from critiques and preferences of human artists and audiences can help maintain the quality of AI-generated art. This involves more sophisticated algorithms capable of understanding and incorporating nuanced feedback.
      Ethical and Sustainable Practices:
      Encouraging ethical practices in AI development and usage is crucial. This includes fair compensation for artists whose work is used to train AI models and promoting the importance of human creativity in the arts.
      Long-Term Implications
      The trajectory of AI art generation depends significantly on how these challenges are addressed. Here are some long-term implications to consider:
      Cultural Impact:
      The decline of human artistry could have a profound cultural impact, as art has always been a reflection of human experience and emotion. Maintaining a vibrant community of human artists is essential for cultural diversity.
      Economic Considerations:
      The economic ecosystem surrounding art and artists might need to adapt, finding new ways to support and integrate human creativity with AI advancements.
      Technological Evolution:
      AI technology itself will need to evolve, finding ways to better emulate the creativity and innovation inherent in human artistry. This could involve breakthroughs in how AI understands and generates art.
      In summary, the future of AI art generation hinges on finding a balance between leveraging AI’s capabilities and preserving the unique contributions of human artists. Ensuring that AI complements rather than replaces human creativity will be key to sustaining a dynamic and diverse artistic landscape.

    • @JohnnyThund3r
      @JohnnyThund3r 5 months ago +11

      I don't think any real artists' jobs are in danger. I've been messing around with this AI-generated stuff for a while now, and it's clear what you're talking about has already happened. The AI cannot create any new art styles like a human can; it's mostly just mimicking the styles of other human artists to create, basically, knock-offs of their work. This is great for fast prototyping and early concept work, but it's not a real replacement for real designers who know how to make things work in a game or movie. Any studio getting rid of their art team and replacing it with AI is shooting themselves in the foot; AI is a billion years behind us humans when it comes to creativity and abstract conceptualization.

    • @madrabbit8722
      @madrabbit8722 4 months ago +6

      @@thiagopinheiromusic I think you just ironically highlighted a critical issue in generative AI by the fact that this entire essay you dumped here is instantly recognizable as being AI-generated

  • @msclrhd
    @msclrhd 6 months ago +331

    I've seen image generation get worse the more qualifiers you add as well. Ask for "a persian cat with an orange beard" and it will start including orange-coloured things or the fruit, or colour all the fur orange, give it orange eyes, or make the background orange. I think this is a fundamental limitation of the way transformer models work.
    Specifically, transformers are trying to reduce the entire thing into a single concept/embedding. This works when the concepts align ("Warwick Castle", "Medieval Castle", etc.) but not when the concepts are disjoint (e.g. "Spiderman posing for a photo with Bananarama"). In the disjoint case it will mix up concepts between the two different things as it mushes them into a single embedding.
    A similar thing happens when you try to modify a concept, e.g. "Warwick Castle with red walls". The current models don't understand the concept of walls, etc., or how things like cats or castles are structured, nor can they isolate specific features like the walls or windows. If you ask for things like that (e.g. "Warwick Castle with stained glass windows"), you are more likely to get pictures focused on that feature than an image that has both.
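The "mushing" intuition can be caricatured in a few lines. In the sketch below, random unit vectors stand in for learned concept embeddings (a deliberate simplification; real text encoders are far richer), and a two-concept prompt is pooled by simple averaging, which is the assumption being illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 512

def unit(v):
    return v / np.linalg.norm(v)

# Stand-ins for learned concept embeddings (random vectors, illustration only)
concepts = {name: unit(rng.normal(size=dim))
            for name in ("spiderman", "bananarama", "castle", "red walls")}

# One pooled embedding for a two-concept prompt: here, just the normalized average
prompt = unit(concepts["spiderman"] + concepts["bananarama"])

for name, vec in concepts.items():
    print(f"cosine(prompt, {name}) = {prompt @ vec:.2f}")
```

The pooled vector sits at cosine ~0.71 from each of its two constituents and near 0 from everything else: neither concept is fully preserved, so a decoder conditioned on it gets a blend, which is one way to picture the mixed-up Spiderman/Bananarama images described above.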

    • @Pravlord
      @Pravlord 6 months ago +12

      nope

    • @justtiredthings
      @justtiredthings 6 months ago +27

      Image generators are empirically getting better at this sort of coherence and prompt-adherence, so 🤷

    • @inverlock
      @inverlock 6 months ago +35

      Most people just work around this issue by doing multi-step generation, where they clearly separate concepts between steps and allow the later steps to generate content in the image with the previous step's output as a base image. This doesn't actually solve the issue, but it is a reasonably effective mitigation.

    • @roxymigurdia1
      @roxymigurdia1 6 months ago +7

      That's not how transformers work, and also I don't know where you got the idea that models don't understand the concept of walls.

    • @msclrhd
      @msclrhd 6 months ago +56

      @@roxymigurdia1 If you are just talking about walls, transformers can encode that concept as a vector, sure. Similarly, they can encode variants of those, given enough data, like "brick wall" or "wall with graffiti".
      My point is that if you ask the model to point out the walls on Warwick Castle, it cannot do that, as it does not know how to relate the concept (feature vector) of a wall to the concept (feature vector) of castles. Thus, even if it can encode "red brick walls" and "Warwick Castle" correctly (which it can), it does not necessarily know how to draw "Warwick Castle with red brick walls", as it does not know how to align those two concept vectors, nor where the walls on the castle are in order to style them differently.
      That is what I meant.
      I've just tested this on Stable Diffusion 2.1, and it does a lot better with these combined wall/castle examples than 1.5 does. It still messes up my "Spiderman in a photo with Bananarama" example, though (ignoring the weird faces) :shrug:!

  • @ohalloranpeter
    @ohalloranpeter 6 months ago +39

    I was having exactly this argument during the week. Thanks, Computerphile, for making the point FAR better than I did!

  • @alan1507
    @alan1507 3 months ago +7

    Your point about under-representation in the training set is spot on. In an experiment, I gave ChatGPT a poem I'd written to analyse, and I asked it to identify what type of poem it was. It stated that it was a Shakespearean sonnet and quoted the rhyme scheme ABABCDCDEFEFGG. Well, it was a sonnet, but not a Shakespearean sonnet - a Petrarchan sonnet, which has an entirely different rhyme scheme. The reason it got the wrong answer is that if you Google "Shakespearean sonnet" you get over 7 million hits, but if you Google "Petrarchan sonnet" there are only about 600,000 hits. Therefore it's over 10 times as likely to plump for Shakespearean as for Petrarchan. The only way to get this right would be to have a balanced training set, or to have the Bayesian prior probabilities of the two classes. Given the massive number of different classes it could be asked to identify, I do not see how this problem could be solved.
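The comment's own hit counts make the Bayes arithmetic easy to sketch (treating web frequency as a stand-in for the model's prior, which is the comment's simplification, not an established fact about ChatGPT's training):

```python
# Rough priors from web frequency, per the comment: 7,000,000 vs 600,000 hits
prior = {"Shakespearean": 7_000_000, "Petrarchan": 600_000}
total = sum(prior.values())

# With no likelihood signal extracted from the actual rhyme scheme,
# the posterior is just the prior:
for sonnet_type, hits in prior.items():
    print(f"P({sonnet_type}) = {hits / total:.1%}")
# -> ~92.1% vs ~7.9%: the model "plumps for" Shakespearean unless the
#    evidence from the rhyme scheme is weighted strongly enough to overcome it.
```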

    • @golfrelax9795
      @golfrelax9795 15 days ago

      AI is just a better Google. It summarizes things for you. That's it. If you ask about something common, it'll sum it up better. If it's something less common, it'll struggle. If it's something new: no search results!!

    • @alan1507
      @alan1507 9 days ago +1

      @@XenoCrimson-uv8uz It's not quite like that. The softmax function yields a probability for each possible answer. It would be possible to just pick the one with the highest probability, but (I'm told) this leads to repetitive cycles. So, having got the posterior probabilities of all the words in the dictionary (as it were), it samples that probability distribution using a random number generator. In this way it focuses on maybe the 20-30 most probable words. The trouble is that, in the example above, "Shakespearean" is 10 times as likely as "Petrarchan", so it's 10 times as likely to be picked by the random number generator.
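A minimal sketch of the sampling step this reply describes, with made-up logits; the temperature knob shown is the standard one in samplers generally, not anything specific to ChatGPT:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=1.0):
    z = np.asarray(logits) / temperature
    probs = np.exp(z - z.max())        # softmax, shifted for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

# Made-up logits where "Shakespearean" is the 10x-more-likely continuation
vocab = ["Shakespearean", "Petrarchan", "Spenserian"]
logits = [np.log(10.0), np.log(1.0), np.log(0.5)]

token, probs = sample_next_token(logits)
print(dict(zip(vocab, probs.round(3))))   # ~{0.870, 0.087, 0.043}
print("sampled:", vocab[token])           # usually, but not always, the popular one
```

Lowering the temperature sharpens the distribution toward the most popular answer; raising it flattens the distribution, trading repetitiveness for more frequent "Petrarchan"-style picks, right or wrong.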

    • @XenoCrimson-uv8uz
      @XenoCrimson-uv8uz 8 days ago

      @@alan1507 oh.

  • @LesterBrunt
    @LesterBrunt 6 months ago +27

    I'm a musicologist, so my expertise is music, and people truly, immensely underestimate how incredibly complex our ability to make music is. Sure, AI can produce convincing tracks, but the ability to play in real time, in response to a human playing, is way beyond anything computers are capable of.
    The way we perceive music is actually extremely unusual and complicated. We don't hear music as a series of frequencies in a rhythmical pattern; we hear it as an abstract entity. If I play Happy Birthday in another key, 2x as slow, and change a few notes, everybody would, somehow, still recognize the song. By every objective measure it is not the same song, but somehow every human perceives the music as some kind of abstract entity that can change its appearance without losing its identity. This seems so easy and trivial to us that it's hard to grasp that a computer might never do it. A computer reads the data and concludes that it can't be Happy Birthday because the frequencies don't match and the tempo doesn't match. We all intuitively understand it is the same song, but being able to quantify why is incredibly complex, and something WE don't even fully understand.

    • @Aircalibur
      @Aircalibur 6 months ago +8

      AIs currently only look at one thing or concept at a time. They can't perceive the relationships between things or concepts. For instance, ask an image generator AI to give you a rhino wearing rollerblades and it will give you the rhino and the rollerblades, but they're not in the correct relationship; the rhino isn't actually wearing the rollerblades. By that I don't mean the image not looking correct because the rhino's legs/feet and the rollerblades don't mesh quite right, I mean that the rhino and the rollerblades are nowhere near each other because the AI can't figure out the precise relationship between the two objects. Obviously the dataset doesn't have a rhino on rollerblades, but it doesn't matter even if it did and even if the image looks correct because the fundamental issue remains: the AI doesn't even really know what something simple like a tree is. It doesn't know how the leaves relate to the twig, how the twig relates to the branch and how the branch relates to the trunk. It has just memorized a bunch of pictures and it had to be told what a tree is beforehand. The human assigns the meaning, not the AI. A human knows when a song begins and ends and understands that the song is the relationship between the notes, not a simple set of objective characteristics. The AI just memorizes the song and the definition is sealed right then and there. It's not flexible or creative and it doesn't think critically. It simulates intelligence, but it's not intelligent.

  • @squirrelzar
    @squirrelzar 6 months ago +51

    It's interesting, because animals - and specifically humans, as the prime example of what a "general intelligence" should be - almost prove it's not a data problem. It's a learning problem. I'd argue the "something else" is a sufficiently complex system that is capable of learning on comparatively small datasets. And we probably a) don't have the right approach yet and, more importantly, b) don't yet have access to the sort of technology required to run it.

    • @fakecubed
      @fakecubed 6 months ago +31

      Our brains are constantly learning in massively parallel operations every minuscule fraction of a second every single day for decades at a time, across all of our senses simultaneously. It's pretty hard to compete with that.

    • @squirrelzar
      @squirrelzar 6 months ago +16

      @@fakecubed agreed - it’s a totally different system than “just some random numbers in a black box”

    • @annasofienordstrand3235
      @annasofienordstrand3235 6 months ago +5

      It's not a learning problem either, it's a metaphysical problem. If the senses gather "information," then how do neurons in the brain know what that information represents? Try to answer without using a homunculus argument. That is seemingly impossible, and so neural coding has failed.

    • @squirrelzar
      @squirrelzar 6 months ago +11

      @@annasofienordstrand3235 I think that's what I'm getting at by saying a sufficiently complex system. The brain and its senses do ultimately boil down to a system of inputs and outputs. It's just extremely well tuned for pattern recognition, to the point where you only need to see a handful of cats to sufficiently identify all other cats. Hence my argument that it's not a data problem but a learning problem. You need a system that can operate on a small subset of data and distinguish the important pieces of detail while chalking the rest up to attributes of that specific thing. And that's only for classification problems - just a tiny subset of what a general intelligence should be capable of.

    • @mrosskne
      @mrosskne 6 months ago +2

      @@annasofienordstrand3235 What does it mean to know something?

  • @michelleblit6526
    @michelleblit6526 6 months ago +34

    My boyfriend is your student at the University of Nottingham and he loves your videos!! You're the reason he came to study here!

  • @t850
    @t850 6 months ago +187

    ..."pessimistic" (logarithmic) performance is what economists would call the "law of diminishing returns", and is basically how systems behave if you keep increasing one parameter but keep all other parameters constant...:)

    • @fakecubed
      @fakecubed 6 months ago +11

      The thing is, the other parameters aren't constant. I also don't think we're close to maxing out on dataset sizes either.

    • @ekki1993
      @ekki1993 6 months ago +74

      And exponential performance is what any scientist outside of economics calls "unsustainable".

    • @t850
      @t850 6 months ago +21

      @@fakecubed ...that may be so, but each parameter contributes to the outcome to some degree, and even those have their limits. In the end it's only a matter of where exactly we are on the logarithmic curve. At first the curve may look as if it will keep rising indefinitely, but in reality it always reaches the "ceiling" before it flattens out.
      It's like driving a car. At first it seems as if it will keep on accelerating forever, but in the end it reaches top speed no matter how hard or how long you floor it, how many gears you have, how much power the engine has, or how low a drag you can reach. If you want to keep on accelerating you need a new paradigm in propulsion (technology)...

    • @t850
      @t850 6 months ago +12

      @@ekki1993 ..."stonks" curve...:P

    • @Baamthe25th
      @Baamthe25th 6 months ago +2

      @@fakecubed What other parameters can really be improved, to the point of avoiding the diminishing ROI on datasets?
      Genuinely curious.

  • @TheNewton
    @TheNewton 6 months ago +49

    4:12 "it didn't work" - too right. AFAIK the emergent behaviors of large language models (LLMs, big datasets) as you scale up are plateaus, and have not led to any consistent formula/metrics indicating that emergent behavior can be extrapolated as a trend.
    Meaning we don't know if there are more tiers to the capabilities an LLM could have, even IF you have enough data.
    And there's a paper arguing that such emergence is humans doing the hallucinating, a 'mirage' due to the measurement metrics [1]; also see the emergence part of the Stanford transformer talk [2], [3].
    The other issue in the here and now with scaling up to even bigger data is that most of the data after ~2023 is just going to be garbage, as every source - the internet, internal emails, ALL content sources - gets polluted by being flooded with AI-generated content, with no true way to filter it out.
    AKA model collapse [4], though I don't know of much published work on the problem of LLMs eating their own and each other's tails; there's probably more if you view it as an attack vector for LLM security research. Bringing us again and again to the realization that authenticity is only solvable by the expensive intervention of human expertise to validate content.
    [1] arXiv:2206.07682 [cs.CL] Emergent Abilities of Large Language Models
    [2] youtube:fKMB5UlVY1E?t=1075 [Stanford Online] Stanford CS25: V4 I Overview of Transformers, Emily Bunnapradist et al.
    [3] arXiv:2304.15004 [cs.AI] Are Emergent Abilities of Large Language Models a Mirage?
    [4] arXiv:2305.17493 [cs.LG] The Curse of Recursion: Training on Generated Data Makes Models Forget
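The core trick behind the "mirage" argument in [3] fits in a few lines: if per-token accuracy improves smoothly with scale, a pass/fail exact-match metric over a multi-token answer still looks like a sudden jump. A toy sketch (the 20-token answer length and the accuracy values are arbitrary):

```python
# Smooth per-token accuracy vs. exact-match over a 20-token answer.
# The underlying capability improves gradually; the metric looks "emergent".
seq_len = 20
for per_token_acc in (0.80, 0.85, 0.90, 0.95, 0.99):
    exact_match = per_token_acc ** seq_len   # all 20 tokens must be right
    print(f"per-token {per_token_acc:.2f} -> exact-match {exact_match:.4f}")
```

Per-token accuracy creeping from 0.80 to 0.99 takes exact-match from about 1% to about 82%, so the "ability" appears to switch on abruptly even though nothing discontinuous happened underneath.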

    • @SlyRocko
      @SlyRocko 6 months ago

      The polluted-data problem could definitely be alleviated if generative AI had functionality similar to, but the opposite of, the Nightshade anti-AI tool, where AI-generated works would carry injected markers to exclude themselves from learning models.
      However, there are still the other limits to AI that can't be solved without some novel solution that we probably won't find anytime soon.

    • @Hagaren333
      @Hagaren333 6 months ago +3

      I think I've seen 2 studies on model collapse, one applied to inverse diffusion models and one to LLMs. I'll have to look them up, because they both also come to the conclusion that synthetic data is harmful and model collapse is inevitable.

  • @marcuswiesner9271
    @marcuswiesner9271 6 หลายเดือนก่อน +13

    OK, just some practical experimentation to back up what this video is saying.
    I looked up a bunch of pictures of rare birds, trees, and pictures with a lot of stuff going on in them and headed over to GPT4 to test it out. This is what I found:
    The AI did perfectly on identifying all of the rare birds, trees and plants. It perfectly described the scenes I sent it, down to the last detail, identifying all of the objects, even in abstract images.
    However, this is where it failed:
    When I asked it to identify the type of cloud in the background of one of the images, it guessed cumulus when they were clearly stratus. An easy question to get right.
    Any person can tell the difference between cumulus and stratus with 5 seconds of education on the subject.
    When I asked it to identify the state or city where one of the shots was taken, it wasn't able to do so even with the word "Miami" written in the scene.
    So here's what I think is happening.
    Being trained on the whole internet, it obviously knows the topic of any picture I could pull up and put into it.
    But in the case of asking what the clouds in the background of this image were, it failed because the image was not tagged with that data specifically.
    So in its training data, it knows images by their features and what they are tagged by.
    If the researchers want to create more comprehensive understanding, they'll have to label every part of every photo, which is obviously not even practical.
    This probably means:
    AI will be amazing at clear categorization tasks involving any image on the internet - especially the main subject of the image, like a specific image of a bird that was labeled with the species of the bird, or an image of a tree labeled with the type of tree.
    It may even become really good at new categorization tasks if specifically fine-tuned on some data, like cloud types and types of animals.
    But fresh pictures taken "in the real world" may stump the AI. And even the same image of the same tree in the background of a photo that appears to feature something else entirely may stump the AI, because the image was never tagged as having that tree in the background.
    That's essentially what I'm thinking. Because these stratus clouds are as clear as day, and it has doubtless been trained on millions of pictures of stratus clouds. But it couldn't identify them in the background of this racing track. So, seems pretty clear to me what's happening here.
    It doesn't seem to be developing a real ability to generalize beyond a certain point. So yeah, just to wrap up - this may be fixed if the researchers painstakingly tag every piece of every image they train it on, from the clouds to the trees in the background, at least for images on the internet.
    But it may struggle with real world, "fresh" imagery (that was not in its training data) forever, making hilarious mistakes for a long time to come.

    • @Onyx-it8gk
      @Onyx-it8gk 2 หลายเดือนก่อน +1

      This is very true. If you want to look into this further, all of these labels AI was trained on were provided by humans, specifically thousands of underpaid people in India.

    • @marcuswiesner9271
      @marcuswiesner9271 2 หลายเดือนก่อน

      @@Onyx-it8gk Oh 100%

  • @thunkin-ai
    @thunkin-ai 6 หลายเดือนก่อน +239

    10:17 the blue marker pen _still_ doesn't have the lid on...

    • @harryf1867
      @harryf1867 6 หลายเดือนก่อน +12

      I am that Analyst who says this in the meeting room to the person at the whiteboard :)

    • @redmoonspider
      @redmoonspider 6 หลายเดือนก่อน +3

      Where's the sixth marker? Bonus question: what color is it?

    • @sergey1519
      @sergey1519 6 หลายเดือนก่อน +2

      ​@@redmoonspider Black.

    • @sergey1519
      @sergey1519 6 หลายเดือนก่อน +4

      Last seen in the Stable Diffusion video.

    • @redmoonspider
      @redmoonspider 6 หลายเดือนก่อน +1

      @@sergey1519 great answer!

  • @theronwolf3296
    @theronwolf3296 6 หลายเดือนก่อน +15

    There is a big difference between pattern matching and comprehension. At the first level, comprehension allows a mind to eliminate spurious matches; further on, comprehension allows coming to conclusions that did not exist in the data. This is what intelligence really is. (GenAI could not have independently conceived of this study, for example.) Essentially it's regurgitating whatever was fed into it; actual comprehension goes far beyond that.
    Nonetheless this could be very useful for finding information. An AI trained on millions of court cases, for example, could help a (human) lawyer track down relevant information that is critical... but it would require the application of human intelligence to determine that relevance, as well as to eliminate the material that does not apply.

    • @user-qm9ub6vz5e
      @user-qm9ub6vz5e 14 วันที่ผ่านมา

      I'd push back on this. Modern neural networks do have some level of learning by definition. You can see this when you look at specific layers and what they are paying "attention" to (cough cough, Attention Is All You Need). But the modality of what they learn is limited. This is why, to have true comprehension of the world around us, these systems need to be embodied in a physical 3D form like a humanoid robot that has more than just vision and language, also known as embodied AI. The true challenge is ensuring the network is able to understand an edge in an image and then be able to draw and reason over that edge, if that makes sense. We are just at the beginning of what AI could truly bring into this world.

  • @petermoras6893
    @petermoras6893 6 หลายเดือนก่อน +18

    I think people mysticize Machine Learning and Generative AI far more than it needs to be.
    At the end of the day, ML is just an arbitrary function. It can be any function as long as we have the right input and output data.
    The obvious problem is that the possibility space of any problem balloons exponentially with its complexity, so you eventually reach a point where you don't have enough resources to brute-force the solution.
    However, I don't think we've reached the peak of generative AI, as there are other avenues of improvement other than more training data.
    One solution I think we'll see employed more is using more complex algorithms that help bridge the gap between the input and output data.
    For example, we don't train a neural net on raw images alone. We use convolutional layers at the start to pre-process the image into data in which correlations are easier to find (a sketch of this follows below).
    But these layers can be anywhere in the NN and still be effective.
    (Personal opinion) For image-based gen-AI, I think future algorithms will use pre-trained components that demonstrate an understanding of 3D objects and their projection onto 2D planes. The general image classifiers could then use the pre-trained 3D projection as a basis for understanding 2D images, which would in theory give them an understanding of 2D object representation that is closer to our own.
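    Here's a minimal sketch of that convolution-then-classify structure (assuming PyTorch; every size here is an arbitrary illustration):
    ```python
    # Convolutional layers first turn raw pixels into features; the final
    # fully-connected layer then classifies in that easier feature space.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # pre-process raw RGB pixels
        nn.ReLU(),
        nn.MaxPool2d(2),                             # 32x32 -> 16x16
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),                             # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 10),                   # classifier head
    )

    x = torch.randn(1, 3, 32, 32)  # one fake 32x32 RGB image
    print(model(x).shape)          # torch.Size([1, 10])
    ```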

  • @MaddMoke
    @MaddMoke 6 หลายเดือนก่อน +9

    I love the idea that not only are these companies investing millions of dollars into something that might be for naught, but they're also convincing clueless boards of directors in several industries to cut jobs and invest in a product that might never get better.
    Add in the legitimate issues of stolen art, and apps using personal data from users who are forced to sign very unclear TOS, and you get a morality bomb of wasteful human greed.

  • @YDV669
    @YDV669 6 หลายเดือนก่อน +6

    So what he's saying, I think, is that cats are going to save us from the AI apocalypse. That's so cool.

  • @emerestthisk990
    @emerestthisk990 6 หลายเดือนก่อน +38

    I'm what you call a 'creative' that uses Photoshop and After Effects, and in my professional work and personal time I cannot find a single use for AI. After a period of playing around with ChatGPT I haven't gone back to it. I don't even use the new AI features in PS. I think the corporations and big tech really want AI to be the next revolution and are trying to sell it to us as that so they can line their shareholders' pockets. But the reality is far from the monumental shift being sold.

    • @francisco444
      @francisco444 6 หลายเดือนก่อน +3

      Lol not a single use for AI?
      I've seen so many creatives make this mistake it's actually enraging. Creatives are about pushing the boundaries, fostering connection, and mastering their tools. If AI is not at all for you, that's fine. But you'll find yourself in the back of the line eventually.

    • @G3Kappa
      @G3Kappa 5 หลายเดือนก่อน +2

      I've been using Runway to do masking automatically, and it works nicely. Of course it has its limits, but it does 80% of the work I would otherwise need to do by hand, in seconds. And that's fine, because I can always retouch it by hand.

    • @LordConstrobuz
      @LordConstrobuz 5 หลายเดือนก่อน +4

      So basically you're a boomer and you don't know anything about AI or the AI tools available, lol. This is no different than saying "what? iPod? I don't need that, I have a CD walkman".

    • @newerstillimproved
      @newerstillimproved 3 หลายเดือนก่อน

      A not very creative creative, perhaps?

  • @astropgn
    @astropgn 6 หลายเดือนก่อน +15

    Every video I watch with Dr Pound makes me want to take a class with him. I wish his university would record his lectures and make them available on YouTube.

  • @programninja6126
    @programninja6126 6 หลายเดือนก่อน +19

    Everyone who wants to understand AI should study mathematical statistics. The Central Limit Theorem's convergence rate of 1/sqrt(n) isn't just a coincidence: the sample mean is the best linear unbiased estimator for this kind of one-dimensional problem. (There are technically other settings, like estimating min(x), with their own convergence rates, but they don't beat 1/sqrt(n) in general.) Machine learning models have the advantage of being non-linear and thus can fit many more functions than simple linear regression, but they obviously can't outperform linear models in cases where the data actually is linear. So the idea that a model can have anything other than diminishing returns is yet to be shown and would break statistics at its core (in fact 1/sqrt(n) is the best case; high-dimensional models can have convergence rates of n^(-1/5) or worse, so if your data has a high intrinsic dimension your convergence will be even slower). A toy demonstration follows below.
    On the other side, people pay a lot for this kind of research, and it's excellent job security to keep trying for something you know probably won't happen.
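    Here's that 1/sqrt(n) behaviour as a minimal Monte Carlo sketch (numpy only; the unit-variance normal distribution is an arbitrary choice):
    ```python
    # The RMSE of a sample mean shrinks like n**-0.5, so each halving of the
    # error needs roughly 4x more data: diminishing returns by construction.
    import numpy as np

    rng = np.random.default_rng(0)
    for n in [100, 400, 1600, 6400]:
        # typical error of the mean estimate, over many repeated trials
        errors = [rng.standard_normal(n).mean() for _ in range(2000)]
        rmse = float(np.sqrt(np.mean(np.square(errors))))
        print(f"n={n:5d}  rmse={rmse:.4f}  n**-0.5={n**-0.5:.4f}")
    ```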

    • @paulfalke6227
      @paulfalke6227 12 วันที่ผ่านมา

      AI research has a long history, going back at least to the 1950s. Every few years an AI researcher has to jump from one AI fashion to the next. They are used to doing it. This is only new for the investors.

  • @bakawaki
    @bakawaki 6 หลายเดือนก่อน +63

    I hope so. Unfortunately, these mega companies are investing a ludicrous amount of money to force a line upwards that will inevitably plateau, while utterly disregarding ethics and the damage it will cause.

    •  6 หลายเดือนก่อน +1

      How will it disregard ethics and cause damage, if it plateaus?

    • @I_Blue_Rose
      @I_Blue_Rose 6 หลายเดือนก่อน +2

      Not gonna happen.

    • @dsgowo
      @dsgowo 6 หลายเดือนก่อน +21

      It already is. Look at the colossal amount of energy needed to power these algorithms and the underpaid workers in developing countries who label data for the models to train off of.

    • @george-and-gracie7996
      @george-and-gracie7996 6 หลายเดือนก่อน

      This! And don't forget the disinformation it spreads that is destabilizing democracies around the world.

    • @Tobyodd
      @Tobyodd 5 หลายเดือนก่อน +6

      @@I_Blue_Rose when was the last time you saw literally anything improve indefinitely without a plateau?

  • @KnugLidi
    @KnugLidi 6 หลายเดือนก่อน +68

    The paper reinforces the idea that bulk data alone is not enough. An agent needs to be introduced into the learning cycle, where the algorithm identifies which pairs are needed for a specific learning task. In a nutshell, the machine needs to know how to direct its own learning toward a goal.

    • @davidgoodwin4148
      @davidgoodwin4148 6 หลายเดือนก่อน +1

      We are doing it a different way, via prompts. Our example is SQL for our system. The LLM knows SQL. It does not know our system, as we never posted its structure publicly (not because it is super secret, it is just internal). We plan to feed it a document describing our system and then tell it to answer questions based on that (as a hidden prompt). That would work for teaching a model what an elephant is, though I do feel you could provide fewer examples of new items once you have it generally trained.

    • @PazLeBon
      @PazLeBon 6 หลายเดือนก่อน +2

      If all people were the same that could eventually work... but we ain't; I'm particularly contrary by default :)

    • @Hexanitrobenzene
      @Hexanitrobenzene 6 หลายเดือนก่อน +1

      AlphaCode 2 has convinced me that LLMs + search will be the next breakthrough: a generate-verify paradigm (sketched below). At present, it's not clear how to make the "verify" step as general as "generate".
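      A minimal sketch of the loop (the generator and verifier here are deliberately trivial stand-ins; in AlphaCode-style systems the generator is an LLM and the verifier runs test cases):
      ```python
      # Toy generate-then-verify: propose candidates, keep those passing tests.
      import itertools

      def generate():
          # stand-in generator: enumerate tiny candidate expressions
          for a, op in itertools.product(range(5), ["+", "*", "-"]):
              yield f"x {op} {a}"

      def verify(expr: str) -> bool:
          # stand-in verifier: does the candidate pass all unit tests?
          tests = [(1, 3), (2, 4), (10, 12)]  # f(x) should equal x + 2
          return all(eval(expr, {"x": x}) == y for x, y in tests)

      print([e for e in generate() if verify(e)])  # -> ['x + 2']
      ```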

    • @PazLeBon
      @PazLeBon 6 หลายเดือนก่อน +5

      @@Hexanitrobenzene llms kinda are search already

    • @siceastwood2714
      @siceastwood2714 6 หลายเดือนก่อน

      @KnugLidi Isn't this just the concept of synthetic data? There are actually efforts to create AIs specialized in producing the synthetic data needed for further training. I'm kind of confused why this concept isn't even mentioned here.

  • @SkullCollectorD5
    @SkullCollectorD5 6 หลายเดือนก่อน +107

    Could part of the asymptote argument be that it will become harder and harder to train models on accurate data that *was not* previously generated by (another) model?
    Oversimplified: basically every written body of work released after 2022 will have to be viewed critically - not just in the prior, healthy sense of informing your opinion well, but because you can no longer be absolutely sure it wasn't generated, and possibly hallucinated, by an LLM. Does generation degrade like copying a JPEG over and over?
    In that way, possible training data is limited to human history up to 2022/23 - if you were to be cynical about it.

    • @tobiasarboe5753
      @tobiasarboe5753 6 หลายเดือนก่อน +33

      I don't know if this plays into the argument from the video, but it *is* a very real problem that AI faces, yes.

    • @dahahaka
      @dahahaka 6 หลายเดือนก่อน +7

      Honestly, none of this matters. Human-level intelligence will always be the lower bound of what's possible with machine learning: if a human can gain its intelligence from the data available to it, then by existence proof there is enough data to gain that knowledge and intelligence. I'm seriously confused how Dr. Pound is oblivious to that :( Just like people are blinded by hype, there seems to be a growing number of people blinded by their discontent with hype and trying to disprove it. Idk what I think of that.

    • @medhurstt
      @medhurstt 6 หลายเดือนก่อน +6

      I think it's because the expectations of models are too great. It's already well known that it's possible to run an answer back through a model to improve it, and I think this is what this paper is missing (although I haven't read it!). It's unrealistic to think a model can hold the answer to all questions from training alone. Many questions simply need multiple passes, in the same way we need to think things through ourselves. I think the computerphile issue of insufficient representation of objects in models may well be real, but it is very solvable, even if it becomes incremental improvement on the knowledge side of the AI.

    • @dahahaka
      @dahahaka 6 หลายเดือนก่อน +6

      @@medhurstt Exactly. Think about it: if I asked my mum to explain what Counter-Strike is, you wouldn't consider her unintelligent just because she's only heard of it a couple of times in her life :D

    • @Jack-gl2xw
      @Jack-gl2xw 6 หลายเดือนก่อน +3

      Having enough high-quality data to train on is always a concern, but fortunately, I think any (serious) body of work published with the aid of a GPT model will be audited by the author to make sure the quality is still high. In this sense, I don't think this would be an issue, because the signal-to-noise ratio is still positive. If models are trained on random internet content, I would say this is more of an issue, as there may be some low-quality generated text sneaking into the training data. While training on poorly made synthetic data may cause some issues, I think the main takeaway from the paper is that more data (even quality data) will not get us to new transformative results. We need new learning approaches or techniques to better understand the data and the user's intentions. Personally, I think this is where implementing a form of reinforcement learning would help break through this plateau. Supposedly this is what OpenAI's mysterious Q-Star algorithm is thought to be doing.

  • @johnnylollard7892
    @johnnylollard7892 6 หลายเดือนก่อน +56

    People thought AI would be like a thinking robot from a sci fi movie. In fact, it's a fancier way to outsource labor. Instead of some developing economy, it gets outsourced to a computer program. In a similar fashion, quality declines in a way that's first invisible, and then slowly snowballs.

    • @realleon2328
      @realleon2328 6 หลายเดือนก่อน +3

      Yet companies try it anyway and wonder why no one likes it.

    • @TheBest-sd2qf
      @TheBest-sd2qf 6 หลายเดือนก่อน

      It's still remarkable what it can do. If it never gets any better it's ok, it's incredibly useful as it is.

    • @ergwertgesrthehwehwejwe
      @ergwertgesrthehwehwejwe 6 หลายเดือนก่อน +1

      So society objectively improves because less labor is needed to maintain it? Cry me a river

    • @thymenwestrum7011
      @thymenwestrum7011 5 หลายเดือนก่อน +2

      ​@@jurycould4275 The whole concept of technology revolves around replacing human jobs, what are you yapping about? AI has already replaced jobs and will continue to do so.

    • @Strix2031
      @Strix2031 4 หลายเดือนก่อน

      ​@@jurycould4275 Technology replaces jobs all the time.

  • @doomtho42
    @doomtho42 6 หลายเดือนก่อน +16

    I can absolutely see there being a plateau regarding AI performance using current data processing techniques, and honestly I don’t think that’s as much of a “cavalier” idea as he suggests - I think the interesting part is how we progress from there and what new techniques and technologies arise from it.

  • @pedroscoponi4905
    @pedroscoponi4905 6 หลายเดือนก่อน +201

    Reminds me of the story doing the rounds a while back about people trying to use these new gen-"AI" to identify species of mushroom and whether they're safe to eat, and the results were, uuuuh, _dangerously_ incorrect to say the least 😅

    • @sznikers
      @sznikers 6 หลายเดือนก่อน +14

      Can't wait to see people with baskets of death caps identified as edible by some half-baked app store app 😂

    • @DonVigaDeFierro
      @DonVigaDeFierro 6 หลายเดือนก่อน +50

      That's insane. Even expert mycologists need to get their hands on a specimen to accurately identify them.
      Irresponsible to use it, and way more irresponsible to publish it or advertise it.

    • @sznikers
      @sznikers 6 หลายเดือนก่อน +55

      @@DonVigaDeFierro Silicon Valley bros don't care, gotta hustle to pay all the coaching bills...

    • @calvin7330
      @calvin7330 6 หลายเดือนก่อน +14

      @@DonVigaDeFierro This means you can't ever advertise an AI that can "do anything", like ChatGPT. Even if you build in exceptions for potentially dangerous queries, people will get around them, sometimes by accident.

    • @xlicer
      @xlicer 6 หลายเดือนก่อน +16

      @randomvideoboy1 I agree with your comments, but I think you are missing the point: AI startups right now are filled with all kinds of bad-faith grifters who just want to rush out AI products with no care for ethical concerns. And this is coming from someone who is actually supportive of AI and the technology.

  • @Alex-cw3rz
    @Alex-cw3rz 6 หลายเดือนก่อน +13

    I think one of the most worrying things is what happens when they start wanting to make the invested money back - over 150 billion has been spent already.

  • @poduck2
    @poduck2 6 หลายเดือนก่อน +22

    I'm curious about how AI will train itself as more and more AI content gets disseminated on the Internet. Will it degrade the performance, like photocopying pages that have been photocopied? It seems like it should.

    • @ItIsJan
      @ItIsJan 6 หลายเดือนก่อน +5

      In the best case, the AI won't get better.

    • @itssardine5351
      @itssardine5351 6 หลายเดือนก่อน

      Yeah, that's what I'm curious about. A couple of years back it was extremely easy for big companies to mass-scrape the internet and take everything, but now they would be fighting against their own output.

    • @nadinegriffin5252
      @nadinegriffin5252 6 หลายเดือนก่อน +4

      Most quality information isn't found on the internet anyway. Quality writing and information are found in books and novels, and paywalls prevent access to quality material online.
      AI already has an issue with plagiarism and not citing sources.
      It's like getting a handwritten photocopy of a photocopy about the Cabbage Soup Diet in the 90s. It claims it was developed at a hospital but has nothing to back that up. It isn't a published paper, it doesn't link to the hospital's website or provide you with a contact to verify the information.
      In fact, because AI has such poor-quality input, I wouldn't be surprised if, when asked about the Cabbage Soup Diet, it told me it was developed at a reputable hospital and is great for losing weight. 🤣

  • @djrmarketing598
    @djrmarketing598 6 หลายเดือนก่อน +10

    I think the graph flattens out with diminishing returns as more examples get added - neural networks are just complex numerical systems, and at some point, if you put every image ever made into a computer, the system stops really knowing more; it just gets slightly more accurate at the same things while still making errors, and still can't identify "anything". One issue, I think, is that we're applying a "digital" solution to an analog system. Human and animal eyesight isn't pixels; if it were, we'd already have cameras attached to brains for the blind. The data stream of nature is more of a "video" learning signal: it's not about the pixels but about the change from each frame to the next (see the toy sketch below). Look at an open field - you see the field, but when a cat runs across, you see the cat, because your brain (and animal brains) are built to see and hear exactly that, and thus it's much different. AI is not trained that way. We should be training AI from "birth" with a video stream teaching it about shapes and colors, like we do with children. Maybe we can do that rapidly, but we're not going to see a breakthrough in AI until we're doing "real live" training and not just throwing a few billion words and images into a neural net.
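    The frame-to-frame idea is easy to sketch (numpy only; the frames here are synthetic toys):
    ```python
    # Toy frame differencing: a static "field" with a bright "cat" pixel that
    # moves between frames. The difference image lights up only where motion is.
    import numpy as np

    field = np.zeros((8, 8))
    frame1, frame2 = field.copy(), field.copy()
    frame1[3, 1] = 1.0   # "cat" at column 1
    frame2[3, 2] = 1.0   # "cat" has moved to column 2

    motion = np.abs(frame2 - frame1) > 0.5
    print(np.argwhere(motion))  # -> [[3 1] [3 2]]: exactly where change happened
    ```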

    • @hexzyle
      @hexzyle 6 หลายเดือนก่อน +6

      Yeah, the reality is that these algorithms, no matter how much data you put into them, are still only reading/understanding the data within the constraints of the data type. E.g. a picture of a cat is just a picture of a cat, not an actual cat with weight, 3 dimensions, and a skeleton. These algorithms will become excellent at classifying or generating pictures of cats, but not at processing how a cat moves in 3D space, the physical effects it has on the world, or how its skeleton is oriented geometrically. The machine algorithm is still hyper-specific to "pictures", even though it has a facade of something more complex than that at first glance.

    • @thedudewhoeatspianos
      @thedudewhoeatspianos 6 หลายเดือนก่อน +1

      Even if we do that, do we have any reason to believe it can achieve a different result? It might be worth trying, but I suspect human brains use a different kind of pattern matching, and no amount of training data will overcome that hurdle. We have to change our approach.

    • @alexandrebenoit
      @alexandrebenoit 6 หลายเดือนก่อน +1

      What you're describing is just a video encoding. At the end of the day, computers are digital: any video stream, no matter how you encode it, is going to be digital. I'm not saying encoding is not important - it's a huge factor in training an effective model - but we are already doing that and have been for a long time.

  • @ReflectionOcean
    @ReflectionOcean 5 หลายเดือนก่อน

    By YouSum Live
    00:00:03 Using generative AI for various tasks
    00:00:27 Potential of achieving general intelligence through vast data
    00:01:31 Challenge: vast data needed for zero-shot performance
    00:02:00 Limitations of adding more data and models
    00:02:20 Importance of data quantity for effective AI applications
    00:03:28 Difficulty in training models for specific and rare concepts
    00:04:44 Performance plateau despite increasing data and model size
    00:06:10 Need for alternative strategies beyond data accumulation
    00:10:22 Uncertainty about future AI advancements and performance trends
    By YouSum Live

  • @EtienneFortin
    @EtienneFortin 6 หลายเดือนก่อน +43

    It's probably always the case for any problem: at some point brute force plateaus. Reminds me of when the only way to increase processor speed was increasing the clock speed. I remember seeing graphs where the performance grew linearly, and they were planning a 10 GHz P4. At some point they needed to be a little more clever to increase performance.

    • @beeble2003
      @beeble2003 6 หลายเดือนก่อน +5

      Clock speed is a nice analogy. 👍

    • @ahmataevo
      @ahmataevo 6 หลายเดือนก่อน +2

      It turns out requiring everyone to have liquid helium heat exchangers for basic computing is not viable for consumer-grade economies.

    • @jcdenton2819
      @jcdenton2819 5 หลายเดือนก่อน +2

      Well, now it's even worse because we are at the physical limit, and they just add MORE processors per die, calling it "chiplet design", resulting in comically large processors, video chips, coolers and radiators, with corresponding power consumption.

  • @marybennett4573
    @marybennett4573 6 หลายเดือนก่อน +33

    Very interesting! This reminds me of a post demonstrating that one of the image-generation AI programs couldn't successfully follow the prompt "nerd without glasses". Clearly the model determined that having glasses is an intrinsic property of "nerds", given that the majority of its source images included them.
    A silly little example, but illustrative of the broader issue I think.

    • @tamix9
      @tamix9 6 หลายเดือนก่อน +7

      That's more to do with the CLIP text encoder not being great at encoding meanings like "without". Adding "without something" can actually make that thing more likely to appear.

    • @fakecubed
      @fakecubed 6 หลายเดือนก่อน +1

      @@tamix9 Yeah, you need to use negative prompts if you want to avoid something.
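      For concreteness, this is roughly what a negative prompt looks like with the Hugging Face diffusers library (a sketch; the checkpoint name and prompts are just examples, and it assumes a CUDA-capable setup):
      ```python
      # Generate an image while steering the sampler away from a concept.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      image = pipe(
          prompt="portrait photo of a nerd",
          negative_prompt="glasses",  # concept to avoid
      ).images[0]
      image.save("nerd_without_glasses.png")
      ```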

  • @Shocker99
    @Shocker99 19 วันที่ผ่านมา +5

    I like how they called it.
    I hear ChatGPT's new model is only slightly better than the current model in some cases, but worse in others.

    • @chinese_bot
      @chinese_bot 16 วันที่ผ่านมา

      If you judge all LLM advancements by ChatGPT alone, you're gonna be real confused when things don't work out like you planned. Adding more data has diminishing returns, but there are other avenues and algorithms (think AlphaGo's self-learning models, the sort of thing Strawberry has only recently started incorporating into its LLMs), and it's already doing things that people said were impossible just months ago. Even if all advancement were to stop and AI were forever stuck in its current state, it would still be producing advances in every single field it's deployed in for decades and decades.

  • @beaverson
    @beaverson 6 หลายเดือนก่อน +25

    I hope generative "AI" gets sued for all the date they stole with the mask of being nonprofit.. But I also hope it stops getting better. Dont want more dirty money getting pumped into Silicon Valley at the expense of 95% of the working classes data.

  • @EJD339
    @EJD339 6 หลายเดือนก่อน +13

    So can someone explain to me why, when I google a question now, it tends to highlight the wrong answer, or not even answer the question I was searching for? It didn't use to be like that.

    • @addeyyry
      @addeyyry 6 หลายเดือนก่อน +1

      Found Bing to be much more reliable, actually.

  • @Ptaku93
    @Ptaku93 6 หลายเดือนก่อน +10

    This plateau isn't a problem, it's a solution. I'm so thankful for it and I hope other papers on the topic repeat these findings.

  • @rozukke
    @rozukke 6 หลายเดือนก่อน +79

    Happy to see general sentiment trending in a more realistic and less hype-based direction for this tech. I've been trying to make this same impression on people I know IRL for ages now, especially people panicking about their jobs, AI sentience and similar BS, as if there were any likelihood of it happening in the immediate future. I blame YouTube for indiscriminately recommending all those horrible doomer AI podcasts.

    • @Kknewkles
      @Kknewkles 6 หลายเดือนก่อน +14

      Heh, I was worried last year, so I learned the basics of "AI". Since then I'm not as worried, but increasingly annoyed at the first tech bubble that I clearly understand is just that: a bubble. A marketing bundle of the most promising buzzwords in history.

    • @dahahaka
      @dahahaka 6 หลายเดือนก่อน +7

      Anti-hype sentiment that disregards very, very basic mathematics and statistics isn't a good thing IMO; it's just as bad as hype.

    • @pedroscoponi4905
      @pedroscoponi4905 6 หลายเดือนก่อน +40

      I think a lot of people worried about their jobs are less worried about the AI actually matching their skill level, and more about their bosses/clients buying into the hype and replacing them anyway.

    • @riccardoorlando2262
      @riccardoorlando2262 6 หลายเดือนก่อน +15

      @@pedroscoponi4905 Honestly, that's a much more real problem. Just like with all tech bubbles before this one..

    • @inkryption3386
      @inkryption3386 6 หลายเดือนก่อน +13

      ​@@pedroscoponi4905 yeah this isn't a technological issue, it's a class issue.

  • @sloaiza81
    @sloaiza81 6 หลายเดือนก่อน +6

    Gödel, Escher, Bach came out in 1979 and addressed most issues behind true AI. Almost half a century later, most still don't have a clue what it takes to make general intelligence.

    • @humanoid60
      @humanoid60 6 หลายเดือนก่อน

      Since when did GEB actually propose any solutions to what it takes to create true AI? It seems you fundamentally misunderstood the book...

  • @Lambda_Ovine
    @Lambda_Ovine 6 หลายเดือนก่อน +18

    Considering that, on top of these findings, the internet is being flooded with AI-generated garbage, to the point that models are being trained on the output of other generative AIs, and that the resulting degenerate output goes back into the datasets, I think it's very reasonable to believe we're going to hit a plateau or even see qualitative degradation.

    • @kabosune9097
      @kabosune9097 6 หลายเดือนก่อน +5

      I've been hearing this for the past 10 months, but I haven't ever seen AI image models degrade. And they can always roll back.

  • @Archimedeeez
    @Archimedeeez 6 หลายเดือนก่อน +11

    the camera work is enthralling.

  • @kevindonahue2251
    @kevindonahue2251 6 หลายเดือนก่อน +25

    Yes, the data required to train new models grows exponentially. They've already trained on everything they can get their hands on and are moving toward "synthetic", i.e. AI-generated, data. The age of the Hapsburg AI is already here.

    • @ahmataevo
      @ahmataevo 6 หลายเดือนก่อน +4

      It's already got the body part deformities part down in image generation.

  • @doctorgibberish
    @doctorgibberish 6 หลายเดือนก่อน

    That's the most wholesome and theme-fitting sponsor ad I've ever seen.

  • @MichaelLonetto
    @MichaelLonetto 6 หลายเดือนก่อน +5

    I think the future performance curve is more likely to be a series of stacked sigmoids of decreasing magnitude than either a full flattening or a transition to exponential growth, as engineers figure out tweaks to gain efficiency from the current models until that, too, hits diminishing returns.

    • @thekingoffailure9967
      @thekingoffailure9967 2 หลายเดือนก่อน

      Don’t call me a stacked sigmoid!! 😢

  • @specy_
    @specy_ 6 หลายเดือนก่อน +6

    One thing I'm most worried about with LLMs is that we can only improve them if more data is found or the architecture changes. Let's say the second is doable; we still need the first. Where do we get this data? A big portion of it comes from the internet: blog posts, articles, open-source software, etc. And who uses LLMs the most? Blogs, articles, and open-source software. We are also polluting the web with low-quality LLM-generated text, and we all know what happens when you feed an AI its own output... it's analogous to overfitting. We could try to detect and exclude LLM-generated content, but pretty much everything nowadays uses a little bit of it.
    At best we have very little salvageable data to use; at worst, the LLMs actually start getting worse.

    • @thiagopinheiromusic
      @thiagopinheiromusic 5 หลายเดือนก่อน +1

      You raise a crucial point about the sustainability of training LLMs with high-quality data. The feedback loop of AI generating content that's then used to train future models could indeed lead to a degradation in quality, akin to overfitting. Detecting and filtering out AI-generated content is a potential solution, but as you mentioned, it's becoming increasingly challenging as AI use grows.
      One way to address this might be to create more robust methods for curating and verifying data quality, perhaps by combining human oversight with advanced AI techniques. Additionally, diversifying data sources and encouraging the creation of new, high-quality content by humans can help maintain the integrity of training datasets.
      Ultimately, innovation in how we approach AI training and data curation will be key to ensuring that LLMs continue to improve without succumbing to the pitfalls of self-referential data. It's a complex issue, but with thoughtful strategies, we can mitigate these risks.

  • @radio655
    @radio655 3 หลายเดือนก่อน +1

    Excellent explanation. It's always great to watch Computerphile.

  • @seankelley3619
    @seankelley3619 6 หลายเดือนก่อน +6

    Even other big tech companies are admitting that the only reason their models don't do as well as OpenAI's is that OpenAI has such an ungodly amount of data (a lot of which is stolen). And OpenAI has alluded to the fact that they're running out... Sounds like a plateau to me.

  • @luketurner314
    @luketurner314 6 หลายเดือนก่อน +4

    2:04 "Always go back to the data" -Larry Fleinhardt (Numb3rs)

  • @mharrisona
    @mharrisona 14 วันที่ผ่านมา +2

    I think this video will age like fine wine

  • @Cheezitnator
    @Cheezitnator 3 หลายเดือนก่อน +5

    Also literally nobody wants generative AI except the corpos selling it, the corpos who don't want to pay artists/writers, and art thieves. Incidentally, these are all the same entities. So it's only motivated by greed and trend chasing which can only be bad for everyone.

  • @jorgandar
    @jorgandar 6 หลายเดือนก่อน +6

    The algorithm in our brain is NOT AN LLM!!! We may express thoughts through language, but "thought" is not the same as the mere expression of words. We visualize, we have 'feelings', we have fuzzy intuitions, and we have all sorts of unconscious processes that interact outside our awareness in strange ways. The point is, only the surface of all these processes comes out as language. An LLM is not the same as a thinking being. I have no doubt we'll eventually replicate real intelligence, as there is nothing magic about our brains, but right now we are building very interesting surface fakes; it's not actual thinking.

  • @FhtagnCthulhu
    @FhtagnCthulhu 3 หลายเดือนก่อน +1

    "In the meantime, lets look at this paper" is the most academic thing you can say behind "this is only one paper".

  • @niello5944
    @niello5944 6 หลายเดือนก่อน +3

    The problem with relying too much on AI in the medical field, which I have been worried about, is that it means less practice overall for doctors at identifying problems. If most doctors lack the experience they should have gained over years of doing the job, they'll lack the experience to identify the more obscure diseases and underlying problems. That isn't something AI can cover, especially when these kinds of things can change without us knowing, or deprive us of new discoveries.

  • @jonathonreed2417
    @jonathonreed2417 6 หลายเดือนก่อน +8

    This was an interesting video, considering the public discussion of "power laws". I hope you do another video about "synthetic data", which is being discussed now: what exactly it is, why someone would want to use it, drawbacks, etc. I'm personally in the skeptical camp, but it would be interesting to hear an academic answer to these questions.

    • @scampercom
      @scampercom 6 หลายเดือนก่อน

      Came here to make sure someone mentioned this.

    • @bilbo_gamers6417
      @bilbo_gamers6417 6 หลายเดือนก่อน

      Synthetic data is going to be huge and will bring a significant performance increase as time goes on. All of OpenAI's data will likely be generated in-house, in addition to the data they currently have from before the low-bar AI-generated dross began to pollute the internet.

    • @taragnor
      @taragnor 5 หลายเดือนก่อน +1

      The problem is that synthetic data can largely miss the point of finding the exceptions. Synthetic data will be based on some pattern, meaning it will create permutations of the existing data, and those will mostly fall in the most common cases. The real problem with training AI is getting data on edge-case scenarios: it needs to see things it hasn't seen before.

  • @daleatkin8927
    @daleatkin8927 6 หลายเดือนก่อน +2

    @6:30 I think you've misclassified the optimistic argument as being about the cusp of something huge. Even as it is right now, AI will be transformative once it hits the economy. It doesn't have to improve to "general intelligence" to be transformative.

  • @Quargos
    @Quargos 6 หลายเดือนก่อน +25

    Honestly, it sounds like the conclusion you're coming to, about it not being able to infer things that aren't well represented, is a simpler point that feels like it ought to be obvious: the nature of the dataset informs the nature of the resulting model. If the dataset doesn't have the information to differentiate between tree species (or if the info it has is highly limited), then of course the model won't either. The model is simply a "compressed" form of the information fed in, and no other info.
    That "you show it enough cats and dogs and the elephant is just implied", as said at the start, can never hold. Because if you've never seen an elephant, how could you ever be expected to recognise or draw one? I do not believe that extrapolation to anything outside of the dataset will ever truly work.

    • @peterisawesomeplease
      @peterisawesomeplease 6 หลายเดือนก่อน +1

      I don't like the elephant example either. But I think the point of the paper isn't just that if the data is missing the model won't be able to handle it. The point is that you need exponentially growing amounts of data to get linear increases in performance. And we are already running out of high-quality data to feed models.

    • @wmpx34
      @wmpx34 6 หลายเดือนก่อน +4

      I agree, but it sounds like many AI researchers don’t. So where’s the disconnect? Either we are wrong or they are overly optimistic. Like the guy in this video says, I guess we will see in a couple years.

    • @alexismakesgames6532
      @alexismakesgames6532 6 หลายเดือนก่อน +3

      The "elephant" speaks to human creativity. It's the ability to make new things based on base principles. Maybe being trained on only cats and dogs is too optimistic to make an elephant but say it also had some other animals so it knew what "horns" were ect. The hope is you could then give it a very detailed description involving concepts it already knew and hopefully get an elephant. Then you could teach it "elephant" and it basically becomes able to learn anything that can be described. There are a lot of other steps but this is one of the keys to having AGI.
      I agree though, it is terribly optimistic to think this will happen with the current ML models. Which is my main problem with them, they pretty much regurgitate only what they have a lot of basis for and become very derivative and patchy in areas where the data is thin.

    • @lopypop
      @lopypop 6 หลายเดือนก่อน

      In the same way that it will probably eventually be able to solve math problems it hasn't been explicitly trained on, I think the argument goes that once it "understands" enough fundamentals of the real world, it can extrapolate quite creatively.
      This won't work for recalling strict factual data it hasn't been trained on ("draw me an elephant"), but it might work with enough prompting to get something reasonable (generate an image of a large land mammal of a specific size, with particular anatomical properties and other qualities). It's possible that it generates an image that looks like an elephant without ever being trained on elephant photos.

    • @Ylyrra
      @Ylyrra 6 หลายเดือนก่อน +8

      @@wmpx34 Most AI "researchers" aren't asking the question. They're too busy being engineers and looking to improve their AI, not look at the bigger picture. The bigger-picture people at all those companies have a vested interest in a product to sell that has a long history of hyperbole and "just around the corner" wishful thinking.
      You don't ask the people building a fusion reactor how many years away practical fusion is and expect an unbiased remotely accurate answer.

  • @joelandrews8041
    @joelandrews8041 6 หลายเดือนก่อน +23

    One potential solution - has there been any research into developing an AI model that classifies a problem and hands it to another, more specialised AI model?
    For the plant/cat species case, a master AI would identify the subject, and would then ask the specialised, subject-specific AI model for further info on that subject. This keeps the master AI from needing all the vast training data of the subject-specific AI. A rough sketch of the control flow follows below.
    Not sure if I've explained this very well!
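    Something like this exists under names like "mixture of experts" and agent routing (as the replies note). Here's a minimal sketch of the control flow only, with hypothetical stand-in functions in place of real trained models:
    ```python
    # "Master model routes to a specialist" in miniature. Both classifiers
    # here are hypothetical stand-ins that always return fixed answers.
    from typing import Callable

    def coarse_subject(image) -> str:
        """Stand-in general model: cheap, broad categories only."""
        return "tree"  # e.g. one of {"tree", "cat", "bird", ...}

    def tree_species(image) -> str:
        """Stand-in specialist trained only on labelled tree images."""
        return "Quercus robur"

    SPECIALISTS: dict[str, Callable] = {"tree": tree_species}

    def identify(image) -> str:
        subject = coarse_subject(image)        # master model picks the domain
        specialist = SPECIALISTS.get(subject)  # hand off if a specialist exists
        return specialist(image) if specialist else subject

    print(identify(object()))  # -> "Quercus robur"
    ```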

    • @zactron1997
      @zactron1997 6 หลายเดือนก่อน +28

      The concept makes sense, but the problem is more fundamental according to this paper. It's not about whether you can train a model to do all these things, it's about the requirement for data at a rate that doesn't match the supply.
      In your example, the problem is never having enough data to make a good enough "specialist" AI to a sufficient quality.

    • @helix8847
      @helix8847 6 หลายเดือนก่อน +9

      Yeah, it's called fine-tuning. Companies are doing it all the time right now, but they're not just giving it away.

    • @terbospeed
      @terbospeed 6 หลายเดือนก่อน +11

      Agents, Mixture of Experts, RAG, etc

    • @joelandrews8041
      @joelandrews8041 6 หลายเดือนก่อน

      @@helix8847 thanks for this. I'd like to learn more about this @Computerphile!

    • @JayS.-mm3qr
      @JayS.-mm3qr 6 หลายเดือนก่อน +3

      Man... they are developing advancements for everything. Any problem you can think of, people are coming up with ways to address in code. That is the magic of coding: you have a problem, you express the problem and solution in a virtual environment, in coded language that computers understand, and the computer outputs something that we understand. Have you heard of AI agents? Those address the thing you asked about. It turns out that using multiple LLMs, and developing the ability for AI to complete tasks, makes them a lot more effective. This is true without increasing data size. Yes, models are becoming better without new data.

  • @robinvik1
    @robinvik1 6 หลายเดือนก่อน +18

    An AI needs thousands of pictures in order to distinguish between a cat and a dog. A very small child needs about 12 pictures. Clearly they are not learning the same way we are.

    • @97DarkSkull
      @97DarkSkull 6 หลายเดือนก่อน

      That's not true. You can teach Stable Diffusion what you look like from just a few pictures.

    • @robinvik1
      @robinvik1 6 หลายเดือนก่อน +8

      @@97DarkSkull Stable Diffusion was trained on 2.3 billion images.

    • @ericblair5731
      @ericblair5731 2 หลายเดือนก่อน +1

      Clearly this person has never met a human child.
      Children learn from extensive real world exposure to household pets. Not from isolated images. Even if you don't have a dog at home they meet numerous dogs and have extended interactions with those animals from the comfort of their stroller long before they can speak. So no they don't get just a dozen images. They get hours of real world exposure.

    • @BiggumsMcHoney
      @BiggumsMcHoney 2 หลายเดือนก่อน

      ML models more or less create a mathematical imprint of all the data that was fed in, with differences in, e.g., raw pixel data worked out. They don't "learn" at all. "Learning" was just a helpful analogy that has gone way too far and confused the public. Now even some software engineers are confused and think LLMs can "reason".

  • @cajun70122
    @cajun70122 5 หลายเดือนก่อน +6

    There is a misleading statement at about the 1:00 mark, where he says that if you show it enough pictures of cats and dogs then the elephant is implied - but no, that is not true. The "AI" will only recognize a picture of an elephant AS an elephant if it is trained on pictures of elephants and told that they are elephants. Otherwise the best it can do is say "that picture is probably not a cat and not a dog".

    • @Darca1n
      @Darca1n 3 หลายเดือนก่อน

      Heck, turn a picture of a cat or a dog upside down and it will have no idea what it's looking at, no species change required.
      Or from an angle it hasn't been trained on, for that matter.

  • @delespai5592
    @delespai5592 6 หลายเดือนก่อน +17

    Can someone help me out? Mike said that these AIs can't detect specific species of tree, but the interviewer countered with an app he has that can. Mike said it's not the same because the app is just doing classification - but Mike was talking about the AI's ability to classify a specific species of tree, so what exactly is the difference he's pointing out?

    • @jozefwoo8079
      @jozefwoo8079 6 หลายเดือนก่อน +1

      +1 this

    • @braineaterzombie3981
      @braineaterzombie3981 6 หลายเดือนก่อน +9

      It does detect specific species of trees, just less reliably. There is a higher chance of the AI model failing to recognise a rare species than a common one, just like with humans.

    • @ferinzz
      @ferinzz 6 หลายเดือนก่อน +22

      The app takes an image and compares it with a specific dataset, trying to find a close match. Think of it like Google reverse image search: this leaf looks a lot like this leaf in the database.
      Generative AI instead takes noise and tries to shape it into something matching the text associated with the training data. Think of it like generating a map in Civ 6: to get close to the desired result, the model manipulates the noise in specific, learned ways.
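      The first style of model is easy to caricature in a few lines (toy feature vectors standing in for real image embeddings; everything here is made up for illustration):
      ```python
      # Toy nearest-neighbour classifier: a "plant app" style model just finds
      # the closest match in a labelled reference set; nothing is generated.
      import numpy as np

      reference = {                      # hypothetical embeddings of known leaves
          "oak":   np.array([0.9, 0.1, 0.0]),
          "maple": np.array([0.1, 0.9, 0.2]),
      }

      def classify(embedding: np.ndarray) -> str:
          # pick the label whose reference embedding is nearest (Euclidean)
          return min(reference, key=lambda k: np.linalg.norm(reference[k] - embedding))

      print(classify(np.array([0.8, 0.2, 0.1])))  # -> "oak"
      ```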

    • @11DJcube
      @11DJcube 6 หลายเดือนก่อน +30

      I imagine this app is not using a general generative model, but something called a discriminative model.
      If you ever watch an intro to machine learning, it's usually about creating a discriminative model that, for example, tries to recognize a handwritten digit or something else that's pretty much a well-defined, closed set. A model like that doesn't generate any new content but classifies the input into already-established categories, in this case tree species.

    • @JayS.-mm3qr
      @JayS.-mm3qr 6 หลายเดือนก่อน +9

      Classification, in its simplest binary form, is a basic machine learning task that predicts whether something is or is not, e.g. cat or not-cat. To classify among many things, a model is trained for multiclass classification; it's just a different set-up. I use PlantNet all the time for plant classification.
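      For reference, most libraries handle multiclass classification directly; a minimal scikit-learn sketch on toy 2-D data (not real plant features):
      ```python
      # Three-class classification with logistic regression on toy points.
      from sklearn.linear_model import LogisticRegression

      X = [[0, 0], [0, 1], [5, 5], [5, 6], [9, 0], [9, 1]]
      y = ["cat", "cat", "dog", "dog", "fern", "fern"]  # more than two classes

      clf = LogisticRegression().fit(X, y)
      print(clf.predict([[5, 5.5], [9, 0.5]]))  # -> ['dog' 'fern']
      ```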

  • @AIWorkforceEvolution1
    @AIWorkforceEvolution1 7 วันที่ผ่านมา

    These informative YouTube videos are incredibly interesting! I truly enjoy learning from them, and I'm also trying to share similar information with my viewers on my own channel.

  • @TheBookgeek7
    @TheBookgeek7 6 หลายเดือนก่อน +4

    This is very interesting. I did something like he was suggesting once, with the Bing AI. I have this entirely original (and possibly insane) idea in the world of Aesthetics, and I wanted to try it out on the AI; I mean, if anything is likely to tell you whether you've got something completely insane, it would be a device which basically exists to tell you the most popular ideas of the moment - so... that's basically why I did it. All it could actually do was repeat what I was saying back to me, almost verbatim. It was weird! 🤣

  • @squeekywheel9591
    @squeekywheel9591 6 หลายเดือนก่อน +3

    Summary: at some point there are diminishing returns. Then you need a new tech/methodology leap.

  • @zoroark567
    @zoroark567 6 หลายเดือนก่อน +1

    I think there's an additional interesting problem here: the more data we add to these models, the more pronounced the issue of under-represented edge cases becomes. If having more pictures of house cats than of rare endangered animals is a problem now, doubling the input data is going to double the size of that problem.

  • @danremenyi1179
    @danremenyi1179 6 หลายเดือนก่อน +4

    This is a very old concept in economics called the Law of Diminishing Returns. And it applies to a lot of different aspects of life.

    • @lazyraccoon1526
      @lazyraccoon1526 4 หลายเดือนก่อน

      There is no intrinsic reason to expect diminishing returns

    • @demonzabrak
      @demonzabrak 3 หลายเดือนก่อน

      @@lazyraccoon1526 study more about entropy

  • @owlmostdead9492
    @owlmostdead9492 6 หลายเดือนก่อน +19

    It'll plateau because there's basically no intelligent design: it's an algorithm built by brute force that'll never evolve beyond its dataset.

    • @SmileyEmoji42
      @SmileyEmoji42 6 หลายเดือนก่อน +8

      That's not the problem. It's not even "expert" on the entire dataset, only on the common bits.

    • @taragnor
      @taragnor 6 หลายเดือนก่อน +6

      Well the algorithm itself is intelligently designed, but each algorithm has its limits. Honestly it's hard to believe AI intelligence is limitless. Even biological intelligence is heavily capped by your brain. Sure you can learn things, but your actual reasoning capability eventually hits a wall.

    • @fakecubed
      @fakecubed 6 หลายเดือนก่อน +1

      It's designed to the level of our current human intelligence.

  • @stu8924
    @stu8924 4 หลายเดือนก่อน

    Thank you for such an enlightening video. I have been following your channel for some time now, and your team should be applauded for the way each of you translates complicated topics into information consumable by non-academic folks like me.
    It sounds very similar to when a person first starts a diet or a fitness program: at first they see fantastic results and look and feel fabulous, but as they continue the same routine, the results begin to diminish as their body adapts to its new norm.
    It is at this point that the person needs to cycle their routine, making alterations/variations to their diet or fitness program.
    These gen-AI models need to cycle their routines, or make adjustments to their current fitness programs, and over time they will once again see fantastic results.

  • @NoNameAtAll2
    @NoNameAtAll2 6 หลายเดือนก่อน +27

    What's up with the red filter?

    • @johnornelas
      @johnornelas 6 หลายเดือนก่อน +6

      Looks like an incorrect white balance (WB) setting in the camera.

    • @Asidders
      @Asidders 6 หลายเดือนก่อน +1

      it's to trigger the AI

  • @jimmy21584
    @jimmy21584 6 หลายเดือนก่อน +16

    Considering the open source community’s achievements with model building, fine tuning and optimisation, I think the really interesting improvements will come from there, and not so much from throwing more data into training and stacking more layers.

    • @fakecubed
      @fakecubed 6 หลายเดือนก่อน +5

      Yeah, the open source community is where all the best innovations are happening in figuring out how to make the most out of the models that are out there, but big investor money paying for PhDs to do research gets us important mathematical innovations and big volumes of training data. Both are necessary.

    • @sebastiang7394
      @sebastiang7394 6 หลายเดือนก่อน

      Is it really, though? Unix was a commercial product. The first desktops were commercial products. In what discipline does open source lead the technical curve?

    • @fakecubed
      @fakecubed 6 หลายเดือนก่อน +1

      @@sebastiang7394 The first desktops were home-built kit computers. Open source doesn't mean non-commercial.

    • @sebastiang7394
      @sebastiang7394 6 หลายเดือนก่อน

      @@fakecubed OK, so you're talking about a desktop PC, not the concept of a GUI. Misunderstanding on my part, sorry about that. Yeah, tinkerers definitely do a lot of great stuff, especially in the early days of computing and still today. But I think it's hard to argue that the majority of innovation happens in the open-source world. Just look at all the stuff that happened at Bell Labs or IBM. The beginning of open source (the GNU project) basically aimed at reproducing the commercial tools that already existed: the UNIX ecosystem. And still today most big open-source projects are either backed by large corporations or rather small in scale.

  • @mattyrjackson4261
    @mattyrjackson4261 4 หลายเดือนก่อน +2

    10:45 I have seen this lack of ability in ChatGPT when asking for code in highly advanced, new research areas: it essentially says it cannot, and gives a more basic answer with the advice to research further yourself. I.e., it's just a basic skeletal example, which the human has to customise.

  • @johnpwmcgrath
    @johnpwmcgrath 3 หลายเดือนก่อน +5

    It would be really nice if AI companies didn’t need to steal everyone else’s work to be successful

  • @Awould-m5s
    @Awould-m5s 6 หลายเดือนก่อน +9

    This was my first instinctive thought about how useful AI will be in the long run for this type of thing. Thank you for putting it into words.

    • @fakecubed
      @fakecubed 6 หลายเดือนก่อน +1

      It won't ever be less useful than it is right now, and it's already very useful. The logarithmic growth may be true, or not, but we're still in the very steep part of it.

  • @penepleto1210
    @penepleto1210 6 หลายเดือนก่อน +2

    It fascinates me to think that techbros fail to realize that by the time you design a generative AI that can, for example, solve world hunger, you necessarily must have already solved world hunger without that AI, since a generative AI can only *emulate* what you can already do

  • @rayjaymor8754
    @rayjaymor8754 6 หลายเดือนก่อน +9

    AI is autocorrect on steroids. As far as I can tell it's a great tool; I can get it to write simple JavaScript way faster than I can type it out myself.
    But it can't create anything "new"; it just reiterates concepts that already exist and are well documented.
    Now granted, it can do this faster than I can as a human.
    But if you need it to come up with an idea or a new way of doing something, that is where it falls apart.

  • @KipIngram
    @KipIngram 6 หลายเดือนก่อน +9

    I believe it. Our own human thinking isn't just "a huge amount of computing power." There's something else going on we don't understand - the machines are not ever going to be able to do everything we can do.

    • @paulfalke6227
      @paulfalke6227 12 วันที่ผ่านมา

      The creation cannot be greater than the creator. Therefore, our AI products will stay forever below our own intelligence capabilities.

    • @KipIngram
      @KipIngram 12 วันที่ผ่านมา

      @@paulfalke6227 Well, to the extent we do things via step-by-step process (and we do do that sometimes), the AIs will outclass us. They just have so much more SPEED, and they're getting more all the time. What they'll never be able to do is the "flashes of inspiration" that we have - the intuitive "gut instinct" stuff that turns out right way more often than random luck. That's the part of us that isn't "mechanized" and will never be within reach of a machine.
      And I think all of that is closely connected with how we're able to actually "be aware" of the world around us - machines will never do that either. It will never be "ending a life" when we demolish a machine.

  • @turul9392
    @turul9392 14 วันที่ผ่านมา +1

    Before he started drawing the graphs I suspected the real-world performance would be logarithmic. Basically a drag race: it gets harder as you go.

  • @donkeroo1
    @donkeroo1 6 หลายเดือนก่อน +5

    Generative AI is designed to produce average results, regression to the mean. The results are already sounding robotic.

  • @johnarnebirkeland
    @johnarnebirkeland 6 หลายเดือนก่อน +20

    Expecting AGI from increasing LLM size and complexity sounds a lot like emergence in complex systems theory. I.e. there is precedent for this happening in biology etc., but there is absolutely no guarantee that it will happen, or, if there is emergence, that it will be anything useful in the context of an AGI interacting with humans. But then again, you could also argue that the LLM results we currently see are already proof of emergence.

    • @Qacona
      @Qacona 6 หลายเดือนก่อน +14

      I suspect that we'll develop models that are able to fool humanity into thinking they're AGI long before we actually hit 'real AGI'.

    • @kneesnap1041
      @kneesnap1041 6 หลายเดือนก่อน

      I think it's more likely a new technique / algorithm, or rather a collection of them, rather than a complex system creating unexpected behavior through just sheer data size.
      Think about how much "training data" a person gets. We only need to see a handful of cats & dogs before we can reliably tell them apart. We don't need dozens, hundreds, or thousands.
      To imagine that massive datasets are an absolute requirement for AI seems a bit unlikely to me because of this.

    • @Real-HumanBeing
      @Real-HumanBeing 5 หลายเดือนก่อน +1

      They aren't. They're proof of the scalability of contexts from the dataset. But the fact of the matter is, giving an answer that is more intelligent than the dataset is just as much a deviation from the dataset's probabilities as a worse answer is. In short, it's the dataset.

    • @thiagopinheiromusic
      @thiagopinheiromusic 5 หลายเดือนก่อน

      It's a fascinating debate. While the current advancements in LLMs are remarkable and suggestive of emergent properties, relying solely on scaling up models for AGI might be overly optimistic. A more targeted approach, possibly integrating other AI techniques and innovations, could be necessary to achieve meaningful AGI.

  • @VikiSil
    @VikiSil 6 หลายเดือนก่อน +2

    The problem is a human one. We are conflating what being a generalist and being a specialist means. The human idea of a *general* A.I. is an A.I. that *specializes* in everything.