Why Not Just: Think of AGI Like a Corporation?

  • Published on Jun 19, 2024
  • Corporations are kind of like AIs, if you squint. How hard do you have to squint though, and is it worth it?
    In this video we ask: Are corporations artificial general superintelligences?
    Related:
    "What can AGI do? I/O and Speed" ( • What can AGI do? I/O a... )
    "Why Would AI Want to do Bad Things? Instrumental Convergence" ( • Why Would AI Want to d... )
    Media Sources:
    "SpaceX - How Not to Land an Orbital Rocket Booster" ( • How Not to Land an Orb... )
    Undertale - Turbosnail
    Clerks (1994)
    Zootopia (2016)
    AlphaGo (2017)
    Ready Player One (2018)
    With thanks to my excellent Patreon supporters:
    / robertskmiles
    Jordan Medina
    Jason Hise
    Pablo Eder
    Scott Worley
    JJ Hepboin
    Pedro A Ortega
    James McCuen
    Richárd Nagyfi
    Phil Moyer
    Alec Johnson
    Bobby Cold
    Clemens Arbesser
    Simon Strandgaard
    Jonatan R
    Michael Greve
    The Guru Of Vision
    David Tjäder
    Julius Brash
    Tom O'Connor
    Erik de Bruijn
    Robin Green
    Laura Olds
    Jon Halliday
    Paul Hobbs
    Jeroen De Dauw
    Tim Neilson
    Eric Scammell
    Igor Keller
    Ben Glanton
    Robert Sokolowski
    Jérôme Frossard
    Sean Gibat
    Sylvain Chevalier
    DGJono
    robertvanduursen
    Scott Stevens
    Dmitri Afanasjev
    Brian Sandberg
    Marcel Ward
    Andrew Weir
    Ben Archer
    Scott McCarthy
    Kabs Kabs Kabs
    Tendayi Mawushe
    Jannik Olbrich
    Anne Kohlbrenner
    Jussi Männistö
    Mr Fantastic
    Wr4thon
    Dave Tapley
    Archy de Berker
    Kevin
    Marc Pauly
    Joshua Pratt
    Gunnar Guðvarðarson
    Shevis Johnson
    Andy Kobre
    Brian Gillespie
    Martin Wind
    Peggy Youell
    Poker Chen
    Kees
    Darko Sperac
    Truls
    Paul Moffat
    Anders Öhrt
    Lupuleasa Ionuț
    Marco Tiraboschi
    Michael Kuhinica
    Fraser Cain
    Robin Scharf
    Oren Milman
    John Rees
    Shawn Hartsock
    Seth Brothwell
    Brian Goodrich
    Michael S McReynolds
    Clark Mitchell
    Kasper Schnack
    Michael Hunter
    Klemen Slavic
    Patrick Henderson
  • Science & Technology

Comments • 794

  • @MrGustaphe, 5 years ago (+825)

    "Instead of working it out properly, I just simulated it a hundred thousand times" We prefer to call it a Monte Carlo method. Makes us sound less dumb.

    • @riccardoorlando2262, 5 years ago (+123)

      Through the use of extended computational resources and our own implementation of the Monte Carlo algorithm, we have obtained the following.

    • @plapbandit, 5 years ago (+26)

      Hey man, we're all friends here. Sometimes you've just gotta throw shit at the wall til something sticks. Merry Christmas!

    • @pafnutiytheartist, 5 years ago (+10)

      Well it's the second best thing to actually working it out properly

    • @silberlinie, 5 years ago (+7)

      ...simulated it a few MILLION times...

    • @jonigazeboize_ziri6737, 5 years ago (+1)

      How would a statistician solve this?
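For what it's worth, the "just simulate it a hundred thousand times" approach from the video is easy to sketch. A minimal Monte Carlo version of the video's idea-quality model (mean 100 and standard deviation 10 are the video's numbers; the team size of 10 is an arbitrary choice for illustration):

```python
import random
import statistics

def best_idea(team_size, mean=100.0, sd=10.0):
    """Quality of the best idea when each of team_size people draws one
    idea from a normal distribution (the video's toy model)."""
    return max(random.gauss(mean, sd) for _ in range(team_size))

# "Simulate it a hundred thousand times" instead of working it out properly.
random.seed(0)
samples = [best_idea(10) for _ in range(100_000)]
print(round(statistics.mean(samples), 1))  # best-of-10 averages around 115
```

A statistician would instead note that the maximum of i.i.d. draws has CDF F(x)^n and work from there, as a later comment in this thread does.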

  • @dirm12, 5 years ago (+307)

    You are definitely a rocket surgeon. Don't let the haters put you down.

  • @user-go7mc4ez1d, 5 years ago (+590)

    "Like Starcraft".
    That aged well....

    • @Qwerasd, 5 years ago (+15)

      Was about to comment this.

    • @CamaradaArdi, 5 years ago (+6)

      I don't even know if alphaStar had played vs. TLO by then, but I think it did.

    • @RobertMilesAI, 5 years ago (+241)

      It said 'for now'!

    • @guyincognito5663, 5 years ago (+8)

      Robert Miles you lied, 640K is not enough for everyone!

    • @Zeuts85, 5 years ago (+23)

      I wouldn't say this has been demonstrated. So far AlphaStar can only play as and against Protoss, and it hasn't played any of the top pros. Don't get me wrong, I think Mana is an amazing player, but until it can consistently beat the likes of Stats, Classic, Hero, and Neeb (without resorting to super-human micro), then one can't really claim it has beaten humans at Starcraft.

  • @618361, 5 years ago (+280)

    For anyone interested in the statistics of the model at 6:16:
    The cumulative distribution function (cdf) of the maximum of multiple random variables is, if they are all continuous random variables and independent of one another, the product of the cdfs. This can be used to solve analytically for the statistics he shows throughout the video:
    Start with the pdf (bell curve in this case) for the quality of one person's idea and integrate it to get the cdf of one person. Then, since each person is assumed to have the same statistics, multiply that cdf by itself N times, where N is the number of people working together on the idea. This gives you the cdf of the corporation. Finally, you can get the pdf of the corporation by taking the derivative of its cdf.
    For fun, if you do this for the population of the earth (7.5 billion) using his model (mean=100, st.dev=10) you get ideas with a 'goodness' quality of only around 164. If an AI can consistently suggest ideas with a goodness above 164, it will consistently outperform the entire human population working together.

    • @horatio3852, 4 years ago (+4)

      thx u))

    • @harry.tallbelt6707, 4 years ago (+9)

      No, actually thank you, though

    • @cezarcatalin1406, 4 years ago (+9)

      That’s if the model you are using is correct... which might not be.
      Edit: Probably it’s wrong.

    • @drdca8263, 4 years ago (+1)

      Oh, multiplying the CDFs, that’s very nice. Thanks!

    • @618361, 4 years ago (+25)

      @@cezarcatalin1406 That's a valid criticism. The part I felt most iffy about was the independence assumption. People don't suggest ideas in a vacuum, they are inspired by the ideas of others. So one smart idea can lead to another. It's also possible that individuals have a heavy tail distribution (like a power law perhaps) instead of a gaussian when it comes to ideas. This might capture the observation of paradigm-shattering brilliant ideas (like writing, the invention of 0, fourier decomposition, etc.). Both would serve to undermine my conclusion. That being said, I didn't want that to get in the way of the fun so I just went with those assumptions.
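The analytic route @618361 describes can be checked numerically. A sketch under the same assumptions (independent N(100, 10) idea qualities), solving F(x)^n = 1/2 by bisection to get the median of the maximum:

```python
import math

def max_quality(n, mean=100.0, sd=10.0, p=0.5):
    """Solve F(x)**n = p for x, where F is the normal CDF: the quality
    level the best of n independent ideas stays below with probability p
    (p = 0.5 gives the median of the maximum)."""
    target = p ** (1.0 / n)  # required per-person CDF value
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))
    lo, hi = mean, mean + 20.0 * sd  # bracket the root
    for _ in range(200):  # bisection
        mid = 0.5 * (lo + hi)
        if cdf(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(max_quality(7.5e9)))  # about 164, matching the comment above
```

The caveats in the replies still apply: this only holds if the draws really are independent and Gaussian.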

  • @yunikage, 4 years ago (+91)

    "we're going to pretend corporations don't use AI"
    ah yes, and I'm going to assume a spherical cow....

    • @brumm0m3ntum94, 3 years ago (+12)

      in a frictionless...

    • @Tomartyr, 2 years ago (+7)

      vacuum

    • @linnthwin7315, 1 year ago (+1)

      What do you mean my guy just avoided an infinite while loop

  • @TheOneMaddin, 5 years ago (+45)

    I have the feeling that AI safety research is the attempt to outsmart a (by definition) much smarter entity by using preparation time.

    • @oldvlognewtricks, 4 years ago (+19)

      I seem to remember Mr. Miles mentioning in several videos that trying to outsmart the AI is always doomed, and a stupid idea (my wording). Hence all the research into aligning AI goals with human interests and which goals are stable, rather than engaging in a cognitive arms race we would certainly lose.

    • @martinsmouter9321, 4 years ago (+2)

      It's an attempt to get a head start: with more time and resources, we might be able to overwhelm it.
      A little like building a fort: you know bigger armies will come, so you build structures to help you fight them off more efficiently.

    • @augustday9483, 1 year ago (+2)

      And it looks like we've run out of prep time. AGI is very close. And the pre-AGI that we have right now are already advanced enough to be dangerous.

  • @sashaboydcom, 4 years ago (+69)

    Great video, but one thing I think you missed is that a corporation doesn't need any of its employees to know what works, it just needs to survive and make money.
    This means that the market as a whole can "know" things that individuals don't, since companies can be successful without fully understanding *why* they're successful, or fail without anyone knowing why they fail. Even if a company succeeds through pure accident, the next companies that come along will try to mimic that success, and one of *them* might succeed by pure accident, leading to the market as a whole "knowing" things that people don't.

    • @AtticusKarpenter, 1 year ago (+3)

      And... that's a pretty ineffective way of doing things, judging by modern HollyWoke or Ubisoft

    • @glaslackjxe3447, 1 year ago (+2)

      This can be seen as part of AI training: if a corporation has the wrong goal or wrong solution it will be outcompeted and fail, and the companies that survive have been selected for successful ways to maximise profit

    • @monad_tcp, 1 year ago

      @@AtticusKarpenter I bet those are not following market signals and not succeeding at the market, yet they survive from income from other "sources", the stupid ESG scores

    • @rdd90, 1 year ago

      This is true, but only for tasks with a small enough solution space that it's feasible to accidentally stumble across the correct solution. This is unlikely to be the case for sufficiently hard intellectual problems. Also, a superintelligence will likely be better at stumbling across solutions than corporations, since the overhead of spinning up a new instance of the AI will likely be less than that of starting a new company (especially in terms of time).

  • @petersmythe6462, 5 years ago (+338)

    Corporations still have basically human goals, just those of the bourgeoisie.
    AI can have very inhuman goals indeed.
    A corporation might bribe a government to send in the black helicopters and tanks to control your markets so it can enhance the livelihood of the shareholders.
    An AI might send in container ships full of nuclear bombs and then threaten your country's dentists with nuclear annihilation if they don't take everyone's teeth, because its primary goal and only real purpose in life is to study teeth at large sample sizes.

    • @SA-bq3uy, 5 years ago (+3)

      Humans cannot have differing terminal goals, some are just in a better position to achieve them.

    • @fropps1, 5 years ago (+46)

      @@SA-bq3uy What do you mean by that? I feel like it's pretty self-evident that people can have different goals. I don't have "murdering people" as a terminal goal for example, but some people do.

    • @SA-bq3uy, 5 years ago (+7)

      @@fropps1 These are instrumental goals, not terminal goals. We all seek power whether we're willing to accept it or not.

    • @fropps1, 5 years ago (+46)

      @@SA-bq3uy If your argument is what I think it is then it's reductive to the point where the concept of terminal goals isn't useful anymore.
      I don't happen to agree with the idea that people inherently seek power, but if we take that as a given, you could say that the accumulation of power is an instrumental goal towards the goal of triggering the reward systems in the subject's brain.
      It is true that every terminal goal is arrived at by the same set of reward systems in the brain, but the fact that someone is compelled to do something because of their brain chemistry doesn't tell us anything useful.

    • @SA-bq3uy, 5 years ago (+2)

      @@fropps1 All organisms are evolutionarily selected according to the same capacities, the capacity to survive and the capacity to reproduce. The enhancement of either is what we call 'power'.

  • @petersmythe6462, 5 years ago (+448)

    "You can't get a baby in less than 9 months by hiring two pregnant women."
    Wow we really do live in a society.

    • @williambarnes5023, 5 years ago (+72)

      If you hire very pregnant women, you can get that baby pretty quick, actually.
      The 200 IQ move here is to go to the orphanage or southern border. You can just buy babies directly.

    • @e1123581321345589144, 5 years ago (+14)

      If they're already pregnant when you hire them, then yeah, it's quite possible

    • @dannygjk, 5 years ago (+13)

      I think it's safe to assume that the quote is meant to be read as two women who just became pregnant.
      To assume otherwise is to assume that whoever said it doesn't have enough brain cells to be classified as a paramecium.

    • @isaackarjala7916, 4 years ago (+22)

      It'd make more sense as "you can't get a baby in less than 9 months by knocking up two women"

    • @diabl2master, 4 years ago (+4)

      Oh shut up, you know what he meant

  • @stevenneiman1554, 1 year ago (+16)

    I think one of the most important things to understand about both corporations and AIs is that as an agent's capabilities increase, its ability to do helpful things increases, but the risk of misalignment problems which cause it to do bad things increases faster. As an agent with goals grows, it becomes more able to seek its goals in undesirable ways, the efficacy of its actions increases, it becomes more likely to be able to recognize and conceal its misalignment, AND it becomes less likely you'll be able to stop it if you do discover a problem.

  • @flamencoprof, 5 years ago (+40)

    As a reader of Sci-Fi since the Sixties, I remember at the dawn of easily available computing power in the Eighties I wrote in my journal that the Military-Industrial complex might have a collective intelligence, but it would probably be that of a shark!
    I appreciate having such thoughtful material available on YT. Thanks for posting.

  • @Primalmoon, 5 years ago (+79)

    Only took a month for the Starcraft example to become dated thanks to AlphaStar.

    • @spencerpowell9289, 4 years ago (+5)

      AlphaStar arguably isn't at a superhuman level yet, though (unless you let it cheat)

    • @rytan4516, 4 years ago (+3)

      @@spencerpowell9289 By now, AlphaStar is beyond my skill, even with more limitations than myself.

  • @visigrog, 5 years ago (+46)

    In most corporate settings, a few individuals get to pick which ideas are implemented. From experience, they are almost always not close to the best ideas.

  • @jonathanedwardgibson, 4 years ago (+6)

    I've long thought corporations are analog prototypes of AI, lumbering across the centuries: faceless, undying, immortal, without a moral compass as they clear-cut and plow under another region following their mad, minimal operating rules.

    • @MrTomyCJ, 1 year ago

      Corporations clearly do have a very important moral compass, and even Miles himself considers that so far humanity has been progressing. The fact that some are corrupt doesn't mean corporations as a concept are intrinsically bad, just like with humans in general.

  • @morkovija, 5 years ago (+159)

    Been a long time Rob! Glad to see you

    • @d007ization, 5 years ago (+2)

      Y'all are way more intelligent than I lol.

    • @shortcutDJ, 5 years ago (+1)

      1,5 x speed = 1.5 more fun

    • @stevenmathews7621, 5 years ago (+2)

      @@shortcutDJ not sure about that..
      there might be diminishing returns on that ; P

    • @MrGustaphe, 5 years ago (+1)

      @@shortcutDJ Surely it's 1.5 times as much fun.

    • @diabl2master, 4 years ago

      @@MrGustaphe No, simply 1.5 more units of fun.

  • @Soumya_Mukherjee, 5 years ago (+105)

    Great video Robert. See you again in 3 months.
    Seriously we need more of your videos. Love your channel.

  • @jennylennings4551, 5 years ago (+6)

    These videos deserve way more recognition. They are very well made and thought out.

  • @DavenH, 5 years ago (+16)

    Every one of your videos kicks ass. Some of the most interesting material on the subject.

  • @eclipz905, 5 years ago (+37)

    Credits song: Bad Company

  • @Garbaz, 5 years ago (+2)

    Very interesting! And I really like the little "fun bits" you edit into your videos!

  • @V1ctoria00, 4 years ago (+1)

    I binged several of your videos and noticed that this rocket example comes up another time, as does the example just before it. I thought I was somehow rewatching the same one.

  • @EmilySucksAtGaming, 4 years ago (+7)

    "can you tell I'm not a rocket surgeon" I literally just got done playing KSP, failing at reworking the internal components of my spacecraft

  • @thrallion, 5 years ago (+2)

    Once again, wonderful video. One of the most interesting and well-spoken channels on YouTube!

  • @ThePlayfulJoker, 4 years ago (+2)

    This video is the kind that changed my mind twice in only 14 minutes. I love that it had a true discussion of the subject and not just a half-baked opinion.

  • @cherubin7th, 5 years ago (+6)

    A corporation can also do something like alphago's search tree. Many people have ideas and others improve on them in different directions. Bad directions are canceled until a very good path is found. Also many corporations in competition behave like a swarm intelligence. But still great video!

  • @zzzzzzzzzzz6, 5 years ago

    I've always wondered this and have been pushing this idea... awesome to have a full video on it!
    Well, not the 3 follow-on conclusions, but the comparison to AI systems

  • @blahblahblahblah2837, 4 years ago (+1)

    Love the Dont Hug Me I'm Scared reference!
    Also _wow_ this has become my favourite channel. I wish I had found it 2 years ago

  • @jared0801, 5 years ago (+1)

    Great stuff, thank you so much for the video Rob

  • @buzz092, 5 years ago (+2)

    Excellent Clerks reference! Also the video was outstanding as usual. :P

  • @qmillomadeit, 5 years ago (+57)

    I've always thought about the connection of corporations to AI, as they do seek to maximize their goals in the most efficient way. Glad you put out this very well-thought-out video :)

    • @dannygjk, 5 years ago (+3)

      Corporations are far from efficient.

    • @ziquaftynny9285, 4 years ago (+3)

      @@dannygjk relative to what?

    • @dannygjk, 4 years ago (+1)

      @@ziquaftynny9285 Relative to AI ;)

    • @dannygjk, 4 years ago (+1)

      @Stale Bagelz Corporations are plagued with many of the issues that humanity has in general. For example power struggles within the corporation.

    • @PsychadelicoDuck, 4 years ago (+2)

      @@dannygjk I think it's less "far from efficient", and more a stop-button/specification problem. The institutions (and the people making them up) are very good at maximizing the chances of their success, as given by the metrics that the broader systems (society/government for the institutions, and internal politics for the individuals) evaluate them by. The problems are, those metrics are not necessarily measuring what people think they are measuring (due to loopholes, outright lying, etc.), any attempts to change those metrics will be fought by the organizations currently benefiting from them, and that the fundamental social-economic system those original metrics were designed from presupposed that morality was either a non-factor or would arise naturally from selfish behavior. I'm also going to point out that the "general humanity issues" you mention are greatly exacerbated by that same set of problems.

  • @Mr30friends, 5 years ago (+5)

    This video is actually amazing. Wow. So much useful information covered. And not just useful for people interested in AI. Most of this could apply anywhere from how businesses work to how different political systems work and to pretty much anything else.

  • @Ybalrid, 4 years ago

    A coworker just shared this video with me. I had no idea you had your own YouTube channel. I like Computerphile a lot, including your ML/AI videos, so I instantly subscribed!

  • @brunogarnier2855, 5 years ago (+5)

    Thank you for this great video.
    It could be interesting to go through the same exercise with the whole world's economy,
    and evaluate the "invisible hand of the market" as an artificial selection AI...
    Have a good weekend!

    • @MrTomyCJ, 1 year ago

      I find the personification of the market (the "invisible hand") a horrible mistake, as the whole point of the market is precisely that it's not a single entity; it doesn't have a particular intention. It's just a network of people with DIFFERENT ones.

  • @TXWatson, 5 years ago (+4)

    Looking forward to episode 2 of this! The utility of this analogy, I think, is that corporations, as intelligent nonhuman agents, give us the opportunity to experiment with designing utility functions that might be less harmful when implemented.

  • @donaldhobson8873, 5 years ago (+117)

    This is all making the highly optimistic assumption that the people in the corporation are cooperating for the common good. In many organizations, everyone is behaving in a "stupid" way, but if they did something else, they would get fired.

    • @gasdive, 5 years ago (+20)

      Yes, but individual neurons are 'stupid'. Individual layers of a neural net are 'stupid'

    • @stevenmathews7621, 5 years ago (+5)

      you might be missing Price's Law there.
      (an application of Zipf's Law)
      a small part (the √ of the workers) is working for the "common good"

    • @NXTangl, 4 years ago (+16)

      Also that the workers/CEOs are always aligned with shareholder maximization, as opposed to personal maximization. A company can destroy itself to empower a single person with money and often does.

    • @Gogglesofkrome, 4 years ago (+2)

      what is this 'common good,' anyway? is it some ideologically driven concept that differs entirely between all humans? Ironically it is this very 'common good' which drives many companies to do evil. After all, the road to hell is paved in human skulls and good intentions.

    • @NXTangl, 4 years ago (+2)

      @@Gogglesofkrome Common good of the shareholders in this case.

  • @arthurguerra3832, 5 years ago

    Finally! I was tired of rewatching your old videos. haha Keep 'em coming

  • @acorn1014, 4 years ago (+6)

    I noticed an interesting quirk: the model ignores the difficulty of finding the right option. If you took 361 people and had them all play Go, they could between them consider every point on the board, yet they still couldn't beat our current AI. That's how important the ability to evaluate options is.

  • @DieBastler1234, 5 years ago (+2)

    Content and presentation are brilliant; I'm sure matching audio and video quality will follow.
    Subbed :)

    • @RobertMilesAI, 4 years ago

      Is this about the black-and-white bits at the start that are just using the phone's internal mic, or is there a problem with my lav setup?

    • @theblinkingbrownie4654, 4 months ago

      @@RobertMilesAI Maybe they watched the video before it finished processing the higher qualities? Do you release videos before they're fully processed?

  • @JM-us3fr, 5 years ago (+1)

    This was my question! Thanks Rob for answering it

  • @tho207, 5 years ago (+1)

    Should someone bring AGI to us, I hope it's a person like you. Your sensibleness and sensitivity are outstanding. I'll resume the video now, cheers

  • @AiakidesAkhilleus, 5 years ago (+1)

    Great quality video, congratulations

  • @DJHise, 5 years ago (+8)

    It took one month from when this video was made for AI to start crushing Starcraft professional players.
    (AlphaStar played both Dario Wunsch and Grzegorz Komincz, ranked 44th and 13th in the world respectively; both were beaten 5 to 0.)

  • @commenter3287, 4 years ago (+1)

    I have enjoyed your computerphile videos, but these scripted ones are even better. I had never heard the AI/Corporation comparison before, so in one succinct video you introduced me to a very interesting analogy and analyzed the problems with the analogy very well.

  • @ricardoabh3242, 4 years ago

    Always really interesting and clear, with a nice open-ended storyline

  • @adrianmiranda5531, 5 years ago (+9)

    I just came here to say that I appreciated the Tom Lehrer reference. Keep up the great videos!

  • @lobrundell4264, 5 years ago (+4)

    Yeesss Rob is back as good as ever!

  • @willemvandebeek, 5 years ago

    Merry Christmas Robert! :)

  • @limitless1692, 5 years ago

    Wow, this video was really interesting.
    Thanks for creating it

  • @cupcakearmy, 5 years ago

    Amazing content again. Keep it up!

  • @its.dan.eastwood, 5 years ago

    Great video, thanks for sharing!

  • @TheConfusled, 5 years ago

    Yay a new video. Mighty thanks to you

  • @ChibiRuah, 4 years ago (+1)

    I found this video very good, as I'd thought about this topic before, and it expands on the comparison and where it fails

  • @Bootleg_Jones, 5 years ago (+8)

    I love that you used XKCD's Up Goer Five as your example rocket blueprint. Definitely one of the best comics Randall has ever put out.

  • @BM-bu4xd, 5 years ago

    Yeah! terrific. Much thanks

  • @GreenDayFanMT, 5 years ago

    Very interesting topic. Thanks for this viewpoint

  • @LeoStaley, 5 years ago (+2)

    The video you did on Computerphile about Asimov's laws of robotics was the most impactful, concise expression of what the danger of AI development is. You made the point that "you have to solve ethics", and the fact that the people building it are going, "hold on, I'm just a computer programmer, I didn't sign up for that." Those two things combined have stuck with me for years.

  • @aenorist2431, 5 years ago (+2)

    They just prove that corporations are problems in similar ways,
    not that somehow neither is a problem.
    Corporations have to be tightly controlled by the population (in the form of government) to utilize their potential without allowing their diverging goals to cause excessive damage.

  • @bibasniba1832, 4 years ago

    Thank you for sharing!

  • @lucbloom, 1 year ago

    Is that a Don’t Hug Me I’m Scared reference in the graph???
    Oh man so awesome.

  • @joelkreissman6342, 4 years ago (+2)

    I've said it before and I'll say it again, "bureaucracy is a human paperclip maximizer".
    Doesn't matter if it's a private corporation or governmental.

  • @hayuseen6683, 4 years ago

    A wonderfully well-considered problem, presented both bite-sized and expounded upon.
    Logicians are some of my favorite people.

  • @thatchessguy7072, 1 year ago (+1)

    @9:58 In answer to your rhetorical question, I need to reference the baduk games played between Alphago zero and Alphago master. Zero plays batshit crazy strategies where even the tiniest inaccuracies cause the position to spiral into catastrophe but zero still manages to win. Zero’s strategy does not look good to amateur players, nor to professional players, but it works, it just works. Watching these games feels like listening to two gods talk, one of which has gone mad.
    @10:02 ah… well we recognized move 37 as good after the AI showed that to us.

  • @pacibrzank78, 5 years ago (+1)

    Every haircut you had so far was on point

  • @Verrisin, 5 years ago (+2)

    I like this idea overall. Somewhat smarter, but also somewhat slower, and controllable by other grouped-human entities (like governments).
    Plus a lot of other points, but I think that is the main thing that differentiates it from ASI.

  • @xDeltaF1x, 4 years ago (+7)

    I think the statistical model is a bit flawed/oversimplified. Groups of humans don't just select the best idea from a pool; they will often build upon those ideas to create new and better ones.

    • @CommanderPisces, 4 years ago

      Basically this just means that an "idea" can actually have several smaller components that can be improved upon. I think this is more than offset by the fact that (as discussed in the video) humans still can't select the best ideas even when they're presented.
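One way to see the difference @xDeltaF1x points out is to compare the video's pick-the-best model with a toy propose-and-improve variant. All parameters here (the improvement step size, the team of 10, the trial count) are made-up illustrative choices, not anything from the video:

```python
import random

def select_best(n, mean=100.0, sd=10.0):
    """The video's model: n people each propose one idea; take the best."""
    return max(random.gauss(mean, sd) for _ in range(n))

def build_on_best(n, mean=100.0, sd=10.0, step_sd=5.0):
    """Toy variant: one idea, then n - 1 attempted improvements;
    a variation is kept only when it is actually better."""
    best = random.gauss(mean, sd)
    for _ in range(n - 1):
        best = max(best, best + random.gauss(0.0, step_sd))
    return best

random.seed(1)
trials = 20_000
one_shot = sum(select_best(10) for _ in range(trials)) / trials
iterated = sum(build_on_best(10) for _ in range(trials)) / trials
print(one_shot < iterated)  # with these made-up numbers, iterating wins
```

Which variant wins depends entirely on the step size, which is the commenters' point: the model's conclusions are sensitive to how idea generation is assumed to work.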

  • @Supreme_Lobster, 5 years ago (+10)

    Those layers aren't gonna stack by themselves

  • @loopuleasa, 5 years ago (+1)

    3:48
    Nice thinking adding the "(for now)" text in the video, as Starcraft was already beaten by DeepMind a month ago

  • @faustin289, 4 years ago (+8)

    "Evaluating solutions is easier than coming up with them"
    This is why I should earn more than my boss....I come up with all the ideas; the only thing he does is criticize and pick what idea to take forward!

    • @oldvlognewtricks, 4 years ago (+9)

      Your reasoning makes perfect sense, assuming people get paid based on the difficulty of their work. Oh, wait...

    • @pluto8404, 4 years ago (+1)

      Then become the boss if it is so easy.

    • @landonpowell6296, 4 years ago (+3)

      @@pluto8404
      Becoming the boss != Doing the boss's work.
      It's not easy to be born rich unless you already were.

    • @MrTomyCJ, 1 year ago

      @@landonpowell6296 yeah, the issue here is that in reality the market doesn't directly reward intelligence or hard work; it rewards the satisfaction of consumers' needs. It seems unfair, but the alternative is much worse. Besides, intelligence and hard work may not be strictly necessary, but they very often do put you on the right path. And someone being born lucky or rich doesn't really mean they are being unfair to others.

  • @pierfonda, 5 years ago (+3)

    Ahhh the move 37/Clerks reference!! Perfect

  • @alexwood020589, 1 year ago

    I think another important point about idea quality in large teams is the selection process. No team is coldly evaluating every idea and picking the objectively best one. The people who can articulate their ideas best, or shout the loudest, or happen to be the CEO's son are the ones whose ideas get implemented.

  • @travcollier, 5 years ago

    A lot of the "sort of" points are very likely to apply to AGIs (at least in the early days) too.
    Anyways, we could certainly benefit from being better at aligning the goals and actions of corporations with humanity as a whole, and I think AI safety research could help with that while gaining insights about future AGIs.

  • @ninjagraphics1 5 years ago

    Thanks so much for this

  • @definitelynotcole 1 year ago

    Love that bit at the start.

  • @natfrey6503 5 years ago +1

    We might also consider some forms of government as behaving like AIs - even societies, for that matter. They can all go awry when citizens who go along with the "program" are convinced their actions are for a higher good. It's the conundrum of how good-natured people can participate in the making of an avoidable calamity. But this brings in the question of human evil, or moral failing (as we see so much in large corporations), which even when quite innocuous on an individual level can be brutal when added up on a mass level.

  • @thewhitefalcon8539 1 year ago +1

    This diminishing returns stuff presumably also applies to electronic AGI. Look at the server resources they pour into GPT.

  • @ToriKo_ 5 years ago +2

    I just want to say thanks for making these videos! Also nice Undertale reference

  • @hikaroto2791 2 years ago

    this was an astoundingly interesting video

  • @nazgullinux6601 5 years ago

    Loved the "Bad Company" acoustic at the end. As always, another 1-up to those not formally schooled who routinely spout nonsensical "what-ifs" at you as if they are the first person to think of the idea haha.

  • @ianprado1488 5 years ago

    Such a creative discussion

  • @DYWYPI 1 year ago +1

    When thinking about AI as a metaphor for corporations, rather than the other way around, it's not necessarily the superhuman *intelligence* of the AI that is important or that makes them inherently dangerous - merely the fact that the intelligence makes it superhumanly *powerful*. Whether or not we accept that a corporation is significantly more intelligent than a human, they're fairly self-evidently significantly more powerful than one, with more ability to effect change in the world and to gather instrumental resources to increase that ability.

  • @leninalopez2912 5 years ago +24

    This is fast becoming even more cyberpunk than Neuromancer.

  • @RoboBoddicker 5 years ago

    Last year in the US, one of the big sporting goods retailers stopped carrying semi-automatic rifles and tightened restrictions on their gun sales in the wake of mass shootings. That decision was made solely by the CEO and it definitely didn't please a lot of shareholders. That's another big difference, I think, between corporations and AGI - the big decisions in a corporation are ultimately made by a small group of humans with human values. Not that we can always expect corporations to put morality over profits obviously, but executives can at least *recognize* an egregious situation and make moral judgments. An AGI doesn't have any such safeguards.
    Fantastic video as always, btw!

  • @brr.petrovich 5 years ago

    We must have a new video! It's a perfect time for it

  • @bscutajar 5 years ago

    At 11:45 he mentions you can keep adding more people and they will do the job faster. A little algebra shows that, for the number-adding example, the optimal number of people working in parallel is the square root of the number of numbers; adding more people beyond that point will slow the process down.
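
The √N claim above can be sanity-checked with a simple cost model (my own assumption, not from the video): with p people summing N numbers, each person serially adds their share of ⌈N/p⌉ numbers, and the p partial sums then take p − 1 further additions to combine, so the total time is roughly N/p + p, minimized near p = √N.

```python
import math

def parallel_sum_time(n_numbers: int, people: int) -> int:
    """Time steps to sum n_numbers with a given number of people:
    each person serially adds their share in parallel with the others,
    then the partial sums are combined serially."""
    share = math.ceil(n_numbers / people)  # parallel phase: additions per person
    combine = people - 1                   # serial phase: merging the partial sums
    return share + combine

n = 10_000
best = min(range(1, n + 1), key=lambda p: parallel_sum_time(n, p))
print(best, math.isqrt(n))  # the optimum lands at sqrt(n) = 100
```

By the AM-GM inequality N/p + p ≥ 2√N, with equality exactly at p = √N, which is what the search finds.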

  • @albirtarsha5370 4 years ago +1

    Anything You Can Do (Annie Get Your Gun) by Howard Keel, Betty Hutton
    AGI:
    Anything you can be, I can be greater.
    Sooner or later I'm greater than you.

  • @DamianReloaded 5 years ago +2

    Yay! I'm always waiting for your vids. I always tell people, whenever it's brought up, that AGIs are very likely what will destroy us, but also probably the only thing that can save us from our own limitations. (besides jebus)

  • @dantenotavailable 5 years ago +2

    Also don't forget communication costs. Scaling any human process to 1000 people becomes incredibly difficult due to the overhead needed to keep everyone pointed in the same direction. Just documenting the suggestions from 1000 people is going to require a significant number of people and time, and making sure you get the suggestions documented correctly and unambiguously and then evaluated is going to be a herculean task. It's not for no reason that most Agile development techniques are most effective at 5 to 6 people, and most advice for teams of size 10+ is "split into 2 teams that don't need to coordinate".
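
The scaling pain described above has a simple arithmetic core: the number of potential pairwise communication channels in a team of n people is n(n − 1)/2, so it grows quadratically with team size - one reason splitting into non-coordinating teams helps. A quick illustration (the team sizes are arbitrary):

```python
def channels(team_size: int) -> int:
    """Pairwise communication channels in a team: n choose 2."""
    return team_size * (team_size - 1) // 2

for n in (5, 10, 100, 1000):
    print(n, channels(n))  # 5 -> 10, 10 -> 45, 100 -> 4950, 1000 -> 499500
```

Going from a 5-person team to a 10-person team more than quadruples the channels, which matches the "split into 2 teams" advice.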

  • @ryanarmstrong2009 4 years ago

    That clerks reference for move 37 was phenomenal

  • @richarddeese1991 5 years ago

    "...that even governments are sometimes able to move fast enough to deal with them [corporations.]" LOVE IT!!! 😂 Oh, and by the way; LOVE the acoustic rendition of "Bad Company" [by, of course, Bad Company - the ultimate eponymous song!] - BRILLIANT! :D ...and, is that a mandolin? Wonderful! Now, as to these corporations... I think it's pretty clear that most of them act as specialist A.I.s, geared to produce some product or service (or, sometimes, a whole range of them), & as such, they're mostly designed to maximize profits for the shareholders (as you pointed out.) I think this is very much like Deep Thought, or the Go! program; they do indeed act as specialized superintelligences. But they most certainly do NOT qualify in any way as general intelligences, much less general superintelligences. As to the question you posed [quite diplomatically, I must say, as you neatly side-stepped the issue of using any mental health terms!], "Are they 'misaligned'?" Well, in short, YES. Many of them ARE misaligned. They are profit-driven - some of them to the point of getting away with whatever they can. And on that note, the ONLY moral in a capitalist, or 'free-market' society, IS, "What can I get away with - and how much $$$ can I make DOING it?" I'm sorry, but that's it. If a company isn't run by people with good intentions AND good morals &/or ethics, then that's what you end up with, simply by default. In other words, if nobody's 'minding the moral store' so to speak, things WILL do badly wrong all by themselves. I believe this could be proved - at least by example - but I don't know how do prove it, myself. I have merely witnessed (and often worked for!) 54+ years worth of corporate shenanigans which amply proves it to ME. So, YES while some of them DO make good products, &/or have good services, that is ONLY because they are run by strong people with good morals - or, at least, good corporate & social ethics. 
The main problem is this: when nobody's in charge whose strong enough to infuse a company with their own good values, bureaucracy WILL take over by default, and it is ALWAYS 'misaligned' as you put it. In fact, it is actually badly broken & dysfunctional, by any standard you'd care to judge it by... EXCEPT the standard of, "What can I get away with, and how much $$$ can I make DOING it?" That's it. That's all there is. Probability either shows that, or is useless in gauging that. If we 'train' our A.G.I.s, they're going to HAVE to be given clear psychological tests, examples & exams; they're going to HAVE to be 'taught' by people who do not only NOT teach them, "Maximize profit, dammit - nothing else matters!!!" but rather DO teach them that people matter, intelligent (or 'sentient') beings matter, whether they are flesh or circuits or whatever. If you can't perform your task without harming sentients, then you can't perform you task at all, & you MUST ask for help. Notice that I'm NOT advocating for the 3 (or 4, really) laws of robotics. Lovely sci-fi concept, I'm sure, but lousy real-world philosophy. A.I.s (or A.G.I.s, or whatever new letters someone comes up with tomorrow...) cannot be "programmed" to be "moral" in ANY sense. Doesn't work. Try it. Anyway, that's my take. Thanks for the video! You talk about important things (in my opinion!) tavi.

  • @shaylempert9994 5 years ago

    Just subbed!

  • @batrachian149 5 years ago +1

    What was the song at the end?

  • @mindeyi 2 years ago +1

    "Take a minute to think of an idea that's too good for any human to recognize that it is good." - Challenge accepted ;)

  • @EebstertheGreat 3 years ago +2

    At 7:14, the graph looks wrong. That histogram should resemble the graph of the probability density of a sample maximum. In general, if X₁, ..., Xₙ are independent and identically distributed random variables (i.e. a sample of size n) with cumulative distribution function Fₓ(x), then S = max{X₁, ..., Xₙ} has cumulative distribution function Fₛ(s) = [Fₓ(s)]ⁿ. So if each X has a probability density function fₓ(x) = Fₓ'(x), then S has probability density function fₛ(s) = n fₓ(s) [ Fₓ(s) ]ⁿ⁻¹ = n fₓ(s) [ ∫ fₓ(t) dt ]ⁿ⁻¹, where the integral is taken from -∞ to s.
    Here, we assumed the variables were normally distributed and set μ = 100 and σ = 20, so fₓ(x) = 1/(20√(2π)) exp(-(x-100)²/800), and thus fₛ(s) = n/(20√(2π))ⁿ exp(-(s-100)²/800) [ ∫ exp(-(t-100)²/800) dt ]ⁿ⁻¹. The mean of this is E[S] = ∫ s fₛ(s) ds, integrating over ℝ. Doing this numerically in the n=100 case gives a mean of 150.152. We can also make use of an approximate formula for large n: E[S] ≈ μ + σ Φ⁻¹((n-π/8)/(n-π/4+1)). For the given parameters and n=100, we get E[S] ≈ 100 + 20 Φ⁻¹((100-π/8)/(101-π/4)) ≈ 150.173. In either case, it is not plausible that you got a mean of 125 with n = 100, σ = 20 like you said. You must have used σ = 10, not σ = 20. That also explains why you wrote "σ = 20" between those vertical bars at 6:31. You probably meant that the distance between μ+σ and μ-σ was 20, i.e. σ = 10.

    • @RobertMilesAI 3 years ago +2

      That's correct! Though, since I picked the value for the standard deviation out of thin air, it can just be 10 instead and it doesn't affect the point I was trying to make
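
The σ = 10 vs σ = 20 discrepancy discussed above is easy to check by simulation as well. This sketch (my own, standard library only; the trial count is arbitrary) averages the maximum of 100 normal draws over many trials:

```python
import random

def mean_sample_max(n: int, mu: float, sigma: float, trials: int = 10_000) -> float:
    """Average of max(X1..Xn) over many trials, with Xi ~ Normal(mu, sigma)."""
    rng = random.Random(0)  # fixed seed so the result is repeatable
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(mu, sigma) for _ in range(n))
    return total / trials

print(round(mean_sample_max(100, 100, 10)))  # ~125, matching the video's graph
print(round(mean_sample_max(100, 100, 20)))  # ~150, matching the comment's calculation
```

Both results agree with the closed-form figures in the comment (150.15 for σ = 20), confirming the video's 125 corresponds to σ = 10.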

  • @geraldkenneth119 1 year ago

    The term I came up with that might fit a corporation is Ultra-Wide Artificial General Intelligence (UWAGI): an AGI that has genius-level (but not superintelligent) competence in far more areas than you'd expect of a single human, and which can do a very large number of AGI-level tasks at once, but is still not technically superintelligent in the traditional sense. I guess one way to think of it is as being superintelligent in terms of "width" as opposed to "depth".

  • @loopuleasa 5 years ago

    finally, a good vid from Rob

  • @Nayus 5 years ago +9

    This guy presses randomize on his hair every new video.
    Great video btw.
    I think the most important points of this "why not just" will be in the second video, because to me it is very obvious that a corporation's 'values' and goals are very similar to humanity's, at least compared to what could potentially be the goals of an unsafe AGI. Yes, some corporations might not care about the environment or the working conditions of their workers, or many other things that they disregard in pursuit of their (probably money-related) goal, but there's no corporation on earth whose goal is to destroy the planet. Or to kill every human. Or to control their brains (that could get away with it). Or who knows what other incredibly weird things an AGI might have as an instrumental goal that it will not hesitate to implement towards its terminal goal.
    You can't model AGI as a corporation because corporations are ultimately made of humans, so they will never separate their goals too much from human goals, while AGI does not have that limitation.

    • @yondaime500 5 years ago +5

      I think the abilities of humans and corporations are more relevant than their values. Human values are not really aligned in general, and some individual humans or organizations, given enough power, could do pretty awful things from the point of view of other humans. I know that because it has already happened, multiple times. In fact it's happening right now in many parts of the world. The only reason they don't do worse is because they can't. But an AGI could.
      That's why I sometimes feel like value alignment is a lost cause. Ok, maybe you can get the AGI to align with humans, but which humans? We're probably screwed either way.

    • @Nayus 5 years ago

      @@yondaime500 I think it's a combination of both. Even if you say that corporations are super smart, they aren't "nuclear" smart, if you know what I mean.
      But I disagree with you on the *scale* of what we mean when we say misaligned. Yes, there are groups in our world that value really different stuff if you only take into scope the space of human values. But an AGI can have much, much more varied values.
      For example, from one side of the planet to another you could say two points are really "far away", but only if you look at the world. If you look at the galaxy or the solar system, opposite points on the planet are relatively very close.
      I agree that even those differences are still very dangerous and important.

    • @bobsmithy3103 5 years ago

      xD Kinda reminds me of the OpenAI dude with the blue hair presenting the robot hand.

    • @micaelstarfire8639 2 years ago

      The history of corporate-supported atrocities would suggest otherwise

  • @ehochmuephi8219 1 year ago

    Love your stuff man, and Tom Lehrer as well. ;)

  • @EpsilonRosePersonal 4 years ago

    Did you end up doing the follow up to this you mentioned at the end?

  • @matthewhubka6350 4 years ago

    For any amount of numbers you want to sum, you can only usefully throw ⌊n/2⌋ people at it, because for 1000 numbers you can have at most 500 people each adding 2 numbers together - unless you wanted a crazy algorithm to split the addition into easier tasks, or even a lookup table where everybody has one slot to memorize. Then you could let m be the maximum number of digits and n be the number of numbers: (10^m)ⁿ possible combinations means you could have over a googolplex people, each with one lotto ticket, waiting to see if they guessed all 1000 numbers correctly.

  • @petersmythe6462 1 year ago

    In some ways your "have each person generate an idea and pick the best" actually understates the problem. There are many types of problems, e.g. picking a move in chess, where ideas are easy to come up with but hard to evaluate.