AlphaGo & Deep Learning - Computerphile

  • Published Jun 7, 2024
  • AlphaGo beat the Go World Champion 4-1. Why do the creators not know how? Brais Martinez is a Research Fellow & Deep Learning expert at the University of Nottingham.
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments • 426

  • @OmegaF77
    @OmegaF77 8 years ago +65

    There are more "right" words in this video than all atoms in the observable universe.

    • @ChimbzZ
      @ChimbzZ 5 years ago +2

      Damn! You ruined the video for me hahaha! I hadn't noticed until I read your comment

    • @tikz.-3738
      @tikz.-3738 3 years ago

      That's like 10⁸²!

    • @ronmaessen2021
      @ronmaessen2021 1 year ago

      2 Minutes in I was only concentrating on when the next "right" was going to come, not on what he was saying haha

    • @ronmaessen2021
      @ronmaessen2021 1 year ago

      Seems like he learns by positive feedback and he's not getting it.

  • @MacoveiVlad
    @MacoveiVlad 8 years ago +420

    Right? :)

    • @TremendousSax
      @TremendousSax 8 years ago +19

      Such an obnoxious vocal tic that shows up all over the place these days but especially in tech circles.

    • @MacoveiVlad
      @MacoveiVlad 8 years ago +10

      +Tremendous Sax Right?

    • @MacoveiVlad
      @MacoveiVlad 8 years ago +6

      +Tremendous Sax Sorry, i couldn't help it? :))

    • @TremendousSax
      @TremendousSax 8 years ago +33

      +Macovei Vlad :D when I hear that I feel like screaming, "I don't know, you're the one explaining something to me!!! If I have to assure you that you're saying valid statements we're both in trouble."

    • @HiAdrian
      @HiAdrian 8 years ago +2

      *+Tremendous Sax* As far as tics go, Kip Thorne and John Carmack take the cake with their _"Ai(ght)?"_

  • @JamesV1
    @JamesV1 8 years ago +94

    Correction for the description: It was 4-1, not 3-1.

    • @Computerphile
      @Computerphile  8 years ago +25

      +AprimalDwarf thanks, sorted now!

    • @srb2Espyo
      @srb2Espyo 8 years ago +37

      +Computerphile Further correction: "its creators", not "it's creators".

    • @gfetco
      @gfetco 8 years ago

      +AprimalDwarf .. and how do you know wikipedia isn't the one which should be corrected?

    • @lilomar2525
      @lilomar2525 8 years ago +8

      +Enlightenment It was 4-1, I watched all five matches.

    • @gfetco
      @gfetco 8 years ago +2

      Jacob Smith ... I'll take your word for it, then.

  • @JoshuaBarretto
    @JoshuaBarretto 8 years ago +157

    Love the "No thanks!" at 0:08

    • @Computerphile
      @Computerphile  8 years ago +37

      +Joshua Barretto glad someone spotted that! >Sean

    • @JoshuaBarretto
      @JoshuaBarretto 8 years ago +11

      ***** I'm challenging myself to spend a month without Flash. Mostly because its proprietary design is not well-suited to Linux.

    • @Life4YourGames
      @Life4YourGames 8 years ago +3

      +Joshua Barretto Well, shouldn't be that hard since HTML5 ;D

    • @JoshuaBarretto
      @JoshuaBarretto 8 years ago +1

      Life4YourGames
      It's definitely easier than 6 months ago.

    • @Life4YourGames
      @Life4YourGames 8 years ago

      Joshua Barretto YouTube has had HTML5 a bit longer than six months, but yeah, I still keep running into websites saying "Right click to activate plugin" x)

  • @Kram1032
    @Kram1032 8 years ago +32

    I hear "more moves than atoms in the (observable) universe" all the time.
    But in fact, if we take those quoted estimates of 10^80 particles in the observable universe and 10^172 legal moves for a 19x19 grid, then actually, it's more moves than:
    Take every single particle in our observable universe. For each of them, imagine a copy of our observable universe, filled with 10^80 particles once more. At that point you'd have (10^80)² = 10^160 particles. There are STILL more moves. You've got to multiply the number of universes by another factor of 1,000,000,000,000.
    Only THEN do you roughly approach the number of legal moves on a 19x19 Go board.
    If each particle in this universe was a universe of its own, filled with an equal number of particles, that wouldn't be enough.

    • @vsiegel
      @vsiegel 3 years ago

      So, the difference is minor: It is
      "more moves than atoms in the (potentially observable) multiverse".
      That should work for most definitions of "multiverse".
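For anyone who wants to check the arithmetic in this thread, a quick sketch using the 10^80 and 10^172 estimates quoted above:

```python
# Sanity-check the thread's arithmetic with the quoted estimates:
# ~10^80 particles in the observable universe, ~10^172 legal Go positions.
particles = 10**80
go_positions = 10**172

# One universe per particle gives (10^80)^2 = 10^160 particles in total.
nested = particles * particles

# Still short of 10^172 by a factor of a trillion.
shortfall = go_positions // nested
print(shortfall == 10**12)  # True
```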

  • @BobSmun
    @BobSmun 8 years ago +107

    Hmm. The way he said that sounded like AlphaGo learned Go from scratch purely by playing itself. That isn't true, and is in fact still a goal of Google.
    They first trained it by showing it a whole lot of existing games, until it was able to predict what a human would do a relatively high percentage of the time. Then, after that, they got it to play itself to develop it further.

    • @styleisaweapon
      @styleisaweapon 8 years ago +42

      +Benjamin Schroeder He also doesn't have a clue about how chess engines work, nor does he seem to have a grasp of the size of the chess game space. Both games are finite, and in both cases the best engines do not use any brute-force algorithm. This guy doesn't even understand the prime difference between chess engines and Go engines, which is iterative deepening for chess vs Monte Carlo for Go. The fundamental problem with Go engines has never been the size of the search space. It's always been the difficulty of the static evaluation heuristic.

    • @LLHLMHfilms
      @LLHLMHfilms 8 years ago

      Perhaps

    • @busTedOaS
      @busTedOaS 8 years ago +2

      +Benjamin Schroeder They said they used low-Elo games, though. Watching amateurs play doesn't tell you anything about how to beat Lee Sedol.

    • @hanniffydinn6019
      @hanniffydinn6019 8 years ago +1

      No, you feed it the rules via amateur games, then it deep-learns. AlphaGo can beat 99.9% of human Go players. So it clearly works, even though it's not even "thinking".

    • @idlegandalf
      @idlegandalf 8 years ago +7

      +busTedOaS That isn't true. If you review some of the English commentary (I think it was game two or three), when some of the team members (of DeepMind) were being interviewed they said it was fed with all kinds of skill levels, including professional games and historic games.

  • @yoloswaggins2161
    @yoloswaggins2161 8 years ago +29

    I disagree when he says chess computers can check exhaustively; even though there are fewer variations than in Go, there are still far too many variations to check.

    • @Mitjitsu
      @Mitjitsu 7 years ago

      Chess computers can already play perfectly if there are 8 or fewer pieces on the board. Not to mention they have extensive opening books. However, once the game opens up after a dozen or so moves, it becomes impossible for even the most advanced of supercomputers to calculate.

    • @georgechristoforou991
      @georgechristoforou991 4 years ago

      Yes, but when a computer can check all the possibilities to a depth of 30 or 40 moves, it's approaching, but not quite achieving, perfection. In Go this is not possible.

  • @EebstertheGreat
    @EebstertheGreat 8 years ago +13

    I understand the distinction Martinez is trying to draw between computational chess and Go, but the reality is that the best chess engines do not perform exhaustive breadth-first searches of several ply to pick a move. Rather, they perform depth-first searches of aggressively pruned trees out many ply (often 8 or more) until they find a suitably high-valued move. A full breadth search typically goes out just 2 ply (which is still further than the 1-ply full breadth search most humans would perform, immediately rejecting most moves before considering possible responses or even really thinking about them).
    In the end, there are some similarities between how chess engines play and AlphaGo. AlphaGo _does_ still search out a couple ply, then evaluates each end position according to some heuristic. The difference is that AlphaGo does not search as far (due to the far larger complexity of go) and the way the parameters of its heuristic were determined (from reinforcement learning).
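The pruned, depth-limited search described above can be sketched as plain alpha-beta with a static heuristic at the cutoff. This is a minimal toy illustration, not any real engine's code; the tiny game tree and evaluation function are made up:

```python
# Minimal alpha-beta search: depth-first to a fixed ply, with a static
# heuristic at the cutoff -- the scheme described in the comment above.
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # prune: the opponent will avoid this branch
                break
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Toy two-ply tree: integer leaves hold the static evaluations.
tree = {"root": ["a", "b"], "a": [3, 5], "b": [2, 9]}
children = lambda n: tree.get(n, [])
evaluate = lambda n: n if isinstance(n, int) else 0
best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 children, evaluate)
print(best)  # 3
```

Real engines layer move ordering, transposition tables, and iterative deepening on top of this skeleton.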

  • @DanielRiveroPadilla
    @DanielRiveroPadilla 8 years ago +27

    Now I would like to see a video about evolutionary computing, sounds so cool.

    • @y__h
      @y__h 8 years ago +3

      Yesss I agree with you, mate.

    • @IceMetalPunk
      @IceMetalPunk 8 years ago +2

      +Daniel Rivero Padilla It is :) After many years of playing with various evolutionary learning algorithms for various tasks, I'm still in love with it :)

  • @iVilius
    @iVilius 8 years ago +2

    Number of times he said "Right?" is far greater than the amount of possible moves in Go..

  • @makingnoises2327
    @makingnoises2327 8 years ago +15

    I was hoping for a discussion of the actual architecture of AlphaGo, which is covered in much more detail in the game commentary.

    • @busTedOaS
      @busTedOaS 8 years ago

      +makingnoises The official research paper is the source you want. The commentary just repeated that information.

  • @DJT.on.Ethereum
    @DJT.on.Ethereum 8 years ago +19

    So what I gathered from this video is that as soon as alphabet offers access to a mutual fund it is likely they have taught an AI how to play the stock market.
    I would immediately buy into any such mutual fund.

    • @ffejulator
      @ffejulator 8 years ago +14

      +Nathanael Ries The stock market is already largely automated. It's really interesting stuff actually if you feel like looking into it.

    • @TechyBen
      @TechyBen 8 years ago

      +Nathanael Ries Problem is, everyone else also knows this. So it may tip the bias...

    • @Rocket33r
      @Rocket33r 8 years ago

      +TechyBen Everybody else knows what? That Alphabet is running a mutual fund with AI? That alone wouldn't negate the edge it has, assuming that it has one. For its edge to be negated, everybody would have to have the same AI as Alphabet.

    • @DJT.on.Ethereum
      @DJT.on.Ethereum 8 years ago +1

      TechyBen
      I have trained a character recognition neural network with Google's open-sourced TensorFlow software. The neural network is only the same if it is trained with the same set of data. Once it is done being trained or if it is trained with a different set of data it learns based on the input it encounters. Different neural networks will do different things depending on the data they encounter, regardless of the software used to create them.

    • @TechyBen
      @TechyBen 8 years ago +1

      I meant that, if we know that only computers can win, the only way to win is not to... [***message terminated, contact Sarah Connor***]

  • @bitti1975
    @bitti1975 8 years ago +4

    Please rerecord this. It contains at least two glaring errors which are obvious even to a layman who happened to read a few of the news articles surrounding the AlphaGo - Lee Sedol match:
    1. Chess cannot be brute-forced. What helps to prune the tree significantly is that you have a relatively fast-to-calculate evaluation function.
    2. DeepMind used a ton of supervised learning to train AlphaGo, and only reinforcement learning after that. They announced they will try to create a version based on pure reinforcement learning, but whether this works out remains to be seen.

  • @Asha2820
    @Asha2820 7 years ago +22

    Never mind human vs machine Go; I want to see machine vs machine Go! Like a brainy version of Robot Wars, with research institutes and companies battling it out for AI glory!

    • @aeroscience9834
      @aeroscience9834 7 years ago

      I would like chess too

    • @jonathanlamarre3579
      @jonathanlamarre3579 7 years ago +3

      en.wikipedia.org/wiki/World_Computer_Chess_Championship

    • @MrWizardjr9
      @MrWizardjr9 7 years ago

      Yes, the strongest chess engines are rated 3000+ while Magnus is only around 2850 or something. It would be interesting to see if a neural network engine for chess will perform better than existing engines.

    • @josephrott1920
      @josephrott1920 4 years ago

      See the AlphaGo vs Master series of, I think, 60 games.
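The rating gap mentioned in this thread maps to an expected score via the standard Elo logistic formula. A minimal sketch; the 3100 engine rating is illustrative, not a sourced figure:

```python
# Expected score of player A against player B under the Elo model.
def elo_expected(rating_a, rating_b):
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# A hypothetical 3100-rated engine against a 2850-rated human.
print(round(elo_expected(3100, 2850), 2))  # 0.81
```

Roughly: a 250-point gap means the engine scores about 4 points out of every 5.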

  • @enhex
    @enhex 8 years ago +1

    AlphaGo started with supervised learning to train the initial network. DeepMind downloaded a database of games played by humans, and let the network learn from it.
    Afterwards they pitted the network against itself, letting it improve over its older version.
    And instead of exhaustive search they use the probability of which move is most likely to lead to winning.
    The big difference is that AlphaGo uses general AI algorithms, which can be applied to other problems.
    Chess AI like Deep Blue isn't general; it can only play chess.
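The "probability of which move is most likely to lead to winning" idea is the selection step of Monte Carlo tree search. A bare-bones sketch of the UCB1 rule it typically uses; the playout statistics below are made-up numbers:

```python
import math

# UCB1 selection used in Monte Carlo tree search: pick the move that best
# trades off observed win rate against how rarely it has been explored.
def ucb1(wins, visits, parent_visits, c=1.4):
    if visits == 0:
        return float("inf")  # always try unvisited moves first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Hypothetical (wins, visits) statistics for three candidate moves
# after 100 random playouts from the current position.
stats = {"A": (60, 80), "B": (12, 15), "C": (0, 5)}
parent = sum(v for _, v in stats.values())
best = max(stats, key=lambda m: ucb1(*stats[m], parent))
print(best)  # B
```

AlphaGo's variant roughly replaces these terms with its value- and policy-network outputs, but the tree-search skeleton is the same.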

  • @mikeunleashed1
    @mikeunleashed1 8 years ago +26

    Isn't it awesome that the champion won at least once?

    • @MrAlbinocreeper
      @MrAlbinocreeper 8 years ago

      +mikeunleashed1 but every time AlphaGo plays it can get smarter

    • @rikschaaf
      @rikschaaf 8 years ago +8

      +MrAlbinocreeper The champion can also get smarter

    • @IceMetalPunk
      @IceMetalPunk 8 years ago +5

      +Rik Schaaf But AlphaGo can get smarter much faster :P

    • @Optimistas777
      @Optimistas777 8 years ago +12

      +MrAlbinocreeper no. It was explained in an after-match press-conference by Hassabis that it would be impossible to learn anything from one match with the 18 (current) version of AlphaGo. Furthermore, DeepMind intentionally keeps AlphaGo shutdown between the matches, so all the games are played by the same program, i.e. no in-match learning is involved.

    • @mikeunleashed1
      @mikeunleashed1 8 years ago

      reminds me of hunter x hunter's scenes depicting the same game, which makes that one victory feel all the more awesome for me xD

  • @JoshuaAugustusBacigalupi
    @JoshuaAugustusBacigalupi 8 years ago +1

    This video doesn't seem to match what the AlphaGo team described during the 5th and final match. I'm not an expert in machine learning, but team members made it clear that the system could assess the probability of Lee's move based on training with actual human games. AlphaGo may have used self-generated games, as purported in the video, but it seemed clear from the team's description that AlphaGo relied primarily on classifications and features drawn from actual human generated board positions from online play, an important feature of AlphaGo's training that the video either wasn't aware of or chose to ignore.

  • @jeremyj.5687
    @jeremyj.5687 8 years ago

    What a fascinating topic! I could listen to this guy for hours.

  • @NathansWargames
    @NathansWargames 8 years ago +2

    I heard the next step for AlphaGo is to learn SC2. Could be fun to watch one of the SC2 pros facing AlphaGo in a few years' time.

    • @busTedOaS
      @busTedOaS 8 years ago +2

      +Sharky Productions It's about SC1:BW, and it will not be years, but months.

  • @CoyMcBob
    @CoyMcBob 8 years ago +4

    This is a great summary of reinforcement learning, however I think you kind of missed out on the key parts of the neural networks associated with the algorithm.
    While Google did have a lot of reinforcement learning, this was actually after it built its neural networks that actually formed most of the evaluation function and the move-choosing algorithm. The reinforcement learning did help push the Monte Carlo Tree Search algorithm to truly become formidable to the top professionals, but keep in mind that a great deal came from data collected from online games. This was in no way pure reinforcement learning (in the example of AlphaGo) as you made it seem at times.
    I do understand that you weren't trying to make it too specific to AlphaGo, I just don't want people to think that DeepMind cooked up some completely clueless machine learning function that in a year rose to that skill level.

  • @CP-hd5cj
    @CP-hd5cj 5 years ago +1

    It's fascinating that since this was released, human go players have learned so much from alpha go, and vice versa. It's like this interface between human and alpha go is itself an adversarial neural network.

  • @HerrLavett
    @HerrLavett 8 years ago

    This channel is the best! Keep posting videos!

  • @HankYarbo
    @HankYarbo 8 years ago +4

    Chess is 100% totally not done via exhaustive search! Here's a quote from Google's AlphaGo paper in Nature explicitly contradicting the claim that chess uses exhaustive search:
    "In large games, such as chess (b ≈ 35, d ≈ 80) and especially Go (b ≈ 250, d ≈ 150), exhaustive search is infeasible, but the effective search space can be reduced by two general principles. First, the depth of the search may be reduced by position evaluation: truncating the search tree at state s and replacing the subtree below s by an approximate value function v(s) ≈ v*(s) that predicts the outcome from state s. This approach has led to superhuman performance in chess..."
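The branching-factor figures in that quote translate directly into rough game-tree sizes; a quick back-of-the-envelope using the paper's b and d values:

```python
import math

# Order-of-magnitude game-tree size b^d, with the figures quoted above:
# chess b ~ 35, d ~ 80; Go b ~ 250, d ~ 150.
chess_log10 = 80 * math.log10(35)    # log10 of 35^80
go_log10 = 150 * math.log10(250)     # log10 of 250^150

print(round(chess_log10))  # 124 -> a ~10^124-node chess tree
print(round(go_log10))     # 360 -> a ~10^360-node Go tree
```

Both are hopeless for exhaustive search; Go is just hopeless by a few hundred more orders of magnitude.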

  • @indiansoftwareengineer4899
    @indiansoftwareengineer4899 5 years ago

    I loved this channel.

  • @Borthralla
    @Borthralla 8 years ago +43

    A significant portion of the information about AlphaGo in this video is wrong. Please actually do the research next time. In fact, the biggest innovation of AlphaGo was the very fact that it DID use data from human games to make a policy network. The policy network calculates the probability that a human player would play a move based on what it's learned from hundreds of thousands of online amateur matches, enormously simplifying the search tree. In addition, the value network, which calculates the probability of winning in a given board state, was developed using millions of board states from amateur games along with which side was the winner. Combining the policy network and the value network, the tree-searching algorithms can work significantly faster, so fast that a single CPU of the current version would probably outplay the AlphaGo from October running on 1200 CPUs. Even compared to the version that faced Lee Sedol, AlphaGo on a single CPU still has a winrate of 30%, which exceeds that of Lee Sedol. If anything, AlphaGo may have taught us that the type of AI discussed in the video will become obsolete.

    • @pleasedontwatchthese9593
      @pleasedontwatchthese9593 8 years ago +1

      +Philip M I think he got the point across. He said basically the same thing you did.

    • @sebihorst1816
      @sebihorst1816 8 years ago +5

      I mean, what you say is true, but that's not the point of the video. This video is just explaining the use of deep learning in combination with Go, and not the program AlphaGo. Also, they did use human moves to make the program "smarter", but that just speeds up the process of learning, because you give the program extra information it can use to evolve itself.
      So the video is not wrong; it just isn't talking about all the facets of AlphaGo, but rather only the deep learning part.

    • @LarlemMagic
      @LarlemMagic 8 years ago

      +Philip M Yes, but it did play against itself after the first few lessons. Simulating evolution against itself.

    • @Magikarpador
      @Magikarpador 7 years ago

      Wait... I don't know much about Go... but is a win rate of 30% not garbage? I'm really intrigued about what this means.

    • @milano528
      @milano528 6 years ago

      Philip M

  • @miro2424
    @miro2424 8 years ago

    Basically that's how you learn: the teacher doesn't give you the answer, but tries to help you get to it by yourself, keeping you on the right track.

  • @Lagspike99
    @Lagspike99 8 years ago +2

    What would happen if AlphaGo plays versus itself? Thanks for the video, it was great.

    • @YugShende
      @YugShende 8 years ago +2

      +Lagspike99 AlphaGo has played with itself. In fact, that is how the AI evolved. The engineers fed in a data set of 30 million possible moves, and then AlphaGo played with itself a number of times and classified some moves as "good moves" and some as "bad moves". Playing with itself will just be like any two computer programs interacting with each other, each thinking of the other as a user. It won't generate some sort of paradox/black hole. Just saying.

  • @socrat33z
    @socrat33z 8 years ago +5

    This guy has no idea what he's talking about when it comes to chess. The difference is the difficulty of evaluating the board at any given position. It is much easier in chess.

  • @quarkmarino
    @quarkmarino 8 years ago

    Would AlphaGo make the same move given the same board configuration, or would it make another move with the same winning weight?

  • @federicopettinicchio
    @federicopettinicchio 7 years ago

    Actually it was a mix for AlphaGo. They want to try starting from random too, but AlphaGo had a basis; they didn't start from random and go up from there. They started from an already somewhat set playstyle and went from there.

  • @luckystrke
    @luckystrke 8 years ago +1

    Sorry if I missed it in the video; but didn't the program analyse millions of master games in order to improve on the "evolutionary" computer strength?

  • @TheFishCostume
    @TheFishCostume 8 years ago +1

    "Why do it's creators not know how" in the description is wrong; it should be "Why do its creators not know how."

  • @gautambhat352
    @gautambhat352 8 years ago

    What kind of approach would one use to apply AlphaGo's learning algorithm to poker, or would it not work at all?

  • @4IN14094
    @4IN14094 8 years ago +21

    AlphaGo => BetaGo => Skynet
    The end is here!

    • @bok..
      @bok.. 8 years ago

      It's a shame AI will never be able to fully experience emotion...

    • @4IN14094
      @4IN14094 8 years ago

      Neodinium Of course, AIs are logical after all.

    • @ericsbuds
      @ericsbuds 8 years ago +1

      +Neodinium You don't know that! How can we assume anything about what AI can or can't do? Our brains/nervous systems are physical objects that follow the laws that all other matter in the universe seems to follow. It would stand to reason we could create something that could 'feel' as humans tend to describe it.

    • @TheKivifreak
      @TheKivifreak 8 years ago +2

      +Neodinium depends on the implementation. If people want to make AI feel, so be it.

    • @Lamb785
      @Lamb785 8 years ago

      +4IN14094 I'm waiting for that ReleaseGo xD

  • @mrgunner2you95
    @mrgunner2you95 6 years ago

    I want to get into Go via my computer. Anyone have trusted Go game hosts, or a direction I should dive into?

  • @waelbasille8971
    @waelbasille8971 8 years ago +10

    right?

  • @sachoslks
    @sachoslks 8 years ago +7

    Ben Affleck talking about machine learning. Nice.

  • @cosmicWil
    @cosmicWil 8 years ago

    AlphaGo does use brute force, but from what I read it is only useful for 20 moves into the future; after that it starts to take too long to compute.

  • @SM-ut2mf
    @SM-ut2mf 8 years ago

    I'm really excited about the neural network vids.

  • @GriffinCalme
    @GriffinCalme 8 years ago

    I was going to say that you guys should do a video on neural networks now, but it looks like there's one in the works. Exciting!
    A video exploring evolutionary algorithms or backpropagation would be neat too.

  • @FranciscoKurpiel
    @FranciscoKurpiel 8 years ago +3

    Incorrect. They first took a big amateurs database and trained to predict the moves. After that they used reinforcement learning to improve.

  • @ericsbuds
    @ericsbuds 8 years ago

    Oh damn! The computer beat the champ at Go? I was curious to find out who won the contest.

  • @rafabulsing
    @rafabulsing 8 years ago +3

    I counted 78 "right?"

  • @BariumCobaltNitrog3n
    @BariumCobaltNitrog3n 8 years ago +1

    I would have liked seeing some moves the machine made, and responses by the human, as examples of what Martinez was describing. I know how to play Go, but some watching may not; explaining the basics might make a lot of this less abstract. Also, maybe talk about what the parameters were exactly: how many stones on the board, and who has how many? Is a piece in the open or on a border?

  • @karlkastor
    @karlkastor 8 years ago

    Now I'm excited for the Neural Network video!

    • @Celrador
      @Celrador 8 years ago

      +Karl Kastor
      Don't get too excited.
      The basic ideas are really pretty simple. The complexity arises from the fact that it's an interconnected network.
      And in essence it's just a hill-climbing algorithm. (So it's working based on geometric approximation.)
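In the hill-climbing framing above, learning is just "nudge the parameters, keep the nudge if the score improves". A toy one-dimensional sketch with a made-up objective:

```python
import random

# Toy hill climbing: maximize f(x) = -(x - 3)^2 by keeping only the
# random nudges that improve the score -- the basic idea behind the
# comment above (real network training uses gradients instead).
def hill_climb(f, x, steps=2000, step_size=0.1, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)
        if f(candidate) > f(x):  # keep the nudge only if it helps
            x = candidate
    return x

f = lambda x: -(x - 3) ** 2
best = hill_climb(f, x=0.0)
print(round(best, 2))  # close to 3.0, the maximum of f
```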

  • @gummipalle
    @gummipalle 8 years ago +2

    How many years until society, including city planning and civil infrastructure, is planned by a machine?
    And when will it begin distributing career chips?

  • @danielkruyt8192
    @danielkruyt8192 8 years ago +1

    I can't stop hearing the rights, I'm listening for them. D:

  • @jozefsk7456
    @jozefsk7456 8 years ago

    AlphaGo was trained on many human games, but Demis said that in the future he is interested in trying to teach AlphaGo only by playing against itself, and as a result it might find some even better tactics.

  • @geeb009
    @geeb009 8 years ago +5

    I am very surprised that he says that you can brute-force chess, because I think this is wrong. You can go down the tree of moves up to 10-15 moves, but at some point you have to evaluate the position without using brute force.

  • @adrianmelic
    @adrianmelic 8 years ago

    Now I need to follow Brais Martinez!
    Is he on Twitter?

  • @JarodBenowitz
    @JarodBenowitz 8 years ago

    Is the predominant topology of neural networks k-partite graphs?

  • @JahMusicTube
    @JahMusicTube 8 years ago +1

    Can you really enumerate all possible chess positions? I thought it was a way too huge number for exhaustive search!

  • @SuperMaDBrothers
    @SuperMaDBrothers 8 years ago

    What's written above the whiteboard above Andey?

  • @grayfoxfive
    @grayfoxfive 8 years ago +1

    4:44 to 4:52: "...they really don't know what kind of tactic the computer is following..." That's a profound statement. In theory, the software builders should be able to do a kind of data dump to eventually understand the computer's own algorithms, but the sheer amount of data crunching it's doing in a relatively short time might make that almost impossible.

  • @nabilelqatib1548
    @nabilelqatib1548 8 years ago +1

    He said "Right" 237 times in 11 minutes, he definitely should train more to set up a Guinness World Record.

    • @MacoveiVlad
      @MacoveiVlad 8 years ago

      +Nabil Elqatib Have you actually counted?! I thought about doing that for the fun of it but decided it wouldn't actually be fun. :)

    • @nabilelqatib1548
      @nabilelqatib1548 8 years ago +1

      +Macovei Vlad I did, I could have missed 1 or 2 but that's the approximate number :-)

  • @goteo446
    @goteo446 7 years ago

    What language did they make AlphaGo in?

  • @Cornerback3141
    @Cornerback3141 8 years ago

    Does AlphaGo run on a supercomputer?

  • @SpitefulAZ
    @SpitefulAZ 7 years ago

    Where is Brais Martinez from?

  • @apreasher
    @apreasher 8 years ago +6

    right.

  • @yali_shanda
    @yali_shanda 8 years ago

    Evolutionary computing, maybe for a next video?

  • @pleasedontwatchthese9593
    @pleasedontwatchthese9593 8 years ago

    I think the point here is the Go AI used a more heuristic approach while chess AI uses something more defined. The truth is you can use "AlphaGo" for any type of AI, and it has been applied to chess. I think the cool thing is just that they got it to work well enough to play Go really well.
    Or in summary, they needed a broader AI system than traditional AI systems.

  • @raxar1234
    @raxar1234 8 years ago

    In order to be fair, the human should be allowed much more time, or even no time limit. AlphaGo essentially has unlimited time, since if it becomes an issue, it's just a matter of popping in more processors.

    • @MrWizardjr9
      @MrWizardjr9 4 years ago

      I don't think it scales infinitely; there will be a point where performance doesn't increase by much even if you add a trillion more processors.

  • @DevinDTV
    @DevinDTV 8 years ago +4

    5:20 this isn't how they did it. 6:45 is wrong. They didn't use a genetic algorithm. What they actually did was use a library of pro matches to feed the AI a basic idea of what a pro player would do in various situations; the AI abstracted from those examples and learned to play generalized situations, and then after that they had the AI play against itself millions of times to optimize and become better.
    Also, the best chess AIs don't brute-force either, except in the endgame. They look for high-value moves while looking a few moves into the future, but they don't generally examine every possibility.

    • @nightangel7239
      @nightangel7239 8 years ago

      How is using a library of matches and playing against itself evidence that it doesn't use a genetic algorithm?

    • @DevinDTV
      @DevinDTV 8 years ago

      It's not, it's just an explanation of what they did. I was paraphrasing one of the people who worked on the project. He specifically said they didn't use a genetic algorithm.

    • @nightangel7239
      @nightangel7239 8 years ago

      DevinDTV
      Oh, I see. Thanks for explaining that. I thought you were saying "They used a library of matches" and 'genetic algorithms' were exclusive of each other.

  • @PixelPhobiac
    @PixelPhobiac 8 years ago

    Actually, they started with a data-set of 1000 human-played matches as a starting point to learn from.

  • @U014B
    @U014B 8 years ago

    Is this at all related to Deep Hurting?

  • @johnhammer8668
    @johnhammer8668 7 years ago

    Should have been clearer on how plotting the line with different values is different from the so-called "parameters" in the context of the deep learning algorithm. Right?

  • @akajiblubb2401
    @akajiblubb2401 8 years ago +1

    I am a Go player myself. You guys should start playing Go, folks :)

  • @durchschnittlich
    @durchschnittlich 8 years ago +1

    Right?

  • @flakkanon
    @flakkanon 8 years ago +2

    Brais Martinez is a very right man..

  • @jimpikles
    @jimpikles 8 years ago

    why is there a clip of the bishop being knocked over? :P

    • @Cyberspine
      @Cyberspine 8 years ago +5

      +jimpikles Subliminal satanist propaganda, of course.

  • @vitoryugojsm
    @vitoryugojsm 4 years ago

    Claude Shannon's study on chess estimated 10^120 possible games, on an average of 40 moves per player (a lower bound for sure): 1950, "Programming a Computer for Playing Chess".
    With promotions, and even with the 50-move rule (after the last captured pawn), you can't precisely estimate all the possible states chess has. Let alone brute-force it.
    The main difference to Go is that there is a huge library of «best» moves from all the chess masters which any algorithm can devour. But that's human knowledge, from centuries of documented play. That's certainly not what deep reinforcement learning is, but everywhere it seems people use it on top of supervised learning and then conclude something has brute-force solubility.
    Nevertheless, chess is still a polynomial-space (PSPACE) problem, but then again so is Go. Both are within reach.

  • @fips001
    @fips001 8 years ago +2

    Chess is not solved by brute force!

  • @ecstasyofgold888
    @ecstasyofgold888 8 years ago

    magic, got it

  • @HKG5sentsp
    @HKG5sentsp 8 years ago +1

    02:32 did anyone hear "fucker"?

  • @rikschaaf
    @rikschaaf 8 ปีที่แล้ว +1

    Though I understand the video, not everyone is an AI student, so some graphical illustration might help.

  • @Nalkahn
    @Nalkahn 8 ปีที่แล้ว

    Let's count how many right moves there are in this video.

  • @umbrellaeinheit
    @umbrellaeinheit 8 ปีที่แล้ว

    Right.

  • @DominicGo
    @DominicGo 8 ปีที่แล้ว

    Yasss

  • @banderi002
    @banderi002 8 ปีที่แล้ว

    Wait a minute... are you the guy from Pressing Issues!?

    •  8 ปีที่แล้ว +1

      Hahaha, I am Maurice Chavez and today on pressing issues..

  • @richardtickler8555
    @richardtickler8555 7 ปีที่แล้ว +2

    it would be really evil to create a video game that learns from the player and becomes unbeatable

    • @richardtickler8555
      @richardtickler8555 7 ปีที่แล้ว

      and yes i know there are easier ways to make an unbeatable game

    • @AirIUnderwater
      @AirIUnderwater 7 ปีที่แล้ว +1

      Have you seen a simple computer learning how to play Super Mario Bros. yet? Also, didn't they say that Starcraft is the next thing they want to try and tackle now?

    • @richardtickler8555
      @richardtickler8555 7 ปีที่แล้ว

      AirIUnderwater
      I saw parts of it. I was more thinking about a game that learns while you play, where the enemies get cleverer with every fight. Some games get really boring once you see the flaw in the enemy patterns.

    • @LeifNelandDk
      @LeifNelandDk 6 ปีที่แล้ว

      Richard Tickler, the tricky thing in writing games is to hit the right difficulty. Not too easy and users get bored and quit. Too hard and users don't get anywhere and quit.

  • @bobbishmax62
    @bobbishmax62 8 ปีที่แล้ว

    Is that JackFrags interviewing?

  • @afbdreds
    @afbdreds 6 ปีที่แล้ว

    also more positions in chess than particles in universe

  • @zielad362
    @zielad362 8 ปีที่แล้ว

    Lee Sedol isn't the world Go champion; he used to be. Now the strongest Go player is Ke Jie.

  • @0530628416
    @0530628416 8 ปีที่แล้ว

    In the Numberphile video, the approximation of possible chess games comes to about 10^120, so brute-forcing is not really possible. Why are you giving contradictory information? I get that Go is much more "open" than chess, but chess is still not brute-force friendly...

  • @whiterottenrabbit
    @whiterottenrabbit 8 ปีที่แล้ว

    Right... Right? Right?!

  • @DustinRodriguez1_0
    @DustinRodriguez1_0 8 ปีที่แล้ว

    You can't brute-force chess. We can't even calculate the exact number of possible games!

  • @xXUxCXx
    @xXUxCXx 8 ปีที่แล้ว

    Ethics, Ethics, Ethics.. start thinking!

  • @NotAnTroll
    @NotAnTroll 8 ปีที่แล้ว +1

    He sure says "right" a lot, right?

  • @011081xxx9
    @011081xxx9 8 ปีที่แล้ว +2

    RIGHT!?

  • @HughesEnterprises
    @HughesEnterprises 6 ปีที่แล้ว

    Riiiiiiight

  • @hmack22
    @hmack22 8 ปีที่แล้ว

    So does this mean B.F. Skinner wasn't as wrong about learning as the cognitive scientists would have us believe? I suppose the answer is that real living creatures can't perform millions of iterations to fine-tune their behaviours by successive approximation, but still, it's neat to see how powerful this type of learning actually is. Could the same approach someday be used to learn language before Verbal Behavior turns 100?

  • @michael169chapman
    @michael169chapman 8 ปีที่แล้ว +3

    In Chess, as well as in Go, there are more games than particles in the universe....

  • @jonhuh3519
    @jonhuh3519 7 ปีที่แล้ว

    wanted to point out that they had game data from amateur players online to begin with.

  • @joaogoncalves1149
    @joaogoncalves1149 8 ปีที่แล้ว +1

    When Spanish speakers try to speak English... I always have problems understanding them.

  • @perkodanny
    @perkodanny 6 ปีที่แล้ว

    2:36 It's funny to say something like "almost infinite" because it's pretty much incoherent, but we know what he means.

  • @CreativeStormGaming
    @CreativeStormGaming 5 ปีที่แล้ว

    RIIIGHT?!

  • @AJ-kj1go
    @AJ-kj1go 8 ปีที่แล้ว +8

    Is Go a game with perfect information?

    • @midgetwars1
      @midgetwars1 8 ปีที่แล้ว +3

      +Aj Koorstra No. The possible amount of moves are more than atoms in the universe. Which is why it makes this so special.

    • @Freakschwimmer
      @Freakschwimmer 8 ปีที่แล้ว

      +Aj Koorstra
      But symmetrical Information :)

    • @TheLivetuner
      @TheLivetuner 8 ปีที่แล้ว +55

      +midgetwars1 but it IS perfect information, both players can see everything clearly on the board, there is no concealed information. What you're saying is it's not a solved game, which is why we need deep learning for now.

    • @AJ-kj1go
      @AJ-kj1go 8 ปีที่แล้ว +6

      But chess also has more moves than the amount of atoms in the universe.... I get why the learning is different here; just trying to come to terms with the game, I guess, because from my understanding it's still a finite game of symmetrical and perfect information.

    • @TheLivetuner
      @TheLivetuner 8 ปีที่แล้ว +22

      +Aj Koorstra The thing with Go is, not only are the legal moves much more numerous than in chess, they are also hard to quantify and evaluate. You can easily assign point values to chess pieces, but evaluating a single stone move in Go is a considerably more daunting task.

  • @matthewgray8846
    @matthewgray8846 6 ปีที่แล้ว

    Right? Right? Right?

  • @Fiifufu
    @Fiifufu 8 ปีที่แล้ว

    I really like the Computerphile videos. But I think the thumbnails and titles are sometimes quite misleading. Also you sometimes end the video right when you get to the topic mentioned in the title. I've seen other people also commenting about this. I'm not saying your videos are bad and I couldn't make better videos myself. Just asking you to deliver what you "promise" :)

  • @mrIcelandicdude
    @mrIcelandicdude 8 ปีที่แล้ว

    it sounds like brady has a cold, are you ok?
    (this is a joke i know he has someone else to film computerphile)