Also, an algorithm that evaluates a general position well at depth 0 can't exist. What Max Deutsch wanted isn't possible, not even in theory. The fact that he went with this approach shows that he understands nothing about chess.
or computing
I don't think he wanted 0 depth; he wanted low depth, and to find properties of a position that are associated with it being good. Whatever he wanted, Max is dumb for thinking he could revolutionize chess in a month.
No, if there were some machine that had memorized all the possible chess positions and the best move from each of them, then given a position you would just pick the best next move, which can be done in constant time.
@@hellstenlight9454 That's not an algorithm you are describing, that's a database. Also, it is irrelevant whether you store that info on a PC disk, on some machine, or on some other device. In principle, you don't need a machine to store that info, it is just data that you are storing. In reality you can't store that data in any way, nor can you ever get that data. Why? Because the number of possible chess positions is a number with more than 100 zeroes. But even if it was possible, it wouldn't help Max Deutsch play chess better without having access to the database while he is playing.
@@DipsAndPushups What I am describing is an algorithm which gets its values from a database, if you'd like. ["In principle, you don't need a machine to store that info, it is just data that you are storing."] I can't make sense of that. ["In reality ... data. Why? Because the number of possible chess positions is a number with more than 100 zeroes"] is a bad, or at least incomplete, argument for your first statement (also, 100 zeroes is 0, but we can let that slide). The only sensible statement in your previous comment is ["but even if it was possible it wouldn't help Max Deutsch play chess better without having access to the database while he is playing."]
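For what it's worth, the lookup being described really is constant time on average; it's just a hash map keyed by position. A toy sketch in Python (the FEN strings and moves here are purely illustrative; the actual impossibility is that a real table would need astronomically many entries):

```python
# Sketch of the "memorize every position" idea as a lookup table.
# Two hand-picked entries stand in for a table that could never
# actually be built or carried into a game.

best_move = {
    # starting position -> a reasonable first move
    "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1": "e2e4",
    # position after 1.e4 -> a reasonable reply
    "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq - 0 1": "e7e5",
}

def pick_move(fen):
    """O(1) average-case lookup, exactly as the comment describes."""
    return best_move.get(fen)

print(pick_move("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"))
```

So both sides of the thread are right about something: the lookup itself is trivial; building (or memorizing) the table is the part that's impossible.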
I'm amazed that nobody has remarked on those half-assed pull-ups. I wouldn't count a single one of those.
maybe people did in the other videos?
He isn't even going all the way down...
It's pretty much the same as thinking you could beat Usain Bolt at the 100m after practicing 1 month. The world champions are champions because they're talented AND practiced their discipline for years.
you're not wrong. however, i think this video isn't really about world champions in (human) chess though. i think this video is more about world champions in computer chess?
Usain
@@BryceRogers_ thanks man. I've corrected it.
@@nicbentulan no, not really. It is about human level of chess. He's saying he (a human) can beat Carlsen (a human).
@@palimpsestransparent I thought he + algorithm (i.e. e will bring a computer and use engine but it's an engine really made from scratch) Vs Magnus ?
Also what's your opinion of zs4zpf ?
Cool username btw palimpsestransparent. I remember learning palimpsest in university in philosophy class. (I went to a Catholic university.)
Ironically, it's easier to beat Magnus than to memorise all the positions 😂😂
he would have had a better chance just buying and reading a book.
edit: and maybe set an easier goal like "learning chess openings in a month"
You could give him a year and still be no closer to beating Magnus, who himself has memorised around 10k games of chess.
That's the real truth right here. All GMs have hundreds if not thousands of games memorized. So there's basically no way you're going to out memorize them, and that was Max's strategy.
a year ? A life time
It is pretty much impossible to beat someone like Magnus, who is a prodigy (or really any super GM), if you only start seriously working on chess later in life.
First of all, for the majority of the population there is a ceiling on the level of play you can achieve, and that ceiling is usually just shy of IM.
And secondly, pretty much all of the super GMs, apart from being naturally talented, started playing chess when they were in kindergarten, basically utilizing kids' superpower to absorb any information.
There are IMs who have tried to go for the GM title later in their life and have failed.
This guy was never gonna get anywhere near Magnus' level even if he was given his entire life to train ...
you could give him multiple lifetimes and he wouldn't beat Magnus because
1. you need to start very young to reach those kinds of levels
2. you need certain mental capabilities that almost no one has
I have a slightly different take on this. You seem to be assuming that he was programming an AI-style neural network engine. I think what he was proposing was more akin to a classic procedural system such as a Stockfish engine where pieces have value and so on. In this sense the problem is not that he would need to "train" the system in any way, rather that the result he gets, once he gets it, is too large to memorize.
Essentially he is proposing:
1. Create a good enough Stockfish-style engine to beat Carlsen, based on pure evaluation, not on AI training. This in itself is laughable, as teams of experts have dedicated decades of intense work to program such engines.
2. He proposes to memorize every feasible chess position along with the move the engine recommends in that position, so essentially memorizing billions of positions. A lifetime of memorizing would not let him cover even 0.1%, let alone 100% in one month.
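On point 1, the "pieces have value" core of a classic evaluation really is trivial to write; everything that makes a Stockfish-style engine strong (search, pruning, positional terms, endgame knowledge) sits on top of it. A minimal material-only sketch, using the standard 1/3/3/5/9 values:

```python
# Minimal material-only evaluation, positive = good for White.
# This is the textbook starting point, nowhere near engine strength.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_eval(board):
    """board: iterable of piece letters, uppercase = White, lowercase = Black."""
    score = 0
    for piece in board:
        if piece.upper() in PIECE_VALUES:
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
    return score

# White has an extra rook vs Black's extra knight: 5 - 3 = +2
print(material_eval(["K", "R", "P", "k", "n", "p"]))  # 2
```

The gap between this and "beats Carlsen" is exactly the decades of engineering the comment refers to.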
I don't agree on number 1; I think it is pretty doable in 1 month. But there is a catch: I wouldn't expect that engine to compete with SF. But I bet it could beat Magnus.
@@dogacel You are saying you can program a chess algorithm in a month that beats Carlsen? I very much doubt that, even if you were the best programmer in the world. It took decades and large teams of experts to reach an algorithm that could beat Kasparov, so how can you claim to be able to create one in under a month? Total nonsense.
@@palimpsestransparent "It took decades and large teams of experts to beat Kasparov" in 1997. Today we have vastly more computational power; you shouldn't compare the first engine that beat a human to current engines.
There are lots of open-source chess engines that can defeat Carlsen easily. Do you think every one of them took large teams of experts decades to develop? No. Techniques and information are available everywhere. Stockfish itself is open-source, and there are forums and wikis on how to program a chess engine.
I already wrote a chess engine with a friend. I would be happy to talk about it and explain to you how it is possible, if you like.
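For anyone curious what "writing a chess engine" minimally involves: the core search loop is a short negamax. The game-specific parts (move generation and a good evaluation, which is where the real work lives) are passed in as placeholders here, and a toy game keeps the sketch self-contained:

```python
# Bare-bones negamax. `legal_moves`, `apply_move`, and `evaluate` are
# placeholders for the game-specific code a real engine would supply.
def negamax(state, depth, legal_moves, apply_move, evaluate):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    return max(-negamax(apply_move(state, m), depth - 1,
                        legal_moves, apply_move, evaluate)
               for m in moves)

# Toy game: state is a number, each move adds 1 or 2, sides alternate
# implicitly via the sign flip, and evaluation is just the number itself.
score = negamax(0, 2,
                legal_moves=lambda s: [1, 2],
                apply_move=lambda s, m: s + m,
                evaluate=lambda s: s)
print(score)  # 3
```

The search is a dozen lines; making it fast and giving it a strong evaluation is the part that took engine authors years.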
Yes, I think it is possible to do that in a month if a person dedicates themselves to the task. It will be hard, and the person needs a lot of skills, but it is far from impossible.
@@dogacel So what you are saying is that the reason the world's best experts took many years to develop a winning algorithm is simply because they did not dedicate themselves enough to the task? You are saying that they could have programmed the algorithm in one month but instead took 30 years because they did not dedicate themselves to it? Do you realise how ridiculous this sounds and how disrespectful it is to the experts at IBM who programmed Deep Blue?
In comparing Stockfish NNUE to Leela, is it correct to say that Stockfish relies more on calculation and heuristics than neural networks, while Leela is the opposite?
No, not with NNUE. The WSJ piece, and thus the information in this video, predates Stockfish NNUE, so when I mention Stockfish relying more on calculation and heuristics borrowed from human knowledge, that refers specifically to pre-NNUE Stockfish. There are still certainly differences between Leela and Stockfish, but the general mechanism for evaluation (using neural networks to score positions) is now much more similar with NNUE, which is why Stockfish is back to being strongly competitive with those neural-network-based engines.
@@dylanpcase Even the NNUE version relies more on calculation. Leela usually uses a larger model than Stockfish, resulting in shallower search depths but more accurate leaf evaluations.
I thought we're not even sure how many chess positions there are.
Although there are some estimates.
If you read his blog posts, you'll see that he was only planning on "playing Magnus" through the PlayMagnus app. He actually thought it would be the one goal he wouldn't be able to achieve. However, since his posts were gaining popularity, WSJ offered him the opportunity to play Magnus, and as a chess and Magnus fan, he gladly accepted the opportunity of a lifetime. Perhaps he regrets his decision after all the backlash. He was also limiting the weights to 2 decimal digits and only 2 layers, so the maximum possible strength of the neural network was very limited. It was really supposed to be a fun experiment for someone dabbling in ML/AI and experimenting with human calculation. He did not deserve the hate he received.
Meh, if that were true, he should've mentioned it at some point, or been more apologetic or humble after the defeat. The fact that he thought he played 9 good moves against Magnus is so dumb.
@@nza1804 he was humble. he said he was around 1500 rating normally and stood no chance. him saying he played 9 good moves is a joke: the only good moves he made were basic opening theory, and he was bad on his own.
@@circle_line bro, the moment he said he made Magnus nervous was so silly, I wouldn't even classify that as a joke. I'm 1600 and I can tell you that wasn't 1500 play, more like 1000. You can analyse that game: he played poorly even for a 1500. I heard that he did only want to beat Magnus on the mobile game and this became way more publicised than it should've been, but the guy should've been more realistic, honestly. It's just silly, but thanks for your comment. I didn't know the whole event was really set up by WSJ; I thought they just reported on it.
@@nza1804 What he meant was that Magnus was taking the challenge seriously until he saw how bad Max was when he moved his knight or whatever. "Nervous" was the wrong word. Max has like 20 blog posts for this challenge (he has blog posts for every challenge) that answer every question the guy who made this video had, but apparently nobody could be bothered to read them before making videos about the situation.
In one of the last blog posts on beating Magnus he also estimates that "it would take between 500-1,000 hours to become a human chess computer capable of defeating the world champion". He still thought he could somehow memorize a machine learning model capable of beating Magnus Carlsen in that time, when at that point he hadn't even trained a decent model at depth 0. He goes on to say "I suspect that I’ll be circling back some time in the future, putting in these 1,000 hours, and, assuming everything goes to plan, playing a competitive game against Magnus (in what will still likely be a very lengthy game). Until then, Magnus can continue enjoying his spot at the top…". For me it's pretty clear this guy is just a narcissist who has no idea what he is talking about.
Thank you i needed that video... that max deutsch video was very difficult to watch
When he talks about things that are just impossible to an audience that doesn't know better, it reminds me of the Theranos scam: a borderline psychopath try-hard throwing a Hail Mary to feel important.
he wants to "train his brain into a computer" yet probably can't remember a 15-move game (so 30 positions). that's West Coast "entrepreneurs" for you, folks
By the way, it's Elo, not E.L.O.; it's named after its creator, Arpad Elo.
what is the point of creating an algorithm which you can't use during the game? Or is Max Deutsch an AI who can just download that program into his head...
great video. very educational
bot comment
This was interesting, thanks for the video, hope you make more
bot comment
@@theimmux3034 no, this is not a bot comment, beep boop beep
Great video
Analysing what, now? What's there to analyse in the first place 😄.
Hi I’m maximum german
What would be interesting is if there was, or is, an engine that evaluates each piece in a position and uses that to help evaluate the position. Idk if this is a good idea or feasible. Any thoughts?
Stockfish and AlphaZero (the strongest chess engines) use a neural network to evaluate a position. The reason we don't use a neural network to give a score for each piece is this: 1) it's easier to just give it the whole board, and it ends up giving the same result; and 2) we don't want to introduce human bias into our scoring system. We want to design an AI that could eventually be smarter than us, so ideally we don't want to include any of our own knowledge about what makes a position good or bad in the algorithm.
Most engines did something like this before neural evaluation took over, though only to a certain extent, because the more sophisticated an evaluation function is, the more it costs in search depth.
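That tradeoff can be put in rough numbers: with a fixed node budget, reachable depth scales like log(budget / eval cost) / log(branching factor). The figures below are illustrative assumptions (35 is the commonly quoted average branching factor for chess):

```python
import math

def reachable_depth(node_budget, eval_cost, branching=35):
    """Largest depth d such that branching**d evaluations fit in the budget."""
    return math.floor(math.log(node_budget / eval_cost) / math.log(branching))

# Same compute budget, but an evaluation function 100x more expensive:
budget = 10**9
print(reachable_depth(budget, eval_cost=1))    # cheap handcrafted eval -> 5
print(reachable_depth(budget, eval_cost=100))  # big neural-network eval -> 4
```

This is the Leela-vs-Stockfish tradeoff in miniature: a heavier evaluation buys accuracy per leaf at the price of depth.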
> On December 5, 2017, the DeepMind team released a preprint introducing AlphaZero, which within 24 hours of training achieved a superhuman level of play in these three games by defeating world-champion programs Stockfish, elmo, and the three-day version of AlphaGo Zero.
(re 15:45)
1 - ah so alphazero really came out in late 2017 just after the max deutsch thing?
2 - if no, then what do i misunderstand? if yes, then COINCIDENCE? I THINK NOT!!!!!!! XD
Yeah, that's exactly right, AlphaZero did come out just after this piece was published in 2017, so it very well could be that Max Deutsch developed AlphaZero in one month and sold it to the Google DeepMind team who then took all the credit haha
@@dylanpcase OMG HAHAHAHAHAHAHAHAHAHAHAHAHA
@@dylanpcase gold 😂😂😂
@@SheelByTorn EXACTLY
@@dylanpcase hey Dylan happy 1 year anniversary to this video! XD
Btw what's your opinion of Hans Niemann Vs Magnus Carlsen? Check out zs4zpf
Hey Dylan, when you say a model stops getting better at some point, does that factor in new data being ingested, or will it simply not get better using the same data set? i.e. can ingesting new data improve the model?
You can certainly continue to improve a model by adding data, but that would typically be done before the training process (the training process being what is portrayed in the video, with the readouts of MSE and train accuracy).
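The plateau in question shows up even in a toy setting: gradient descent on a fixed dataset drives the training MSE down to a floor determined by that data, and further epochs on the same points can't beat it. A self-contained sketch with made-up numbers:

```python
# Fit y ~ w*x by gradient descent on a fixed, noisy dataset.
# The MSE stops improving once w reaches the best fit for THIS data;
# more epochs on the same points cannot beat that floor.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x plus noise

def mse(w):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w, lr = 0.0, 0.01
for epoch in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # converges to the best fit for this dataset (about 1.99)
```

Only new information (more data points) can move that floor; more passes over the same four points cannot.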
The Stockfish of today is way stronger than AlphaZero; it's not even competitive anymore, btw.
nice
yeah we get it, Max was a nobody way out of his depth.