Don't let AlphaStar learn Terran; we all know what happened the last time an AI got nukes.
Or maybe they did already. That's how we got InNoVation.
Well, now that we know how Alphastar plays Terran
I feel pretty comfortable
It's not that AlphaStar has maphacks as you mentioned; it can see the whole map at once (that's how far it's zoomed out), but it still doesn't see through the fog of war. So it can see everything a real player could, just all of it at the same time.
Edit: at least that's what I heard on the official stream while they were talking about it (the only advantage the AI has).
Yeah, that's what it is. Not maphacks, just a zoomed-out view of the game.
True, but they also developed a version that has a virtual 'screen' which it has to move around and can only see inside of. After these 5 games they let it play against MaNa live. SPOILER:
.
.
.
.
.
.
Mana won.
It also cannot make moves outside its vision.
I will add to this that it is limited to a 300 APM average. But the decision-making is still unbelievable. Oh, and also: AlphaStar "saw" 2000 years (17,520,000 hours!!) of replays. So it knows many, many things that players did and how to react.
@@niklosupinfo It's not really limited to that average; something can't be hard-capped by an average value when the game has no fixed length. It just tends to sit around a 300 APM average. There is no hard limit on instantaneous APM, other than computational limits; at one point it got to around 1,100, I think?
A little information about AlphaStar:
First of all, yes, AlphaStar is a learning AI. Essentially it works through a process analogous to evolution where successive generations throw off random changes to their strategies. The most successful strategies survive while the bad ones get bled off. Give it 200 years of successive generations and the AI will get really good.
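As a rough illustration only, here is a minimal Python sketch of that kind of population-based loop. The weight-vector agents, the score_vs match function, and the mutation scale are hypothetical stand-ins, not DeepMind's actual setup:

```python
import random

def evolve(population, score_vs, generations=100, mutation_scale=0.05):
    """Toy population-based training loop: play everyone against everyone,
    keep the most successful strategies, and spawn mutated copies of them."""
    for _ in range(generations):
        # Each agent's fitness is its total score against the rest of the pool.
        fitness = [
            sum(score_vs(population[i], population[j])
                for j in range(len(population)) if j != i)
            for i in range(len(population))
        ]
        # The weakest half gets "bled off"; the strongest half survives.
        ranked = sorted(range(len(population)), key=lambda i: fitness[i], reverse=True)
        survivors = [population[i] for i in ranked[: len(population) // 2]]
        # Survivors throw off randomly perturbed children to refill the pool.
        children = [[w + random.gauss(0, mutation_scale) for w in parent]
                    for parent in survivors]
        population = survivors + children
    return population

# Hypothetical usage: agents are weight vectors, score_vs simulates a match.
# pool = [[random.gauss(0, 1) for _ in range(10)] for _ in range(8)]
# best = evolve(pool, score_vs=lambda a, b: sum(a) - sum(b))[0]
```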
Also, AlphaStar can only ever see units and structures it has vision of; it just doesn't have an actual camera to control in most of the games. It basically acts as though it's zoomed out so that its camera looks at the entire map at once, but it still can't see through the fog of war.
With that being said, its EPM is more or less 100% of its APM: every single action is for the most part deliberate. This gives it a rather insane mechanical advantage over human players. Basically, that 4000 APM spike you mentioned? Every single action was intentional. No spam whatsoever.
Lastly, 44:45... Cue Megalovania.
Agreed
@noobenstein Please learn what the term "AI" means, you're confusing general AI that doesn't exist yet with narrow AI that does and gets more powerful every year. It's built using deep neural networks rather than "databanks" of any kind.
@noobenstein Oh, did I trigger something? I'm so sorry!
@noobenstein Wow dude you spend way too much time on 4chan. What is with this stupid NPC meme and your objectively wrong statements about this AI and AI in general?
I'm a computer programmer, and I know what the terms you use erroneously actually mean.
There's a strict definition for what a lookup table is and what a databank is.
This AI is exactly that - an AI. It's fundamentally different from a databank, because it fails to do something a databank does - store data. The whole point is that the AI does not fail in weird situations, because it's literally made to be versatile, like in SC2, a DYNAMIC game. Obviously we don't have an AI that can adapt to anything, but that's because we don't know how to teach it everything yet.
The modern AI is a neural network that uses layers of neurons to drive its decision-making process. It's a very complex piece of engineering, and it is definitely intelligent. If you can recall from Biology 101, we are exactly that - neural networks, but biological ones.
AlphaGO went into every game without having any kind of data whatsoever in its memory, only its own training. This is the fundamental difference between AI and simple number-crunching engines - the AI develops a kind of intuition, the skill to play the game.
The next step for this kind of AI is exactly what these videogames are trying to achieve. Once we learn how to program complex ideas and thoughts and train AI to understand them, we could start heading towards a human-level intelligent entity, comparable to humans in intellect.
What you describe as "random" sounds like mutation. There are other ways of passing genes on and getting out of local minima. Moreover, the genetic algorithm is probably an optimization over some class of machine-learning algorithm. I'm sure someday they'll get into the details.
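For what it's worth, a toy sketch of the two classic operators being discussed (not a claim about how AlphaStar is actually trained; the flat "genome" lists are just an illustration):

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: another way to 'pass genes' besides mutation."""
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def mutate(genome, rate=0.1, scale=0.05):
    """Random perturbation that can knock a candidate out of a local minimum."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

# child = mutate(crossover([0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]))
```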
The reason the AI walls off in the third game is that it's a different "personality". They started by feeding a (large) number of games into the neural net (professional as well as lower level), and the NN developed some core strategies out of that input. Then these strategies fought against each other until only the few most successful strategies (= personalities) were left.
For TLO, they explicitly said they used a different personality for each game (so there were five different strategies). I assume the same is true for MaNa. So AlphaStar 3 developed a playstyle that includes walling off, was successful with it, and therefore also uses it in this game.
19:00 It does learn; in fact, it also learned from our past games. Blizzard gave AlphaStar our matches to learn from.
It also has vision of all visible parts of the map, like structures and units.
So AlphaStar with creep will be OP af.
AlphaStar types: Black Sheep Wall
Oracle Name: AlphaStar
The Starcraft playing AI is now in the game as an oracle. If it can survive for ten minutes, he will hack the internet and delete all non-starcraft related material.
18:21 Alpha sends the probe to collect minerals at the expansion while it waits for the money for the second wall gateway. That way the probe isn't wasting any time. It builds the second gateway and then sends the probe back to the Nexus with its cargo :)
Truly efficient!
53:12 check the mineral jump, it went up by 400 in one go... I can't figure out why.
Two ideas:
A) Just really crazy probe timing, so it all cashed in at once
B) Canceled a building in progress
Humans made AlphaStar, so consider this a win for humanity. Technical progress is an entirely human thing.
True. Strong AI will help us a lot when it arrives.
48:47 So I guess Has changed his name to Nobody
Finally! A cast from the best caster about the biggest thing so far in 2019
Edit: Also first
Adding to what +[BenHarryKirk] said, not to further confuse +[Falcon Paladin]: AlphaStar's learning is done prior to the match. It basically goes like this: AlphaStar learns from replays of actual human gameplay (just watching), then a large number of versions of AS go into a tournament against each other, and at the end of it (10 days for us, 200 years for them) the most promising top 5 get hand-picked by the DeepMind guys.
What you see in these 5 matches are 5 different AlphaStars, and most importantly, they don't have knowledge of the previous matches the others played against LiquidMaNa.
Also, AlphaStar went through another week of training before going against MaNa, so these are different from the ones that fought TLO; supposedly better, but not guaranteed.
(all the info is what I gathered from the official stream and the dev talking)
Does AlphaStar think in advance, or does it just work with what it knows at the moment? Like, it starts to build Observers only when it 100% knows there are Dark Templar out already.
Even when our good friend is late to the party, Falcon, your take is most welcome.
Agreed
Hello! Any news about AlphaStar vs Serral?
I'm pretty sure AlphaStar's selling point is the fact that it's a learning AI, actually; it basically learned by playing against itself over the equivalent of a couple hundred years or something (I'd need to actually read up on it for the exact figures). It wasn't originally "programmed" to do anything other than the basic controls. I think they've also put a limit on its APM to a degree.
This is what I felt like when I first tried to play the computer on the highest levels; it was like they knew exactly what I was doing even if I tried crazy builds, rushes, all-ins or anything, lol. Mind you, I am not a great player, but I could destroy the lower-level AI. I liked your analogy with chess because I found the same thing when I play against the AI in chess: they go from playing like a child to playing like Bobby Fischer 🤣 Great cast as always, Falcon!
Is this before or after MaNa won vs AlphaStar?
These games were from about a week before that. For the game MaNa won, they trained a different AI which didn't have the same grand vision as these ones; instead it had a normal camera view, like a human player.
It's a learning AI, not a genetic algorithm as you seem to imagine it. It's a neural network whose knowledge is encoded in the weights of the connections between the neurons. When it's learning it adjusts the weights a tiny bit, which is why it needs to play for centuries to figure anything out. In tournaments like this it doesn't learn anything, because it is only a few games anyway, and learning is expensive.
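Very roughly, "adjusts the weights a tiny bit" looks like a gradient step; here is a minimal Python sketch, where the learning rate and the gradients are placeholders rather than AlphaStar's actual values:

```python
def sgd_step(weights, gradients, learning_rate=1e-4):
    """One tiny nudge to every connection weight, in the direction that
    reduces the loss. Millions of these small steps are why training takes
    the equivalent of centuries of play."""
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

# new_weights = sgd_step([0.2, -0.5, 1.1], gradients=[0.3, -0.1, 0.7])
```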
No, it isn't learning from Mana. If you are interested, Mana posted his own video of these games in which he talks about what he was thinking during these games. It was very interesting for me and definitely worth watching. Also, some more info about how AlphaStar was developed: deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/
AlphaStar is still very much robotic at this point in its development. The only reason it is playing at a pro level is that it has literally perfect macro, literally perfect micro, and (in these games shown) can see and react to the whole map so long as it has vision. You can see how much AlphaStar leans on its computer micro from its unit choices: mostly Stalkers and Phoenixes. It has not developed unit compositions, or reactions to the opponent's unit composition. I am interested to see whether it will be able to develop these more nuanced strategies, and how it will defend against being predictable when only one agent is used for an entire set.
Edit: Also, in the last match, AlphaStar was required to use a virtual mouse and keyboard, and only could move things if it was looking at it on the screen. It makes a huge difference.
There would be no point (in terms of improving its play) to putting AlphaStar on ladder to learn the meta. It is almost certain that the meta isn’t the best possible way to play. It’ll eventually find ways to play that are better. Just as it did with Go. The Go world champion said something to the effect of ‘after 2000 years of study it turns out we don’t know how to play Go’ once he had had time to reflect on his losses.
Oh thank you Falcon, I bet it's gonna be an amazing stream! Just started watching!
And why not one cannon rush?
Iirc, the 5-game match was fully played with the "zoomed-out vision". There was a live showmatch after the discussion with MaNa, against a different version of the AI, this time with a more constrained "screen size".
.
SPOILER:
.
.
.
.
.
This live Showmatch was won by MaNa
They also said that the AI was focusing on certain areas anyway, so it didn't really matter that much in the end. MaNa prepared extensively for the last showmatch. That's why he won.
Yes you are right, thanks!
This is definitely machine learning. They train a neural network (technically an LSTM, a type of recurrent neural network) using Reinforcement Learning, which DeepMind are sort of the top experts on. We'll have to wait until they publish a paper about this to get all the details, though.
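A very rough sketch of that combination in PyTorch, just to show the shape of an LSTM policy trained with a REINFORCE-style update. The layer sizes, observation encoding, action space, and returns below are invented for illustration, not DeepMind's architecture:

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class LSTMPolicy(nn.Module):
    """Recurrent policy: the LSTM's hidden state carries memory of earlier
    observations, which lets the agent act on things it no longer sees."""
    def __init__(self, obs_dim=64, hidden_dim=128, num_actions=10):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, obs_seq):                 # obs_seq: (batch, time, obs_dim)
        features, _ = self.lstm(obs_seq)
        return self.head(features)              # logits: (batch, time, num_actions)

def reinforce_update(policy, optimizer, obs_seq, actions, returns):
    """One REINFORCE step: make actions that led to high returns more likely."""
    logits = policy(obs_seq)
    log_probs = Categorical(logits=logits).log_prob(actions)
    loss = -(log_probs * returns).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Hypothetical usage, with random data standing in for one game's trajectory:
# policy = LSTMPolicy()
# opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
# obs = torch.randn(1, 50, 64)          # 50 timesteps of observations
# acts = torch.randint(0, 10, (1, 50))  # actions taken
# rets = torch.randn(1, 50)             # discounted returns
# reinforce_update(policy, opt, obs, acts, rets)
```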
Can the AI micro units and build units at the same time?
Yes
No, it cannot. It still has to process its input to the game in a serial manner. It can go from one to the other and back very quickly though.
Better casting here than the original :)
Reaper name: Silver Age Superman
Today we think of Superman as the guardian of Truth, Justice and the American way. However, during the Silver Age of Comics Superman did such horrific acts as denying Jimmy Olsen and Aquaman a drink of water in a desert (Superman's Pal Jimmy Olsen No. 115), dueling Batman with revolvers (World's Finest No. 261), and declaring himself the King of the World (Action Comics No. 311). For these acts, and many other instances of Superdickery, Silver Age Superman was stuffed into a Kryptonite lined Reapersuit.
Holy cow, can't wait to watch your casting of Alpha vs Serral.
Ah-ha! I've been waiting for this one! Thanks, FP!
No problem! :)
why is alphastar not playing with the purifier skin :3
Yay, you cast it! :D
Only 1 hour and a bit for 5 games. This will be interesting.
18:50
No, this IS a learning AI. This AI learns from its mistakes; it uses deep learning and state-of-the-art AI techniques to LITERALLY learn from each game that it plays. It is actually learning.
Also, these are 5 individual agents, but they do learn.
It learns during training runs *not* during competition with Mana for example.
AI is not (yet) capable of generalizing from small amounts of data or analyzing its mistakes like humans do. It requires hundreds of years of nonstop play to learn, so it wouldn't be able to learn anything from a single game.
Thanks for the cast. Are the TLO games available anywhere? The stream didn't show a couple of games that I'd like to see.
Can't wait for the Serral matchup. In computer chess, there's a style of play that is deliberately anti-computer and has been used by non top ranked players to defeat the best engines in the past so I wonder if Serral will try anti-AI tactics like MaNa seemed to have found in Game 5. But what I really want to know is how all the races compare. In chess the computers have "figured out" that starting as white confers a slight advantage though I guess in SC what might be optimal for a computer might not be doable in human tournaments.
All I've learned from this whole alphastar thing is that it really doesn't matter whatever ridiculous nonsense you build as long as you can micro well enough
First a great TvT, now a special PvP, awesome series!
11:03 What!? Didn't expect that 🤣
This is what I was waiting for.
AlphaStar is one mind controlling every unit it plays; it's as if AlphaStar is every unit on the map and can move them all simultaneously. ;) Its APM is 300 on average, but it doesn't pad its APM with probes at the start; when there is nothing to do, it does nothing, but when there is a battle the APM goes waaay up. It seems unreal.
Noob here wth does the term 'lifting' mean?
Using the ability of the Phoenix to lift enemies into the air briefly, allowing them to be taken out of the fight temporarily (and likely focus-fired in the process).
20:54 You really misunderstand AlphaStar. "I don't know if it's been programmed to go up that ramp"? No, it's not programmed with any preconceived strats. All of what it's doing, it learned to do. The creators have no hand in its decision-making.
I would love to see what happens when these AI opponents have some kind of APM cap implemented. What if AlphaStar was only allowed to make a maximum of, say, 6 or 7 actions per second?
Its APM is capped already.
Falcon, the APM of Alpha was over 1k in a big engagement.
He isn't playing against the same AlphaStar. Each game he is facing a different one. They took the top five AlphaStars from the AlphaStar league they have, and that's what MaNa is playing against.
Not exactly the top five, the "least exploitable" five (technically, a sample from the Nash distribution). The point is, if the top five were, say, all cannon rushers, it would be stupid to include five cannon rushers because that would be exploitable.
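A rough sketch of that idea: approximate a Nash mixture over the agent pool with fictitious play on a win-rate matrix, then sample the exhibition agents from that mixture. The 3-agent payoff matrix below is invented (a rock-paper-scissors pool) purely to illustrate why a single dominant style wouldn't be chosen:

```python
import random

def nash_mixture(payoff, iterations=10_000):
    """Fictitious play on a zero-sum payoff matrix payoff[i][j] = score of
    agent i vs agent j. The empirical play frequencies drift towards a
    Nash mixture: a blend of agents that is hard to exploit as a whole."""
    n = len(payoff)
    counts = [1] * n                     # how often each agent was "played"
    for _ in range(iterations):
        # Best respond to the opponent's empirical mixture so far.
        expected = [sum(payoff[i][j] * counts[j] for j in range(n)) for i in range(n)]
        counts[max(range(n), key=lambda i: expected[i])] += 1
    total = sum(counts)
    return [c / total for c in counts]

# Hypothetical pool: agent 0 beats 1, 1 beats 2, 2 beats 0.
payoff = [[0, 1, -1],
          [-1, 0, 1],
          [1, -1, 0]]
weights = nash_mixture(payoff)
# Sample five exhibition agents (with replacement, for simplicity).
exhibition_agents = random.choices(range(3), weights=weights, k=5)
print(weights, exhibition_agents)
```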
I wanna see what Alphastar can do without the APM limit!
I don't know that it had one. I saw APM spikes of 500 and 1500 while casting.
Honestly I want to see how far they can take it once they prove that it can obliterate humans.
There's a channel on youtube called Micro AI that has some ancient videos of some examples of insane micro. I would very much like to see insane tricks that can only be achieved with inhuman levels of speed. Imagine a crowd of 100 marines, all being individually microed and stimmed at the very moment they get into range of the enemy. It's the dream! Imagine how many simultaneous engagements could happen without an APM limit!
@@FalconPaladin The developers did put an APM cap on it. In fact, they expressly stated that they did so because they wanted the fight between the computer and humans to be on equal footing. The distribution of Alpha's APM (the standard deviation, the mean, the median, and the mode) is actually lower than a pro gamer's.
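A tiny sketch of what comparing those distribution statistics looks like, using Python's standard statistics module; the per-minute action counts are invented numbers, not the real logs:

```python
import statistics

# Hypothetical per-minute action counts extracted from a replay.
apm_log = [212, 198, 240, 305, 180, 260, 230, 198, 450, 205]

print("mean:  ", statistics.mean(apm_log))
print("median:", statistics.median(apm_log))
print("mode:  ", statistics.mode(apm_log))
print("stdev: ", statistics.stdev(apm_log))
```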
@@FalconPaladin Also, AlphaStar cannot manipulate units that are not currently on its screen. I'm not sure how fast it zips back and forth, but it does have to move the camera around to act.
Also, since they are playing 5 games, there are 5 separate AI agents, one playing each game. What the developers do is effectively flip a coin and assign one of the 5 agents (or however many remain) against the human player. So it's not the same computer, and the computer cannot engage in game theory; that is why the developers did it. I am not sure Alpha is as "good" as people make it out to be, because of that coin-flip feature. You have to know your opponent and counter their style if you want to beat them. With this computer you can't, because you are not playing against the same agent every game.
Look at IBM's Deep Blue. The human chess players, with a little practice, were able to beat Deep Blue, because the computer is not good at strategy; it is good at tactics. Assuming arguendo that the computer is good at strategy, all strategies have counters in SC2, just like rock-paper-scissors. But the computer isn't smart enough (yet) to engage in "GAME THEORY." In fact, no computer is.
This machine sucks. It doesn't play on fair terms; namely, the coin-flip model the developers added isn't fair.
In short, screw Google.
You are right! If they had let the same agent play 5 games consecutively, I bet even TLO could have won the series.
I'm pretty sure each agent used only one strategy, but if its results had dropped, the NN would have been smart enough to try variations of the same strategy.
Now, I don't think that would have been enough to win 3 of 5 matches; maybe 1, stretching it maybe even 2, if the AI had seen lots of games with a varied training set (meaning not trained too specifically, but a bit looser).
DeepMind's system is machine learning. It uses neural networks that learn from each game.
Neural-net-based systems are not programmed in that sense; that's old-school AI, where the behavior is explicitly designed by the programmers and/or advisors.
Dear Falcon Paladin sir, I like humanity, but I severely doubt that this also includes Homo sapiens.
I would rather see Innovation, Maru, or TY (or Stats) take on AlphaStar before I see Serral.
Huh... Noticed a couple oddities, like putting Pylons in front of the Assimilators
It is a bit stupid that there is no APM limit for the AI like there is for the one that plays DOTA. Also, the fact that it sees the map zoomed out makes no sense, because it can act simultaneously (or at least with almost no delay) in different places at once, so with some APM limitation and a camera limitation it would be a waaaaay different game.
There literally is an APM limit. And it is way lower than MaNa's or TLO's APM.
AlphaStar's average APM is limited, which allows it to spike during fights while still having a lower average APM than the pros. The issue is not the APM spikes but rather EPM: because AlphaStar doesn't spam, its EPM is waaaaay higher than MaNa's.
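One way such an average cap could work (this is a guess at the mechanism, not DeepMind's documented implementation): track actions over a sliding window and only block new actions once the window average exceeds the cap, which still leaves room for short bursts.

```python
from collections import deque

class AverageAPMCap:
    """Allow an action only while the rolling average over `window_seconds`
    stays under `max_apm`. Short bursts are fine as long as quieter periods
    pull the average back down."""
    def __init__(self, max_apm=300, window_seconds=60.0):
        self.max_actions = max_apm * window_seconds / 60.0
        self.window = window_seconds
        self.timestamps = deque()

    def try_act(self, now):
        # Drop actions that have fallen out of the rolling window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_actions:
            return False               # over budget: the action has to wait
        self.timestamps.append(now)
        return True

# cap = AverageAPMCap(max_apm=300)
# burst = sum(cap.try_act(t / 100.0) for t in range(1500))  # 1500 attempts in 15 s
# print(burst)  # only the first 300 go through; the rest wait for the window to clear
```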
@@sage4670 AlphaStar does actually spam. For example, instead of giving 1 move order it will spam multiple move orders in a line, like a human.
Right, Nari. And also: it has a cam limit.
Neither its APM nor its EPM is an issue. This isn't a project to develop a stunted AI that can give pro humans a good game; it's to develop an AI that is as good as possible at a task that is much harder to define than something like Chess or Go. Doing so will provide huge insights into how to tackle problems in architecting future AIs and general AIs.
And after a while, this AI will be unbeatable: a simple if/then scenario where, after two hundred years, based on game state, it will have the answer to everything. I would, however, like to see it try to figure out Terran or Zerg, though again it's pointless after a couple hundred thousand iterations. I'd love to see its first few hundred goes at Terran.
But does it know it's playing StarCraft? That's the real question
I've watched later Alphastar games on the ladder. They generalized it to play with and against all races, to use standard openings and protect itself from cheese... but it plays too much like a human now, there are no crazy unstoppable strategies. Sad.
"Its not a learning AI"
NO NO NO. It uses a neural network and learns from replays etc etc. It may be configured to only play SC but it LEARNS from played SC matches.
th-cam.com/video/sB7unYvSKk8/w-d-xo.html
AlphaStar's Shield Battery strategy relies on its uber-effective micro. Not human!
I asked a few people if they know what AlphaStar is; seems like I'm not the only person living under a rock.
While the AI itself is impressive, they should limit its top APM potential, as it gets ridiculous during fights. At that point it's gaining an edge over the human player through sheer superhuman micro plus the ability to keep managing its bases during it.
This is not amazing but expected, and worrisome. Just you wait, Falcon Paladin, just you wait: China has fully animated TV anchors, StarCraft has a perfect AI. Not only are professional players about to lose their jobs, but so are you, Falcon.
No game players’ livelihoods are under threat. In sports/recreation people will always want to know who is the best person. Accordingly we still run 100 metre races despite cars being faster.
Falcon :( I have been improving my English while watching your videos xD (I learn while having fun), and subtitles help me understand you while I listen to you. Could you put subtitles on again, please? :)
Thank you very much for your cast :D
YouTube has automated subtitles. I've never made my own subtitles. :)
@@FalconPaladin Oh didn't know that, thank you anyway :) i love your channel
@@warcryplayer5751 do you like English?
And what is your primary?
Good job Mana!
I think it's not fair for the AI to see hidden units. A human player can still spot a hidden unit, but to the AI it's completely obvious. It's pointless to use Dark Templar.
I don't know why this makes me tear up.
300 APM is bollocks; most actions of real players are just repeated orders. It should be set up to mimic having a body, with eye-hand lag and all that, so it would use its decision-making and not perfect micro...
Skynet is here lol!
Starts off with gaming and then controls military rip
Poland in StarCraft... I can't believe it 😧😧😧
(Poland, to battle! 😀)
OMG, it's one step from becoming a war AI. If it had internet access, playing Generals or StarCraft in the real world would only be slightly different.
Hej Hej
Hej Hej!
UNreal.
I like the commentary, but come on, you can prepare better than that... A ton of crap is said throughout the video...
AlphaStar doesn't have a maphack; it just has a full zoom-out of the map. It still can't see through the fog of war. And there are actually several different agents playing the games. Each match is played against a different agent, each with its own unique set of strategies. That is probably why, for example, it walled off in the 3rd game...
One thing you must understand: Alpha doesn't have human emotions; it isn't scared of anything. That's why, when it pushed with Stalkers, it didn't back down, even with a DT in the mix. This thing is dangerous.
Skynet is coming guys! Jk
And it definitely remembers (learns) what its enemy does.
AI is a powerful but glorified tool, and it'll continue to be nothing more than a tool for at least a century.
It's capped at 310 apm
Not according to the WCS APM tracker.
@@FalconPaladin Oh. When I watched the TLO games live, they said it was capped, otherwise it wouldn't be fair. Maybe in some games it was and in some it wasn't.
That's only average APM, the peaks are at 1500.
worst commentary ever