DeepMind’s AlphaStar Beats Humans 10-0 (or 1)

  • Published: 6 Sep 2024
  • DeepMind's #AlphaStar blog post:
    deepmind.com/b...
    Full event:
    • DeepMind StarCraft II ...
    Highlights:
    • Game highlights of Alp...
    Agent visualization:
    • AlphaStar Agent Visual...
    #DeepMind's Reddit AMA:
    / we_are_oriol_vinyals_a...
    APM comments within the AMA:
    / eexs0pd
    Mana’s personal experience: • DeepMind Starcraft 2 d...
    Artosis’s analysis: • AlphaStar - Analysis b...
    Brownbear’s analysis: • DeepMind AlphaStar Ana...
    WinterStarcraft’s analysis: • StarCraft 2 Pros vs Go...
    Watch these videos in early access:
    › / twominutepapers
    Errata:
    - The in-game time has been fixed to run in real time.
    We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
    313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty.
    / twominutepapers
    Splash screen/thumbnail design: Felícia Fehér - felicia.hu
    Károly Zsolnai-Fehér's links:
    Facebook: / twominutepapers
    Twitter: / karoly_zsolnai
    Web: cg.tuwien.ac.a...

Comments • 1.3K

  • @TwoMinutePapers
    @TwoMinutePapers  5 лет назад +500

    This is a new Two Minute Papers episode, which is not two minutes, and is not about a paper (yet). Welcome to the show! 🎓

    • @Leibniz_28
      @Leibniz_28 5 лет назад +35

      This is one of the best not-a-paper, not-two-minutes videos

    • @SpiritFryer
      @SpiritFryer 5 лет назад +5

      Great vid, thank you for covering this! I just wanted to note that it would've been good to mention that TLO was not playing his main race in the game. He was playing Protoss, whereas his main is Zerg (because the AI was only trained on Protoss vs Protoss). So when mentioning that TLO is a "mid-Grandmaster" and that Mana is "an even better" player, it's all from the perspective of TLO not playing his main.

    • @coolbits2235
      @coolbits2235 5 лет назад +5

      ..."a great deal of mechanical skill, split second decision".... this is an advantage for a computer because it does not tire

    • @vaibhavgupta20
      @vaibhavgupta20 5 лет назад

      I was going to write that.

    • @c--b
      @c--b 5 лет назад +2

      The best videos are always the ones where you can't contain your excitement enough to keep it under two minutes!

  • @christopher7567
    @christopher7567 5 лет назад +996

    Yeah but the pro player wasn’t even Korean...

    • @johnsmith-rk5mn
      @johnsmith-rk5mn 5 лет назад +59

      Also, TLO mains Zerg, not Protoss...

    • @SupaDupaYT
      @SupaDupaYT 5 лет назад +18

      @@johnsmith-rk5mn it played against mana too

    • @johnsmith-rk5mn
      @johnsmith-rk5mn 5 лет назад +6

      @@SupaDupaYT Right you are. I commented too early in the video.

    • @zissler1
      @zissler1 4 года назад +8

      Supa Dupa mana isn’t even top 25

    • @tysej4
      @tysej4 4 года назад +2

      @@zissler1 He used to be.

  • @TheVerandure
    @TheVerandure 5 лет назад +156

    Correction to 6:06: AlphaStar can only play Protoss vs Protoss on a single map. It cannot play every playstyle in the game (for now).

    • @TwoMinutePapers
      @TwoMinutePapers  5 лет назад +16

      I think you are talking about races. :)

    • @deep.space.12
      @deep.space.12 5 лет назад +20

      That's racist. ;)

    • @lapidations
      @lapidations 4 года назад +17

      @@TwoMinutePapers some play styles are only accessible to some races, so I agree with him.

    • @zissler1
      @zissler1 4 года назад +2

      it's working with other races and can play players online, but it can't even beat good amateurs right now.

    • @MrFiddleedee
      @MrFiddleedee 4 года назад +1

      @@lapidations stay in your lane science head, his statement was correct. (have played competitive SC1/SC2)

  • @emp5352
    @emp5352 5 лет назад +68

    "The number of actions performed by the AI roughly matches the player"...
    If the AI goes against a 300 APM player, then the AI has 300 APM.
    The difference is that the player has "meaningful APM" and "filler APM", the latter which is just them clicking or pressing buttons to warm their fingers up.
    If all 300 APM done by the AI is "meaningful APM", then the players never stand a chance...

    • @couchpotatoe91
      @couchpotatoe91 3 года назад +7

      It would pretty much be able to micromanage every single unit in an engagement. Imagine timing your stim separately and splitting perfectly while pulling back units about to die at the last second. Even in a lategame engagement with a hundred units.

    • @Novacification
      @Novacification 3 года назад +6

      Yeah, I also wondered what the cap was. Was it allowed to fire 50% of that APM within a 10 second engagement, controlling each unit separately, and then retreat until it had more APM available?
      There's also a huge difference between APM spent on macro and other routine tasks and APM used to micro meaningfully in a skirmish with enemy troops. Each unit targeting optimal targets so the available damage is spread out in a way that minimizes overkill is impressive, but hardly an indication of the AI's strategic ability.
      Overproduction of workers in anticipation of losing some was pretty cool though.

  • @fendoroid3788
    @fendoroid3788 5 лет назад +371

    13:41 Minute Papers

    • @DoctorMandible
      @DoctorMandible 5 лет назад +20

      Sadly, YouTube changed its algorithm. Now any video under 10 minutes has a very low chance of being recommended. So everyone does 10+ minute videos now, even when shorter would be better. So YouTube is wasting everyone's time for no good reason.

    • @zlatanonkovic2424
      @zlatanonkovic2424 5 лет назад +7

      I like that these videos are longer now. The topic is way too complex and interesting to cover in less than 10 minutes.

    • @fendoroid3788
      @fendoroid3788 5 лет назад +2

      @@DoctorMandible But i get a lot of old short videos as recommended...

    • @Trazynn
      @Trazynn 5 лет назад +1

      @@fendoroid3788 I only see 10+ min videos in my sidebar.

    • @rodrigosilveira6886
      @rodrigosilveira6886 5 лет назад

      @@Trazynn sounds like the algorithm thinks you like 10+ min videos better than 2 min videos... Thus it'll keep recommending those

  • @Ironypencil
    @Ironypencil 5 лет назад +481

    While the number of APM is human level, I think the precision of those actions is way above human level (selecting single units, while a human might accidentally select multiple), which was very obvious in the micro-management-heavy engagements. Adding some noise to the mouse positioning might be interesting.

    • @YeeLeeHaw
      @YeeLeeHaw 5 лет назад +83

      This is why these things are pretty uninteresting to look at outside the machine learning development aspect. They are still cheating, but everybody paints it out to be something extraordinary. I don't see anything amazing with this that we didn't see with the OpenAI agent in Dota.

    • @Butterkekskrumel
      @Butterkekskrumel 5 лет назад +30

      I read under DeepMind's video that the APM is human level, but all of AlphaStar's actions are meaningful. Human actions may be executed twice, so the meaningful APM level is much lower.

    • @YumekuiNeru
      @YumekuiNeru 5 лет назад +12

      @Gor1lla not all the actions are meaningful apparently as it learned a habit of spamming move-commands, but I hardly believe that accounts for a sizeable portion of its APM

    • @jonmichaelgalindo
      @jonmichaelgalindo 5 лет назад +95

      But, isn't this kind of relevant? At some level, being superhuman can't be cheating any more than TLO and MaNa being faster than me would be cheating. Factory robots have had superhuman servo-motor control, precision, and speed for decades.
      Leveling the playing field makes sense for a while, since you want to test the AI's strategic powers, but at some point you have to concede that the AI of the future will always be more precise, faster, and more aware than humans. It will eventually catch up in the strategy and common sense departments, but it will never lose those superpowers, not even on that inevitable day when an AI plays with some robot hands and a camera pointed at a monitor.

    • @idjles
      @idjles 5 лет назад +1

      Ironpencil the stalkers were all firing as a unit on another unit. I’m waiting to see stalkers fire individually - not waste a shot and blink out in perfection.

  • @roman2011
    @roman2011 5 лет назад +287

    These players aren't Koreans so the AI has the advantage right off the start

    • @AsJPlovE
      @AsJPlovE 5 лет назад +8

      Korean or not - AI ain't facing off the best in anything close to ideal conditions - aka normal rules humans play with.
      It's always gotta be some specific conditions humans are not allowed to engage with.
      Tbh it is pretty pathetic to try to boast winning against handicapped players.
      10/10 ILL ALWAYS WIN AT TAKING CANDY FROM A BABY

    • @Spyro_2076
      @Spyro_2076 5 лет назад +14

      @@AsJPlovE I think he was joking...

    • @banjoleb
      @banjoleb 5 лет назад +1

      @@AsJPlovE what handicaps are you talking about? Please I'd like to know

    • @AsJPlovE
      @AsJPlovE 5 лет назад +5

      @@banjoleb Forcing Protoss vs. Protoss on off-race players.

    • @Airdrifting
      @Airdrifting 5 лет назад +5

      @@AsJPlovE TLO was off-race, but MaNa mains Protoss. Get your facts straight.

  • @urielengelpiffer4201
    @urielengelpiffer4201 5 лет назад +469

    This is too disrespectful to TLO. They don't even mention he was playing off-race.

    • @Dubanx
      @Dubanx 5 лет назад +69

      Seriously. I watched the public release. TLO kept getting a solid advantage and losing it to minor mistakes and misclicks, and his actions were sluggish. He was certainly not bad, but a protoss main probably wouldn't have made those game ending mistakes.

    • @ysz10000
      @ysz10000 5 лет назад +4

      Get stats in here!

    • @poppyrider5541
      @poppyrider5541 5 лет назад +1

      @Drew G. Maru will be an old man (in the SC2 circle) before this AI would be able to beat him.

    • @bfyrth
      @bfyrth 5 лет назад +23

      @@poppyrider5541 he will be beaten within 6 months for sure, you are underestimating progress rate here

    • @poppyrider5541
      @poppyrider5541 5 лет назад +14

      @@bfyrth They have been at this for a couple of years and they managed to get it to play one map and one matchup, and they had to use different agents so the player couldn't adapt. And when the AI had to use the same view as humans, it couldn't figure out that there was a prism sitting outside its main. Forgive me for not digging a bunker and prepping for judgement day. ; p

  • @Soul-Burn
    @Soul-Burn 5 лет назад +148

    I want to see a match between DeepMind vs OpenAI on some game. Should be super interesting.

    • @LV-426...
      @LV-426... 5 лет назад +1

      ABSOLUTELY!!!

    • @busTedOaS
      @busTedOaS 5 лет назад +8

      AlphaStar 100 - 0 OpenAI, easily

    • @FMFvideos
      @FMFvideos 5 лет назад +25

      It will be cool until they decide to play together against humans

    • @AvatarSampai
      @AvatarSampai 5 лет назад +3

      would be funny if they introduce AIs in MOBAs to test the skill of players.
      So all those suckers that sit on their teammates to win won't get placed where they don't belong

    • @brnkis1984
      @brnkis1984 5 лет назад +1

      @GREG GMW not the same thing. you cannot see the type of unit/ building. and you cannot select the unit or building from the minimap... also, you have to click on the minimap or have a hotgroup or hotlocation to move the screen to do those things. it is a fucking really big deal. if the AI was able to optimize itself, then doing these operations, and being limited to 6 effective actions per second would be of little consequence.

  • @jjw238
    @jjw238 5 лет назад +531

    It is biased, a computer game will side with a computer.

    • @malefeministgiangilo287
      @malefeministgiangilo287 5 лет назад +5

      They had a human judge with final say, but there wasn't any impropriety. The games went down without any controversial moments where you would think "is that biased?" anyway.

    • @iamcool8835
      @iamcool8835 5 лет назад +2

      Lol

    • @e4Bc4Qf3Qf7
      @e4Bc4Qf3Qf7 5 лет назад +30

      malefeminist giangilo pretty sure thats a woosh

    • @bilibiliism
      @bilibiliism 4 года назад +4

      @@e4Bc4Qf3Qf7 but it actually is. Since the AI can interact with the game data directly, it has a huge advantage over a human who needs to use a monitor, keyboard and mouse.

    • @e4Bc4Qf3Qf7
      @e4Bc4Qf3Qf7 4 года назад +5

      ​@@bilibiliism​thats not the point of what he said. And there are limitations in place on the ai to reduce its ability to win with pure micro.

  • @a2jy2k
    @a2jy2k 5 лет назад +182

    Not once have you mentioned that TLO is a Zerg player being asked to play his worst race. For someone who isn't really into StarCraft you can sort of handwave that away, but it has such a significant effect on the game. Each of the 3 races plays so incredibly differently from the others. That being said, MaNa got bodied unfortunately, and that man does main Protoss.

    • @MisterL2_yt
      @MisterL2_yt 4 года назад +1

      yeah TLO's protoss was marked at like 5,4k mmr on the graph LOL

    • @Kairat_Tech
      @Kairat_Tech 4 года назад

      Anyone from grand master gets to grand master with any race. He was

    • @joaolemes8757
      @joaolemes8757 4 года назад +1

      It played serral's zerg

    • @mikesully110
      @mikesully110 2 года назад +2

      I know this is an old comment and this version of Alphastar could only play PvP, well for those who aren't aware last year a version of AlphaStar came out that could play P vs any race (originally they had just trained it PVP but now they have trained it PvZ and PvT), it has actually played games online under a pseudonym (there are various levels of alphastar, Diamond etc) and to my knowledge it has won basically 99% of the games it has played. They also did some changes like lowering max APM so it's less "cheesy", plus the AI cannot see the entire map all at once like the stock AI, it has to actively scout. Though I'm pretty sure the AI can "mentally process" where every unit it has vision on is, and what it is doing, and what its intentions likely are based on prior simulations - at all times during a game, this is an innate advantage a machine intelligence will have over any human due to the way it exists.
      I am sure the developers could train this AI to play TvZ, ZvP, whatever they want, however training the AI is a lot of work and I think the alphastar team have moved on to different things now, stuff to do with proteins, medical science etc.
      I wish they could release this AI if possible, I have no idea how feasible it would be (does it need a supercluster to run a game, or does it just need the supercluster for the training and the game itself can run on a laptop) ? I'd love to see how well this AI would do at simpler games such as C&C Red Alert 1 also.

  • @marquised7037
    @marquised7037 5 лет назад +53

    11:34 "weather prediction" means war strategy and mass surveillance for reference.

  • @MobileComputing
    @MobileComputing 5 лет назад +442

    I think the AI community should stop quoting the number of days of training, and instead provide the actual computational power used, hardware acceleration / vector instructions included. For all we know, AlphaStar could have been trained in two weeks, using the computational power of all Google data centers around the world. "Two weeks" is kind of a pointless reference. And I mean a comparable metric like FLOPS, _not_ 5 1080Ti, 2 laptops, 10 iPhones, 3 D-waves.

    • @bagourat
      @bagourat 5 лет назад +128

      That is why they usually give the human training-time equivalent. For AlphaStar it was about 200 years' worth of games. This metric gives you a good sense of training time and is independent of the processing-power-versus-time trade-off. This way it doesn't matter if it was trained for 100 years on a potato PC or 2 weeks with all of Google's power; 200 years of games is 200 years (about 5 to 10 million games played).
      Edit: It was referenced at about 1:40 actually.
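
      A rough back-of-envelope sketch of that conversion (the average game length, number of parallel workers, and per-worker speed-up below are illustrative assumptions, not figures from DeepMind):

```python
# Sketch: converting "200 years of in-game experience" into a game count
# and a wall-clock estimate. All constants here are illustrative guesses.
SECONDS_PER_YEAR = 365 * 24 * 3600

def games_played(experience_years: float, avg_game_minutes: float) -> float:
    """Games needed to accumulate the given amount of in-game experience."""
    return experience_years * SECONDS_PER_YEAR / (avg_game_minutes * 60)

def wall_clock_days(experience_years: float, parallel_agents: int, speedup: float) -> float:
    """Real-time days if `parallel_agents` copies each run `speedup`x faster than real time."""
    return experience_years * 365 / (parallel_agents * speedup)

print(f"{games_played(200, avg_game_minutes=15):,.0f} games")               # ~7,008,000
print(f"{wall_clock_days(200, parallel_agents=1000, speedup=5):.1f} days")  # ~14.6
```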

    • @SudhirPratapYadav
      @SudhirPratapYadav 5 лет назад +26

      @@bagourat Correct! 200 years of game time (real time), which could be done in 2 weeks or 2 months depending on computing power.

    • @AySz88
      @AySz88 5 лет назад +18

      No, that would generate some extremely misleading statistics. Consider a FLOP done "dumbly" on a GPU or TPU, where (for example) many operations may end up ultimately useless due to a ReLU (for NNs) or depth test (for graphics), vs. a FLOP done "intelligently" on a CPU after several conditionals ensure that the calculation is actually necessary (a tactic inefficient for GPUs and TPUs). Those are very much NOT comparable.

    • @polla2256
      @polla2256 5 лет назад +1

      To most people that would be meaningless. Instead, what about the number of generations and relative human hours based on above-average cognitive ability? A simple metric is fine without the detail, as long as it relates to something.

    • @sonatafrittata
      @sonatafrittata 5 лет назад +1

      Human training time is a much better measure! Training days is indeed meaningless.

  • @mildpass
    @mildpass 5 лет назад +35

    What I would really like to see from this AI is what is the lowest APM it needs to consistently beat human pros. Just keep throttling it until it starts losing (I know not really feasible because it will need to be retrained for every APM, but still...). I think it would be much more interesting to see how that thing plays than the micro god we have now.

    • @aminulhussain2277
      @aminulhussain2277 2 года назад

      It can play at any level of apm without retraining.

  • @busTedOaS
    @busTedOaS 5 лет назад +96

    2:53 "noting that ingame time is a little faster than real time" - No, it isn't. This was corrected four years ago with the release of the latest expansion.
    4:04 "the reaction time of the AI was set to 350 ms" - There is a delay, but it wasn't set. That was simply the round-trip time of the neural network running in realtime.
    5:58 "these agents can play any style in the game" - more accurately, almost any style of 1 of 3 races, on 1 of 7 maps. The agent in the 11th match demonstrated inability to defend warp prism harass.

    • @bagourat
      @bagourat 5 лет назад +8

      True. As for your last point, I feel like that AI was not trained enough. They said that they started from scratch and only had about 2 weeks to train it. I don't think it was as mature as the other iterations. I would love to see how the previous MaNa version would handle warp prism harass!
      Edit: The ultimate test for AlphaStar would be to battle a very weird player with uncommon strategies (like HAS). Then we could see how well it can adapt and be creative. The games against MaNa were very "standard".

    • @adrianbundy3249
      @adrianbundy3249 5 лет назад +2

      Each agent can only play one style though as well, and isn't dynamic to change it... They just happen to have 30 agents, of which they play 11 - so it has a different approach we saw each time. With a lot of practice vs them, I think that might even be semi-predictable.
      And yeah, 1/3 races on 1/7 maps. So to get a complete player, we would need to at least double the amount of agents so they are more dynamic, or figure out how to blend them intelligently. And also repeat the process 21 more times for each map and race - which is a good deal of time on one of them.
      But the AI reaction time? Irrelevant - it was hitting things with effective APM of 1512, and EPM that registered 1243 (and you can make the case that the APM should be EPM in the computers hand)... This here? This is 3x the speed I have ever seen any pro make on this chart ever, in an engagement - that simply isn't possible, is a bad strat, and didn't show us that the computer was any better than humans, short of mechanics, and definitely not smarter.... ibb.co/Tch3DVz

    • @joi1794
      @joi1794 5 лет назад

      @@adrianbundy3249 I wonder if they can't make a system which reinforces them to try out different tactics and see if they work. OpenAI did it in chess with Leela Chess, but I don't know how possible it is in SC2.
      The only other solution I could think of is taking all the styles of play that the AIs have (the different agents) and making them select a random (agent) style, so overall you would have an AI that can play multiple styles.

    • @stropheum
      @stropheum 5 лет назад +1

      1 of 3 races, in 1 of 6 matchups*

    • @stropheum
      @stropheum 5 лет назад +5

      @@bagourat it was also trained improperly. The fact that it can spike to 1500 perfect APM means it will always fall back on perfect control instead of making correct decisions. So it can afford to make objectively bad decisions and micro its way out of them, because no human can play that fast. That's not very impressive and not a good demonstration of learning imo. It needs to have its ability reeled back, and it needs to be re-trained with a wider sample of replays.

  • @MegaMark0000
    @MegaMark0000 5 лет назад +5

    It's not fair that it can see the whole map at once. The whole point of SC2 is that it is hard to macro and micro at the same time, because micro happens out on the map on the offensive and macro happens at the main base. They might as well be playing different games.

    • @Clawy111
      @Clawy111 5 лет назад

      There's different versions as they explain here ruclips.net/video/cUTMhmVh1qs/видео.html

  • @ForceOfWizardry
    @ForceOfWizardry 5 лет назад +35

    "I'm sorry Dave, I'm afraid I can't do that"
    -HAL 9000

  • @Drizzle52693
    @Drizzle52693 5 лет назад +1

    Starcraft player and machine learning enthusiast here. I’m not entirely sure how you calculated the APM cap but it needs to be tweaked. During the battles the AI was performing insane micro that could never be achieved by a human and it seems like that’s where the AI got its advantage

  • @TeknoKseno
    @TeknoKseno 5 лет назад +37

    Welcome to another episode of:
    "Why is this in my recommended, and why am I watching it".

    • @TwoMinutePapers
      @TwoMinutePapers  5 лет назад +13

      Welcome! :)

    • @ozzyintexas
      @ozzyintexas 5 лет назад +3

      haha me too, I don't even play games, but I watched the AI play the Korean bloke at Go. It was a pretty interesting doco..

    • @unlink1649
      @unlink1649 4 года назад +1

      Because it’s the kind of stuff you want to watch on RUclips instead of prank videos and beauty blogs.

  • @christophersimms9128
    @christophersimms9128 5 лет назад +2

    I've been working on my own AI that plays SC2 and I'm blown away at how much better deepminds has gotten. 200 years of gameplay is amazing.

  • @ny9988
    @ny9988 5 лет назад +5

    The AI has a huge advantage in this game because of its effectively unlimited ability to micro units strategically, i.e. porting out stalkers one at a time when their health is low is done seamlessly by an AI but requires almost 99% attention from a human player.

  • @jerrygreenest
    @jerrygreenest 3 года назад +1

    10:08 "it was asked to do something it was not designed for", - yeah, I agree on that. Though, on another side, it's the main problem with those AIs! In the end we want a general-purpose AI, while this AI is kind of "intermediate AI", - it's made not on top of strict algorithms but on top of neural networks, it's adaptive yet it's still hugely dependent on the task he's made for. "Weak neural network" is probably a good term for that. Maybe at some point we will have some strong neural network. And while on that path, we'll enjoy various weak neural networks yet to come, because they are still useful and sometimes just awesome!

  • @YumekuiNeru
    @YumekuiNeru 5 лет назад +134

    To be honest it seems like it learned the base strategy from the human games it was fed, and supplemented the rest with godlike unit micro that no human can compete against. So it "just" learned how to micro, rather than having good strategies. Just building more and more stalkers when it saw immortals makes it seem like it learned unit micro was all it ever needed.
    The criticisms of APM still sound handwaved away in that reddit post (much like they were during the games when the commentators mentioned it). The spikes to 1500 during stalker micro are pretty ridiculous and kind of make their efforts to "limit APM" feel more like a minimal-effort PR move than anything.
    I mean, it did still learn to do all of this, but I feel all of this is glossed over.

    • @ThePhoenix107
      @ThePhoenix107 5 лет назад +17

      I agree, but it is hard to create a reasonable APM barrier. You can't say that the AI can only do an action every x ms, as humans are also able to execute multiple actions extremely fast (e.g. muscle memory). You would have to build a whole system just to identify what actions should be worth what amount of time in a given situation, and I don't think that is worth it.
      It's hard to make the AI equally handicapped as humans, but I think it would be a good choice to limit the AI to something that cannot exceed human-level performance, i.e. limit the AI's APM to a level well below the human average.
      If they did this it would be really impressive for the AI to win a match against a pro player, as it would have to be outstanding in a macro and strategic way since it could no longer rely on micro.
      Combine this with the limited camera view of the last AI that MaNa played against and it will be way more interesting to see how the AI plays.

    • @Dirtfire
      @Dirtfire 5 лет назад +7

      @@ThePhoenix107 people aren't really complaining about APM, even if they think they are. They're complaining about the ability of AlphaStar to *ration* actions for use in certain intense situations.
      The rules were followed, and you can't blame AlphaStar for bending the rules in its favor. It's also kind of funny for someone to lose and then try to change the agreed-upon rules retroactively.

    • @trewq398
      @trewq398 5 лет назад +8

      It's still impressive; there are lots of AIs out there with unlimited APM but they can't beat humans. Even in these micro situations, it still needs a lot of decision making.

    • @0zo723
      @0zo723 5 лет назад +3

      Maybe 4X games like Civilization would be more appropriate

    • @ThePhoenix107
      @ThePhoenix107 5 лет назад +12

      @@Dirtfire
      People are complaining about the advantages of the AI. A human player can't ration his actions as he is not bound to some arbitrary APM rule but simply not able to go any faster.
      It is simply not as impressive for an AI to win by better micro instead of better strategic play because, for a computer, micro is much simpler to learn compared to strategy.
      As an extreme example tell a human pro player in a shooter game to win against an AI with an aim-bot. Humans are simply worse at mechanical precise tasks than computers.
      I'm not saying that it isn't impressive what Deepmind managed to do but it was still not a fair game.

  • @simovihinen875
    @simovihinen875 5 лет назад +1

    The in-game time in SC2 USED to be faster than real time but they adjusted it so they match now in Legacy of the Void.
    And as people may have pointed out, the AI doesn't do any "spam" actions to keep its fingers warm so this lowers its APM.

  • @markorezic3131
    @markorezic3131 5 лет назад +5

    0:19 IT'S TIME, TO D-D-D- DeepMind
    *_Yugi-oh theme plays_*

  • @Entropic0
    @Entropic0 3 года назад +1

    One of the biggest problems with the AlphaStar performance is that it was given millions of data points on how humans tend to play, while humans were given basically no information on how AlphaStar tends to play. This is a problem since AlphaStar was basically given an easier version of the SC2 experience. For a normal player, you have to play against other people and as you do you learn about what strategies are best and most common, and you can adapt your strategies to compensate. But, all the other players are doing the same thing to you, so who is best at the game depends on who can stay 1 step ahead at picking their strategies. AlphaStar was shielded from this. It was given endless data on how humans tend to play, allowing it to adapt to humans, but humans weren't given any data on how it played, so they couldn't adapt to it. Adapting your strategies to exploit the thinking of your opponent is probably 9/10ths of the strategy of the game. AlphaStar never had to participate in this cyclic adaptation of strategy.

  • @0dWHOHWb0
    @0dWHOHWb0 5 лет назад +5

    What, no Serral?
    I was almost ready to head out to the town square

    • @0dWHOHWb0
      @0dWHOHWb0 5 лет назад +3

      jk I knew this AI would have no chance against him

    • @christophersimms9128
      @christophersimms9128 5 лет назад

      @@0dWHOHWb0 Can't wait until that changes. We'll have the next Alphago vs Lee Sedol on our hands.

    • @Shmellix
      @Shmellix 5 лет назад +2

      Serral is a deep mind bot

  • @daledael5263
    @daledael5263 4 года назад +1

    The moment alpha star units see information, the information is readily available for processing. Humans have to pan the camera, hit the keyboard and mouse. Even though humans lose, humanity still wins. Imagine wars fought using bot soldiers that have God-like vision. The moment one bot sees information, that information is readily available. It is scary how bots have the potential to take over policing or warfare.

  • @sagrel
    @sagrel 5 лет назад +96

    I would love to see an AI that can play something like LoL or Dota 2, but instead of a single AI controlling all the characters, each character would be controlled by a different AI and they would have to communicate by the means the game provides (in LoL it would be pings, like "on my way" or "enemy missing"). The idea of AIs working and communicating with other AIs seems promising to me

    • @ProfessorRS
      @ProfessorRS 5 лет назад +11

      I really want to see this also!!

    • @LV-426...
      @LV-426... 5 лет назад +21

      Isn't this how OpenAI 5 operated?

    • @kalterkrieg3
      @kalterkrieg3 5 лет назад +24

      Actually I think that is how OpenAI 5 works for dota 2

    • @sagrel
      @sagrel 5 лет назад +4

      @@LV-426... As far as I know, OpenAI used only one AI that controlled the 5 members of the team at the same time and had access to the information of all 5 characters. What I propose is 5 independent AI agents, each one having access to the information that a normal person would see, so to work as a team the different AI agents would have to communicate and share their intentions with the other agents. Maybe I'm wrong and that's what OpenAI did, but I don't think so.

    • @LV-426...
      @LV-426... 5 лет назад +31

      @@sagrel
      Look what they are saying on their website:
      1. "Each of OpenAI Five’s networks contain a single-layer, 1024-unit LSTM that sees the current game state (extracted from Valve’s Bot API) and emits actions through several possible action heads."
      2. "OpenAI Five does not contain an explicit communication channel between the heroes’ neural networks. Teamwork is controlled by a hyperparameter we dubbed “team spirit”. Team spirit ranges from 0 to 1, putting a weight on how much each of OpenAI Five’s heroes should care about its individual reward function versus the average of the team’s reward functions."
      Please read their paper here: blog.openai.com/openai-five/
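
      The "team spirit" blending described in point 2 can be sketched as a simple interpolation between each hero's own reward and the team's average reward. This is a minimal illustration of the quoted description; the function and variable names are my own, not OpenAI's.

```python
import numpy as np

def blend_rewards(individual_rewards: np.ndarray, team_spirit: float) -> np.ndarray:
    """Shape each hero's reward with the 'team spirit' weight in [0, 1].

    team_spirit = 0 -> each hero optimizes only its own reward.
    team_spirit = 1 -> each hero optimizes the team's average reward.
    """
    team_average = individual_rewards.mean()
    return (1.0 - team_spirit) * individual_rewards + team_spirit * team_average

# Five heroes; hero 0 just earned a big reward.
rewards = np.array([10.0, 0.0, 0.0, 0.0, 0.0])
print(blend_rewards(rewards, team_spirit=0.0))  # [10.  0.  0.  0.  0.]
print(blend_rewards(rewards, team_spirit=1.0))  # [2. 2. 2. 2. 2.]
```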

  • @arjanbal3972
    @arjanbal3972 5 лет назад +1

    From what I can see in the comments, most people seem to be complaining about setting a limit only on the average APM, and that seems like a fair complaint. The AI could save up some actions and use them to micromanage its units with god-like precision. They should have limited the max APM too, to keep a level playing field.
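
    The difference this comment points at can be illustrated with a sliding-window action cap: a cap on the *average* APM over a long window still permits short bursts, while the same average rate enforced over a short window does not. The window lengths and limits below are made-up numbers for illustration, not AlphaStar's actual settings.

```python
from collections import deque

class ApmThrottle:
    """Allow at most `max_actions` actions inside any sliding time window."""
    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.times = deque()  # timestamps of actions still inside the window

    def allow(self, now: float) -> bool:
        while self.times and now - self.times[0] > self.window:
            self.times.popleft()          # forget actions that left the window
        if len(self.times) < self.max_actions:
            self.times.append(now)
            return True
        return False

# Same average rate (300 APM), very different burst behaviour:
loose = ApmThrottle(max_actions=300, window_seconds=60.0)  # 300 per 60 s
tight = ApmThrottle(max_actions=25, window_seconds=5.0)    # 25 per 5 s
print(sum(loose.allow(i * 0.01) for i in range(500)))  # 300 actions fit in a 5 s burst
print(sum(tight.allow(i * 0.01) for i in range(500)))  # only 25 fit in the same burst
```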

  • @thirtyonefifty3133
    @thirtyonefifty3133 4 года назад +6

    When Skynet figures out that only 1% of people in the world is smart so it kills off those 1% and enslaves the rest of the population to do its biddings.

  • @LordyHun
    @LordyHun 5 лет назад +1

    I've seen a few matches of AlphaStar. I've seen it winning against an anti-Stalker army by perfectly micro-managing about three dozen stalkers which were attacking the other army from three different directions (with over 1500 APM at that moment). Not humanly possible, but still an amazing sight.
    And for the people who complain about this: it's not a human, it's an AI. There are things it can do immensely better than humans, and it's intelligent enough to build on its strengths rather than its weaknesses.

  • @ImrazorZodd
    @ImrazorZodd 5 лет назад +11

    I get it, this is an insane achievement, but could you undersell the advantages the AI had any more? Describing them as "technically" and "slight" advantages is insulting. The fact the pros put up as much of a fight as they did is admirable.

  • @seanwalker1548
    @seanwalker1548 5 лет назад +2

    wonderful video!! first one of yours I've seen but really great coverage of the content and thanks for all the resources. definitely subscribing and can't wait for more!

  • @fathybalamita1537
    @fathybalamita1537 5 лет назад +6

    Hopefully, one day AI will run governments better than humans. Thanks for the video.

    • @burt591
      @burt591 5 лет назад +1

      Skynet for example...

  • @Werdna12345
    @Werdna12345 5 лет назад +1

    Don’t know why they would bother with adding a reaction delay if they didn’t limit the action per minute and the what it could see at one time like a human. I understand that it can’t see they the fog of war but the computer essentially had extra screens to see where all of its units where at instantly.
    I’m curious what hardware it was running on.

  • @eposnix5223
    @eposnix5223 5 лет назад +96

    People complaining about the AI's ability to 'cheat' are missing the point entirely. Prior to DeepMind there had been no AI able to beat a high-level pro, even with tens of thousands of APM and perfect awareness. DeepMind has shown that they accomplished the hard part: getting the algorithms right. Everything from here is just fine-tuning.

    • @sergrojGrayFace
      @sergrojGrayFace 5 лет назад +1

      Can you back this up? Did you actually see anybody attempt to make an unbeatable AI? Normally it's simply never a goal.

    • @eposnix5223
      @eposnix5223 5 лет назад +7

      @@sergrojGrayFace There's a community that has been trying to solve StarCraft AI for a while over at: www.reddit.com/r/sc2ai/

    • @erylkenner8045
      @erylkenner8045 5 лет назад +13

      @@sergrojGrayFace As someone currently working in AI research, I can tell you that YES, people's main goal for a while has been simply to beat the pros. Starcraft is a complex enough game that simply surviving early cheese strategies or making decent pushes is a monumental task for an AI. If ANYONE could have beaten a pro before this with their AI, they would have done it without reservation and showcased it to the world. The reason it hasn't been done until now is because believe it or not it is incredibly difficult, and people have been trying to do it but haven't gotten it to work well enough before now.

    • @adelatorremothelet
      @adelatorremothelet 5 лет назад +3

      Oh , that's so lame.
      Then just slow the game by a factor of 10 to get rid of all the advantage of the AI . Let's see if at an adjusted rate of 70 clicks per minute the AI advantage holds.

    • @tomfillot5453
      @tomfillot5453 5 лет назад +4

      With unlimited APM and map awareness, there have been quite a few AIs above human level. Some units (marines, stalkers) have almost unlimited potential if you can manage them perfectly. With perfect marine micro, Terran is completely unstoppable if you are a Zerg.
      Although I basically agree with you: the hard work is done. AlphaStar behaves very differently from an unbeatable micro machine and shows intent and tactical understanding during fights.

  • @solosailorsv8065
    @solosailorsv8065 2 года назад

    It's the near-general-purpose logic blocks being developed that are understated so often.
    This "game" can be used to teach, by virtue of a huge database of instances.
    Those developed logic blocks can then be repurposed to solve real issues.
    An obvious real-world application would be crowd dispersal, for one example.
    But then the discussion would switch from the programming advances to a pointless emotional fog... Game on!

  • @youtoober2013
    @youtoober2013 4 года назад +5

    Me watching this video about a year after it was uploaded: Oh shit.

  • @them4309
    @them4309 3 года назад +2

    "weather prediction and climate modeling"
    it's always the same line.
    are we seriously to believe that the military is not watching this VERY closely and perhaps spawning bots of their own?
    a machine that can multi-task, make extremely complicated, split-second, life-dependent decisions, and learn by experience at a rate exponentially faster than any human ever could.
    the applications to battle tactics and general war strategy are endless.

    • @glenneric1
      @glenneric1 2 года назад

      Yeah it's creepy/scary that we are teaching these machines to beat humans at games rather than cure cancer.

  • @youngincho9500
    @youngincho9500 5 лет назад +4

    DeepMind’s AlphaStar Beats Humans 10-0
    Korean Player: Hold my beer.

  • @_DarkEmperor
    @_DarkEmperor 5 лет назад

    MaNa in game 5 created an "armored fist" and had a very good composition of units (a very good asymmetric counter to the AI's composition), but attacked too early with too small a force and was surrounded and overwhelmed.
    In game 6 he did the same thing, but did not attack too early; this time he waited and attacked with a huge force - win.

  • @Dungeontai
    @Dungeontai 5 лет назад +31

    Nobody seems to know that the AI was able to see the entire map zoomed out, which human players can't. The one agent that played from a human perspective lost the game!

    • @AA-iq6ev
      @AA-iq6ev 5 лет назад +3

      However that one was not trained that long

    • @AvatarSampai
      @AvatarSampai 5 лет назад +6

      That's not how AI works buddy, they trained the agent a certain way and introduced a new variable it wasn't programmed for.

    • @charlie6208
      @charlie6208 5 лет назад +1

      Dungeontai at least there is someone who understands how AI works. People like to be brain washed

    • @AA-iq6ev
      @AA-iq6ev 5 лет назад +1

      @@AvatarSampai According to the DeepMind blog, the one using the camera was a new agent trained for 7 days. The one with the raw interface that beat MaNa was trained for 14 days.

    • @uristits
      @uristits 5 лет назад

      thats literally in the video...

  • @mikejones-vd3fg
    @mikejones-vd3fg 5 лет назад +1

    Good job by the player. The equivalent would be like going up against an aimbot in a first-person shooter: reaction time doesn't matter if the aim is perfect, you have no chance! Although in a strategy game you'd hope strategy could win out, this just shows the game boils down to twitch movement, which is a pretty cool way to check whether your game is strategic enough, I suppose, and maybe it will make games more interesting and deeply complex in the future as a result.

  • @6Twisted
    @6Twisted 5 лет назад +3

    I can't wait until this kind of AI starts becoming a standard part of games.

    • @abyssstrider2547
      @abyssstrider2547 5 лет назад

      It won't, the AI (especially in newer games) is very dumbed down on purpose

  • @Shadowkyuubi812
    @Shadowkyuubi812 5 лет назад +1

    You can tell that there is sort of an unfair advantage in favor of the AI. It comes back to a discussion about APM vs meaningful actions. You can tell because of how the AI takes the early advantage in almost every game, when micromanagement is more important. It is able to make decisions and give directions faster than any human could. In the video, I only really saw one game go into what is generally considered "late game" (not the exhibition match at the end), but even then, the human was purely on the defensive and was overrun. I would be more interested to see a game with comparable "meaningful APM" between the human and AI that goes into the late game.

  • @ReMeDy_TV
    @ReMeDy_TV 5 лет назад +5

    Neither player is Korean, so these results are deeply flawed. I understand it must be a Protoss, so make it a Protoss Korean main, and ensure he's a native Korean who's fluent in Korean who's only lived in Korea. Yes, you'll need a translator.

  • @brett2themax
    @brett2themax 2 года назад

    It's an assumption behind the reasoning for building more workers. Income increases, though at a diminishing rate, up to 24 workers on the mineral patches of a standard SC2 ladder-match base.
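
    A rough sketch of that saturation curve for one base with 8 mineral patches: the first two workers per patch mine at full rate, a third adds only partial income, and anything past 24 adds nothing. The per-worker income figures are illustrative assumptions, not exact game values.

```python
MINERAL_PATCHES = 8
FULL_RATE = 60     # assumed minerals/min for the 1st and 2nd worker on a patch
PARTIAL_RATE = 25  # assumed minerals/min for the 3rd worker on a patch

def income_per_minute(workers: int) -> int:
    full = min(workers, 2 * MINERAL_PATCHES)                               # up to 16 at full rate
    partial = min(max(workers - 2 * MINERAL_PATCHES, 0), MINERAL_PATCHES)  # workers 17..24
    return full * FULL_RATE + partial * PARTIAL_RATE                       # workers past 24 add 0

for w in (12, 16, 20, 24, 28):
    print(w, income_per_minute(w))   # 720, 960, 1060, 1160, 1160
```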

  • @stepananokhin693
    @stepananokhin693 5 лет назад +22

    I have conflicted feelings seeing how easily the AI beats humans in such games... games which resemble military actions in the real world.

    • @aaronfkckcjc6910
      @aaronfkckcjc6910 4 года назад +5

      not really a fair comparison in this case...alphastar won so consistently by having insanely high reflexes and leaning heavily on a style that supports it. In a real world scenario, that's a targeting system, which we already use computers for. They did amend the experiment after this to limit alphastar to human-like reflexes and it started losing quite a bit.

    • @GodlyAtheist
      @GodlyAtheist 2 года назад

      AI won't destroy humanity, at least not for a long time and even that is iffy... But it's a shame it won't do it tomorrow because we all deserve it, and if you disagree you are wealthy and enjoying the suffering of the majority (the poor).

  • @smokingbeetles5793
    @smokingbeetles5793 3 года назад +1

    The AI is literally micromanaging economy and warfare at the same time. Tactically, each unit is independently controlled, while humans select units in groups. APM for an AI is unbeatable.

  • @briannielsen2002
    @briannielsen2002 5 лет назад +6

    200 years, LMAO.. That's insane: I literally LOLed out Loud

    • @busTedOaS
      @busTedOaS 5 лет назад +5

      rofl'ing on the floor laughing, lmao'ing my ass off while lol'ing out loud right now.

    • @ryanalving3785
      @ryanalving3785 5 лет назад

      If they get a human that did nothing but practice starcraft for 200 years to play it, at identical reaction speeds, then I would be impressed. It had more than a century extra experience, no need to move the camera around, and could act with a precision that is biologically impossible via mouse and keyboard; not to mention double or triple the speed of a human in a pinch.
      Give it a robotic body comparable to a human, sit it in front of a computer; and have it play the game in real time to train. If it wins, it wins fair; if it loses, not up to snuff.
      Or better yet, give it a digital human form, and a virtual room with a virtual computer to train/play on. I'd wager a human player whose brain was plugged into an identical interface to the one used by the AI would be able to outmaneuver it.

  • @thirien59
    @thirien59 4 года назад

    Thanks for taking the time to give an in-depth explanation of every paper.

  • @aurawolf664
    @aurawolf664 5 лет назад +37

    At certain points, the AI would simply eliminate its own military units it deemed a waste of supply space... Let that sink in...

    • @YumekuiNeru
      @YumekuiNeru 5 лет назад +41

      humans do that too

    • @qolio
      @qolio 5 лет назад +3

      Yep, scary.

    • @neohashi3396
      @neohashi3396 5 лет назад +7

      The fun part was when it culled its units only to use the freed-up supply to build the exact same units. The AI is quite good but somewhat noisy in its decisions.

    • @aurawolf664
      @aurawolf664 5 лет назад

      Tru tru

    • @ED-TwoZeroNine
      @ED-TwoZeroNine 5 лет назад

      It should have killed probes, silly ai

  • @GA-gw6vj
    @GA-gw6vj 5 лет назад +1

    The problem is that playing mass Stalker might be a dominant strategy in Protoss vs. Protoss given the superior unit micro. Meanwhile, in other matchups, for example Terran vs Protoss, I think AlphaStar would fail playing against tanks, liberators and widow mines, because you kind of have to adjust to the opponent's strategy. There is no simple strategy that is dominant or relatively solid against every other strategy. AlphaStar does not really care what the opponent builds, and in other matchups it has to.

  • @piept4651
    @piept4651 5 лет назад +3

    2 minute papers, more like 13 minute deep reinforcement learning

  • @angelic8632002
    @angelic8632002 5 лет назад +2

    What we are seeing here is an exponential increase in intelligence. If this can be leveraged and reproduced over and over, AI complexity will skyrocket over the next few years.
    Of course it's going to take time before societies and cultures adapt and change to accommodate this new paradigm, but this could potentially be the starting point for a whole new chapter in human history, much like mathematics, agriculture, language and metallurgy were in the past.

  • @TTBOn00bKiLleR
    @TTBOn00bKiLleR 5 лет назад +3

    The only thing I see here is that AlphaStar lost under the same circumstances a human plays under.

    • @kuurozen1
      @kuurozen1 5 лет назад +2

      Yeah not to mention keeping all of its other advantages, just getting rid of the Sauron eye was enough

  • @331tnt9
    @331tnt9 3 года назад +1

    @Two Minute Papers at that point, the AI's APM limitations were very wacky. The rule was that AlphaStar could only do a certain number of actions within a minute, and the AI found that if it stored up APM before a battle, it could output APM at a broken speed (it peaked at over 1k APM), not to mention the AI's actions are 90%+ EAPM. The AI microed 3 armies perfectly, which is humanly impossible. Serral (the best player in the world rn) can only barely micro 3 armies, or micro 2 armies well, at the same time. That's how much APM the AI has, which is unfair (a professional player only has around 300-500 APM, and 20-50% of their actions can be useless, depending on how long the game has gone on).

  • @kaaditya1
    @kaaditya1 5 лет назад +33

    Singularity is approaching.

    • @broke1986
      @broke1986 5 лет назад +6

      There's still a looooong way yet

    • @erikziak1249
      @erikziak1249 5 лет назад +5

      We are still pretty far away. Like at least 6 years, by my estimate.

    • @antoniushe
      @antoniushe 5 лет назад +1

      @@erikziak1249 elaborate how there should be something like a technical (?) singularity in six years

    • @erikziak1249
      @erikziak1249 5 лет назад +5

      This is just my best-case scenario. I am quite skeptical about a "singularity". Currently all AI research is going not down the path of singularity. The key thing is that if a singularity "emerges" (substitute any interesting-sounding word), the essential part is that nobody, not even the "agent", will be able to tell HOW it emerged, how it gained consciousness. Do you know where your own consciousness and intelligence come from? I have a hypothesis for that. But I might be wrong. I expect the following scenario: super-human-level AI, but not conscious. It will be ignorant of the world around it and of itself as well. That is the only AI we can "manage control of". And when designed in this way, it might show certain signs of "consciousness", but it will be really dumb. And it will not even grasp what "dumb" means when applied to itself. I personally disagree with the term AI. Why should there be a difference between natural intelligence and artificial intelligence? They are both intelligence. We as humans need to step back, zoom out and take a fresh look at the universe. Most AI researchers agree that conscious AI agents should not have "human rights". I agree. They should have "conscious intelligent beings' rights" instead. Throughout history, we thought of ourselves as something special, even when proved wrong again and again. We still see ourselves as something "better" than AI. We are "general". OK... I disagree with that. E.g. you have a child. Do you know WHY it behaves like it behaves, what forms its values, what to expect of it? Yet you are not 100% sure what it will do in a certain situation, so you keep telling it what the right thing to do is. This is the same approach a general and conscious AI needs. And it will not be able to give any useful feedback on why it made decision X or Y when confronted with a specific situation. Neither can we tell that about another human being. As long as we do not start to treat ourselves as mere machines (which we are), without free will (which we do not have), we cannot fully comprehend any so-called "artificial" intelligence, because we are blinded by our cognitive biases. We are far from perfect - not only in "construction", but also, and even more so, in "reasoning". We are bags full of cognitive garbage, which explains why not only single people but also large groups of people behave like idiots. Because we are inherently flawed. And so is intelligence and consciousness. Even if they are orthogonal.

    • @kaaditya1
      @kaaditya1 5 лет назад +2

      @@erikziak1249 >6 years
      Far away? I'm only saying we are on the path to singularity, but this achievement is a pretty solid argument that we will be approaching singularity

  • @Stupor01
    @Stupor01 3 года назад

    To be honest, the fact that the commentators made that remark on the composition of the army... the AI was controlling like 14 disruptors simultaneously while also microing other units. That's nutty.

  • @MichaOrynicz
    @MichaOrynicz 5 лет назад +5

    I think a very important point about the game won by MaNa is that he was able to exploit buggy behaviour of AlphaStar, where the AI got stuck in a loop moving units from one place to another.

  • @MobyMotion
    @MobyMotion 5 лет назад

    Amazing work, yet again. Also, some really interesting comments on this video from experienced StarCraft players- can't wait for these to be implemented in the next iteration of this algorithm. The rate of progress hasn't just been astounding, it's been exponential. It makes me wonder how much further these models have to go, before they can tackle meaningful real-life issues like scientific research.
    One thing that is interesting, however, is that this took 200 years of training in total. That's much more than any human player, and could be related to a point you mentioned in an earlier video about prior information that people already have.

  • @snaileri
    @snaileri 5 лет назад +4

    This is one of my favorite channels

  • @zheega2184
    @zheega2184 5 лет назад +1

    This version of the AI was slightly "cheating" (by being able to see the entire map at the same time, not restricted to seeing just one screen at a time). The fixed version of the AI lost. Also, the player who won that one match seems to have adapted to how the AI plays after seeing those 10 games.

  • @SirajRaval
    @SirajRaval 5 лет назад +10

    Awesome work Károly! Thank you for doing this. I also want to note that I don't agree with the comment at 6:02, "This is terrifying". It's exciting if we think about it in the context of using it to solve our hardest problems (pollution, major diseases, poverty, etc.). It can potentially come up with novel ways of doing things from massive datasets.

    • @rainerwahnsinn9585
      @rainerwahnsinn9585 3 года назад +2

      Yes, if solving the "problem" does not mean "kill 50% of all humans".
      AI can be great, or like Skynet, or first great and later Skynet (evolution).
      In fact it will not be possible to let AI run for a long time, because there will always be the next step toward Skynet.
      Even if it is only a hacker who puts into its "thinking" that the AI is an ego

    • @randomzergling3661
      @randomzergling3661 2 года назад

      @@rainerwahnsinn9585 the time has come we kinda need that at this point

    • @rainerwahnsinn9585
      @rainerwahnsinn9585 2 года назад

      @@randomzergling3661 Yes, but Skynet would kill FFF etc first,because they are young

  • @Ilikeurmominminiskirt
    @Ilikeurmominminiskirt 5 лет назад

    Connectivity, management, micro actions and response time are the winning game for anything in this world

  • @kinngrimm
    @kinngrimm 5 лет назад +3

    I'd like to see deepmind take a shot at a Grand Strategy RTS .

    • @Solaristist
      @Solaristist 5 лет назад

      kinn grimm I agree. Europa Universalis 4 is far more complex than SC2.

    • @busTedOaS
      @busTedOaS 5 лет назад +2

      @@Solaristist There is more than one way in which a game can be complex. Hidden information and mind games are a core element of SC2, more so than most "world-map"-type strategy games. From what I've seen, these games play out more like a big board game, which DeepMind already demonstrated to be good at with AlphaGo. SC2 is often compared to a real time version of chess and poker combined.

    • @Solaristist
      @Solaristist 5 лет назад

      busTedOaS EU4 is also all about hidden information and mind games, e.g. multi dimensional diplomatic networks and shifting alliances; complete FoW . Go and Chess are like Tic-tac-toe compared to EU4 and neither are real time or multiplayer like it is. The sheer number of degrees of freedom, the complex diplomatic modelling and the extremely long game time compared to anything attempted by AI developers so far would make it the closest thing to a real world challenge.

    • @kinngrimm
      @kinngrimm 5 лет назад

      @@busTedOaS A while ago I suggested on the Paradox forum for Stellaris that Paradox should take advantage of the offer back then from DeepMind's developers to adapt the deep learning approach to other games. I narrowed it down to the fighting aspects, which are basically the same as SC2 just with one more dimension, or to the planet development stages, or other aspects, handling them individually, independent from the other AI aspects Paradox already uses. I was talked down pretty hard over distinct problems they would have with that, up to being bluntly told I don't have a clue, which I partly really don't, as I am not an AI developer, just a programmer who saw a chance for an evolution in AI programming. I don't know if meanwhile they have played around with the library kit which is publicly available, or if they just ignore this progress. I would think there now comes a need to try and implement this, otherwise other companies will be far ahead of them in this area. For a game though, it is understandable that the goal cannot always be an unbeatable AI; I just want one which is limited similarly to how we humans are, but would find new tactics and strategies from which we could then learn, and which would give us a challenge to beat in-game. Current game AIs much too often regulate difficulty just by giving the AI more resources, which is often boring to fight against, while having an AI which outperforms us because it is better at "clicking faster", so to speak, is not much better. Hearing here how this AI finds new approaches and new compositions of forces is thrilling to me and keeps my hope alive of getting better AIs in the end, which would make grand strategy games more of a challenge, like playing against other humans.

    • @busTedOaS
      @busTedOaS 5 лет назад

      @@Solaristist I think you're confusing size and complexity here. More options don't necessarily make a game more complex. Think of Go - there is exactly one type of move, setting a stone, creating a complex game with far more game states than particles in the universe.
      More degrees of freedom do not mean more complexity. It is possible (and I would say likely) that the optimal strategy in such games is much, MUCH simpler than the sheer number of choices would make you think. Those choices exist for the most part to make the game more appealing to you. Would you consider yourself to have a solid grasp of all the features of EU4? If so, why do you think this would be hard for an AI, given a few hundred years of training?
      Game length has nothing to do with complexity, since AIs don't get tired.
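
As a rough sanity check on the "more game states than particles in the universe" claim above, here is a one-line upper bound for Go; the legal-position count is smaller (around 2.1e170 by Tromp's computation), but still dwarfs the usual ~1e80 estimate for atoms in the observable universe:

```python
# Upper bound on 19x19 Go board configurations: each of the 361 points is
# empty, black, or white. Not every configuration is a legal position, but
# even the legal count (~2.1e170) exceeds the ~1e80 atoms usually estimated
# for the observable universe.

configurations = 3 ** (19 * 19)
atoms_estimate = 10 ** 80

print(f"3^361 has {len(str(configurations))} digits")   # 173 digits, i.e. ~1.7e172
print(configurations > atoms_estimate ** 2)             # True, with room to spare
```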

  • @ComixConsumed
    @ComixConsumed 4 years ago

    I would be very interested to see what would happen if you put an OpenAI agent or two on the human team

  • @aharmlesspie
    @aharmlesspie 5 years ago +14

    They should have a Twitch channel of just this AI doing whatever it's been training to do. I'd watch that, and I bet they'd make some serious cash.

    • @rkan2
      @rkan2 5 years ago +2

      aharmlesspie Looking at the office and the type of people, I think the yearly budget is somewhere around 30-50 million just for that team itself... Twitch income wouldn't even make a dent

    • @aharmlesspie
      @aharmlesspie 5 years ago

      @@rkan2 then donate it!

    • @abcdxx1059
      @abcdxx1059 5 years ago +1

      @@aharmlesspie like they don't have anything else to do

    • @aharmlesspie
      @aharmlesspie 5 years ago +1

      @@abcdxx1059 good heavens, I just think it's interesting and I'd watch it, what is up with the downer twins here?

    • @SETHthegodofchaos
      @SETHthegodofchaos 5 years ago +1

      @@aharmlesspie Probably not that interesting in the end, or worth the upkeep cost for the PR bump. Especially since the training games are run at high speed, not in real time. Sure, it would be nice to see how it evolves, but I don't think showing bad results is in their interest. Maybe in a few years, when their research starts paying off and they feel more secure about sharing (just like SpaceX did).

  • @Aelistenus
    @Aelistenus 4 years ago

    The biggest thing you're missing here is that when the DeepMind team restricted AlphaStar to the camera view, it lost. The zoomed-out view of the whole map is a giant advantage for the computer.

  • @powerzx
    @powerzx 5 years ago +24

    The AI played well and it was a big step forward, but it won matches thanks to CHEATS :).
    It was controlling each unit individually at the same time; you can easily see when it manages more than one unit or group of units at once. During some fights it was like 3 players vs 1 (as if the human player was surrounded by many bots, each controlling one of those groups of units). Almost perfect macro, the whole map visible, controlling many groups of units at the same time, spikes in actions per minute (1600+, an inhuman speed) = cheating.

    • @gosseuffe
      @gosseuffe 3 years ago +2

      I agree completely! Not fair. It should also have to use a mouse and keyboard to play. Or sync a human brain to the computer 💻 and then play. 😉

  • @570lucas
    @570lucas 5 years ago

    It's also important to note that each game wasn't played against the same version of the AI, but against different versions of it, even though they were developed with the same algorithms

  • @CriticalPosthumanism
    @CriticalPosthumanism 5 years ago +9

    Well, I would like to see an AI that has the same limits as a human but wins with better strategy.
    The matches in this video aren't really fair. No human can play this level of micromanagement.
    Also, a human has to constantly watch the minimap and drag the camera to the action and back to base. The AI doesn't have to do that.
    And in the end, when the AI had to move the camera, it lost.
    So make it more human-like in terms of reflexes and such, and let it win with pure wisdom.

  • @freshfruit213
    @freshfruit213 5 years ago

    "MaNa over-exaggerated his construction of probes, which he had learned from AlphaStar - isn't that amazing?"
    Damn right it is!!!!! One of the best has become better because of the intuition of an A.I. What a time we live in.

  • @TMSxYouT
    @TMSxYouT 5 years ago +10

    Whatever "cheating" may have taken place, and regardless of who won, the really astonishing and important thing in all this is that the AI comes up with very unusual strategies that humans never even dare to try or think of. I believe this is the most important thing; this is what makes it "smarter" than us. The same thing happened with Go: it just astonished hardcore players.
    Everything else is just adjustable details/issues.

    • @mikejones-vd3fg
      @mikejones-vd3fg 5 years ago +1

      As much as I agree, I think the strategy part is being overplayed here. It did do some unconventional things, but I think the real difference was the precision movement of units: moving them to the right spot at the right time at the right distance. That's like having an aimbot in an FPS: so what if its reaction time to put the crosshairs on you is slower, when it's accurate every time? So as cool as I think these AIs are, I would really like to see its strategy beat a human, outwit them, not out-twitch them, and I don't think this game is capable of that. It's going to take a game designed for that, which these AIs destroying these games might bring about, forcing developers to change the design for the better. It's like the AI is telling us we're not playing chess here, we're playing checkers.

    • @addanametocontinue
      @addanametocontinue 5 years ago

      Agreed. A lot of people are nitpicking any advantages the AI was inadvertently given, forgetting that the purpose of the AI is to do greater things than to be good at Starcraft. Starcraft was just used as a learning vehicle and to showcase it.

    • @SaithMasu12
      @SaithMasu12 5 years ago

      An AI is not smart. It's adding and nullifying. It's just a huge database.
      It just calculates so quickly that it seems as if it is acting with some sort of intelligence. The key word here is "seems".
      Example: the AI loses some workers early on. The AI cannot "think" that early scouting could prevent that loss. Instead it goes for the simple solution, which is adding extra workers beforehand to compensate.
      It doesn't know "why" something is happening; it simply calculates which action counters another action.
      It's raw data. It isn't adaptive. It just seems "adaptive" because the data is so big.

  • @mccloud35
    @mccloud35 5 years ago

    I wanted to know more tech details from 2MP. What's the architecture, what are the training params, what worked/didn't work, etc. This was basically just showing the same content as the Deepmind video.

  • @M0rn1n6St4r
    @M0rn1n6St4r 5 years ago +9

    In the human corner... 20-year-old MIKE TYSON! In the AI robotic corner... an 8-armed, 5-legged, and 3-headed "TERMINATOR". But, to make it fair... the TERMINATOR will have 6 of its arms, 3 of its legs, and 2 of its heads... disconnected. smh
    That oughta tell us where we REALLY stand against the machines! You know another type of "unfair" fighting? Boxer vs. Battleship... especially, if the "ring" is at sea. :-)

    • @fanciestbanana4653
      @fanciestbanana4653 5 years ago

      I think that for the researchers it's more interesting to develop an AI that wins the game through superior strategy, not mechanical superiority. That's why they limit the AI so much. Otherwise you would have perfect single-unit micro that's completely unfair.

    • @jackshiznit69
      @jackshiznit69 5 years ago +1

      Not quite... What they are limiting is that the AI is inherently software directly interfaced to the computer, and the game is also software. As software, the AI has to be given the game state, since it lacks eyes and fingers. One limitation it was given was that it did not receive the full game state: the parts of the game a human cannot see with their eyeballs. Another limitation is essentially the time it takes for a human to see something and send a command to their hand to press a button, i.e. reaction time. In this area computers have always been faster. So your analogy is somewhat correct for the second limitation, but not the first. In the context of the goal of this test, your analogy fails. The goal is to make the AI and the human equal in all areas except one, imagination, and test the AI's ability to imagine strategies that trump humans. It probably succeeded.
      The real goal here is to build AIs that out-imagine humans in general, i.e. in any game. This goal is further away, but probably closer than most think. So fairly soon people will be losing jobs to AIs, including jobs that computers could not previously do, like writers, doctors, architects, artists, etc. Jobs that require imagination, improvisation, and intuition.
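
One way to picture the reaction-time handicap described above is a buffer that only lets the agent act on information that is already a few frames old. A minimal sketch, assuming a toy frame-based game loop; the class name, the dummy policy, and the 8-frame figure are illustrative assumptions, not DeepMind's actual interface:

```python
from collections import deque

class DelayedObservationAgent:
    """Only act on observations at least `delay_frames` old, roughly
    mimicking human visual reaction time (toy sketch, not AlphaStar code)."""

    def __init__(self, policy, delay_frames=8):   # ~350 ms at 22.4 game steps/s
        self.policy = policy
        self.delay_frames = delay_frames
        self.buffer = deque()

    def step(self, observation):
        self.buffer.append(observation)
        if len(self.buffer) <= self.delay_frames:
            return None                      # nothing old enough to react to yet
        stale = self.buffer.popleft()        # act on delayed information only
        return self.policy(stale)

# Usage with a dummy policy that just echoes what it saw:
agent = DelayedObservationAgent(policy=lambda obs: f"react to frame {obs['frame']}")
for frame in range(12):
    action = agent.step({"frame": frame})
    if action:
        print(action)   # reactions only start once the delay has elapsed
```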

  • @AntsanParcher
    @AntsanParcher 5 years ago +2

    I'd like to see them take on cooperative games - cooperating with other AI and with human players. My intuition says cooperating with human players would be really hard.

  • @Sehyo
    @Sehyo 5 years ago +13

    More like getting beat 1 to 0 in fair(er) games

  • @benrogan1594
    @benrogan1594 4 years ago

    The AI also had a significant impact on the pro Dota scene. It literally invented constantly using the courier to win the regen battle in mid; usually it was just a bottle and that's it. Once the AI started doing it in its exhibition matches, everyone started using the courier mid for all kinds of regen, anything required to win the lane as hard as possible.

  • @androsp9105
    @androsp9105 5 years ago +3

    Build a physical robot that holds a mouse, sees the screen, and can win at StarCraft, and I'll be impressed.

  • @LetsPlayFolling
    @LetsPlayFolling 4 years ago

    I might be a bit mistaken here, but the graph at 3:51 clearly shows that TLO consistently has more actions per minute than AlphaStar; the blue graph ends far before the yellow one does. Or am I missing something?

  • @Dilettant_
    @Dilettant_ 5 years ago +4

    Why didn't you play Zerg? TLO is a Zerg player!?

  • @TheStigification
    @TheStigification 5 years ago

    For AlphaGo, they created a beautiful space, everyone was quiet and respectful, and there was a mob of reporters. This guy they just plonked down in a room and shouted at.

  • @Matyniov
    @Matyniov 5 years ago +3

    11:15 "...long term capabilities against HUMAN (player)...."

  • @markfrellips5633
    @markfrellips5633 5 years ago

    The applied research and science behind the project is impressive and will undoubtedly have far-reaching implications for areas beyond gaming entertainment. The problem with the application to StarCraft is that the AI would need to share many more restrictions in a real-time setting to be made "human-like". There were many situations during the games where agents behaved very systematically and were unable to adapt, e.g. against drop/guerrilla tactics. In many of the games we saw engagements where the AI won through sustained micromanagement, which would exhaust or overwhelm a human player, not necessarily through creative stratagems or strategic management, e.g. Blink Stalker micro against a larger and more varied army. Is it bad? No, but considering the players were placed under considerable restrictions and were unable to play against the same agent more than once, the players themselves were heavily hampered as well.

  • @ChessPampa
    @ChessPampa 5 years ago +6

    Make AlphaZero play Age of Empires II against TheViper.

  • @TheStigification
    @TheStigification 5 years ago +1

    I really don't get why AlphaStar needs its own special full-map view when there is a perfectly good minimap. The AI shouldn't care that it's mini, should it? And if it does, then that's just another thing it has to learn to deal with. Why don't we teach it Go by letting it see inside its opponent's head?

  • @hypersonicmonkeybrains3418
    @hypersonicmonkeybrains3418 5 years ago +4

    Who else wants to see what happens if it plays itself for a million years first?

    • @ziquaftynny9285
      @ziquaftynny9285 5 years ago

      After doing some calculations I found you would need a computer to run the algorithm 100x faster for a year for it to reach around 1 million years.

    • @colox97
      @colox97 5 years ago

      If it scales linearly..

    • @hypersonicmonkeybrains3418
      @hypersonicmonkeybrains3418 5 years ago +1

      @@ziquaftynny9285 Only if you stick to the same budget. It just needs 100x the TPUs thrown at it.

    • @Bartooc
      @Bartooc 5 years ago

      @@ziquaftynny9285 It was running on an average of 50 GPUs. What I would like to see is the AI learning SC2 without using human games as coaching.
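
A rough back-of-the-envelope check of the "million years" estimate a few replies up, using the figure from DeepMind's blog post that each agent experienced up to ~200 years of real-time play over roughly 14 days of training; the linear-speedup assumption and everything else below is for illustration only:

```python
# Simulated-experience scaling, assuming the speedup is roughly linear in hardware.
sim_years_per_real_day = 200 / 14                      # ~14 sim-years per training day
one_year_same_hardware = sim_years_per_real_day * 365  # ~5,200 sim-years
one_year_100x_hardware = one_year_same_hardware * 100  # ~520,000 sim-years

print(round(one_year_same_hardware))   # 5214
print(round(one_year_100x_hardware))   # 521429, the same order of magnitude as "a million years"
```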

  • @ThanksIfYourReadIt
    @ThanksIfYourReadIt 5 years ago

    StarCraft 2 has been a free game for a while now, and there is an option you can check if you want to be paired with AlphaStar. So anyone can try to fight it.
    However, you get matched against it randomly, so you will not know whether it's the AI or a real player.
    Still, you can instantly recognize it from the replay, and the replay is available after every game.
    AlphaStar is currently at Grandmaster level and wins around 92% of its games.
    It can play all 3 races and all maps in the pool.
    The AI simply has unmatched resource gathering and frame-perfect timing in spending those resources, plus omnipresence, as it controls the game through direct calls to units on a modified interface rather than a mouse, control groups, and queued commands with the shift key.
    One other thing is that its ECM (effective commands per minute) is constantly leagues above any human player. Humans' APM (actions per minute) is largely spent keeping up their heart rate, reaction time, awareness, and a sufficient feed to the brain. They are not actually giving out effective commands with those actions; they just look around by selecting control groups, checking on the enemy, and randomly selecting moving units to do quick counts and keep them in short-term memory. So by my own deduction, I believe the APM cap given to the AI should be cut to about 1/5th of what it's currently running on the ladder, since the AI doesn't need the extra APM players use to keep their biological self in a certain state.
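
The APM-versus-effective-commands distinction in the comment above boils down to filtering an action log before counting. A minimal sketch with made-up action categories and a toy log (real replay analysis, e.g. with a parser such as sc2reader, is more involved):

```python
# Count raw APM vs. "effective" commands per minute from a toy action log.
EFFECTIVE = {"move", "attack", "build", "train", "ability"}   # actual orders
SPAM = {"select", "control_group", "camera"}                  # just looking around

def apm(actions, minutes):
    return len(actions) / minutes

def ecpm(actions, minutes):
    return sum(a in EFFECTIVE for a in actions) / minutes

# 12 actions over 12 seconds: lots of selection/camera spam, few real orders.
log = ["select", "camera", "ability", "move", "camera", "select",
       "attack", "control_group", "select", "train", "camera", "select"]

print(apm(log, minutes=0.2))    # 60.0 raw actions per minute
print(ecpm(log, minutes=0.2))   # 20.0 effective commands per minute
```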

  • @factsheet4930
    @factsheet4930 5 years ago +6

    I can't wait for an AI playing and navigating in a 3D game, and I think by then it will make headlines worldwide. I did not expect us to even consider that possibility so early!

    • @DevinDTV
      @DevinDTV 5 years ago +2

      Fact Sheet, this is already way more impressive than an AI playing a 3D game

  • @isiceradew716
    @isiceradew716 5 years ago

    On the question of DotA 2 or other MOBAs, I think one of the potentially interesting fields of research is AI that collaborates with human agents, a lot of those games require the skill of knowing how to enable your allies whether it be through healing, rooting an opponent, body blocking, etc. Developing AI that can anticipate human needs and set up situations where their allies can thrive seems like a way more useful generalized skill.

  • @outshimed
    @outshimed 5 years ago +139

    TLO is well above the top 1% of players

    • @CZKing
      @CZKing 5 years ago +16

      Playing as Protoss though?

    • @jonmichaelgalindo
      @jonmichaelgalindo 5 years ago +16

      With #9 WCS rank and close to ~500K players 1v1 last year, he might be in the top 0.003%

    • @ronnetgrazer362
      @ronnetgrazer362 5 years ago +3

      Above the top 1%? How should that even work? The only way to win is not to play? Sense... you're not making any.

    • @MagicSquid
      @MagicSquid 5 years ago +17

      @@ronnetgrazer362 Yes, above the cutoff for the top 1%, as in he's in a fraction of the top 1%.

    • @busTedOaS
      @busTedOaS 5 years ago +2

      TLO stated himself that he is in the top 1% of protoss players.

  • @wildwest1832
    @wildwest1832 5 years ago

    I am not completely convinced yet that this approach will lead to a perfect, all-around, god-tier AI that is well rounded and capable of always winning with the optimal strategy. You might end up with agents that have small gaps or holes in their understanding, like we saw in the last game, where it lost.
    Really, we need to see a lot more games to see how it adapts. A human player with many rematches will start picking up on holes, and I promise it does have some.

  • @BinaryReader
    @BinaryReader 5 years ago +5

    If it's solving StarCraft already, we should probably be concerned... this stuff is moving way too quickly... I can't think of another game beyond StarCraft for AI to solve, at least not one humans are going to be able to participate in... next stop, general AI? Geez, I can't believe they've cracked SC so fast... clearly they've acquired a deep understanding of cognitive architectures and encodings for complex domains. I can hardly begin to fathom how one goes about building systems that play SC the way they have... it's ethereal computer science... scary and brilliant at the same time.

    • @SudhirPratapYadav
      @SudhirPratapYadav 5 years ago

      What about GTA games? It would be nice to see an AI complete all the missions and do extra stuff in GTA games

    • @scno0B1
      @scno0B1 5 years ago

      Can't fathom it? Grab a pen and paper, start writing, and then try that stuff :)

    • @BinaryReader
      @BinaryReader 5 years ago +2

      Well, I have a relatively good grasp of neural networks, and a fairly good intuition of how they work... But to encode for a domain like StarCraft? I can hardly begin to imagine the parameters you would feed to such a system, let alone having the system produce intelligent actions to win a game. I'd have assumed AI research was years away from even approaching this, but I guess some fundamental, general lessons have been learned by researchers with respect to encoding for particular domains. It's just that the SC domain seems to cross a threshold where you can generalize to pretty much anything. Scary stuff.

    • @ED-TwoZeroNine
      @ED-TwoZeroNine 5 years ago +1

      It hasn't solved StarCraft.
      Go watch the game it lost. It only had to build a Phoenix to stop what MaNa did. It was too stupid to do that and lost because of it.

    • @Damzified
      @Damzified 5 years ago +1

      Their AI was trained only for Protoss vs Protoss and on a single map, which reduces the possibilities by an immense factor. They certainly haven't solved StarCraft 2 yet. Beyond these limitations, if you've played a fair amount of SC2, you can also see that AlphaStar is not playing particularly well from a strategic perspective; it's just controlling its units really well, as you would expect from any half-decent AI (even one not built on deep learning). In other words, what they've achieved is impressive, but much less than they made it out to be.

  • @CHERNOMORGAMES
    @CHERNOMORGAMES 4 years ago

    It is amazing that DeepMind was able to introduce a strategy of over-droning and a different way of managing mining. That blew my mind completely back a year ago when those games happened!

  • @jokinglimitreached1503
    @jokinglimitreached1503 5 years ago +3

    13-minute Papers!

  • @petros_adamopoulos
    @petros_adamopoulos 5 years ago +1

    When the holistic view of the map was taken away from the AI, it lost pretty miserably. Also, it's completely dumb against things like transport drops. This is expected, since it's unable to learn this by itself and would need to be shown and taught.

  • @Stiggandr1
    @Stiggandr1 5 years ago +3

    Its average APM may be human level, but it was leaping up to 1000 APM to micro some of those engagements. That's perfect APM, too.
    This thing did not win strategically based on imperfect knowledge; it brute-forced its way through.
    A DT swap, multi-pronged drop harass, hard counters rather than anticipating a more dynamic engagement (i.e. Sentry/Immortal instead of trying to pad the composition), and DeepMind would have been in Deep Crap. :P
    This project is far from done, and, gauging by Twitch, the SC community is bristling for a rematch now that our opponent is a known quantity.
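
The average-versus-burst point above is easy to make concrete: a short sliding window over action timestamps shows how a modest per-game average can hide four-digit spikes. The timestamps below are made up for illustration, not real AlphaStar data:

```python
# Peak APM over a sliding window vs. the per-game average.
def peak_apm(timestamps_s, window_s=5.0):
    timestamps_s = sorted(timestamps_s)
    best, left = 0, 0
    for right, t in enumerate(timestamps_s):
        while t - timestamps_s[left] > window_s:
            left += 1
        best = max(best, right - left + 1)
    return best * (60.0 / window_s)    # scale the window count to a per-minute rate

# 600 actions spread evenly over 10 minutes (60 APM average), plus an
# 80-action burst packed into one ~4-second engagement around the 5-minute mark.
actions = [float(i) for i in range(600)] + [300 + i * 0.05 for i in range(80)]

print(len(actions) / 10)    # 68.0  -> average APM over the game
print(peak_apm(actions))    # ~1000 -> peak APM inside one engagement
```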

  • @TurnGameOn
    @TurnGameOn 5 years ago

    This play mode should be accessible in StarCraft 2; that would be so cool. I, and I'm sure many others, would like to try it too.