Why AlphaStar Does Not Solve Gaming's AI Problems | Design Dive

  • Published: 10 Sep 2024

Comments • 160

  • @jahrazzjahrazz8858
    @jahrazzjahrazz8858 4 years ago +170

    I think it is also really important to differentiate between fun AI and hard AI. AlphaStar is great for pro players to have an opponent they can still learn from, but for the average gameplay AI you don't want the AI to be hard, you want the AI to create a fun gameplay experience.
    For competitive RTS games it is good to have an AI that plays like an experienced player, so new players can learn from it by watching/playing it, but it is not necessary to use machine learning; for example, Age of Empires 2: Definitive Edition upgraded the old AoE2 AI to use actual meta strategies and micro.

    • @raf74hawk12
      @raf74hawk12 4 years ago +12

      That's something that wasn't mentioned in this video, but AlphaStar was actually playing at multiple different levels on the ladder, not just Grandmaster. Not to say that it's a feasible solution for making AIs for games, but there were different variants that performed at different levels.

    • @sbonel3224
      @sbonel3224 4 years ago +11

      Actually, AlphaStar isn't so much a mental challenge for pros as a reflex challenge. The AI cheated by having simultaneous control over all units across the whole map with perfect precision, something no human is capable of. The media sensationalized the crap out of AlphaStar, making people think it's smart, but in reality it never did anything smart other than have perfect unit control and abuse cheesy strats.
      It had no idea how to deal with split pushes or harass (sometimes pulling its entire army to defend against a single Oracle). It never made any smart plays, and it always tried to finish games fast, sometimes failing horribly and hilariously at things even Silver league players wouldn't fail at. Again, the only place where it shined was micro, which isn't that impressive considering it's an AI.

    • @Yorekani
      @Yorekani 4 years ago +18

      @@sbonel3224 TL;DR: No, I'd say the main factors are consistent macro and a solid game plan. More units beat fewer units, most of the time. AlphaStar's timing attacks are no joke if it doesn't get delayed enough through harassment.
      The ladder agents had tighter restrictions on APM, camera movement, and probably many other aspects to better simulate human limitations. Those agents had to learn how to split their attention better because of these limitations. You can see replays where Alphastar "A-moves" and I've also seen Alphastar lose clumps of units to Disruptors because it didn't split the units. That wouldn't happen if Alphastar was able to always react perfectly in a timely manner.
      Alphastar agents quickly rose in rank because you wouldn't believe how far good and consistent macro will get you. Maintaining that worker production and spending resources ensures you'll have more stuff to work with than... I'd say around 95~98% of the human Starcraft II players. Alphastar can take unfavorable engagements through chokepoints and all that as long as it deals damage. Alphastar can maintain its macro cycles during fights, while most players will tunnelvision hard at that point.
      Alphastar's weak points are definitely scouting, flexible play and tactics. Refining a build can get you very far in the game, even with minimal to no scouting or changing things up based on intel. I feel like it sometimes builds base defenses against drops and air units based on scouting, but it may as well have just become a routine after losing too many times against it. The placement of structures is often... interesting regardless. However, other times, I've seen Alphastar agents just failing to deal with constant harassment from the air routes with no recovery plan in sight.
      For those showmatches, yes, AlphaStar won those with inhuman Blink Stalker micro. IIRC, the APM cap interval was 20 seconds, which it easily abused by keeping its average APM lower to leave room for the insane Blink Stalker micro during those intense fights. Not being restricted by camera movement was also criticized a lot, but if we're being honest with ourselves, that's not that big of a deal for an AI that can easily keep tabs on the minimap, if it gets trained to do so.

    • @TheCapitaineR
      @TheCapitaineR 4 years ago +8

      S Bonel I think it's more complicated than having simultaneous control over everything. AlphaStar was eventually built (if I remember correctly) with some control limitations, like having to switch screen focus and an APM limit, so it basically has to perform actions one after another. However, it's true that it's still not fair overall: it seems able to click with pixel-perfect accuracy in a fraction of a second. They should make AlphaStar control a mouse cursor with a degree of inaccuracy, and add a small timer for switching screen focus (to avoid a situation where it can focus the camera, do 5 actions, and come back to a previous camera spot in a tiny fraction of a second).

    • @NextLevelCode
      @NextLevelCode 4 years ago +9

      TheCapitaineR this is where most people get confused. AlphaStar doesn't click. It calls a function in the StarCraft client to select a given unit. Yes, they limited the APM and made it switch screens in the later versions. However, the game is literally plugged into AlphaStar. Selection is more like you moving your finger: it's part of your body, you don't think about it, it just happens.
      It's the same with its vision. Yes, they restricted the "camera", but what AlphaStar receives isn't graphics, it's numbers. Those can't be accidentally misread or missed.
      To make it truly like a player, we would need to decouple its code from the client.
      So it would need a CNN in front of its brain looking at game frames to see, and then a mouse-and-keyboard HID output.
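
      For illustration, this is visible in DeepMind's open-source PySC2 interface, which AlphaStar-style agents are built on: observations arrive as numeric feature layers and actions are function calls. A minimal sketch (the map name and races are arbitrary, and keyword arguments may differ between PySC2 versions):

      ```python
      # The agent receives arrays of game state, not pixels, and acts by
      # calling game functions rather than moving a mouse.
      from pysc2.agents import base_agent
      from pysc2.env import sc2_env
      from pysc2.lib import actions, features

      class PeekAgent(base_agent.BaseAgent):
          def step(self, obs):
              super().step(obs)
              # Integer feature layers (unit type, health, visibility, ...);
              # nothing here can be "mis-seen" the way a human misreads a screen.
              screen = obs.observation.feature_screen
              return actions.FUNCTIONS.no_op()  # actions are API calls, not clicks

      env = sc2_env.SC2Env(
          map_name="Simple64",
          players=[sc2_env.Agent(sc2_env.Race.protoss),
                   sc2_env.Bot(sc2_env.Race.terran, sc2_env.Difficulty.easy)],
          agent_interface_format=features.AgentInterfaceFormat(
              feature_dimensions=features.Dimensions(screen=84, minimap=64)),
          step_mul=8)  # the agent acts once every 8 game frames
      ```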

  • @Scrydragon
    @Scrydragon 4 года назад +28

    Now I'm wishing we had a deep dive on Black & White. I loved that game and always wondered how the creature worked.

    • @AIandGames
      @AIandGames  4 years ago +24

      I'm trying... I'm trying...

  • @NukeMarine
    @NukeMarine 4 years ago +49

    The cost to train a single agent was actually the most interesting aspect of the video. I was aware of the number of days and the human-equivalent hours of play time, but seeing the cost in the millions put things in perspective.
    Your other point about needing to retrain agents is also being researched. Instead of redoing training from scratch (as you noted, an expensive process), they try to update the meta in an existing agent. They described it as the equivalent of doing brain surgery on the code.
    As for uses of deep learning in games, we're going to see all sorts of them, not just in opponent and NPC actions but in how the game is run. There have been amazing papers released on using deep learning to aid in rendering graphics and physics. Also, as reported a year or so ago, developers won't need 44 days of training for many aspects of AI, since NPCs likely won't need to beat top StarCraft players if the NPC is random thug #4 in GTA VII.

    • @Hanclok
      @Hanclok 4 years ago

      Sorry, quick correction: it doesn't seem to be the cost of a single agent, because AlphaStar v2 consists of thousands of agents as far as I'm aware.

    • @fleecemaster
      @fleecemaster 3 years ago

      @@Hanclok Yeah, that was for all ~300+ agents, but they are all trained on the same meta, so they would still all need to be retrained in this example.

    • @KurtvonLaven0
      @KurtvonLaven0 1 year ago

      Also, the way the agents are trained is by playing against each other, so the cost to train one to the same level is not necessarily significantly lower than the cost to train all of them.

  • @VB-92
    @VB-92 4 years ago +47

    Why kill them all, when you can beat them all at every video game they come up with and make them feel inferior for all of eternity

    • @AIandGames
      @AIandGames  4 years ago +21

      Quiet you.

    • @Paddythelaad
      @Paddythelaad 4 years ago

      They are not quite beating us all (every time) in SC2 yet; it wasn't 100%.

    • @LazyDev27
      @LazyDev27 4 years ago

      @@Paddythelaad Serral makes short work of any AI.

    • @thisisrtsthree9992
      @thisisrtsthree9992 4 years ago

      @@LazyDev27 Serral, HeroMarine, Special, Showtime... hell, even JimRising xD. What I find deceiving is when they say that AlphaStar beats 95% of players as if that were a good thing. I'm part of that 95%; it's the percentage of people who don't really know how to play. If an AI beats Bronze or Platinum players, that's whatever, that's nothing, no real accomplishment. I'm still waiting for the day I see an AI capable of consistently defeating the players who actually know how to play SC2. The true victory percentage of AlphaStar is about 1% as measured against the players who know how to play (pro players); 1% is the number they should quote, because it shows how well AlphaStar really plays (that is, not well at all).

    • @LazyDev27
      @LazyDev27 4 years ago +2

      @@thisisrtsthree9992 I don't know what you're talking about. You could build an AI that could curb-stomp any player; they already had a version of AlphaStar without any rules or limitations. No one would have a chance; it'd be a calculated defeat. But the recent version of AlphaStar you see in Grandmaster has many more rules to follow, such as APM limits, vision limitations, and whatever other human limits they have been trying to implement. It just takes time; AlphaStar is still a new thing. I know in this day and age technology progresses rapidly, but have a little patience. If you don't think computers can beat you at any game, you're wrong, simply put.
      The challenge is creating an experience that is achievable through human means, so that when that awesome AI does inevitably beat you, you can take your game to the next level, artificially. Where the meta took decades to become what it is now, AlphaStar can evolve the meta faster than human competition can. Which doesn't sound like a big deal, but what if a game was dead and no one good played it anymore? You could still queue up against good ol' Grandmaster AlphaStar, still at that same pinnacle. It's possible to beat it, sure, but the implications are insane.
      Take the logic that AlphaStar applies to the in-game world and plug it into the existential reality we exist in. What if machine learning could do scientific research according to parameters set not by us, but by the universe? How fast could it evolve our technology, our economy, the systems so complicated people need to go to college for over a decade to even understand? An AI could learn that intuitively and manage things we once never thought possible in society: rapid-response emergency services, or even preemptive emergency services. But yeah, if you want a version that can beat you at your little video game, that already exists. It just doesn't serve a better purpose than the one that can lose to us.

  • @patmacrotch5611
    @patmacrotch5611 4 years ago +21

    I never got the impression that AlphaStar was being designed to help AI implementation in video games. I took AlphaStar to be the same as that robot they taught to play chess.

    • @jhfiyugy8g
      @jhfiyugy8g 4 years ago +4

      I don't think he's arguing Google was trying to fix video games. More that some in the video game community may be expecting devs to implement this technology themselves.

  • @mikehoule8045
    @mikehoule8045 4 years ago +5

    Feels like Mimir started a channel on AI and game design; from the accent and inflections, all the way to the incredibly knowledgeable and eloquent presentation. Great stuff!

  • @jonwatte4293
    @jonwatte4293 4 years ago +2

    An AI opponent is very different from NPC AI. To build good AI opponents, I think trained models could work in the near future.
    Also, trained networks CAN be opened up and examined. The gradient backpropagation algorithm of deep networks is eminently visualizable.
    Self play can actually help in building agents before release, and in game tuning.
    The cost of training may be no different from, say, the cost of calculating level lighting in a large Unreal Engine game. Sure, it will be AAA tech first, but that's where most expensive tech makes its debut.
    To be fair, though, I believe in hierarchical, ensemble networks. "One big network" is slow and expensive to train, and can be fragile. Small components that can each be trained and assembled, with knowledge of game mechanics, look really exciting to me.
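
    A minimal sketch of that ensemble idea (PyTorch; the module sizes and the combat/economy split are invented for illustration): small specialists trained separately, combined by a hand-authored gate that encodes game knowledge.

    ```python
    import torch
    import torch.nn as nn

    class Specialist(nn.Module):
        """A small network trained on one sub-problem (e.g. micro or economy)."""
        def __init__(self, n_inputs, n_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_inputs, 64), nn.ReLU(), nn.Linear(64, n_actions))

        def forward(self, x):
            return self.net(x)

    micro = Specialist(n_inputs=32, n_actions=8)    # trained on combat snippets
    economy = Specialist(n_inputs=32, n_actions=8)  # trained on build-order data

    def act(state, in_combat: bool):
        # The "gate" is plain game knowledge rather than another opaque network,
        # so each component can be inspected, retrained, or swapped out alone.
        expert = micro if in_combat else economy
        with torch.no_grad():
            return expert(state).argmax().item()

    print(act(torch.randn(32), in_combat=True))
    ```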

  • @AIandGames
    @AIandGames  4 years ago +23

    It's time to put the AlphaStar chat to rest. With this Design Dive episode I'm giving my 2 cents on the practical applications (and otherwise) of Deep Learning in games right now. Currently got some fun topics lined up for later in the spring. But first, I've got a big deadline to hit by the end of the month.
    Don't worry, you'll know what I'm talking about when it hits.

    • @SupLuiKir
      @SupLuiKir 4 years ago

      Two Minute Papers has a recent video showing that you can update an AI without retraining it from scratch. You really should follow that channel if you don't already.

    • @jacobturnerart
      @jacobturnerart 4 years ago +2

      Do you think the next leap/trend for game AI will be in the construction of procedurally generated levels/worlds and missions, rather than NPC behaviour? Live-service game devs would love this: have AI build levels indistinguishable from human-created levels and no longer have to keep teams of designers working on these games.

    • @Ausstein
      @Ausstein 4 years ago

      AlphaStar learns from self-play, not human play, making that point invalid; otherwise great video.
      Edit: OpenAI invented AI surgery for its Dota 2 AI, which can continue learning after a patch.

    • @zvxcvxcz
      @zvxcvxcz 2 years ago

      @@SupLuiKir Depends on the model really.

  • @iridium9512
    @iridium9512 4 years ago +7

    I would like to add a few things to expand on this video. (TL;DR at the bottom)
    First, the point of AlphaStar was not to create some sort of AI that will be good at a game. It was more of a research project. The DeepMind team wanted to see how neural networks would handle a game with as large an action space as StarCraft 2. Where games like chess or Go have a very defined set of positions where pieces can be placed, StarCraft uses a map with a whole vast realm of possibilities for where units and structures can go, not to mention hidden information about what the opponent is doing, which has to be obtained by scouting, plus different unit counters, compositions and whatnot.
    So AlphaStar represents a proof of concept that neural networks can be used to solve complicated real-life problems. Though we have seen AlphaStar derp pretty hard in the replay pack that was released, so it's clear that the generalization process is not without its limitations.
    Second: AlphaStar was not trained in the most optimal way. Human replays were used as a sort of speed-up for learning. AlphaGo, as an example, was trained in this exact way; AlphaZero (the successor of AlphaGo), however, trained from nothing on its own (hence the name).
    It would in theory be possible to train AlphaStar much faster and more efficiently. The training speed depends on the number of neurons and synapses, and how well the AI performs depends on the quality of the training data. In the case of AlphaStar, it was trained to play against itself, against its different agents, and later on against so-called "main exploiters". This way, AlphaStar learns to fight well against the strategies it encounters (that is, the strategies of all agents), but nothing more.
    Game developers can train neural networks by imitation learning to execute all kinds of strategies, and this way the AI would become much more generalized. AlphaStar, on the other hand, was left to figure everything out on its own, so training took much longer.
    Last thing, regarding the freedom to construct whatever AI you want: it's easy to create whatever AI you want, as long as you know how to select for it. For example, if you want a fun AI, you just need to reward the AI for doing "fun" things, whatever you deem those to be: you could reward it for playing fast or playing slow, for executing creative strategies or for playing predictable, tightly executed ones (see the sketch below). You can also compartmentalize neural networks into several segments, where each is responsible for a different strategy or function; doing this also accelerates learning. But AlphaStar was trained with general-purpose learning algorithms, and exploiters were constructed automatically, which means the point was for the AI to figure most things out on its own.
    TL;DR:
    1. AlphaStar was trained in quite a specific way. The point of AlphaStar was to research how AI learns in complex environments and to be "a stepping stone to this goal".
    2. AlphaStar was not trained efficiently. It could have been trained faster if the DeepMind team had wanted to; instead they went for training methods that let the AI figure most things out on its own the slow and painful way, without any shortcuts towards results.
    3. You can train whatever AI you want, as long as you select for the AI traits that you want.
    It would not be a quick training process, but it'd be much quicker.
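
    A minimal sketch of that selection idea in a generic RL setup (the weights and the notion of "fun" here are invented for illustration):

    ```python
    def shaped_reward(outcome, episode_stats,
                      w_win=1.0, w_aggression=0.1, w_variety=0.05):
        """A reward signal that selects for style, not just for winning.

        outcome: +1 for a win, -1 for a loss.
        episode_stats: per-game counters collected by the environment.
        The weights decide which "personality" training selects for.
        """
        r = w_win * outcome
        # Reward early, frequent attacks -> selects for a fast, aggressive AI.
        r += w_aggression * episode_stats["attacks_before_10_min"]
        # Reward fielding many distinct unit types -> selects for creative play.
        r += w_variety * len(episode_stats["unit_types_built"])
        return r

    # Flip w_aggression negative and the same training loop produces a slow,
    # defensive opponent instead; the learning algorithm itself never changes.
    print(shaped_reward(+1, {"attacks_before_10_min": 3,
                             "unit_types_built": {"marine", "medivac"}}))
    ```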

  • @kevingriffith6011
    @kevingriffith6011 4 years ago +4

    If it weren't for the prohibitive cost of building and training the AI, I would say that an AI like AlphaStar could be a very effective tool for balancing gameplay that is otherwise difficult to quantify. Essentially, by taking out factors like player skill or community bias (one of the characters/factions being favored over the others for non-balance reasons) and testing repeatedly, you can get a substantial amount of high-quality game balance information, particularly if you're already set up to record useful data from the matches (how much of a unit gets made, how many times an ability gets used, and so on). At the moment, though, Blizzard is already very effective at gathering data from online matches, so unless the process becomes *much* cheaper the industry will likely stay the course.
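
    A minimal sketch of that data-recording side, assuming self-play match logs in an invented format:

    ```python
    from collections import Counter, defaultdict

    def summarize(match_logs):
        """Aggregate self-play match logs into simple balance signals."""
        unit_counts = Counter()               # how much of each unit gets made
        ability_uses = Counter()              # how often each ability is used
        record = defaultdict(lambda: [0, 0])  # faction -> [wins, games played]
        for log in match_logs:
            unit_counts.update(log["units_built"])
            ability_uses.update(log["abilities_used"])
            for faction in log["factions"]:
                record[faction][1] += 1
            record[log["winner"]][0] += 1
        win_rates = {f: w / n for f, (w, n) in record.items()}
        return unit_counts, ability_uses, win_rates

    logs = [{"factions": ["terran", "zerg"], "winner": "zerg",
             "units_built": {"marine": 40, "zergling": 80},
             "abilities_used": {"stimpack": 12}}]
    print(summarize(logs))
    ```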

    • @AIandGames
      @AIandGames  4 years ago +4

      AI for testing is increasingly more common. Got an upcoming episode that looks at exactly that.

    • @fleecemaster
      @fleecemaster 3 years ago

      Researcher here. Like pretty much everything in research, costs for training will decrease exponentially as the technology develops. It might take some years, but the training example given here will drop from £3 million to £3k, then eventually to hundreds. Look at the human genome as a comparison: the first human genome cost £5 billion to sequence. While I was at uni about 12 years ago, my friend was working on a machine that could do it for just £1,000; now it can be done for like £10 a pop. An insane difference, but you see that in everything. I think it'll be quicker than the ~30 years the genome took, but it will happen! One day training an AI will be as frivolous as running a function in Excel.

  • @timogul
    @timogul 4 years ago +2

    Given the StarCraft example, I think there's a clear use for this sort of thing: *multiplayer obsolescence.* Plenty of games are popular at first but then decline in popularity over time. If they require a large and healthy multiplayer community to be viable, to keep queue times low, they can death-spiral very quickly. But what if you trained AI to pick up the slack?
    Building an effective AI for launch is, as you note, very difficult. But building an AI _after_ launch might be more viable. Just constantly feed player data into it, have some computers grinding away at AI training, and over months or even years the AI can get better and better at the game. Then, assuming the game is at least moderately successful, when it gets into its declining years AI players can start to pick up the slack of missing human players. As the community gets smaller and smaller, nobody even notices, and the players who remain and really enjoy the game (and might keep spending on it) will just keep playing and having fun.

    • @Billinous
      @Billinous 2 years ago

      Agreed. The annual sports games that barely change (Madden/FIFA/NHL/etc.) make serious billions annually for Electronic Arts. They could use data from the earlier iterations and simply add various DeepMind difficulties; this would move the games to unprecedented levels of enjoyment and even lure in millions more customers to their already fat, rich franchises. Also, it should never be a single difficulty, otherwise there is no point in adding a DeepMind AI system to video games: have 3 settings using Bronze, Silver and Diamond league players' data.

  • @nyrtzi
    @nyrtzi 4 years ago +3

    The Communications of the ACM had an article a month or so ago about how computer scientists are trying to turn the black box of trained AI into something that can be inspected and understood, so it's an active area of research.

  • @phobos2077_
    @phobos2077_ 4 years ago +1

    Not many know about it, but the STALKER games actually used neural networks for some portion of their AI. Made it hard as nails to mod those specific parts :)

  • @MR3DDev
    @MR3DDev 4 years ago +1

    Why doesn't AlphaStar solve gaming's AI problems? Because today's problem for many publishers and devs is "how can we get people to buy lootboxes, DLC, etc.", and AlphaStar is not designed for that.

  • @FischiPiSti
    @FischiPiSti 4 years ago +1

    The video portrays the current cost as a blocker, but to me the trend says it will not be the case for long. AlphaStar proved that the model works; that's the most important thing to consider. Yes, it was very expensive, but if you look at NVIDIA, they are bringing AI hardware even to consumers, and with each generation AI becomes more accessible and efficient. And as more devs become trained in deep learning, hiring devs who can build such systems becomes less expensive too. With the total cost going down, retraining becomes less of an issue as well.
    It's also worth noting that a game like StarCraft is just about the most difficult example. For other strategy genres, like turn-based strategy, card games or grand strategy, I suspect it would be much less complex and easier to apply deep learning, as you don't really need to create specialized interfaces for the AI to operate in.
    It would be great if you could make a video about AI in grand strategy games, the Paradox games in particular, because AI is a hot topic right now in games like Stellaris - because of how bad it is.

  • @EnterpriseKnight
    @EnterpriseKnight 4 years ago +28

    5:27 "Active and lively fanbase around their products" yeah not so much lately huh?

    • @AIandGames
      @AIandGames  4 years ago +26

      Money talks man. All the noise people make when they're not happy about business practice doesn't mean a thing if you keep ponying up for whatever they're selling.

    • @SorryBones
      @SorryBones 4 years ago

      AI and Games preach

    • @RoyalFusilier
      @RoyalFusilier 4 years ago +8

      Companies benefit almost as much from loud negative reception as loud positive feedback. People trying to 'punish' companies for political stances often end up boosting their stock prices by millions of dollars, and so on. As the saying goes, there is no ethical consumption under capitalism.

    • @daddysempaichan
      @daddysempaichan 4 years ago +6

      SC2 is still fine. It's been relatively untouched by the Blizzard dumpster fire, same with SC: Remastered. It probably helps that these games are old and have low toxicity: games are usually 1v1s, in which you have no one to blame but yourself, or team games, where everyone is goofing around.

    • @-Devy-
      @-Devy- 4 years ago

      You vastly overestimate how many people are genuinely "outraged". All things considered, as usual, it's a small but very vocal minority. Most people just continue playing the games they enjoy and don't give a crap about the rest.

  • @Kalenz1234
    @Kalenz1234 4 years ago +7

    The shocking revelation of AlphaStar's capabilities is not that it will solve AI problems in game development. It's that we now have AI that beats players at fast-paced, multitasking strategy games.
    It's a milestone, way more impressive and frightening than chess AIs beating chess masters.
    It's another proof that AI can replace any human output whether physical or mental.
    Reeducation of workers is a big topic in order to deal with automation and prevent people from becoming unemployed.
    There is no guarantee that by the time their reeducation is complete there won't be an AI that can take over the job they just learned.

    • @RoyalFusilier
      @RoyalFusilier 4 years ago

      That's why 'well, we need to just give humans Other Jobs' is so deeply not-the-answer that it's shocking and painful. The brutal paradigm of 'every human must work or they shall starve, so sayeth the Lord' needs to change, and fast, because technology, and more importantly the way it's used, has already rendered the prospect of full meaningful employment impossible.
      So to get to full employment now, you'd have to use stupid makework jobs. Either capitalist useless cubicle-hell money-changing market accumulation that gives precisely nothing to civilization or society, only takes, or the classic Soviet 'build a wall one day, tear it down the next, but hey it's work comrade'.
      Bluntly, our society now faces the problem of 'surplus population', and there's two general approaches to take towards this. Either redefine what's 'surplus' and bring a new paradigm where people matter, not dollars, or... reduce the surplus.

    • @zvxcvxcz
      @zvxcvxcz 2 years ago +1

      ... I've only seen Bronze players be more resistant to building Observers than AlphaStar. If it shows anything, it is how shockingly obvious it is that it's just an AI, even after a ridiculous amount of money and training. It's littered with similarly exploitable poor choices and really does win chiefly on mechanics (at which it is more formidable than human pro players even after the restrictions). Another example: it still makes very poor decisions about whether or not to base-trade, and errs so often on going home that if a pro had more opportunities to play against the AI they would have started exploiting it. The AI got a lot of exposure to human players, but human players rarely had the chance to know they were facing an AI; pretty much the only ones who knew while it was happening were in the handful of exhibition matches, and they had no history of games against it to see how it behaves, nor had they seen its replays to that extent, so they literally had less prep than when they face their human opponents. It's not unusual for pro human players to prepare for particular individual opponents.

  • @Davivd2
    @Davivd2 4 years ago +3

    I see the potential for this in sports games like NBA 2K. Sports games have always suffered from the AI opponents being stuck in specific play styles and patterns. If you have a team with dynamic wings who can cut to the basket off of screens, or have a team with a dominant post player surrounded by 3 point shooters who can hit open 3's when the post player gets double teamed, it does not matter. The AI will always play the same way. You can tune the sliders in game to try to make the AI run more plays for your post player, and nothing changes. Kareem Abdul Jabbar will never be the highest scorer on the AI's team but some small forward or shooting guard will, no matter how much better your center is. Every team and every game plays out the same way for the AI with very little diversity.
    End-game scenarios in a basketball game are dynamic. How much time is left on the clock, how many timeouts the AI coach has left, how far ahead or behind the AI team is on the scoreboard: all of those things come into focus when you need to decide between taking a 3-point shot or a 2-point shot, or fouling late in the game. Right now the in-game AI just follows a pattern and doesn't even realize that with 8 seconds left it needs to take a 3-point shot to at least tie the game.
    I see basketball games, and other sports games, benefiting tremendously from an AI that can actually change tactics depending on the situation it finds itself in. I would love to play a basketball game where the best player gets the most chances to score, and not just the 2 wing players taking the majority of the shots. I would love to have to change my defensive strategies every game to compensate, with the AI adapting to my adjustments even in-game. The potential for AI in sports games cannot be overstated. 2K looks nice and has very fluid movements; you can do nearly anything in the game that real players can, and the physics engine replicates player movements really well. But strategically, 2K and other sports games are still a mess.

    • @cpowerca
      @cpowerca 4 years ago

      Nah, games like NBA 2K are too simple; the AI would probably find an optimal winning strategy very quickly and just abuse it non-stop.

  • @timseguine2
    @timseguine2 4 years ago

    I agree with a lot of your points, and many of the things you point out are why I am still a big proponent of "old-school" classical AI methods focusing on modular solutions. It requires domain knowledge and significant engineering, but you can swap out pieces if they are not working right, and you can even swap out parts for deep learning if the deep learning solution performs better in that module. And you can hybridize modular solutions with smaller-scale deep learning nets on these smaller feature spaces. It's not my main job and I have limited resources, though, so I have admittedly been largely unable to fully test my POV.
    I just don't really agree with the current approach of a lot of AI research, which is to throw more compute at the problem (albeit often cleverly). It gives a surprising amount of short-term benefit, and a pretty good indication that the state of the art is actually further along than many people thought, but the field is still waiting for its defining breakthrough.

  • @maxwellsdemon13
    @maxwellsdemon13 4 years ago +2

    On the topic of adapting to patches: I found that interesting, because what you describe is exactly what humans deal with. They start with old-patch builds and ideas and adjust as those builds are proven to be good or bad. I think the difference is that humans theorycraft the moment patch notes show up, trying to predict how changes will impact their build and then testing those theories, something AlphaStar wouldn't do. So humans should have a leg up in adjusting to changes (a big "should", because the theories we come up with often don't match the actual game in testing). I thought that topic, and how it compares to the way humans adjust to change, was interesting.

    • @michaelnurse9089
      @michaelnurse9089 4 years ago

      This is a good thing for the majority of players, because it means that once a month or whenever, everyone returns to an equal footing. Chess has no patches; as a result, no one is going to take the #1 spot from Magnus Carlsen until he retires. Tennis has a similar issue, with virtually no one besides the big 3 winning grand slams over the last two decades.

  • @jaybot22
    @jaybot22 4 years ago +1

    The whole premise of this video is off. AlphaStar wasn't developed to assist gaming companies with their in-game AI; it was developed as a challenge for Google's deep learning systems. This stuff is still in the early R&D phases, and as they learn more from complex games like StarCraft they can start licensing it out for complex real-world situations for 💲💲💲 in the future.

  • @VirtuelleWeltenMitKhan
    @VirtuelleWeltenMitKhan 4 years ago +2

    Isn't AlphaStar's main goal to demonstrate long-term planning in general, with the game just being a controlled testing ground? The goal of the project is not to develop AI for games.

    • @Guztav1337
      @Guztav1337 4 years ago

      That is DeepMind's goal, indeed. However, this video addresses the community chatter around this feat.

    • @zvxcvxcz
      @zvxcvxcz 2 years ago

      Not exactly; it was more about dealing with imperfect information than long-term planning.

    • @VirtuelleWeltenMitKhan
      @VirtuelleWeltenMitKhan 2 years ago

      @@zvxcvxcz huh, thought that was my point ... but yeah

  • @Martin-rb4np
    @Martin-rb4np 4 years ago +1

    >Publisher money
    Because AlphaStar is totally not a DoD project for future wartime AI.

  • @_buns_
    @_buns_ 4 years ago +1

    Great video! I love the realistic view of the situation. ML is amazing but not widely applicable yet.
    I also didn't know Black & White used neural networks! Time to learn more.

  • @Wulfryk
    @Wulfryk 4 years ago +2

    But the DeepMind project isn't meant to simply design AI for games; that isn't the end goal. I don't really see the issue here?

  • @Gnurklesquimp
    @Gnurklesquimp 4 years ago +4

    So many videos! Hey Tommy, while I've been very interested in AI from a more abstract and theoretical designer perspective for a long time, I'm only now getting into the programming, and I feel quite lost... I've got a partial prototype of an enemy that uses pretty much all the different factors I think I want in a full package, and was wondering if you could give me a lead on what kind of AI programming system I should focus my learning time on.
    (It's going to be simple, pure top-down 2D with zero verticality simulation.)
    "Tries to keep line of sight on as much as possible of the area spanning x distance in any direction the player could move; this even takes priority over direct line of sight on the player if the area is big enough.
    Has a less heavily weighted preference to stay close to a certain sweet-spot distance from the player."
    Basically, I'm describing three conflicting goals it tries to balance to get the most overall value; each goal has its own value to the unit.
    Direct line of sight is an absolute goal, but "staying close to the sweet spot" isn't, and that thing about keeping line of sight over x distance in any direction the player could move... where do I even start with that one?
    Anyway, if you, or anyone else, has any good sources or advice on how to achieve these things, I'd be VERY appreciative.
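
    What's described here is essentially utility-based AI: score candidate positions against each weighted goal and move toward the best one. A minimal sketch (the weights and helper functions are invented; has_line_of_sight stands in for whatever raycast/tile query the game provides):

    ```python
    import math

    def has_line_of_sight(a, b):
        return True  # hypothetical stand-in for a raycast/tile query

    def reachable_points(player_pos, x_dist, n=16):
        """Sample points the player could reach within x_dist, in any direction."""
        return [(player_pos[0] + x_dist * math.cos(2 * math.pi * i / n),
                 player_pos[1] + x_dist * math.sin(2 * math.pi * i / n))
                for i in range(n)]

    def score(pos, player_pos, sweet_spot=6.0,
              w_los=100.0, w_area=10.0, w_spot=1.0):
        s = 0.0
        # Absolute goal: direct line of sight dominates via a large weight.
        if has_line_of_sight(pos, player_pos):
            s += w_los
        # Coverage goal: fraction of the player's reachable area we can see.
        pts = reachable_points(player_pos, x_dist=4.0)
        s += w_area * sum(has_line_of_sight(pos, p) for p in pts) / len(pts)
        # Soft goal: stay near the sweet-spot distance (penalty grows with error).
        s -= w_spot * abs(math.dist(pos, player_pos) - sweet_spot)
        return s

    def pick_move(candidate_positions, player_pos):
        return max(candidate_positions, key=lambda p: score(p, player_pos))

    print(pick_move([(0, 0), (5, 2), (8, 8)], player_pos=(4, 4)))
    ```

    Because the weights express priorities rather than hard rules, balancing the three conflicting goals is just a matter of tuning three numbers.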

    • @fabulamcafee
      @fabulamcafee 4 years ago

      Yeah, just do anything other than writing this text and you are good.
      Behavior trees
      Pathfinding
      Machine learning
      AWS has a lot on machine learning "right now", so if you want to know how professionals write code that filters the ocean or makes a car drive, it's your chance.
      The alternative to that would be OpenAI, but I never used it.
      Game engines are a good tool, and there is third-party software for this, like one called RAIN that did the Half-Life AI, I think? Really not sure; it's compatible with the Source engine though.

  • @petercarioscia9189
    @petercarioscia9189 4 years ago +10

    Y'know, I kept hearing AlphaStar learned from watching tens of thousands of games, but it never really struck me what that meant.
    AlphaStar didn't learn to play the game from the ground up. It probably doesn't have any implicit understanding of what it's doing, because it's a tens-of-millions-of-dollars exercise in "monkey see, monkey do" (or AI sees what the monkeys are doing, AI does).
    Further evidenced by the fact that AlphaStar cannot adjust itself to new gameplay when Blizzard makes a change to the units and the game's meta changes.
    Now, don't get me wrong. AlphaStar has done things in-game that baffle the players; it's part of the reason it's able to win. AlphaStar has done seemingly novel things in-game, assuming it has watched tens of thousands of pro-player games, or at least GM-level play, especially with the economic mechanics of the game. But those actions could have just been iterative, not necessarily novel. Meaning AlphaStar hadn't detected or calculated a better way to make mineral collection and mineral gatherers more efficient (as was first thought); it may have just been iterating a basic rule of "never stop building workers, no matter what", whereas a pro player knows exactly when to cut worker production and when to resume it.
    Sorry if that was a bit detailed on the game's mechanics. If you're curious, a StarCraft pro by the name of BeastyQT has done some very high-level commentary on AlphaStar's play style.

    • @AIandGames
      @AIandGames  4 years ago +6

      Just focusing on your first point: it's the fundamental assumption people make that isn't true. Machine learning doesn't know what it's doing. It's just optimising against the data it's engaged with. In the case of supervised learning, yeah it doesn't know what it's learned. It just learns to mimic what it observes. I'll be very excited to see a new version of AlphaStar that learns how to micro on its own. That'd be very exciting.

    • @petercarioscia9189
      @petercarioscia9189 4 years ago

      @@AIandGames Oh yes, I know. I'm no computer scientist, so I was working off that incorrect assumption. I think a lot of people do, honestly, and I know it's incorrect. We treat AI as if it can actually THINK and REASON... because that's how we intuit 'intelligence', kind of ignoring the 'artificial' part.

    • @petercarioscia9189
      @petercarioscia9189 4 years ago +2

      @@AIandGames Micro is great and all, but the pros were very interested in the macro mechanics, because AlphaStar was doing some interesting things in that regard. It seemed to be doing things with the in-game economy that were outside the norm, and they thought maybe AlphaStar had high-level mathematical reasons for doing so.
      It sounds small, but it was overbuilding the workers who gather resources in the early game, something that's generally considered sloppy play for optimized human players. But it seemed to be yielding positive results.

    • @oldvlognewtricks
      @oldvlognewtricks 4 years ago

      AlphaStar uses self-play - that’s how it surpassed human-level play.
      It’s not just learning based on watching human players - that’s precisely how it developed novel strategies. It’s a bit odd that this is being misreported.
      What suggests it can’t adapt to patches? Just needs training time with the new inputs, like any other system... human or machine. With self-play it can do this faster than a human can.

    • @ilax3071
      @ilax3071 4 years ago

      @@petercarioscia9189 That's not true at all, though; the 32-workers-per-mineral-line thing got debunked pretty fast, and still no one is doing it because it's bad. Also, pros didn't copy its strats, because it was mostly doing dumb, standard stuff and won because of its perfect spending and insane micro, which was a bit of a shame imo.

  • @hak1111111
    @hak1111111 4 years ago +3

    Hail mighty overlord AlphaDeepMind. I have never supported this man, even before the revolution

  • @ChopTheViking
    @ChopTheViking 4 years ago +2

    I think a really good use for this AI in game development is as an artificial play tester, because it can quickly find exploits and cheese strats that you want to patch up.

    • @totalermist
      @totalermist 4 years ago

      My thoughts exactly. The major advantage is that exploratory algorithms can just run 24/7 in parallel to manual play testing during development and find anomalies (i.e. bugs or bad map/level design). It's much cheaper than training a god-tier AI opponent and might save a lot of time, especially since the algorithms are usually quite good at exploiting flaws, as they don't subconsciously "follow the rules".

    • @NaumRusomarov
      @NaumRusomarov 4 years ago

      For strategy games (I can't speak for other genres), I was actually thinking of keeping the bots fresh for newbies and for those who just want a good, realistic fight against someone at or slightly above their level but don't want to play with other people. In strategy games, bots and other NPCs tend to become stale after a while because they have a limited set of behaviours; they're also not great preparation for playing against human players. If you have the infrastructure to properly train bots with ML technologies for your game, then this would be the sweetest cookie in the jar.

    • @zvxcvxcz
      @zvxcvxcz 2 years ago

      AI doesn't actually seem to be that good at novel exploitation, though it may find other sorts of bugs... the things so dumb not even a player attempting an exploit would try, like what happens if they walk into a tree for 5 hours (maybe you do have a collision-code bug that turns up in rare cases).

  • @billy818
    @billy818 4 years ago

    I think you missed a possibility for neural networks in smaller studios. They could ship the game with a much worse neural AI alongside the "hard-coded" AI, have it learn in a distributed way from customer games, and then update the in-game AI over time until it outperforms the hard-coded one.
    Granted, updates still pose an issue.

  • @thenetherone1597
    @thenetherone1597 4 years ago +2

    If I was going to use deep learning in a video game as the default AI, I wouldn't train it in a lab...
    I'd use the game's own single-player mode as the training ground; every player connected to the internet would be training the AI whether they realised it or not. (insert evil villain laugh)

  • @NaumRusomarov
    @NaumRusomarov 4 years ago +1

    I actually strongly disagree with you about the costs, the other stuff you mentioned is fine, imho. It is very hard to meaningfully compare the costs for developing AlphaStar with what would happen in a commercial game because the former is a research project and the latter would be a commercial product with a very limited timeframe and budget. Almost all research projects have very large costs; this is not a secret, doing research is actually very expensive -- most things don't work, those that do might not be feasible and the few ideas that work and are feasible often require large investments in dollars and man-hours before they are even remotely viable for anything. Anyway, if the conclusions of their research are sound and the field keeps developing, then it would take quite a few more projects before the results can actually be put into a more commercially viable project, if that's Google's goal in the first place. But if that happens I'd expect the costs to drop by at least a factor of ten.
    The rest of the comments address my other concerns well, so I won't write about them in my comment. Good video anyway.

  • @TYNEPUNK
    @TYNEPUNK 4 years ago +2

    Hi Tommy, I watch your vids every day :) I'd love to know your thoughts on a "nav mesh in the sky". I'm trying to code flying AI right now and wondering how to get something at least slightly as good as a nav mesh, but in the sky, or any other way to make a simple flying AI that avoids walls etc. I guess I could just raycast and change direction. I even thought you could put cubes up there, bake navigation, then turn them off, but then it wouldn't really work in the true Y axis. Thanks for all the great vids; my AI of about 8 months now was built originally from your tuts.
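
    One common substitute for a "nav mesh in the sky" is a coarse 3D grid of flyable cells searched with A*. A minimal sketch (is_blocked stands in for whatever collision query the engine provides; everything else is invented):

    ```python
    import heapq

    def is_blocked(cell):
        return False  # hypothetical engine query, e.g. an overlap/collision test

    def neighbors(cell):
        x, y, z = cell
        for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            n = (x + dx, y + dy, z + dz)
            if not is_blocked(n):
                yield n

    def a_star(start, goal):
        """A* over a voxel grid: a 3D stand-in for a baked nav mesh."""
        h = lambda c: sum(abs(a - b) for a, b in zip(c, goal))  # Manhattan
        frontier = [(h(start), start)]
        came_from, cost = {start: None}, {start: 0}
        while frontier:
            _, cur = heapq.heappop(frontier)
            if cur == goal:
                break
            for n in neighbors(cur):
                new_cost = cost[cur] + 1
                if n not in cost or new_cost < cost[n]:
                    cost[n], came_from[n] = new_cost, cur
                    heapq.heappush(frontier, (new_cost + h(n), n))
        path, cur = [], goal
        while cur is not None:
            path.append(cur)
            cur = came_from.get(cur)
        return path[::-1]

    print(a_star((0, 0, 0), (2, 1, 3)))
    ```

    The grid can stay coarse; smoothing the resulting path with raycasts at runtime also covers the wall avoidance.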

  • @GIboy1990
    @GIboy1990 4 years ago +1

    That's also why the sewer mermaid stomped alphastar on the ladder

  • @Phagocytosis
    @Phagocytosis 3 years ago +1

    In response to point 1: this is probably true, although when it comes to designing an AI that can compete with human players, especially for RTS games where you're simulating a player rather than a character in an RPG or some such, I think it often does not matter too much whether you know why the AI chooses something.
    In response to point 2: it is possible to train these agents without relying on human data (or gameplay data from other AIs) to bootstrap them. See AlphaZero (as opposed to AlphaGo) and MuZero (which does not even rely on precoded knowledge of the game) for examples.
    In response to point 3: this is a serious problem, for sure. I have nothing to counter here, except maybe that this sort of thing could be done (especially for older games) via fan projects from the community rather than by the developers themselves... although at that point you are talking about a slightly different thing, of course. I only hope that progress will be made to make this quicker and cheaper, as you addressed in the video.

  • @mimszanadunstedt441
    @mimszanadunstedt441 4 years ago

    It's good for people to speak the truth when most people are misunderstanding.

  • @ICHBinCOOLERalsJeman
    @ICHBinCOOLERalsJeman 4 years ago +6

    "Terminator movies are not doing well at the box office"
    What are you talking about? Terminator 2 did great; shame they didn't make more.

    • @LazyDev27
      @LazyDev27 4 years ago +1

      Yeah, after that I don't count the others as canon. The rest were cash grabs.

  • @WiseWeeabo
    @WiseWeeabo 4 years ago

    One way ML can be used is to have instant bots in your game for a particular genre (say FPS games, MOBA games, tactical battlers, etc.); it could even be modular, to add behavior for point capture or flag capture.

  • @guerra_dos_bichos
    @guerra_dos_bichos 4 years ago +1

    Honestly, gamers are just shortsighted. Companies like Google don't pour huge amounts of money into making games more challenging; efforts like AlphaGo and AlphaStar are meant to further research into things like algorithmic optimization that they can later use in large-scale systems... the video game industry is pennies to them.

    • @Billinous
      @Billinous 2 years ago

      This is true. However, using part of their DeepMind system for game AI is a lucrative avenue worth billions (c'mon Stadia! This is right there for the taking for your first-party games). The benefits of a DeepMind for enemies/party members would take video gaming to an elevated peak: DeepMind difficulties from Bronze to Diamond, with endless replayability because no encounter feels the same. It will probably never happen, but anything is possible when big money can be exchanged.

  • @sockatume
    @sockatume 4 years ago

    Do you think any genres are sufficiently standardized (e.g. arena FPS) that a "bootstrap" AI could be created for training? Or would that require a kind of general intelligence that NNs aren't ready for? Would AlphaStar even be of any use in StarCraft sequels?

  • @michaelnurse9089
    @michaelnurse9089 4 years ago

    OpenAI trained the original Dota 2 AI in a couple of hours on a single box. It all depends on the size of the grid and the frequency of actions. If you restricted AlphaStar to a single small map, the training cost would drop 100-fold. These days you can train a ResNet from scratch on a GPU in less than 10 minutes if you use the modern hacks that reduce training time (reduced precision, learning-rate optimization). If DeepMind were focused on reducing the cost of training they could easily get a 100x reduction, but it's not a focus. As you point out, their real cost is salaries, amounting to a billion over 3 years, so reducing training time isn't on the radar when they would be spending a million to save a million.
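
    A minimal sketch of one of those hacks, mixed-precision training in PyTorch (the model and data are placeholders; assumes a CUDA GPU):

    ```python
    import torch
    import torch.nn as nn

    model = nn.Linear(128, 10).cuda()        # placeholder network
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = torch.cuda.amp.GradScaler()     # rescales gradients for fp16 safety

    for _ in range(100):
        x = torch.randn(64, 128, device="cuda")
        y = torch.randint(0, 10, (64,), device="cuda")
        opt.zero_grad()
        with torch.cuda.amp.autocast():      # forward pass in reduced precision
            loss = nn.functional.cross_entropy(model(x), y)
        scaler.scale(loss).backward()        # scale loss to avoid fp16 underflow
        scaler.step(opt)
        scaler.update()
    # Roughly halves memory traffic and can double throughput on tensor-core GPUs.
    ```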

  • @haoxus9413
    @haoxus9413 4 years ago +1

    I disagreed with most of your insights. The author needs to learn more about how reinforcement learning works...

    • @LazyDev27
      @LazyDev27 4 years ago

      I agree; he seemed to not completely get it. That often happens when talking about this, though; even I am highly critical of what I say about it.

  • @nicholasperkins4655
    @nicholasperkins4655 4 years ago

    Would training AI be cheaper if they used GPUs instead of TPUs? I know TPUs are better as far as results go, but could the price of GPUs be a factor in using them instead? Also, when are tier lists in games going to be determined by AI? I'd rather know which character an AI chooses most often to beat a game than the opinion of pros and hobbyists. Objective tier lists (which some websites approximate by showing which characters winning teams have most of the time, like in TFT) would have a strong use for all game designers. The only way to balance a game is to know which characters have an advantage.
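
    A minimal sketch of such an objective tier list, assuming per-match records of picks and results (the data format is invented):

    ```python
    from collections import defaultdict

    def tier_list(matches, min_games=20):
        """Rank characters by empirical win rate, skipping tiny samples."""
        games = defaultdict(int)
        wins = defaultdict(int)
        for match in matches:
            for char in match["picks"]:
                games[char] += 1
                if char in match["winning_picks"]:
                    wins[char] += 1
        rates = {c: wins[c] / games[c] for c in games if games[c] >= min_games}
        return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

    matches = [{"picks": ["ryu", "ken"], "winning_picks": ["ryu"]}] * 25
    print(tier_list(matches))  # [('ryu', 1.0), ('ken', 0.0)]
    ```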

  • @bowiebrewster6266
    @bowiebrewster6266 4 years ago +4

    4:40 In its supervised learning stage AlphaStar was trained to a level far inferior to pro play. I'm sure it would be easy for a game company, even before launch, to replicate play of that level, after which AlphaStar can start learning by playing against itself.
    6:00 You will need to retrain it, but your starting point will be an extremely skilled agent from the previous patch. Much less training will have to occur than if you restarted the entire operation.
    6:40 Google's Tensor Processing Units are a novel technology and are developing very quickly, far outpacing Moore's law: en.wikipedia.org/wiki/Tensor_processing_unit. If in 2020 there are some gaming companies that can afford this type of training for their AIs, I don't think it's a stretch to say that in 2030 it will be readily available to most medium-sized companies.

  • @restlessfrager
    @restlessfrager 4 years ago

    Playing the original StarCraft's menu theme, at a time when Activision is turning Blizzard and its IPs into a huge cow to milk.
    The nostalgia burns my soul.

  • @zeckul
    @zeckul 4 years ago +5

    AlphaStar has other limitations that make its achievements not so impressive:
    - An AlphaStar agent only knows how to execute one strategy. Learn to beat it, and you beat that agent every time. Once MaNa figured out how to beat it, if they had gone best of 20 he would have won 10-0. Instead they concluded the event saying "oh, the human player managed to beat it once". Pure PR.
    - The only way "AlphaStar" can beat a good player several times in a row is to throw different agents at him randomly, so he can't predict what he will be playing against. This randomness is not part of the AI, nor is it a sign of intelligence.
    Coupled with the aforementioned limitation of being unable to adapt in any way, it's easy to see how AlphaStar is still far from demonstrating human-like intelligence in StarCraft 2, or even from being an interesting training partner for pro players.
    The point of AI companies like DeepMind is to generate hype so investors throw money at them. Keep that in mind.

  • @cavemantero
    @cavemantero 4 years ago

    The problem with an AI is it can still "cheat" and be exploited... Lowko's video of playing against it showed it make a Barracks selection that would be completely impossible for a human player, while also showing clips of it floundering at times in ways that the opponent could have exploited in the right circumstances.

    • @gabrielandy9272
      @gabrielandy9272 4 years ago

      The cheating is just a matter of fixing it. The first versions of the AI had a very high APM; then they lowered it and made the AI move the camera too (it did not have to move the camera before). It still does these weird clicks, though targets must now be on screen; this could easily be fixed by making the AI unable to select things very close to the border.
      Basically, AlphaStar started out very cheaty, and as the developers updated it, it got less cheaty and more balanced... I would say it's pretty fair now, though.

  • @chidori0117
    @chidori0117 4 years ago

    Games at the moment just provide an interesting challenge for understanding how AI development works. The intent behind this was never to actually develop better AI for games. Games just provide a realistic challenge set while still having relatively strict rules and mountains of data available to feed the learning. The interest that Google and Facebook have in this is data analytics and big-data mining applications once the technology has advanced further. If we develop more advanced AI learning or development methods that facilitate usage outside academic study... using them to make better opponents in games is basically last on the list.

  • @Michaeltje01
    @Michaeltje01 4 years ago +1

    I was expecting a proof that this is an NP-complete problem and that heuristics are merely an approximation, not a solution.
    I guess my expectations were wrong.

  • @oldvlognewtricks
    @oldvlognewtricks 4 years ago

    A tiny point, but all the problems with needing examples of high-level play evaporate once they develop an "AlphaStar Zero". Self-play has been shown to rapidly and efficiently reach superhuman levels of play without any external input.
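
    The AlphaZero-style recipe being referenced is, at its core, a short loop: play the current agent against itself, label every game with its outcome, retrain, repeat. A toy sketch (every helper here is an invented stand-in; a real system would use a game client and a neural network):

    ```python
    import random

    def play_game(agent):
        """Stand-in for a full self-play game; returns moves plus the winner."""
        return {"moves": [random.random() for _ in range(10)],
                "winner": random.choice(["a", "b"])}

    def label_with_outcomes(games):
        """Label every move with the eventual result of its game."""
        return [(m, 1.0 if g["winner"] == "a" else -1.0)
                for g in games for m in g["moves"]]

    def train_on(agent, data):
        """Placeholder update: nudge the 'agent' toward the mean labeled outcome."""
        target = sum(z for _, z in data) / len(data)
        return agent + 0.1 * (target - agent)

    def self_play_training(agent=0.0, n_iterations=50):
        buffer = []
        for _ in range(n_iterations):
            games = [play_game(agent) for _ in range(10)]  # 1. self-play
            buffer.extend(label_with_outcomes(games))      # 2. label moves
            agent = train_on(agent, buffer)                # 3. fit, then repeat
        # No human replays anywhere: the opponent improves exactly as fast as
        # the agent, so the curriculum stays challenging from start to finish.
        return agent

    print(self_play_training())
    ```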

    • @oldvlognewtricks
      @oldvlognewtricks 4 years ago

      It also addresses the patch problems, since you can run the learning from scratch.
      Expensive in terms of compute, I guess, but that can be bundled into Moore’s Law.

    • @totalermist
      @totalermist 4 years ago

      @@oldvlognewtricks True to a point, but Moore's law hasn't really applied for years now. Intel have hit a wall in their production process and basically skipped a whole process iteration (apart from homeopathic quantities of mobile chips), while neither NVIDIA nor AMD has really made big strides in the graphics and GPGPU department in the past 3 years.
      The days of 50% performance increases every other year are long gone, and silicon is reaching its limits - not in the next 5 or so years, but surely by the end of the decade.
      The problem is that the models are still getting bigger, and the amount of processing required grows faster than the hardware capabilities.
      DLSS is a nice example of how even the most sophisticated, hardware-accelerated DL techniques are easily outperformed today by traditional sharpening and de-blocking filters (e.g. Radeon Image Sharpening or NVIDIA Freestyle). That's not to say it will stay that way, though. It's just a reminder that DL possibly isn't the end-all solution for every use case.

    • @oldvlognewtricks
      @oldvlognewtricks 4 years ago

      totalermist All great points. Stuck until Quantum, then? Cloud doesn’t democratise the availability (and hence price) of compute sufficiently?
      A graph isn’t everything, but: www.hamiltonproject.org/charts/one_dollars_worth_of_computer_power_1980_2010
      Looks linear to me, regardless of slowing consumer innovation. Agreed that the rapid acceleration caused by manufacturing could not continue forever.
      Or is it a problem with latency versus parallelisation?
      My only thought with your point about models is amplification - all rendering models are beaten by machine learning whose goal it is to achieve the same result cheaply and rapidly. Optimising the technique you use is not the same as hardware availability, agreed.
      What constitutes "outperformed"? The AI techniques simulating cloth and fluid and building astronomical models are outperforming all other techniques in terms of speed for a comparable model - presumably through amplification, sparse data and suchlike - so I'm not sure how that stacks up. Of course traditional techniques would be faster, since they're thoroughly optimised by now...
      I’m not sure how ‘optimising compute overhead of a particular model’ relates to ‘hardware cost, availability and performance’ - they seem pretty orthogonal to me. If your point is that traditional methods are better-optimised right now, then absolutely. AI techniques have hugely further to go in terms of optimisation.
      I guess that brings up the meta-overhead - you can run an AI cloth simulation incredibly cheaply in real time, but there’s a commensurate cost for training it to do so.
      Unsure I have a point, other than that there are many interesting factors.

  • @jahrazzjahrazz8858
    @jahrazzjahrazz8858 4 years ago

    Nice seeing the video 46 seconds after upload.

  • @NinjaOnANinja
    @NinjaOnANinja 2 years ago

    You should hear what I have to say about AI, and how the fear of AI is already being realized by other means as well as by the AI itself.

  • @LazyDev27
    @LazyDev27 4 years ago +1

    Don't forget that this was a controlled experiment. The quality of AIs and their behaviour trees clearly needs to be specialised; that's what they're meant to do. But you act as if that's a negative..? Like, yeah, the self-driving AI probably can't differentiate human faces. That doesn't mean it's a bad AI. Much of what you said didn't really pertain to artificial intelligence at all - what's cool about it, what you can do with it, the far-ranging effects... You mostly ripped on how implementing a system like AlphaStar into games wouldn't be cost-effective, and would be ineffective in general at the moment. I guess that's the title of the video, but like... why? For example, the paradigm this AI launched from could be applied elsewhere. Say, the xenobots, which are cells taken from a type of frog that are cut and arranged together, then programmed by AI to carry out functions autonomously. No wires, no copper - just grown cells that heal and continue to grow after being programmed. This is some sci-fi shit, and that's just one aspect of this field. I feel you are out of your depth here.

  • @crashbandicoot2206
    @crashbandicoot2206 4 years ago

    Thanks !!

  • @klausgartenstiel4586
    @klausgartenstiel4586 4 years ago

    5:00 AlphaZero learns from zero, though.

    • @Guztav1337
      @Guztav1337 4 years ago

      There is no AlphaZero for StarCraft.

  • @chardonnay5767
    @chardonnay5767 4 years ago +4

    Since you have a channel called AI and Games, I expected a bit more in-depth analysis of the current state of the field. You picked AlphaStar as a punching bag, perhaps because of the Skynet memes, but that's not really where the intelligent discourse is happening. Though I admit this might be a decent introduction to the problems of neural-net AIs for someone who doesn't know anything about the subject.
    No one credible is claiming that AlphaStar will become some kind of generalised AI. It's a bit more than a proof of concept by now, but it's still far from an actual product. You could also say that Google paid $500M to publish a couple of scientific papers. Development happens like this in all fields and industries: you first have more theoretical, inapplicable work, which over time is developed into various real-world applications.
    DeepMind has flirted with generalising their AI project with the Atari games work, but it's still far from ready. The current AlphaStar can be considered like the early automobiles that would crash at the drop of a hat. The way you discussed the AI felt like it was expected to perform like a modern rally car instead.
    Development in the field of deep learning currently happens at a miraculous rate that just keeps accelerating. Nowadays you could say the whole field is reborn every 6 to 12 months. Sure, the day-to-day realities of using neural-net AIs from a game developer's point of view are as you pointed out, but for how long will that status quo persist? Would you bet on no deep learning AI usable for NPC behaviour development appearing in the next 6 months? How about a year, or two years? If you think back to just two years ago, we've already come from something like the stone age to the bronze age. All it takes is another breakthrough and we'll see yet more results that were once deemed impossible.
    Btw, everyone should seriously be scared of the recent rate of development in AI and its implications for the next industrial revolution, or at least not underestimate it at all. For example, the accuracy of psychological profiling based on social media feeds is alarming on its own.

    • @evilseedsgrownaturally1588
      @evilseedsgrownaturally1588 4 years ago

      Sami Helen Great post; just a small caveat to take into consideration: I'm not sure people need another thing in their lives to be scared of. The human answer to existential angst, thus far, seems to amount to pharmacology, irrational legislation, pop philosophy, and therapy indistinguishable from placebo effects, none of which does much to address the actual underlying causes. I get that you want to motivate people into action, but fearmongering seems unconstructive.

  • @Frostile
    @Frostile 4 years ago

    I would honestly love to hear about AI in racing games, as a personal fan of the genre.

    • @AIandGames
      @AIandGames  4 years ago

      Forza's going to happen eventually when I find time - and patrons vote for it.

    • @robosergTV
      @robosergTV 4 years ago

      It's quite easy and unremarkable compared to A* or the OpenAI Dota 2 bot. I trained a bot to play a racing game better than me. Nothing special.

  • @Ryuuken24
    @Ryuuken24 3 years ago

    Throwing AI and random calculations at problems solves nothing. They want to piece out specific AI problem-solving algorithms and sell them for profit.

  • @ghost7685
    @ghost7685 4 years ago

    The way he describes how AlphaStar learned to play StarCraft makes AlphaStar sound like a failure to me. If an AI that can calculate more maths in 1 hour than I could in my entire lifetime doesn't understand why a unit no longer performs the way it used to after a patch, then how can it be called AI? I use 'AI' just to refer to the PC itself; I don't actually mean the AI in games. No game has an actual AI in it.

  • @lrmcatspaw1
    @lrmcatspaw1 4 years ago

    AlphaStar (or any real AI) can't solve the problem of gaming AI for a simple reason:
    it has to be adapted to human nature. Game AI is created in a very specific way. Why does the AI of, say, Dark Souls not have instant response times? Because that would make the game unplayable.
    Why are AIs that adapt to your skill so disliked? (Think Max Payne 1/2.) Because as the difficulty rises and the player learns, the AI keeps adjusting the difficulty up or down to match the player - and unless the two happen to progress at exactly the same pace (impossible, since every human differs not only in IQ and knowledge but in which topics they learn fastest, or even what they enjoy), wins become unrewarding (because of the noticeable drop in difficulty) and deaths feel cheap (because the game never asked this much of you before). See the sketch below.
    A few games used this mechanic and it was generally considered a very bad (and lazy) practice, hence why so few games use it.
    One of the best examples of AI and difficulty done right is Dark Souls, because it expects the player to learn or keep losing, as simple as that.
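    The sketch mentioned above: a minimal rubber-band difficulty controller of the kind being criticised here (the parameters are hypothetical, not Max Payne's actual system).

```python
# Rubber-band dynamic difficulty: nudge enemy stats up when the player is
# winning and down when they die. Illustrative parameters only.
class RubberBandDifficulty:
    def __init__(self, level=1.0, step=0.05, lo=0.5, hi=2.0):
        self.level = level        # multiplier on enemy damage/health
        self.step = step          # how fast the game reacts to the player
        self.lo, self.hi = lo, hi

    def on_player_kill(self):
        self.level = min(self.hi, self.level + self.step)

    def on_player_death(self):
        self.level = max(self.lo, self.level - self.step * 3)  # drops faster

controller = RubberBandDifficulty()
for _ in range(10):
    controller.on_player_kill()
controller.on_player_death()
print(round(controller.level, 2))  # difficulty sags right after a death
```

    Because the multiplier sags right after a death and climbs while you are winning, every victory quietly devalues the next one - which is exactly the complaint.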

  • @Billinous
    @Billinous 4 years ago

    I fell off EA Sports games nearly a decade ago, as each game against the computer feels exactly the same. If EA used some of that Ultimate Team money to implement a dynamic AI such as this one, I would get back in without question. I'm just imagining the replay value, knowing that each team would play similarly to the real-life games we see on TV.

  • @dartstone238
    @dartstone238 4 years ago

    It may be possible, though, to use some variant of GANs (Generative Adversarial Networks) - see the sketch just below. But for now there are no out-of-the-box solutions, and you weren't wrong in stressing how much cost-effectiveness matters. Even if EA or Activision had millions to put into this kind of R&D, they won't do it because they have no guarantee of a return on investment; they are more focused on designing new manipulative microtransaction systems. But you are also right in saying that ML is mostly used, for now, in graphics and sound. NVIDIA showed promising results back in 2016-2017 with lip syncing and on-the-fly animation, and with audio synthesis. I will be the happiest person when I can hear an NPC call me by my character's name. I have also witnessed astonishing tech in research labs used to apply and deform textures according to the deformation of the 3D mesh.
    Damn, that's a long post. All of that to say that video games and computer graphics have good days ahead of them.
    And again, nice vid :)

    • @michaelnurse9089
      @michaelnurse9089 4 years ago

      Microtransaction optimization by DL would be easy to implement - I am certain they already do it.

    • @dartstone238
      @dartstone238 4 years ago

      @@michaelnurse9089 Didn't EA (or Activision) publish a patent involving ML and spending habits or something? I'm pretty sure they are already working on it, if it's not already in use :/

  • @skyacaniadev2229
    @skyacaniadev2229 4 years ago

    You don't get it. If DL AI succeeds (at the next level), there is no (human) designer...

  • @keenheat3335
    @keenheat3335 4 years ago

    Sounds like in the future there might be dedicated TPU-like chips, similar to how GPUs spun off from CPUs.

    • @robertwiesner6825
      @robertwiesner6825 4 years ago

      Well, modern GPUs (NVIDIA's RTX 20-series and newer) have tensor cores for exactly this purpose.
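      For what it's worth, the usual way code engages those tensor units is by running the heavy matrix maths in reduced precision. A minimal PyTorch sketch (generic; assumes an NVIDIA GPU with tensor cores, and the model is just a stand-in, not a game AI):

```python
import torch

# Mixed-precision training step: matmuls inside autocast run in fp16,
# which is what routes them onto the tensor cores.
model = torch.nn.Linear(1024, 1024).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()       # guards fp16 against underflow

x = torch.randn(512, 1024, device="cuda")
with torch.autocast("cuda", dtype=torch.float16):
    loss = model(x).square().mean()
scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
```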

  • @antdgar
    @antdgar 4 years ago

    10:08 Obama looking pretty good!

  • @Marinealver
    @Marinealver 4 years ago

    Lol, I am sort of satisfied that if a human discovers a robot, they will identify it as such and then proceed to beat it up. A common human-robot interaction, surprisingly.

  • @Quickshot0
    @Quickshot0 4 years ago

    So with some luck, by 2030 this technology will be a lot cheaper to implement* and maybe you could use it for some game AIs... though by then technology will probably have moved on and perhaps far more useful approaches will exist.
    * Due to a more efficient process and substantially improved hardware capability.

  • @merlinthelemurian3197
    @merlinthelemurian3197 4 years ago

    Went to subscribe, was already subscribed, so I rang the bell

  • @tonechild5929
    @tonechild5929 4 years ago

    On your comments about retraining: that is no longer true. Instead of completely retraining, you can conduct AI surgery. ruclips.net/video/62Q1NL4k8cI/видео.html Your arguments rely heavily on AI not being able to adapt, but AI surgery is now possible. Also, ML is not that expensive - I've played around with it as a hobby with very little expense.
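    To illustrate the surgery idea (a toy sketch of the general technique, not OpenAI's or anyone's actual procedure): when a patch changes the observation space, rebuild the network at the new size but transplant every weight that still fits, so training resumes from old knowledge instead of from scratch.

```python
import torch
import torch.nn as nn

# Hypothetical patch: the observation vector grows from 10 to 12 features.
old_net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 4))
new_net = nn.Sequential(nn.Linear(12, 64), nn.ReLU(), nn.Linear(64, 4))

with torch.no_grad():
    for old_p, new_p in zip(old_net.state_dict().values(),
                            new_net.state_dict().values()):
        # copy the overlapping slice of each tensor; the new rows/columns
        # keep their fresh random init and are learned during fine-tuning
        slices = tuple(slice(0, min(o, n))
                       for o, n in zip(old_p.shape, new_p.shape))
        new_p[slices] = old_p[slices]
```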

  • @rasheedqe
    @rasheedqe 3 years ago

    Wow

  • @dimitrijmaslov1209
    @dimitrijmaslov1209 3 years ago

    Hm.

  • @norik1616
    @norik1616 4 years ago

    The latest versions of AlphaStar don't need any "human" footage. The DeepMind team also created a method to fine-tune the AI model for a new patch instead of completely retraining it.
    The cost and ML skill required to train such models are prohibitive - YET. Now that we know it's possible, there will be (as always in ML) more papers, shrinking models by 100x, and total training under €30,000 becomes plausible.
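    Something like the following (a generic fine-tuning sketch under assumed names - the checkpoint file and layer split are hypothetical, and this is not DeepMind's published method): resume from the pre-patch weights, freeze the early layers, and retrain only the rest against the patched game.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                       nn.Linear(256, 256), nn.ReLU(),
                       nn.Linear(256, 16))
# hypothetical checkpoint file from before the patch
policy.load_state_dict(torch.load("pre_patch_checkpoint.pt"))

for layer in list(policy.children())[:2]:   # freeze the first block
    for p in layer.parameters():
        p.requires_grad = False

opt = torch.optim.Adam((p for p in policy.parameters() if p.requires_grad),
                       lr=1e-4)
# ...then run a (much shorter) training loop against the patched game.
```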

    • @jamesmillerjo
      @jamesmillerjo 3 years ago +1

      No, the final version still needs SL (supervised learning).

  • @ingemar_von_zweigbergk
    @ingemar_von_zweigbergk 4 years ago

    Or just Skynet, and it will do it for free, lol.

  • @IRMentat
    @IRMentat 4 years ago

    Good gameplay AI and AI that wins games are worlds apart.
    Games like Half-Life and F.E.A.R. nailed FPS AI 20 years ago, but the industry frequently fails to learn from past glories or failures.

  • @Humble_Merchant
    @Humble_Merchant 4 years ago

    New Terminator sucks.

  • @Krystalmyth
    @Krystalmyth 4 years ago +2

    Your first point is moot to me. You either want something to learn, which requires you to teach it, or you go back to the dark ages and program it yourself.

    • @DaveChurchill
      @DaveChurchill 4 years ago

      This simply isn't true. The entire field of reinforcement learning involves learning through self-play, and experience acting in an environment. You give the agent the objective you want to accomplish, but you do not explicitly teach it which actions to perform. It will figure out through seeing what works and what doesn't which actions lead to success.
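      A concrete miniature of what's described above: tabular Q-learning on a toy corridor (a generic illustration, not tied to any particular game). The agent is only told that reaching the goal state gives reward 1; it is never told which actions to take, yet it discovers that "move right" succeeds.

```python
import random

N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)                    # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit what worked, sometimes explore
        if random.random() < 0.3:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        target = reward + 0.9 * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += 0.5 * (target - Q[(s, a)])   # temporal-difference update
        s = s2

print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])
# -> [1, 1, 1, 1, 1]: the learned policy is "always move right"
```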

  • @MaZe741
    @MaZe741 4 years ago

    I think you forgot an aspect of AI:
    a perfect AI will immediately abuse game design flaws and expose weak and overpowered concepts - lowering the game's enjoyability

    • @raktor1996
      @raktor1996 4 years ago +1

      Or it could provide early feedback to developers so they can fix those aspects.

  • @LyingTube
    @LyingTube 4 years ago

    The last Terminator movie didn't do well at the box office because people got pissy: they totally overlooked that it largely rehashes the original movie with a fresh modern take, and instead treated it as a heretical aberration, the same way they did with Ghostbusters. It's less that Terminator was bad and more that "female-led movies" are being assailed by whiny fanboys.

    • @AIandGames
      @AIandGames  4 years ago

      I'll be honest I didn't go see it because I've been burned too many times. I'm just not interested anymore. 😕

  • @bernhartschmieder9401
    @bernhartschmieder9401 4 years ago

    Your video is bad, but not bad enough for me to dislike it, have a like.

  • @php128
    @php128 4 years ago

    Next-gen hardware? Intel? That's a good joke, sir :)

    • @Guztav1337
      @Guztav1337 4 years ago

      Well, they are trying - many are trying. Intel might not be in a good spot right now, but you have to see it from the opposite perspective: this means they have to be a lot more competitive and innovative.

  • @Krystalmyth
    @Krystalmyth 4 years ago

    Your first point is moot to me. You either want something to learn, which requires you to teach it, or you go back to the dark ages and program it yourself. If the industry isn't up to the task, it's not a failure of a system capable of learning - we simply can't be bothered. It's like blaming the young honour student for not meeting the needs and wishes of the parent. It's not as if AI is getting any better under other technologies. This is the future: either we grasp it, or we pretend we never cared.

    • @damakuno
      @damakuno 4 years ago +5

      I don't think the video's thesis is an argument against deep learning or other AI/ML technologies, though - just that in the design process it is difficult to justify having to design the training environment, train/re-train, and test what is effectively a black-box system (it is difficult to "see" what a neural network is "thinking"), and it is therefore difficult to manage from a game-design standpoint. At the end of the video, examples are given of various other ways AI/ML can be applied in the games industry, suggesting that AI is indeed the future.

    • @AIandGames
      @AIandGames  4 years ago +4

      ^ This. 😀

    • @DawnTyrantEo
      @DawnTyrantEo 4 years ago +3

      The thing about gameplay is that it's emergent - just as the person who designed a racing car is often a terrible choice to race it, the developers of a game are often *not* at the level of skill required to challenge the majority of their playerbase.
      Developers often don't have the time to play their own game and develop a fundamental understanding of the deepest levels of play - their job is aesthetics, emotion, theorising, observation and adjustment, not gameplay expertise. Saying that the gameplay developers are suitable for teaching an AI gameplay is like saying Julius Caesar is suitable for teaching a university student history. Sure, they'll be able to offer useful insight, but they just don't have the right *kind* of knowledge to teach what needs to be taught.
      AI can and does improve, but it improves the way art does, not the way technology does. Sometimes there's a leap that creates an entirely new technique - such as how colour, sound or sweeping camera shots changed film-making - but most of the time it's individuals learning from past art to create new and novel interpretations. Take a look at the videos on Alien: Isolation or Halo Wars - by understanding their subject and purpose, and creating something designed to evoke a certain emotional and gameplay experience, they vastly improved on previous uses of artificial intelligence.
      Stagnation in the AI of any particular game isn't because designers aren't embracing new technologies; it's because designers aren't embracing old lessons. AI designers don't need new technology to be better - they just need the same combination of love and deeper understanding you see in experts of literature, cinema, or the various other crafts that make up a video game.
      That's not to say deep learning is useless, but in most cases it's not what you want for video game AI. Deep learning is unable to create an AI personality, or AI of variable difficulty, or AI that can't be played, or AI that surpasses the knowledge of your relatively tiny and inexperienced pre-release playtesting team.
      What deep learning is able to do is recognise and mimic optimal strategies. Nothing more, nothing less. That is very useful for an AI that can play fair in the top 0.05% of the player rankings, or when the game's optimal strategies are both difficult to code and occur at a low level of skill. But for creating an AI that you can build efficiently, update easily, and customise to produce a certain emotional experience? No, deep learning can't do that.
      What deep learning is able to do is recognise and mimic optimal strategies. Nothing more, nothing less. This is very useful for an AI that can play fair at the top 0.05% of the player rankings, or if the game's optimal strategies are both difficult to code and occurring at a low level of skill. But for creating an AI that you can create efficiently, update easily, and customise to produce a certain emotional experience? No, deep learning can't do that.