The fact that the concept of bucketing was introduced while showing a Sim playing Tetris really made me smile. I wonder how many of these references I have missed in earlier videos ^^
I haven't been inspired to take down notes like this in a while.
Years back I was looking at different books on AI in games and I assumed they'd be mostly about this subject. Several of them were 100% about pathfinding and 0% decision making. I think part of the problem is that Utility AI is far more subjective than pathfinding and depends more on the kind of game you're making. The other part of the problem is that essential stuff like A* wasn't well known at the time so you could easily fill up a book just with pathfinding algorithms. But for utility AI (or as I called it "decision making algorithms") to not appear in multiple books on "AI" at all was very disappointing at the time.
I'm actually doing something similar: Hierarchical Task and Goal Oriented Action Planning. It works off of a hierarchy tree, to simulate how a pod of dolphins works. The matriarch/patriarch has a few goals of note: 1) achieve the overall objective (usually to find a key item); 2) adapt the gameplay to what the player is doing; 3) evaluate the state of other hierarchies. They then formulate a basic plan and tell the others within their "pod". Following that, each entity develops its own plan to carry out that plan. They do this with two goals: pose as much of a threat to the enemies as possible while minimizing the threat those enemies pose to their boss. That last one can be overridden in specific situations; if they are unable to find an acceptable plan, they will temporarily override it with protecting themselves until either the enemies are gone or their "boss" is removed somehow. If it's the latter, the command structure is reevaluated.
You got any good examples or tutorials on HTN? I can't seem to find anything but concepts, and I learn best by seeing something in action: how the code is structured.
You could do a part 2 on utility AI being intentionally gimped by developers in order to make games more "fun" too.
As an example, if I'm understanding this topic correctly (I'm not very smart): I forget which game it was, possibly Uncharted, but its AI would move and jump away from grenades thrown directly beside them, yet only clear the blast radius by just enough for the grenade to still hit them, giving the player the impression they did something right.
Perfect execution of utility AI could make it exponentially more difficult for players to play against the AI. Now that I think about it, if developers thought of improving the utility AI per difficulty increase, rather than just adding or subtracting damage numbers, that would probably make action/shooting games a lot more difficult without creating bullet sponges.
I think it was actually Spec Ops: The Line that did the grenade thing, but I wouldn't be surprised if Uncharted did it too.
I 100% agree. Not enough games make foes smarter on high difficulty; they just up damage and health and call it a day.
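The "smarter per difficulty" idea above can be sketched in a few lines: scale how *well* the AI decides instead of how hard it hits. Everything here is a hypothetical illustration (the action names, the noise model, the difficulty range), not taken from any shipped game:

```python
import random

def choose_action(utilities, difficulty, rng=random):
    """Pick an action from {name: utility score}.

    difficulty is 0.0 (sloppy decisions) to 1.0 (near-optimal).
    Lower difficulty injects more random noise into the scores,
    so the AI frequently picks a suboptimal action.
    """
    noise = 1.0 - difficulty
    noisy = {action: u + rng.uniform(0.0, noise)
             for action, u in utilities.items()}
    return max(noisy, key=noisy.get)

# At difficulty 1.0 the noise is zero, so the best-scoring action always wins;
# at 0.0, a 0.4-utility action can routinely beat a 0.9-utility one.
```

The appeal of this approach is that the AI on easy mode still does *plausible* things, just less ruthlessly, rather than being a bullet sponge or a damage dealer.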
I'd love to watch a video comparing the pros and cons or different uses of the various approaches to video game AI you've covered. State Machines, Behavior Trees, GOAP, and Utility seem to have some overlap but also seem to excel in different areas. I've personally worked with the former two, but I sometimes struggle a bit to see how the four differ or why one might use one over the other.
How would the Dragon Age example benefit or suffer by using GOAP instead of Utility? How might the performance impact change on Total War if it used Behavior Trees instead of Utility? And of course there are many other example games or other questions one could ask. Maybe a Venn diagram comparing strengths / utilizations, or comparisons of games that accomplish similar objectives with different AI approaches, would help clear things up.
A comparison video might also help illuminate why state machines and behavior trees are (to my knowledge) the most commonly used in games, instead of GOAP and utility. In my own experience, the general advice tends toward state machines because they're quick and simple, with a swap to behavior trees if the AI needs to be more advanced. So what space does that leave for GOAP and Utility, and what should one consider that might make one want to use one of them instead?
The incredibly reductive version of the answer is: State Machines are simple to code and understand, but are usually less flexible than other options. Behavior Trees work better when you are dealing with decisions and actions in the short term (in an FPS, that could be: I'm being shot, do I charge forward for a melee attack, or dive for cover, RIGHT NOW; in a strategy game, it could be: should I build a new group of units or start my new base). GOAP is useful when the AI needs to care about long(-ish) term goals: am I trying to kill the player or just slow them down, do I need a bigger army or more economy? But creating a GOAP tends to be more complicated, because you usually need a bunch of lower-level pieces to check all the information used to make those decisions. Utility systems are particularly good at evaluating dynamic information, e.g. there are a lot of enemies, a couple of cover positions, you have several weapons each with a separate ammo amount, and a health pool. A Utility system can crunch through all of those values and, based on the utility functions the designers have created, tell you which enemy/cover/weapon combo makes the most sense right now. They also tend to be the most flexible overall, but can be more math-heavy and may need optimization to not lag the game, depending on how much data you're crunching.
Also, State Machines, Behavior Trees, and GOAPs are generally about how the character's actions are organized relative to each other, whereas Utility systems are more about which action is most important given the current game state. They can all mix around a bit, of course, but that's mostly where they fall.
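A minimal sketch of the utility-scoring idea described above: each action gets a function mapping game state to a score, and the highest score wins. All the state fields, action names, and response curves here are invented for illustration, not from any engine:

```python
# Hypothetical game state; fields are made up for the example.
state = {"health": 0.35, "ammo": 0.8, "enemies_near": 3}

def score_attack(s):
    # More appealing with plenty of ammo and few nearby enemies.
    return s["ammo"] * (1.0 / (1 + s["enemies_near"]))

def score_take_cover(s):
    # Linear response: more appealing as health drops.
    return 1.0 - s["health"]

def score_heal(s):
    # Quadratic response: urgency ramps up sharply only at low health.
    return (1.0 - s["health"]) ** 2

actions = {
    "attack": score_attack,
    "take_cover": score_take_cover,
    "heal": score_heal,
}

def choose_action(s):
    # Evaluate every action's utility for the current state; pick the best.
    return max(actions, key=lambda name: actions[name](s))
```

At 35% health the linear cover curve (0.65) beats both the quadratic heal curve (0.4225) and the outnumbered attack (0.2), so the agent takes cover. Tuning those curves per action is where the designer's intent lives, which is also why these systems get math-heavy.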
Credentials: I'm a programmer at Gearbox Software, and spent a good portion of my time at college studying game-related AI systems.
@@sethrenshaw8792 Thank you, this is actually a very clear and helpful answer. I definitely had their uses and strengths tangled in my head a bit (especially Behavior Trees, GOAP, and Utility), but this actually helps me separate them. I appreciate it!
I'll probably try to do some additional research into the details, now that this information gives me a launching point to start on. The points you mentioned on GOAP and Utility make me want to explore them more, in case they could fill a role for me in the future.
@@NeverduskX The book I read to understand the technical side of Utility systems is "Behavioral Mathematics for Game AI" by Dave Mark. You can also look up some GDC and other game-dev talks Dave Mark has given to see some of the systems he's helped build.
@@sethrenshaw8792 I'll search them up and give them a look. Thank you for the advice!
I've been subscribed for 4 years now, and as a gamer who has no understanding of coding and is confused by maths: I love this channel. I understand my favorite games better, and knowing what kind of AI system is involved in a game, I have learned to take advantage of some of them to make my gaming sessions a little more interesting. And a few years ago I actually used a video on this channel, along with another channel's video on why adding AI to your game helps increase its longevity, to get the developers of my favorite free-to-play game to actually ADD artificial intelligence to their game. So thank you for making something so complicated actually entertaining to watch and easy to understand. It's oddly helped me out a lot.
Looking at using Utility AI for a tycoon sim type game. Finding all these AI videos super helpful even if I am just using off the shelf libraries.
So glad I found this video, what a huge help this has been. It has been surprisingly difficult to find content like this.
I don't know if any game does this, but you can use the utility as a weight when randomly choosing an action. So e.g. if attacking has utility 0.8 and running away has utility 0.4, you would attack twice as often as you would run away.
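The weighted-random idea above is a few lines with the standard library; the action names and utilities are just the example's numbers:

```python
import random

def weighted_choice(utilities, rng=random):
    """Pick an action, treating each action's utility as a sampling weight.

    Higher utility -> proportionally more likely, but lower-utility
    actions still fire sometimes, which keeps behaviour less predictable.
    """
    actions = list(utilities)
    weights = [utilities[a] for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

utilities = {"attack": 0.8, "run_away": 0.4}
# Over many picks, "attack" comes up roughly twice as often as "run_away".
```

This is sometimes worth it precisely because a pure argmax utility AI is deterministic: players learn to predict it, whereas weighted sampling keeps the AI mostly sensible but occasionally surprising.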
New upload? Stopped current video for this. Love these videos man.
Gotta love your videos and content bro, keep making videos like these
Can you make a video about modern machine learning in games? For instance: Hello Neighbor, Dota 2, and others?
You lost me at quadratics. You won me back with the Sims
Could you please make a video about Dragon Age: Origins' companion AI that is customizable by the player?
I don’t know why, but I was somehow unsubscribed from this channel. I resubscribed immediately
This was a meaty video - thanks so much!
Talking about Dragon Age: Inquisition, is there any way to edit AI behavior files to study and better it?
There's so much to improve, and I so would love to be able to.
Do you program or direct games?
Love these!
Thanks for the vid
I wanna thank Bioware's AI for carrying my ass most of the time when I had a shitty keyboard and controller.
I see the word NPC, I remember the movie free guy!
Is it worth watching? Ryan Reynolds has a very mixed track record. 😅
@@AIandGames Yes it is!
That's a great movie!
Viability AI
Love your stuff! I've taken it to heart and it's given me a different direction for my PhD.
Serious question for you, Dr. Thompson:
Given these techniques and others, why do so many games have such terrible AI in practice?
(Indeed, an episode of AI 101 I'd love to see would be "Why does AI fail?")
I'm not intending to call out any specific game in particular, but there are plenty of examples (okay, here's one because it's funny ruclips.net/video/p24cqO7rP8o/видео.html ).
I'm really curious what goes wrong because the techniques seem sound.
It seems like FSM, Behaviour Trees, HTN, GOAP, or whatever other solution should be able to work out what to do and we should see great AI in practice. While it is amazing that video games and AI exist at all (much like it's amazing that cars and airplanes even exist), in that context, it seems surprising that NPCs walk into walls, or don't react to their surroundings, or display other "broken" behaviours that seem to be foundational to the very nature of making a game AI.
I'm not necessarily talking about nuanced balancing of very complex systems, like Total War's diplomacy negotiations. There is a different, built-in kind of complexity when it comes to balancing vast multivariate situations where utility can be difficult to define. I'm more talking about 1st or 3rd person games where AI controls NPC avatars in a 2d/3d world. Yes, they are also complex multivariate situations, but the options for an AI are relatively limited, e.g. idling, barking, moving, attacking, specific variants (e.g. 'taking cover' is a variant of moving), and any combinations thereof (e.g. attacking while moving). It seems like these limited actions should work consistently by now, and should work in ways that would reliably convey the "intelligence" of the agents.
These kinds of scope-limited actions have been part of games for at least 20 years.
They have been successfully implemented in the past.
Why are they not consistently implemented successfully?
That is, I'm especially interested in the practical description of how a new game can end up with AI that fails at behaviours that were already successfully "solved" in previous games, sometimes in games from a decade ago or earlier. Put another way: why do the AI implementations in Alien: Isolation and F.E.A.R. actually work properly, while other games ship with broken AI? How come a game as old as Perfect Dark for the N64 had functional AI, even if it felt more like Artificial Stupidity than Artificial Intelligence?
"Not enough dev time", "Not enough QA", "Not enough processing power", and "There will always be bugs" all seem inadequate here. These are problems that have been solved before, so how could these problems fail to be solved now?
What makes AI break?
And how come it doesn't dynamically recover?
Thanks for the awesome content!!
Also, I (for one) don't find the math "dry"!
I'd watch a more math/detail oriented series, though you know the demographics of your channel more than I would :)
Sorry for not being Tommy, but given that neither he nor anyone else seems to have replied in 3 months, maybe I'm your next best bet. I don't think I can give as comprehensive and general an answer, but I can approach some things, I suppose.
I haven't played the game, so I have no clue how the character is expected to behave. If it just happens sometimes, it's probably just a bug where part of the AI is no longer operational. If it's reproducible, it might just be stunlock, which is an issue of character controller design. Just as something translates your inputs into the character's animation and location in the simulation, an AI must have the same. And just as in some games you can't get an input in while you're hit, because your character is playing an uninterruptible hit animation, and by the time it's done you're hit again and stuck in that animation loop, the same can happen to an NPC.
Then there's artificial stupidity. You want the game to be playable, so you make the AI deliberately blind to what's happening behind it, to allow you to sneak up on it. Then you give it a notice delay: even when it has a sensor on you, such as an audio cue or line of sight, you still want it to not react immediately, as that often creates the impression you were unfairly spotted just because you were exposed for a couple of milliseconds; the player wants to be able to come up with a plan and execute it even when execution isn't 100% perfect. But there could be a forgotten condition that turns out to reset such a timeout, so there can be actions which cause the AI to be stuck in a nonreactive state. Any adequately capable, even fairly trivial AI is generally unwinnable, as it will just hunt you down wherever you are and headshot you from a mile away, so artificial stupidity is a hard necessity and one of the largest efforts in AI design.
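The notice-delay mechanism described above, including the spot where a forgotten reset condition can wedge the AI into a nonreactive state, can be sketched very simply. Names and timings are invented for illustration:

```python
NOTICE_DELAY = 0.5  # seconds the stimulus must persist before the AI reacts

class Perception:
    """Minimal reaction-delay sensor: 'sees' the player only after a grace period."""

    def __init__(self):
        self.timer = 0.0

    def update(self, dt, player_visible):
        if player_visible:
            self.timer += dt
        else:
            # Resetting here is the design intent. But if some *other* code
            # path (a cutscene script, a hit reaction) also resets this timer,
            # the AI can be kept permanently below the threshold -- the
            # "stuck in a nonreactive state" bug described above.
            self.timer = 0.0
        return self.timer >= NOTICE_DELAY  # True once the AI should react
```

Called once per frame with the frame's delta time, this gives the player the fair "I was spotted because I lingered" feel, at the cost of one more timer that every interacting system must treat carefully.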
The second example, where it's stuck walking into a stone, might be a map bug, because the physical map the character traverses fundamentally isn't the same as the one the AI uses to plan movement. The physical map is basically the visual one with the detail scaled down; it governs collision and dynamics. The AI map is a connectivity map of traversable patches, often called a navmesh. If the AI map says "you can go this way" but the physical map won't allow it, you get an AI stuck on the scenery. The AI isn't really equipped to react to the fact that what it's trying to do isn't happening, because this normally isn't a condition that needs handling.

Keep in mind, a player character can get stuck on scenery the same way, by walking where it LOOKS like they can go and then snagging on an obstacle they didn't perceive as important. But we're normally equipped to notice that what we expect isn't happening, and we try another route or movement method within a fraction of a second, often without realising it. Imagine you did give the AI the ability to detect that it's stuck and try something else: it would be a substantial development effort, and since the system would rarely activate, it wouldn't be tested comprehensively, so it would introduce bugs that interfere with other areas of the game, like an AI abandoning a plan that was actually correct.

Apropos player characters getting stuck on scenery, I can recommend the "Killing the Walk Monster" talk by Casey Muratori. It's not quite on your topic, as there's zero AI talk, but I expect you might like it, and it might prove useful eventually, since map processing is very similar, and the math is DELIGHTFUL and easy to understand the way he explains it.
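For what it's worth, the "detect that it's stuck" idea can be sketched very simply: compare the distance the agent actually covered over a window of frames against the progress its path implies. This is my own hypothetical illustration, not something any particular engine ships, and as argued above the hard part isn't this check but everything that has to happen after it fires:

```python
import math

class StuckDetector:
    """Flags an agent whose navmesh path says 'go' but whose physics
    position isn't moving (hypothetical sketch)."""

    def __init__(self, window: int = 30, min_progress: float = 0.05):
        self.window = window              # frames to look back over
        self.min_progress = min_progress  # metres the agent must cover
        self.history = []

    def update(self, pos: tuple) -> bool:
        self.history.append(pos)
        if len(self.history) <= self.window:
            return False  # not enough history yet
        self.history.pop(0)
        # Stuck: over the whole window, displacement stayed near zero.
        return math.dist(self.history[0], pos) < self.min_progress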
I might be running out of space, so I'll continue in another comment.
Continued.
I do think "not enough dev time" is actually usually pertinent. You can have a 3- or 5-year development timespan, but the timeline is approved in conjunction with the scope, so the necessary features are chosen such that they fill out all the development time for a given team capacity. The publisher wouldn't just approve paying 20% more for a title than it "should" cost to develop. Timelines get decided roughly in advance, with agreed milestones every few months, at which the production company delivers an intermediate state of the product with the agreed work done and gets paid, so they can keep the lights on. Unfortunately, AI usually isn't very visible to the producer working for the publisher outside the studio, so it ends up deprioritised under constant time pressure. So on the one hand, checklist-style or flashy progress takes precedence over necessary steps, and on the other, no matter the agreed timeline, it's always a little too tight to make, as the scope of development grows accordingly. This is also why project-end crunch is ALWAYS a thing: studios are forced to underdeliver earlier, where the milestones seem to contain the requested feature but not at a shippable quality.
When you speak of N64 games where the AI is functional, for one, those stand against many games of the era that were complete garbage, famously Daikatana. Teams ran with a lot less oversight because production timelines were shorter and budgets were small, so publishers had less need to control the risk on every project. That was both a good and a bad thing: while some titles benefitted from the lack of hostile management, many arguably failed from not enough oversight, like that same Daikatana. The AI systems also had to be designed deliberately simple because that's all the hardware was good for; so was the level geometry, and the levels of course had to be designed around the actual AI capabilities. Simple means predictable.
In that same GoldenEye you mention, the visibility between sectors is hand-designed, and so is each and every path segment an NPC can take, as well as every position-dependent action, such as taking cover. Today, with exploding scope, such an approach would never get approved, as the amount of manual labour would explode along with it! Apropos simple navpoint-graph stuff: it's no use unless every link between nodes is exhaustively tested; you can't just plop them down and expect them to work! This is probably why Daikatana doesn't work: they often forgot to update the navpoints after a level was changed, so pathfinding is completely out of sync. I think the other issue is the explicitly arrogant mission statement, summarised by Romero's quote: "Design is law. What we design is what's going to be in the game. It's not going to be that we design something and have to chop it up because the technology can't handle it or because some programmer says we can't do it." But if you explicitly work to spite the tech foundation you have on hand rather than honour it, how can you hope for a successful result? He insisted on super-narrow walkable areas, areas that have to be accessed by jumping, lifts/pods etc., reasoning that whatever the player can do, the AI should accommodate. That's where it still gets stuck even after the navpoint graph has been fixed by the enthusiast community, because the navpoint graph and related systems as implemented in this game simply don't have enough markup power to accommodate it.
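The "forgot to update the navpoints after the level changed" failure is exactly the kind of thing an automated sanity pass could catch. Here's a hypothetical sketch (all names mine, not from either game): walk each hand-placed link in small steps and check every sample against the level's walkable area, flagging links that cross geometry or reference deleted navpoints:

```python
import math

def validate_navpoint_graph(points: dict, links: list, level_walkable) -> list:
    """Sanity-check a hand-placed navpoint graph against level geometry.
    `points` maps navpoint id -> (x, y); `links` is a list of (a, b) pairs;
    `level_walkable(pos)` answers whether a 2D position is traversable.
    Returns a list of problem descriptions (hypothetical sketch)."""
    problems = []
    for a, b in links:
        if a not in points or b not in points:
            problems.append(f"link {a}->{b} references a missing navpoint")
            continue
        # Sample along the straight segment; if any sample leaves the
        # walkable area, the level changed but the graph was never updated.
        ax, ay = points[a]
        bx, by = points[b]
        steps = max(2, int(math.dist((ax, ay), (bx, by)) / 0.25))
        for i in range(steps + 1):
            t = i / steps
            if not level_walkable((ax + (bx - ax) * t, ay + (by - ay) * t)):
                problems.append(f"link {a}->{b} crosses non-walkable geometry")
                break
    return problems
```

Straight-line sampling obviously can't express jumps or lifts, which is the markup-power limitation mentioned above; a real pass would need per-link traversal types.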
Odds are, all failures happen for somewhat different reasons, but all successes are similar in one way: technology, scope, production process, budget, team etc. were just right for the project at hand, even if the specific ingredients that got them there are all different.
"Why do people still make bad chairs? Humanity has been making chairs for millennia!" It actually is an issue of execution, and not of design or academic systems.
Cool
wow