Artificial Intelligence and Gaming


When it comes to video games and artificial intelligence, most people think of AI-controlled opponents (or even allies) as exploitable or just downright incompetent. While this may be true of many games that use AI for practice environments and filler roles, or where the focus is more on player interaction, that's not to say all game AIs are terrible. Some AIs have been built to learn from interactions with human players, as well as from records of hundreds or even thousands of previously recorded matches, to the point where bots such as AlphaGo and OpenAI Five have beaten professional players in Go and Dota 2 respectively.

When it comes to designing an AI, though, the primary goal usually isn't to build something that will wipe the floor with the player, unless you're looking to make your game unbeatable or an extremely challenging experience. Usually, AI bots are programmed to interact with human players in ways that enhance the gaming experience. One of the more common approaches is an FSM, or Finite State Machine. FSM-based AIs run through a generalized flowchart of all the possible situations the AI may encounter, with an individually programmed response for each. For example, if the player is in sight, the AI may be programmed to attack until its own health is low. Once that happens, it backs off until it is healthy enough to fight again. If it loses sight of the player or the player retreats, it may be programmed to wander or patrol a set route. While great for general-purpose behaviour in basic enemies, these AIs don't really learn from player actions and can be easily exploited once a player understands how they are programmed to react: for example, popping in and out of the AI's trigger range and taking pot shots while it repeatedly takes two steps forward, loses interest, and turns around.
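To make the idea concrete, here is a minimal sketch of such a state machine in Python. The state names, health thresholds, and the `update` method are hypothetical, chosen only to mirror the attack/retreat/patrol behaviour described above, not taken from any particular engine:

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    ATTACK = auto()
    RETREAT = auto()

class EnemyAI:
    """Minimal FSM-style enemy controller (illustrative only)."""

    LOW_HEALTH = 25   # hypothetical threshold for backing off
    SAFE_HEALTH = 75  # hypothetical threshold for rejoining the fight

    def __init__(self):
        self.state = State.PATROL

    def update(self, player_in_sight: bool, health: int) -> State:
        # Each tick, pick the next state from the current situation.
        if self.state == State.PATROL:
            if player_in_sight:
                self.state = State.ATTACK
        elif self.state == State.ATTACK:
            if health < self.LOW_HEALTH:
                self.state = State.RETREAT
            elif not player_in_sight:
                self.state = State.PATROL
        elif self.state == State.RETREAT:
            if health >= self.SAFE_HEALTH:
                self.state = State.ATTACK if player_in_sight else State.PATROL
        return self.state
```

Every situation maps to a hard-coded response, which is exactly why the behaviour becomes predictable once a player has seen each branch fire a few times.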

These FSMs and general pathfinding algorithms have been used in games as far back as the early days of the NES and the original Super Mario Bros., and they have been the two main AI methods developers have depended on for decades. There is a good reason for this, and for why most developers avoid building AIs that can think and play a game as well as a human player. Unpredictability in an AI, or an AI that improves as it learns from the human player's actions, can make for an interesting experience, but one that learns and adapts faster than a human player can predict can get in the way of the narrative the game is trying to tell, especially if it becomes so proficient that the game is flat-out unwinnable. Having AIs like these beat chess grandmasters and professional players in games like StarCraft is a great achievement in AI research, but dropping those kinds of opponents into your standard CoD or Halo game might be a bit overkill.
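For context, "pathfinding" here just means computing a walkable route through the level for an agent to follow. The sketch below is a minimal breadth-first-search version in Python; real engines usually use A* over a navigation mesh, and the grid layout, function name, and 0/1 cell encoding here are assumptions made purely for illustration:

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on a 2D grid of walkable (0) / blocked (1) cells.

    Expands reachable tiles outward from the start until the goal is found,
    then walks the recorded parent links back to reconstruct the route.
    """
    frontier = deque([start])
    came_from = {start: None}

    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Follow parent links backwards to rebuild the path.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            neighbour = (nx, ny)
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0 and neighbour not in came_from):
                came_from[neighbour] = current
                frontier.append(neighbour)
    return None  # no route exists
```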

Self-learning AI will continue to advance in leaps and bounds over the next few years, both in and out of the gaming arena, but when it comes to your run-of-the-mill AI-coded enemies, it might be for the best that they stay just as predictable as they have always been.

Source: Artificial Intelligence on Medium
