Game Development Community


More AI musings

by Daniel Buckmaster · 11/16/2012 (1:07 am) · 9 comments

I'm a little glad I'm sitting down to write a non-Walkabout blog! While that continues to be an exciting journey, I've also resumed work on my grand plans for an AI framework. Before I go any further, I really want to mention I've finally got my hands on Bryce's Tactical AI Kit. It's tops, and I can't wait to start properly integrating it with Walkabout. Hopefully the integration will be as painless as with the UAISK!

Anyway, to the matter at hand.

Artificial intelligence!

Yes, I've blogged on the subject before, but really have little to show for it. Except lots of hours of research, many, many iterations of code, and hopefully, this time, a plan. See, AI is one of those things that's very tricky to pin down, especially if you're like me and you're never happy to implement something that works right now when you could sit down for several hours, design something that works for all possible cases ever, then try to implement it and realise your goals are far too lofty.

img705.imageshack.us/img705/2716/stalker005pn5.jpg
AI combat in one of my favourite games

I should interject at this point and make it clear: though I own both of the major AI kits mentioned above, I'm writing my own AI. Why? Because although they're both fantastic, I like programming, and I like programming AI, and I'm never satisfied with anything I didn't do myself. So there.

Over the last several iterations of my failed attempts at writing an AI framework, I seem to have hit upon a common theme, which I think I am finally close to perfecting. I'll go into that a bit further down, but for now, please indulge me-

An AI framework?

Yes. Because of the type of person I am (see above), I'm not into building AI for a shooter character. Or AI for a turret. What I'm talking about is building the real fundamentals of how the AI operates - building the tools instead of the house, you might say. I guess there's a few reasons for that.

  1. When I actually get around to building specific AI behaviour, I'll have the tools ready
  2. And hopefully, those tools will make it easier to create high-level AI!
  3. I can reuse the tools in any other project I may work on
  4. Hopefully the tools will be useful for other people as well!

I guess those reasons are the same ones you might give for any sort of code library. But I think it's important to be clear about what my goals are and how they benefit me - and anyone else who may want to use my code.

With that said, what exactly goes into an AI framework? What does all AI need to be able to do, what can I assume and what can't I? For me, AI boils down to two core problems. The first is decision-making. How does a character, given information about the world, decide how it should act? The second problem is appearance. An AI needs to appear to be intelligent, because let's not kid ourselves here, we're building illusions.*

www.orangescaffold.co.nz/images/scaffolding.gif
That's one heck of a framework.

One thing I'd like to be clear about is that I don't really consider navigation to be part of AI. You could argue that it comes under the 'appearance' part of the problem: navigate as if you're a real person. But for the purposes of this AI framework, there are too many kinds of navigational needs and methods to fit into a general framework. Different genres of game have different requirements, and so do different characters within the same game. So I leave navigation to specific behaviours to take care of.

Also, I haven't mentioned sensing - how an AI character actually receives information about the world. This is an interrelated topic, and I haven't really decided whether it falls under my fancy-pants ideological definition of AI or not - but it's a problem I think I'm close to solving, and it's very, very important to practical AI in a game.

*I do tend to lean towards the 'simulation' end of the AI spectrum: I prefer AI that actually tries to be intelligent, has limited knowledge and resources, etc. But when you take a step back, even AI that is good at playing a video game, or good at imitating a soldier in a very strict subset of soldierly activities, isn't really intelligent.

The behavior* manager

The underlying principle of the AI system I'm working on is the behavior manager. What I realised was that many complex AI behaviors can be broken down into very simple atomic parts. To take an example from shooters, which I happen to be a fan of, suppressing an enemy in cover is a case of aiming at the target, maintaining a firing pattern, possibly moving to your own piece of cover, and potentially calling out something to your allies. Each of those actions can take place individually, and rearranged, they form an entirely different high-level behavior. So an AI framework should allow me to define lots of little generic actions that can be used to build behaviors.
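To make that concrete, here's a minimal sketch in Python of the 'atomic actions' idea. All of the names here are hypothetical illustrations - the real code lives in my T3D fork - but it shows how the same atoms can be rearranged into different high-level behaviors:

```python
# Hypothetical sketch: high-level behaviors as rearrangements of small
# atomic actions. None of these names come from the actual framework.

class Action:
    """One atomic thing a character can do: aim, fire, move, speak."""
    def __init__(self, name):
        self.name = name

    def update(self, character):
        pass  # perform one step of this action on the character

# Suppressing an enemy in cover, as described above:
suppress = [Action("aim_at_target"), Action("fire_pattern"),
            Action("move_to_cover"), Action("call_out_to_allies")]

# The same atoms, rearranged, make a different behavior entirely:
fall_back = [Action("call_out_to_allies"), Action("move_to_cover"),
             Action("fire_pattern")]
```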

Being able to do things in parallel is very important. Imagine if an enemy could only shoot at you or move to cover! It'd be like they were playing Resident Evil! It's a concept that's popular in behavior trees - the idea of a 'parallel' node that executes all its children at the same time instead of one by one. So that needs to be in there - some way to perform multiple actions at once.
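For anyone who hasn't met the concept, a parallel node fits in a few lines. This is generic behavior-tree pseudocode in Python, not code from my framework, and the success/failure policy shown is just one common convention:

```python
# A behavior-tree style 'parallel' node: every update, it ticks all of
# its children, instead of one child at a time. Illustrative only.

RUNNING, SUCCESS, FAILURE = "running", "success", "failure"

class Parallel:
    def __init__(self, children):
        self.children = children

    def tick(self):
        results = [child.tick() for child in self.children]
        if FAILURE in results:
            return FAILURE
        if all(r == SUCCESS for r in results):
            return SUCCESS
        return RUNNING  # at least one child is still busy

class Stub:
    """Stand-in child that always reports a fixed result."""
    def __init__(self, result):
        self.result = result
    def tick(self):
        return self.result

# 'Shoot' has finished but 'move to cover' is still going, so the
# parallel node as a whole keeps running:
node = Parallel([Stub(SUCCESS), Stub(RUNNING)])
```

Real behavior-tree implementations usually make that policy configurable (e.g. succeed when any child succeeds, rather than all).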

But free-for-all parallelism is a problem, too. A character can't aim at two enemies at the same time (unless they're a dual-wielding ninja raptor), nor can they go to two places at once or say two lines of dialogue simultaneously. Behavior trees leave this to the designer to sort out - don't design trees where that happens! But I'm building a framework, rather than affixing myself to a specific AI design paradigm like a behavior tree. So there has to be some way to decide between these actions that use common resources. In effect, each action should say 'I need to use this piece of the character', and the manager should sort out who gets to use what.

Finally, I feel like actions should have a sense of priority. If we're walking along on patrol, and suddenly an explosion happens, we may have two move actions: one to move to the next patrol node, and one to move to cover. Which one gets to go? They both need the movement resource, so the resource rule alone can't decide. Each action needs to be scored based on how urgently the character should perform it. In this case, the cover movement action is a bit more immediate than continuing the patrol!
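Putting resources and priorities together, the arbitration rule might look something like this sketch (hypothetical names again, not my actual API): the manager lets the highest-priority claimant use each resource, and merely suspends the rest rather than discarding them.

```python
# Sketch of resource/priority arbitration. Each action declares which
# piece of the character it needs and how urgent it is; the manager
# decides who actually gets the resource. Names are hypothetical.

class Action:
    def __init__(self, name, resource, priority):
        self.name = name
        self.resource = resource
        self.priority = priority

class BehaviorManager:
    def __init__(self):
        self.actions = []

    def add(self, action):
        self.actions.append(action)

    def active(self, resource):
        """Which action currently gets to use this resource?"""
        claimants = [a for a in self.actions if a.resource == resource]
        return max(claimants, key=lambda a: a.priority, default=None)

mgr = BehaviorManager()
mgr.add(Action("move_to_patrol_node", "legs", priority=1))
mgr.add(Action("move_to_cover", "legs", priority=5))  # explosion!
# move_to_cover wins the 'legs' resource; the patrol move is merely
# suspended, and can resume once the higher-priority action finishes.
```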

What I ended up with, then, looked something like this:

imageshack.us/a/img607/4009/behaviormanager.jpg

Don't be alarmed, I will explain. The diagram shows a behavior manager in two states. The top state involves a character walking somewhere while looking at something. Probably where they're going. This is represented in a table where each row is a resource on the character, and each column is a priority level. To the right are higher priorities; to the left are lower ones. You can see that the two existing actions are at a low priority, but they're both strong orange - currently active - because they're the only actions that exist.

Then, something catches their eye and they decide to look at it and say something (like 'who goes there?'). This pushes two new actions onto the manager: another look_at, to look at the disturbance, and an exclamation. These actions come in at a higher priority, since a disturbance is a more interesting event than a routine patrol. The net effect is shown by the action colours. The character continues to patrol (the same move action as before), but instead of looking where they're going, they glance to one side, and say something. The original look action is pale-coloured because it has been interrupted by a higher-priority action on its resource.

Now, I'm sure you're very impressed by that display, but how does it translate into actual AI? A framework, of course, is useless if you don't put it to work! I'm getting there, but first, I want to introduce the third tooth on the cog: behaviors themselves.

*You'll notice I've dropped the British spelling. As a wise man once told me: American English is the language of code.

Behaviors

Astute readers may have realised that the behavior manager seems to be managing actions, not behaviors. Behaviors are basically a way to coherently group and organise actions into patterns, and will probably ring familiar bells: in my system, a behavior tree is a type of behavior, as is an FSM or even a GOAP planner. Or, even more simply, I have implemented behaviors that are like a step down from behavior trees, sequencing actions or simply executing lots of actions at the same time.
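As an example of one of those simple behavior types, a sequencing behavior might look roughly like this. It's a hypothetical Python sketch where an 'action' is just a callable that returns True when finished; the real implementation is in the T3D fork:

```python
# Sketch of a 'sequence' behavior: run each of its actions in order,
# advancing when the current one finishes. Hypothetical names throughout.

class SequenceBehavior:
    def __init__(self, actions):
        self.actions = list(actions)
        self.index = 0

    def update(self):
        if self.index >= len(self.actions):
            return "done"
        finished = self.actions[self.index]()  # run the current action
        if finished:
            self.index += 1
        return "done" if self.index >= len(self.actions) else "running"

log = []
seq = SequenceBehavior([
    lambda: (log.append("move_to_door"), True)[1],
    lambda: (log.append("open_door"), True)[1],
])
```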

I just have one question: why?

It may seem like I've just created a complex backend that achieves precisely what you'd see using any of the standard AI techniques above, now implemented as behaviors. Actually, I reckon there are several advantages to this shared architecture:

  1. You can use whatever sort of behavior suits a particular character, but still share common actions. For example, zombies and soldiers behave very differently - a zombie may use a single simple FSM whereas a soldier may have an entire forest of behavior trees - but they can share common movement and aiming actions.
  2. You don't have to use behaviors for little details. For example, characters that are hurt in combat can try to run a single action that plays a pain sound effect. You don't have to account for these little things in your behavior trees or state machines - just chuck them into the behavior manager, and if the character's doing something more important with its voice or whatever, then it will be ignored.
  3. Behaviors can represent anything you want. For example, I would implement conversation trees as behaviors, as well as cutscene scripting or other story events. These behaviors, since they hook into the same framework as other actions, play nicely and become more modular and reusable.
  4. Multiple behaviors can be in effect at any given time. This is one of the real benefits of this system. For example, a conversation behavior can be running at the same time as any sort of combat behaviors, and co-exist sensibly. That's certainly something I've wished for in a lot of games!

Actually, make that two questions: can you substantiate these claims?

Well, yes and no. I have no demonstration yet, but if you want to see my progress, it's in a branch of my T3D fork on GitHub. Unfortunately I can't really add scripts to the fork, but I'll be adding an AI branch to my Moment script project when enough code has been implemented.

And, of course, I'll be keen to show off my progress here as it happens!

About the author

Studying mechatronic engineering and computer science at the University of Sydney. Game development is probably my most time-consuming hobby!


#1
11/16/2012 (1:39 am)
Well, I'll certainly be following this blog! Good to see you working on AI, Daniel. Unfortunately I'm going to have to wait. Wait! Why you am make me wait?!

Something I've always wanted to see in AI for games isn't so much perception, navigation or appearance behaviours, but preferences. The AI should have a preference for avoiding death, or for performing a task like travelling from point A to point B. My demise should factor highly on an enemy soldier's list of preferences, but the town blacksmith should be more interested in making or selling me armour.

With a framework approach, hopefully we'll be able to see this kind of decision making in action. I'm also hoping that, with you on the job, SkyNet may indeed become a possibility.
#2
11/16/2012 (4:14 am)
Yes, the idea behind this sort of framework is for it to support almost any high-level form of decision-making. Some goal-based systems definitely seem to echo what you're looking for: of several high-level goals, all of them are ranked, and then the AI works to achieve the goal it thinks is the most important right now.

Quote:I'm also hoping that, with you on the job, SkyNet may indeed become a possibility.
Just preserving this for posterity. :)

Also, I'm not sure how that third 'o' slipped into the diagram. I'm sure the meaning was clear, regardless...
#3
11/16/2012 (4:42 am)
More AI is always good :) Hopefully this will also handle some animation threads, like pain and attack, if a model contains those animations.
#4
11/16/2012 (5:49 am)
This sounds very much like the design concept behind the AI in Outcast - and those NPCs were very cool. A worthy goal.
#5
11/16/2012 (7:50 am)
Quote:
I'm writing my own AI. Why? Because although they're both fantastic, I like programming, and I like programming AI, and I'm never satisfied with anything I didn't do myself. So there.
Damn straight!

Writing AI is fun.
#6
11/16/2012 (5:53 pm)
Yes thanks Daniel!!
#7
11/17/2012 (3:41 pm)
Daniel:

Thanks for sharing your thoughts. Your blogs have inspired me to re-do my own work more than once... As you point out there's a basic semantic issue with using the word behavior because behavior is an element of 'intelligence'. You also mention two essential elements in procedural AI: Sensing and priority.

In my own work I have been dealing with many of these same issues with my vehicle AI: As much as procedural level generation/modification present navigation issues for AI, they also present numerous other decision-making issues. Thanks to guys like you the navigation issue has been dealt with but there's still a lot more to worry about. I am implementing a similar approach, adding a master architecture [I often prefer the word 'connector' over 'manager'], through which my various AI types can interact. This makes development time much, much longer, but the advantages in the long term are exponential...

#8
11/19/2012 (7:00 am)
Daniel
What's in the download from the branch of your T3D fork on GitHub?
#9
11/19/2012 (12:45 pm)
Kory: in the ai-behavior-manager branch are the source files for the behavior manager. In the ai-sensor branch is the source code for the Sensor class. Both of these have been merged into the eightyeight branch, along with lots of other stuff you may not want. If you download the zip, you'll get the eightyeight branch. If you just want the AI stuff, grab the source/ai folder and trigger.h from the download.

Gibby: seems like I have a supernatural power to make people believe they're doing it wrong ;P. I don't like the term 'manager' either - too bureaucratic - but in this case I thought it was reasonably appropriate.

Progress report: I think the behavior manager part of the system is nearly complete. The one thing it lacks is time-based updates: I purposefully built the manager to be completely event-driven, which means that running actions don't get ticked regularly. It's made it an interesting challenge to implement actions that actually do have some time dependence, such as a 'glance at' action which is supposed to just end quickly, as opposed to 'look at' which lasts indefinitely. At the moment I'm using a hacked-in schedule to tick the action manually, but I'm working on a more robust solution.
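For the curious, the one-shot approach looks roughly like this toy Python sketch (a stand-in scheduler, not T3D's): instead of ticking a timed action every frame, the action books a single expiry event when it starts.

```python
import heapq

# Toy event scheduler standing in for an engine schedule() call. A
# time-limited action like 'glance at' books one expiry event up front,
# so nothing needs to tick it every frame. Illustrative names only.

class Scheduler:
    def __init__(self):
        self.now = 0.0
        self.queue = []  # (fire_time, tiebreak, callback)
        self._n = 0

    def schedule(self, delay, callback):
        self._n += 1  # tiebreak so heapq never compares callbacks
        heapq.heappush(self.queue, (self.now + delay, self._n, callback))

    def advance(self, dt):
        """Move time forward, firing any events that come due."""
        self.now += dt
        while self.queue and self.queue[0][0] <= self.now:
            _, _, callback = heapq.heappop(self.queue)
            callback()

events = []
sched = Scheduler()
sched.schedule(0.5, lambda: events.append("glance_at expired"))
sched.advance(0.25)  # not yet due
sched.advance(0.5)   # now it fires, ending the glance
```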

I also have a pretty-much-functional sensor object. I might make a blog on that at some point, because I reckon it deserves a bit of examination!