Game Development Community

really advanced AI

by Aegis Lord · in Technical Issues · 12/10/2000 (5:13 pm) · 29 replies

I'm creating an RPG and I was hoping I could find some information on how to create AI. My goal is to create an enemy AI within my RPG where the enemies learn the player's attack patterns and form their own strategies based on them: what actions had the greatest effect, predicted damage, whether the enemy scanned the party, enemy type, which characters were hit, and how many battles the player fought within the field or a certain area of the world map. Also, they'll form strategies based on their enemy types. Enemy strategies will range across single, tag-team, and group tactics. I'm currently creating this in Borland C++ 5.0. Any suggestions or comments will be appreciated. Thanks!
Aegis Lord
#1
12/10/2000 (5:19 pm)
While I'm not versed in AI at all, I think what you're looking to do would work as a Neural Net. Do some reading on that.
#2
12/10/2000 (10:31 pm)
You might want to start by checking out this site and see what approach fits your game the best.

Game AI Page

If you find the resource useful, don't forget to rate it so others can benefit from your recommendations in the future.

--Rick
#3
12/10/2000 (11:00 pm)
Even better than the AI information is the "You know your game is doomed when..." list. ;-)
#4
02/15/2001 (3:06 pm)
You don't need neural nets. There's a much easier way that will be almost as good (the player wouldn't be able to tell the difference between the two AIs if he wasn't told which was which).

List every possible option for the given enemy. Attacking a different character counts as a different option.

Example (if there are at most 4 people in the player's party at once):
Attack Person 1
Attack Person 2
Attack Person 3
Attack Person 4
Defend
Cast Spell A at Person 1
Cast Spell A at Person 2
Cast Spell A at Person 3
Cast Spell A at Person 4
Cast Spell B at Person 1
Cast Spell B at Person 2
Cast Spell B at Person 3
Cast Spell B at Person 4
Heal Self
Heal Friend


Give each one of these options a weight (an integer). The weights should all start the same. Also store the average amount of damage the monster does per battle; you can just set this to 0 to begin with.

Add up the total amount of damage the monster does to the characters during the battle. After the battle, check to see if the monster did more or less damage than average. If it's more, then increase the weight of each action by the number of times it did that action during the battle. If it's less, then decrease the weight of each action by the number of times it did that action during that battle.
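A rough sketch of this bookkeeping in C++ (the `ActionLearner` name and the exact weight sizes are made up for illustration, not from any particular engine):

```cpp
#include <cstddef>
#include <vector>

// One weight per option in the list above, nudged after each battle
// depending on whether the monster beat its own average damage.
struct ActionLearner {
    std::vector<int> weights;   // one weight per action option
    double avgDamage = 0.0;     // running average damage per battle
    int battles = 0;

    explicit ActionLearner(int numActions, int startWeight = 10)
        : weights(numActions, startWeight) {}

    // counts[i] = how many times action i was used this battle
    void endBattle(const std::vector<int>& counts, int damageDealt) {
        int delta = (damageDealt > avgDamage) ? +1 : -1;
        for (std::size_t i = 0; i < weights.size(); ++i) {
            weights[i] += delta * counts[i];
            if (weights[i] < 1) weights[i] = 1;  // keep every option possible
        }
        ++battles;
        avgDamage += (damageDealt - avgDamage) / battles;  // update average
    }
};
```

Action selection would then pick options with probability proportional to their weights.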

You have to make sure to store all these variables (the weights and the average) correctly between battles.

This is pretty much the same idea as neural nets, but it should be a lot easier to understand. Also, it's very hard to set up neural nets so that they actually work the way you want them to.

It will work better if you split the monsters into several types, where all the monster species in each type have similar attacks.

Example (the all-around tough monster group):
Attack
Defend
Cast best offensive spell
Cast best defensive spell
Heal most wounded ally

Each option can have a little AI to determine exactly what to do (like who to attack/cast spell on).

This way, all monsters of each type learn whenever 1 monster of that type fights, rather than just when a monster of the same species fights. If it were only when a monster of the same species fought, it wouldn't help the monsters much because the PCs will just go up some levels and go to the next area (and fight new monsters).

You'll note that this idea doesn't take the PCs' hp into account. You'd have to add some things or do something different to do that. However, I think this idea is good enough as it is.

- Steve Fletcher
#5
04/28/2001 (2:53 pm)
Was wondering, what type of mechanism do you use to prevent the enemy from being too smart?

Seems to me that there should be some type of random/weighted variance applied to the approach suggested in the above post by Steve.

Brian
#6
04/28/2001 (6:59 pm)
I don't think you will really need to worry about an AI getting too smart if the game design is good. For example: does a class of monster become resistant to certain types of attack, does a certain weapon always work that well, and what if the player or another monster does something different?

What I think you need is an AI that will behave unpredictably. On the backburner of my brain I am working on a NN design that will give an AI 'character' and 'emotion'. The NN will be small and perform only these higher cognitive functions; conventional AI will handle the rest (pathfinding, navigation, memory (as above), etc.). By keeping the NN small and specific I hope to make it fast enough for Action and FPS games.

If all the AIs behave unpredictably then choosing the best method of attack, for players and other AIs, will come down to intuition as much as number crunching. Hopefully the NPCs will achieve a level of intuition by feeding off the emotions of other characters.

It will be a fairly long and difficult project when I finally get started.
#7
04/28/2001 (7:15 pm)
Artificial Neural Net => The computer equivalent of building an airplane using flapping wings for propulsion...

:-)
#8
04/28/2001 (11:17 pm)
Well I know you're just kidding but the difference is that we have a very good understanding of flight but there's a lot we still have to learn about our own brain. You can create AI based on what we can logically surmise about our own thought patterns but when you use a NN you are often pleasantly surprised to find that it has features you hadn't even thought of. I'd better stop now, I could go on and on about NNs :)
#9
04/29/2001 (8:55 am)
Robert/Steve,

As a thought: what happens when you take Steve's method and, instead of assigning points to "Actions" such as "Attack person 1", you assign points to "Emotions/Traits" such as "Aggressiveness"? The thought is to then assign a value for each "Trait" to each "Action". In other words, to evaluate (a very simple example):

Monster A has an "Aggressiveness" [Trait] of 5; "Attack player 1" [Action] requires an Aggressiveness [Trait] of 4 to be carried out by the monster; a "modifier" of 60% --adjusted in game as a result of success/failure for the monster's breed-- means that Monster A will carry out the [Action] 60% of the time that the evaluation resolves to "TRUE". After 10 successful [Actions] in a row/period of time the modifier becomes 65% instead of 60%....
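As a rough illustration of that trait-plus-modifier evaluation (all names here are hypothetical, invented for the example):

```cpp
// An action fires only if the monster's trait meets the action's
// requirement, and then only modifier-fraction of the time. The
// modifier is nudged up after every run of 10 successes, as described.
struct TraitGate {
    int trait;            // e.g. the monster's Aggressiveness, 5
    int required;         // trait level the action demands, e.g. 4
    double modifier;      // chance the action is carried out, 0..1
    int successStreak = 0;

    // roll is a random number in [0, 1) supplied by the game
    bool tryAction(double roll) {
        return trait >= required && roll < modifier;
    }

    void recordSuccess() {
        if (++successStreak % 10 == 0 && modifier < 1.0)
            modifier += 0.05;   // 60% -> 65% after 10 successes
    }
};
```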

Almost sounds like an RPG.....Comments?
#10
04/29/2001 (11:46 am)
Well I wasn't joking about it.

I just mean that a network of neurons is nature's way and nature's way is rarely ever the best way to do something artificially. Do we drive cars that walk?

I'm a proponent of genetic algorithms myself. Now I will say this, in theory a neural net is an efficient way of storing information without using up a heck of a lot of resources - that is, to store the equivalent (and equivalence is kinda hard to define here...) amount of information with conventional methods would require significantly more resources. But as far as their ability to simulate intelligence, I still say genetic algorithms are the way to go.

But then comparing these two is more like comparing helicopters to airplanes... but I won't say which is which :-)
#11
04/29/2001 (5:14 pm)
>I still say genetic algorithms are the way to go. [Snip]
But then comparing these two is more like comparing helicopters to airplanes

I disagree, both approaches are great and combining them is the way to go (I'm talking about AI generally here, not necessarily for computer games). In fact I have an idea for a 'game' that does just that.

BTW doesn't the USAF have a plane that can rotate its props to take off vertically :)

>Do we drive cars that walk?

Do we drive cars that evolve? (Please don't say yes :) -- the evolution of cars is completely different to cars that evolve.)
We still have a lot to learn about biological neural nets and evolution; until we understand them, who's to say what the best implementation is. In my experience you get more interesting results using a neural net than using a rule-driven approach. Both have their advantages for computer games, and that is why I'd like to try and combine them.

Brian:

Steve has basically described a NN using reinforcement learning and it should work fine.

What you describe is OK, but what you've done is create another characteristic and given it an emotional label rather than actually giving it 'emotions'. Anything you do for the AI needs to apply to the player as well. How do you evaluate the player's aggressiveness? Aggressiveness is a misleading example; I'm talking more about mood/morale, things like anxiety, fear... even love is just a brain chemical.

I don't want to give too much away, but in biology various chemicals alter the way our neural nets work. So the AIs will still be receiving the same info but they will be processing it with a different mindset. I probably threw you with the 'feeding off emotions' comment; that's probably just a pipe dream.

A network has already been developed that uses reinforcement learning and has (for want of a better term) an anxiety level. Basically I want to develop that idea further and try to make it general enough to be at the heart (soul?) of any game's AI.

Before I stop blabbering, I'd just like to point out that I am NOT talking about backpropagation or even any of the common reinforcement learning models. I will need to do some more research (reading) before I even know exactly what form the network will take. I have some ideas, but there is a long way to go, and right now I need a paying job to support it (hint, hint).

There, see what happens when you get me started :)
#12
04/29/2001 (5:38 pm)
Robert/?,

Some random thoughts....

I see what you mean --but as a novice-- if I were to attach a "Fear" value to each of the possible "Courses of action" open to Monster A, and then based on past performance the individual monster would decide whether to pursue that course of action or not, would this not relate to an "anxiety" conditioning? As the monster loses more and more, the monster would fight less and less (or realistically tend toward side-on or flanking tactics); conversely, successive wins would make the monster bolder and more prone to frontal attacks.

This presumes that the monster actually lives past the initial contact -- how or when would Monster A decide to withdraw from an "unwinnable" position? Is not one of the first "requirements" the desire for self-preservation? How can or should this be quantified?
#13
04/29/2001 (5:55 pm)
Just thought I'd pitch in here. AI is something I'm really interested in - doesn't mean I know anything about it though :)

As far as I know, using a Neural Net to control all the monsters' behavior will be far too processor intensive if you have a lot of them.

A possible AI method to use is to combine a finite-state machine with a genetic algorithm. In the FSM, just list all the possible actions that the monster can do. Typical actions would be:

Attack 'target';
Defend from 'target';
use spell 'X' on 'target';
move 'location';
retreat;
pick up item 'X';

and so on...

Here's where the genetic algorithm comes in. Instead of randomly choosing an action, you get the creature to learn what action is best in what environment. To control the creature, it would make sense to have some sort of script. It would only need to be basic - maybe just conditionals and actions - example:

monster1 script:
if (party(enemy).strength > party(friendly).strength) {
    retreat();
}
if (BeingAttacked(AnyPerson, Sword)) {
    block(Attacker);
}

Just a quick example, probably not very useful. But depending on the complexity of the game, you could make up actions etc. for it. Now, the genetic algorithm. Using the GA, you would make random scripts for a bunch of monsters. So for every event that can happen, script it with a conditional, and use a random action. Using random values (within reason) for X in the event:

if(health > X) then...

Or whatever. Now that you have made some AI scripts, you need some criteria to judge them by. An obvious one would be who won, and by how much health - it could be different depending on the game.

After you have the AI, and the criteria to judge it by, you are ready to start the trials. For this you would use the combat system in the game, and pit a monster with its random script against a human player (i.e. you). Record the statistics for the game, and judge how the monster did with your criteria. Repeat with as many monsters as you can be bothered with (not too many though, you're going to have to repeat this). After playing against all of your batch, it's time to see who did best. If you played against 10 monsters, then maybe choose the 5 best, and mutate them to create 5 more to bring the set back to 10.

When you are mutating, copy the ones you have selected, and randomly change some values of X, and change a few actions randomly as well. Don't change everything though.

After that, you'll have a whole set again. Play again, and judge again, pick the best 5, and mutate to create 5 more. The idea is that only the fittest survive. If you mutate a really fit one and an action gets mutated badly, then it will be dropped out. So without coding them yourselves, they will get better and better each generation. Try to stop before they get too good though. An idea might be to keep ALL the scripts generated, and use some earlier ones for Easy difficulty, and later ones for Hard.
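The select-and-mutate step described above might look something like this, assuming for simplicity that a 'script' is just a vector of numeric parameters (the X values and action codes); `mutate` and `nextGeneration` are illustrative names, not from any real library:

```cpp
#include <algorithm>
#include <cstdlib>
#include <utility>
#include <vector>

using Script = std::vector<int>;

// Change one gene at random by a small amount; don't change everything.
Script mutate(Script s) {
    std::size_t i = std::rand() % s.size();
    s[i] += (std::rand() % 21) - 10;
    return s;
}

// Each script is paired with the fitness measured in the trial battles.
// Keep the fittest half and refill the population with mutated copies.
std::vector<Script> nextGeneration(
        std::vector<std::pair<int, Script>> scored) {
    std::sort(scored.begin(), scored.end(),
              [](const std::pair<int, Script>& a,
                 const std::pair<int, Script>& b) {
                  return a.first > b.first;   // highest fitness first
              });
    std::size_t keep = scored.size() / 2;     // e.g. best 5 of 10
    std::vector<Script> next;
    for (std::size_t i = 0; i < keep; ++i) {
        next.push_back(scored[i].second);          // survivor
        next.push_back(mutate(scored[i].second));  // mutated copy
    }
    return next;
}
```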

Now that you have your scripts, you would just use them in your game, and the good scripts should play well, hopefully coming up with some tactics you never thought of.

Getting quite long now :) but another thing you could do would be to create a "tactics" script if you have a party-based system. Obviously the monsters you have created already could be used in a party, but if you wanted to add the sense of an overall plan, you could use another script that influences the whole party's actions so that there would seem to be a cohesive plan. I'm sure that using the techniques above you could find a way to use a GA to make such a script.

I hope you find some of this useful cos it took a long time to write :) Oh... another point (couldn't help myself): implementing a scripting system for combat is a good idea for the player too. If they have computer controlled allies, then if the player was feeling adventurous, they could edit the characters' scripts to suit their own fighting style. Just a thought...

Oh no! Here I go again... To make the creatures seem to learn DURING the game, nothing as complex as the above needs to be used, but something simpler, just to try and counter the player's tactics. So even if they have found a way to always beat the monsters, there might just be a surprise in store. Each monster would be randomly assigned a script, and if you record how each script has fared against the player, then you can see which monsters are doing worst against the player. If all the monsters with script9 keep dying, then you could just randomly mutate it and shove it back in. Do that with the worst few scripts, and they may get worse (but then they'd get changed again), but in the long run it would look like your monsters were adapting to the player's fighting style, because the bad scripts would keep getting thrown out. This means that over the course of the game, if the player keeps coming up with new tactics, then all of the scripts may end up being 'recycled'.
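A minimal sketch of that in-game recycling, assuming each script keeps a win/loss record against the player (`ScriptRecord` and `recycleWorst` are invented names for the example):

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

struct ScriptRecord {
    std::vector<int> genes;   // the script's parameters/actions
    int wins = 0;
    int losses = 0;
};

// Call after each battle for the script that fought.
void recordResult(ScriptRecord& s, bool monsterWon) {
    if (monsterWon) ++s.wins; else ++s.losses;
}

// Periodically mutate the worst-faring script so the pool adapts
// to the player's tactics over the course of the game.
void recycleWorst(std::vector<ScriptRecord>& pool) {
    auto worst = std::min_element(pool.begin(), pool.end(),
        [](const ScriptRecord& a, const ScriptRecord& b) {
            return (a.wins - a.losses) < (b.wins - b.losses);
        });
    for (int& g : worst->genes)
        if (std::rand() % 4 == 0)
            g += (std::rand() % 11) - 5;       // tweak a few genes only
    worst->wins = worst->losses = 0;           // fresh record for the new version
}
```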

Anyway. That's probably along the lines of what I would do. Obviously it would be quite simple to change depending on the game. And monster characteristics could be added again; for example, aggressiveness could act as a modifier to certain actions, and the characteristics could be mutated as well.

Hope this wasn't just a gigantic waste of space :)

Happy coding.
#14
04/29/2001 (6:56 pm)
William,

I like your idea about the GA as you describe it --however there would potentially be no way to control the mutations --- if weighted or scored incorrectly could you not end up with a platypus (no slam to the platypuses intended)? How could you guarantee success enough to interest the majority of players?

On the other hand the use of predefined attack scripts could be straightforward, as all I would potentially do is keep track of which of the possible attack scenarios/tactics worked best against each of the adversaries --much like a field commander would do, i.e. is this enemy susceptible to flanking vs. straight-on assaults; are big guns better than flame throwers, etc....

Brian
#15
04/29/2001 (7:51 pm)
Hi Brian, thanks for your interest.

It looks like I haven't explained that bit too well. You are not wrong in what you said: because the mutations are random, you WILL end up with some real platypuses. But because only the fittest scripts are allowed to progress to the next 'round' of testing/mutation, you should end up weeding out all the worst scripts. With every generation you play against them, they should keep getting better and better. I'll try an example this time :)

10 scripts:

Generation 1 : These are your randomly generated scripts, chances are they are pretty pathetic. Find the 5 which did least badly, and take them to Generation 2.

Generation 2 : 5 scripts from the last generation made it. Each of the 5 is going to be copied then mutated. After testing the new 10, you find that of the ones that were mutated, 3 did worse than the originals, but in 2 of them the changed actions made them better than the originals. This means the 2 that did better go to generation 3, along with 3 of the original ones.

Generation 3 : copy and mutate again. This generation should be slightly better than the last one. Take the best 5 again to the next generation, and repeat LOTS.

I hope that helps. Obviously you can play around with things like how many actions in the script should be mutated, and how many to take to the next generation. With a bit of experimenting, you should be able to get quite good scripts. The more scripts you use per generation the better: using 100 scripts and advancing a proportion of them would produce better scripts faster than using 10 per generation, as there are more mutations. The problem with this task is that to make them good against humans, you have to test them against humans. This is SLOW. By testing scripts against other scripts, the battles could just be fought automatically; you could have loads of scripts, and it would take no time to test a whole generation. The problem with that, though, is that the scripts wouldn't be optimized to play against humans.
One solution would be to run 10-odd generations against each other, but then play against them yourself. That would mean that every time you play against them, the ones which were good against the computer may get weeded out because they aren't very good at fighting humans; by repeating that a few times you could get quite good results fast. Even better would be if the player could play against multiple enemies at the same time, as more scripts would be tested in one go, making it faster, but that depends on the game.

Summary (about time). By mutating the scripts, and then weeding out the bad ones through testing - the mutations which lowered performance would get thrown out, but if a mutation increased the performance of a script, then it would be kept over the original - generating constant improvement.

Hope that's helped :)
#16
04/29/2001 (8:16 pm)
William,

The clarification helped....thanks.

I figure that for this approach to be successful in real time you would have to start with major adjustments/corrections, eventually refining actions down to minor corrections. Meaning that at the start of the game/scenario the actions of the AI would appear random and unfocused, and over time the AI's actions would become more refined. Would this not mean that this type of approach lends itself more to long strategic play rather than quick play in which the player is continually --- perhaps mindlessly --- playing along? The pre-computing of scripts would permit real-time selection based on certain criteria --- for example difficulty level --- however it would appear to be fairly time consuming compared to the use of the other approaches presented.....

I am currently in the process of writing a small test app to examine this and some other ideas --- more to follow when I have a chance (I have very limited time these days)....

Thanks,
Brian
#17
04/29/2001 (8:35 pm)
He he, another bit not explained properly :)

The idea with the full-blown testing/mutation is that it's done 'outside' of the game itself. A test program would be written separately from the game, consisting of just the battle module. The developer would run all the generations BEFORE putting the scripts into the actual game. Meaning that the AI can be at whatever level you want to start with.
What I was saying about doing it IN the game was a very cut-down version, suitable for in-game use as it's not too intensive. By doing it in the game, the enemies would slowly respond to new tactics/situations in the game. Meaning that you have a good AI to start with, and as a bonus, you can let it evolve in the game as well.
If you need any more things clarified, feel free to ask or even e-mail me at Shudder@byteme.co.uk. Whatever you want.

If you are going to be testing stuff with GAs, I think the finite state machine is the easiest way to use a GA. Each object has a finite set of actions, and the GA mutates the order they are carried out in, or what you should react to, etc. It's nice and easy with an FSM, because if you have a specific condition for judging it by, you can make ultimate objects for it. But if your conditions are a mixture of things, then it can produce more versatile objects.

Good luck with it :)
#18
04/29/2001 (8:40 pm)
Don't really want to talk about monsters specifically, but that's how the thread started so I'll continue.

Fear is something that can and should be learnt (and/or hard-coded to an extent) - another bad example on my part. So if Monster A is always beaten by Monster B, then A will learn to be afraid of B, but this negative reinforcement will be balanced by other needs: for example a need for food, or just a need for combat if you like; it's your choice. Even monsters need to have an aim in life. So whether you use a GA or a NN or anything else, you don't need to worry about all the monsters learning to be wimps.

Anxiety on the other hand is not something that is learnt but is instead based on the recent history of the individual. In the NN that I mentioned above, anxiety increases if the individual receives lots of negative feedback. The negative feedback causes changes to recently activated parts of the network; if anxiety is high then these changes are more extreme. So as you start hacking a monster to pieces he becomes more and more anxious, and he starts behaving erratically and more extremely, like an all-out attack or running away. Anxiety is just one example and you can model this in other ways also. But there are many chemicals in the brain that affect the learning and behaviour of the network, and I believe that a NN will give the most satisfying results.
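One crude way to model that anxiety effect (purely a sketch of the idea, not the poster's actual network): negative feedback raises anxiety, and the amount of weight change applied is scaled up when anxiety is high, so a badly hurt monster makes bigger, more erratic changes to its behaviour.

```cpp
// Hypothetical anxiety model: feedback < 0 is a bad outcome for the
// monster, feedback > 0 a good one. The returned value is how far to
// adjust the recently used behaviour weights.
struct AnxiousLearner {
    double anxiety = 0.0;   // 0 = calm, grows with negative feedback

    double adjust(double feedback) {
        if (feedback < 0.0)
            anxiety += 0.1;                 // bad outcomes raise anxiety
        else if (anxiety > 0.0)
            anxiety -= 0.05;                // calm down after good ones
        return feedback * (1.0 + anxiety);  // high anxiety => bigger swings
    }
};
```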

You don't need to worry about A being killed by B because the core/learning part of the AI will be common to all monsters of that type. So when monster A is defeated by B all monsters of type A learn from it.

This isn't all that realistic, so you could have a monster breeding program using GAs, which is kinda like the game idea I mentioned earlier. If you use a typical GA algorithm like the above it's pretty easy to implement, but it takes a lot of generations of monsters before you see any real improvement. If you combine a GA with a NN then you need a monster training program as well as a monster breeding program. I think this would be a fun world to play in and one day I'll look into it, but it does get a little complicated to implement (not giving away any secrets there either :) )

Just to reiterate, what I am proposing will not replace whatever other AI system you are using but complement it. I do not recommend using a NN or similar system for all parts of the AI, for the reasons stated above. This NN would be small; much of it would be common to all creatures of the same (or similar) type, with only some details needing to be stored individually. If it were not fast enough to use for up to 100 NPCs in a FP action game then I would consider the project to be unsuccessful, an outcome which I think unlikely.
#19
04/30/2001 (10:30 am)
I understand what you are saying, and I think your AI system would probably work well. But I was advocating the above system under the assumption that it was just simple combat, and therefore the emphasis was on speed. After training the monsters using the GA, all that needs to be done is to run those scripts in the game. This would be very fast, as all it is doing is interpreting the scripts. This leaves a lot more CPU cycles for graphics and general gameplay. By doing all the 'real' AI beforehand, you can get intelligent creatures without any 'real' AI in the game.
Using your system, I'm sure it would be possible to train up the creatures first, then try to extract the information in the NN when it has reached an 'intelligent' state of weighting, and then implement it in the game using some sort of procedural method. Although doing it like this would NOT produce AI that can quickly react to new situations in-game, it would produce good combat AI for very few CPU cycles.

Implementing a full-blown NN/GA in-game would undoubtedly make the better AI in the long run, but it all really depends on how much processing speed you are willing to allocate for the AI in-game.
#20
04/30/2001 (4:40 pm)
I think it's time we stopped skimping on CPU cycles for AI; the AI in most games sucks. Not just NNs either, but all AI. Pathfinding, for example, is atrocious in some games.