ragamer wrote:Honestly I find actual cooperation to be more artificial than people just happening to end up at the same place at the same time. Because actual cooperation is something that exists only in the realm of extremely well trained militaries and choreographed dance numbers.
Mmmm... Have you played any co-op games? Not just mindless deathmatch ones?
But even in multiplayer games, even with random people, you do basic things like following another guy and teaming up when you see him firing at someone or being fired at...
...Then you also pay attention to FF (and when it's off, you simply pay attention to NOT blocking your friendlies' LoF).
All of the above are natural things for a gamer to do... Maybe you haven't thought about it, but... most AIs simply fail at these basic steps.
One day you will find a challenging game with a co-op mode (or, even rarer, an MMO with challenging content) and you will see that developing team tactics is not something reserved for a few hardcore gamers... It's a natural way to progress... and quite immersive and fun as well.
If I see a guy, I follow him so he can get shot first and I can shoot back.
If I see him shooting something, I shoot it too so I can get the kill/it can't shoot me.
I don't shoot friendlies because there's no point.
I avoid running in front of people because I don't want to get shot.
Of course this may LOOK like teamwork to an outside observer, but in reality it is entirely self-centric. Which is sort of the point. Self-centric behaviour is all you really need most of the time. Very few things require actual organisation.
Honestly, there is very little in there that requires cooperation. Simple 'find targets, shoot them, avoid getting shot' code would make every instance you listed appear to occur. It doesn't require any communication, just each individual doing what is best for itself based on how it perceives the world. The individuals will arrange themselves into an optimal pattern eventually; it just takes a bit longer.
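The self-centric rules above fit in a few lines. This is a toy 1-D sketch (the `Agent` class, the `act` rules, and the positions are all invented for illustration); no agent communicates with any other, yet from outside it looks like focus fire and escorting:

```python
# Toy sketch: purely self-interested agents that LOOK cooperative.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    pos: float  # 1-D position, just enough to have 'nearest' and 'ahead'

def act(agent, squad, enemies):
    """Self-centric rules only; no communication between agents."""
    if enemies:
        # Shoot the nearest threat before it shoots me.
        target = min(enemies, key=lambda e: abs(e - agent.pos))
        return f"{agent.name}: shoot enemy at {target}"
    allies_ahead = [a for a in squad if a is not agent and a.pos > agent.pos]
    if allies_ahead:
        return f"{agent.name}: follow ally"  # let him get shot first
    return f"{agent.name}: advance"

squad = [Agent("a", 0.0), Agent("b", 2.0)]
# Both independently pick the same target: looks like focus fire.
print([act(ag, squad, enemies=[5.0]) for ag in squad])
```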
You could even take things further, and add in the ability to assess what makes a particular solution to a problem 'good' and then save and organise previously tried solutions according to effectiveness. If you add in a little bit of randomness on top of that, you get a genetic algorithm that will automatically find and use the best possible solution if you give it long enough.
Say you write a fighter script for X; it contains fairly simple rules like 'check sensors for hostile targets; upon finding hostile targets, fly towards them; upon entering weapons range, fire all guns at the target'. That's roughly what you have now.
But if you then add in a little randomisation, say 'fire weapons at random intervals, hit the strafe drive randomly as you fly towards them', you start to get random deviation, which is the key ingredient of evolution.
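That randomisation step could be sketched like this; the `fire_interval` and `strafe` fields are invented stand-ins for whatever parameters the real fighter script exposes:

```python
# Sketch: take a deterministic fighter script and add random deviation.
import random

def base_script():
    """Deterministic baseline: fixed fire cadence, no strafing."""
    return {"fire_interval": 1.0, "strafe": [0.0] * 8}

def mutate(script, rng):
    """Random deviation: jitter the fire timing and the strafe inputs."""
    return {
        "fire_interval": script["fire_interval"] * rng.uniform(0.5, 1.5),
        "strafe": [s + rng.uniform(-1.0, 1.0) for s in script["strafe"]],
    }

rng = random.Random(0)
variant = mutate(base_script(), rng)  # one randomly deviated fighter
```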
Once you have that, you need to add propagation: the ability to recall patterns and also to judge effectiveness. So say fifty fighters face off against the player. Out of that fifty, ten or so make random strafe adjustments which make them harder to shoot down, so they survive longer against the player. One of them might stumble onto something approximating a zigzag, or the up, right, down, left, up cycle which I usually use. This one will probably last the longest, so its solution becomes the best one and gets recorded. Then when the next fighter comes up against the player, it looks at the database and automatically starts following the rotating strafe pattern. Propagate this to every fighter in the game, and suddenly every fighter knows how to dodge.
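Put together, select-record-propagate is just a toy genetic algorithm. A sketch under one big assumption: real fitness would be 'seconds survived against the player', which is replaced here by an invented stand-in that rewards direction changes (harder to hit):

```python
# Toy genetic loop: score strafe patterns, keep the best, propagate
# the winner through a shared 'database' that every fighter reads.
import random

def fitness(pattern):
    """Stand-in for 'seconds survived': reward direction changes."""
    return sum(abs(a - b) for a, b in zip(pattern, pattern[1:]))

def evolve(population, rng, keep=10):
    """Selection plus mutation: the best `keep` patterns survive,
    the rest are mutated copies of random survivors."""
    survivors = sorted(population, key=fitness, reverse=True)[:keep]
    children = [[g + rng.uniform(-0.2, 0.2) for g in rng.choice(survivors)]
                for _ in range(len(population) - keep)]
    return survivors + children

rng = random.Random(1)
# Fifty fighters, each an 8-step strafe pattern.
fighters = [[rng.uniform(-1.0, 1.0) for _ in range(8)] for _ in range(50)]
best_before = fitness(max(fighters, key=fitness))
for _ in range(30):  # thirty 'fights' against the player
    fighters = evolve(fighters, rng)
# The database entry every future fighter starts from:
database_best = max(fighters, key=fitness)
```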
You can apply this to more things too; you just have to give the AI more variables to track. It doesn't have to be told to do anything about them, but it does need to track them.
So you could add in 'is the player shooting?' 'Is the player oriented at me?' 'What type of ship is the player using?' 'Am I being shot?' that sort of thing.
If the AI can track these and match situations to behaviours, it gets even more intelligent. Suddenly the AI notices that firing a torpedo while under fire results in the immediate destruction of its ship, so any routines which produce that behaviour get selected against immediately, and fighters stop firing missiles when you are attacking them. Likewise, flying close to the player when they are armed with PBEs gets your shields fried, so those routines get selected against too.
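That situation-to-behaviour matching can be sketched as a scored lookup table; the situation and routine names here ("under_fire", "fire_torpedo", "evade", "strafe") are invented examples:

```python
# Sketch: key learned routines by observed situation, and select
# against routines that end badly.
from collections import defaultdict

memory = defaultdict(dict)  # situation -> {routine: score}

def record(situation, routine, survived):
    """Reinforce routines that end well; punish ones that get you killed."""
    scores = memory[situation]
    scores[routine] = scores.get(routine, 0) + (1 if survived else -1)

def choose(situation, default="strafe"):
    """Pick the best-scoring routine seen in this situation so far."""
    scores = memory[situation]
    return max(scores, key=scores.get) if scores else default

# The torpedo lesson: firing a torpedo while under fire gets you killed.
for _ in range(3):
    record("under_fire", "fire_torpedo", survived=False)
record("under_fire", "evade", survived=True)
print(choose("under_fire"))  # the torpedo routine has been selected against
```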
Once you have that in, you can code in some sort of randomisation constant which takes into account the absolute effectiveness of particular routines. Which is to say, if a routine gets a 50% kill rate on the player ship when used, it's an amazingly good routine and you probably shouldn't bother trying to improve it. If that kill rate falls, then start trying new things. That gives the AI the illusion of intelligent adaptation: if something works, it keeps using it; if it doesn't, it tries new things. By editing the constant up or down you can switch between early-capping the skill level at predictable, fairly effective behaviour, and a constant drive to improve the AI, resulting in more random behaviour but also a steady increase in effectiveness over time. This essentially lets you set the max difficulty. Add in another constant which, for example, automatically discards 10-50% of solutions, and you can speed up or slow down the learning rate by making it more or less likely that a given solution will actually contribute to the AI's store of knowledge. That sets the difficulty curve.
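Those two tuning knobs might look like this in code; the threshold and discard values here are invented examples, not anything from an actual game:

```python
# Sketch of the two difficulty knobs: stop exploring once a routine is
# good enough, and randomly discard solutions to slow the learning rate.
import random

GOOD_ENOUGH = 0.5   # kill rate above which the AI stops experimenting
DISCARD_RATE = 0.3  # fraction of candidate solutions thrown away

def should_explore(kill_rate):
    """Keep a proven routine; only try new things when it underperforms."""
    return kill_rate < GOOD_ENOUGH

def maybe_learn(solution, knowledge, rng):
    """Learning-rate knob: randomly discard some solutions so the AI's
    store of knowledge grows more slowly."""
    if rng.random() >= DISCARD_RATE:
        knowledge.append(solution)

rng = random.Random(2)
knowledge = []
for i in range(1000):
    maybe_learn(i, knowledge, rng)
# roughly 70% of the 1000 candidate solutions end up in the store
```

Raising `DISCARD_RATE` flattens the difficulty curve; lowering `GOOD_ENOUGH` makes the AI settle for weaker routines sooner.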
And of course you could run this AI through a bunch of fights before the game even ships. Every time you do QA, get your testers to do some fights in all the different types of ship against all the different types of enemies, and ship the resultant database with the game. You'd be starting with AI at least as good as X3's, and the database would build on top of that.
Even better, you could consider integrating this data collection into Steam or something similar, so all your hundreds of thousands of players would be building this amazingly detailed AI database, which you could then release in patches.
That'd be a cool way to do AI. It would also, in theory, be a lot easier than programming all of this behaviour in by hand.