A.I. too overpowered

Discussion in 'Planetary Annihilation General Discussion' started by antonyboysx, October 30, 2013.

  1. Quitch

    Quitch Post Master General

    Messages:
    5,885
    Likes Received:
    6,045
    The Overmind is not a good game AI. It could play a single side with a single tactic and would be broken by any change to game balance. Compare this to the neural network approach where the AI adjusts to balance changes automatically.

    It was an interesting experiment though, and there are lessons to be learned from it.
    godde likes this.
  2. arsene

    arsene Active Member

    Messages:
    166
    Likes Received:
    114
    The winner of an AI competition is not "good" because it stops working if you completely change the game? A disgusting attitude to be honest, please give credit where credit is due.

    Starcraft is not going to change anymore. Planetary Annihilation is, therefore the requirements for its AI are different, but it doesn't make the neural network approach better. It is simply more suited for this project.


    If, before deciding whether to engage in a fight, you run a simulation of the fight to see what the actual result will be, and use that to inform your decision making, I think that would be very useful for an AI. (Of course, there are some difficulties with this approach.)
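    A minimal sketch of what that pre-fight simulation could look like. Everything here (the unit model, the pooled focus-fire damage rule, the stats) is invented for illustration, not anything from an actual game:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    hp: float
    dps: float

def simulate_fight(mine, theirs, dt=0.1, max_t=120.0):
    """Crude forward simulation: each tick, every side pools its DPS and
    focuses it on the front of the enemy list. Ignores range, positioning
    and micro entirely. Returns True if 'mine' has survivors."""
    mine, theirs = list(mine), list(theirs)
    t = 0.0
    while mine and theirs and t < max_t:
        # Compute both sides' damage first so the exchange is simultaneous.
        dmg_to_theirs = sum(u.dps for u in mine) * dt
        dmg_to_mine = sum(u.dps for u in theirs) * dt
        for group, dmg in ((theirs, dmg_to_theirs), (mine, dmg_to_mine)):
            while group and dmg > 0:
                hit = min(dmg, group[0].hp)
                group[0].hp -= hit
                dmg -= hit
                if group[0].hp <= 0:
                    group.pop(0)
        t += dt
    return bool(mine)

# Three identical tanks against two of the same: engage.
print(simulate_fight([Unit(100, 10) for _ in range(3)],
                     [Unit(100, 10) for _ in range(2)]))  # True
```

    The difficulties mentioned above are exactly what this skips: human micro, terrain, and reinforcements arriving mid-fight.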

    I don't know if this is really playing fair though.
    Last edited: November 6, 2013
    stormingkiwi likes this.
  3. Quitch

    Quitch Post Master General

    Messages:
    5,885
    Likes Received:
    6,045
    As a game AI? Not really, it ignores the reality of game development. They spent over a year developing an AI that had one tactic for one of three sides and would break if the balance of the game were changed.

    So, like I said, interesting, but not exactly stuff that's going to change game AI as we know it. I liked the way they handled the threat model, clearly something that could potentially be adapted for use with flyers in other RTSs.

    You might want to turn your drama dial down a notch.

    Uh, you just said better using different words.
  4. stormingkiwi

    stormingkiwi Post Master General

    Messages:
    3,266
    Likes Received:
    1,355
    Planetary Annihilation only has a single side.

    It wasn't built to vary its tactics to keep the player interested, but please read the article.

    They detail that the agent responds to threats appropriately and balances economy and military. It knows how much damage it does to a target per second, how long an engagement will last, etc. Balance changes would only change the values of variables the agent uses, and otherwise mean nothing. If a unit has more AoE anti-air damage, it knows from the numbers that it will be more of a threat to its units, and its behaviour responds accordingly. That's what an emergent AI means. If it does less damage, it spends more time harassing targets before it has the critical mass it needs to gain the win.
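    To illustrate the point (with made-up unit names and numbers, not real balance data), the agent's threat maths can read everything from game data, so a balance patch changes inputs rather than code:

```python
# Hypothetical stats table -- in a real game this would be loaded
# from the game's unit definition files.
UNIT_STATS = {
    "flak": {"dps_vs_air": 40.0, "range": 80.0},
    "air_raider": {"hp": 120.0, "speed": 8.0},
}

def raider_survival_time(defender: str, raider: str) -> float:
    """Seconds a raider survives inside the defender's fire: lower means
    the defender is a bigger threat. Buff anti-air DPS in the data and
    this shrinks, so the AI raids that spot less -- no code change."""
    d, r = UNIT_STATS[defender], UNIT_STATS[raider]
    return r["hp"] / d["dps_vs_air"]

print(raider_survival_time("flak", "air_raider"))  # 3.0
```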

    There wasn't just a change to game balance - the game fed the agent incorrect information. The Overmind still won.

    You are completely missing what I'm trying to say. I'm not talking about the computer "solving" the game. My code didn’t “solve” Uno. It just played the game.

    The point is that the CPU runs multiple calculations per second, enabling reaction time to be short. As far as the AI is concerned, APM is meaningless, because it updates its understanding of the game's current state several times a second, and updates orders accordingly.

    If you adjust the AI so that it updates the game's current state, makes a decision, executes orders, and then pauses for a given period of time, you take away the only advantage that the AI really has.
    I fixed that for you.
    My question about what you mean by strong AI is at the end of this.
    Computers are only as smart as the people using them. It cuts both ways.

    Please read the article, or the quoted extract. In the article the AI does match a former world champion, so he has to play seriously, and he does lose.

    The AI agent updates its game state several times a second. Human reaction time is around 200 milliseconds. The "reaction time" of an AI is as fast as its code is processed by the CPU, probably in the microseconds. The AI can act as soon as the gamestate changes in its favour. Computer hardware beats the human nervous system every time; it's why we invented computers. The software comes to the "best" decision faster than a human would, and executes that plan. It can make many suboptimal decisions quickly and still win.
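    That loop, together with the throttling idea from earlier in the post, can be sketched as follows. `run_ai` and its callbacks are hypothetical names, not anything from PA's actual code:

```python
import time

def run_ai(read_gamestate, decide, execute, cycles, reaction_delay=0.0):
    """Update -> decide -> act, repeated. With reaction_delay=0 the loop
    spins as fast as the CPU allows (microsecond-scale reactions);
    setting it to ~0.2 approximates human reaction time and removes that
    advantage. Returns elapsed wall-clock time for the given cycles."""
    start = time.perf_counter()
    for _ in range(cycles):
        orders = decide(read_gamestate())
        execute(orders)
        if reaction_delay:
            time.sleep(reaction_delay)
    return time.perf_counter() - start
```

    With a 10 ms delay, ten cycles are guaranteed to take at least a tenth of a second; with no delay, hundreds of trivial cycles finish in far less.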

    The reason AI in RTS games is poor is that it is often not that sophisticated: its behaviour is roughly predictable and sometimes exploitable, and the programmers don't put a lot of development time into modelling good human decision making and ironing out the exploits, instead treating the AI as practice for multiplayer.

    An AI agent is fed gamestate information. It doesn't know everything. It is aware that it doesn't know everything. It makes the most optimal choice based on the information that it knows.

    I'm not sure what you mean when you say it should be the best AI first and then toned down, and how that differs from making good decisions most of the time but with the capacity to choose less optimal ones, making more mistakes as difficulty decreases.

    The reason why it shouldn't always make the best decision is because of predictability. If the AI always arrives at the same decision it's easy to exploit.
    (Sparky the Wonder Drone, Age of Empires wall exploit, Total War fortification exploit, in Sins of a Solar Empire the AI never bypasses stationary defences and always retreats in the face of a superior foe, etc.)

    (The latter is a disadvantage because it tells its forces to retreat from battle if the odds are too great. Most of the time, retreating makes it unable to fire on your units, so you get free kills without casualties, and if it immediately attacks again you still have some defence to stop the bleeding until reinforcements arrive. Plus it gets stuck at easily defensible systems and never attacks your defenceless economy.)

    Unless there is a perfect solution to the game, choosing the best choice is not actually the best thing to do, because it's countered through human learning. If you can accurately predict that the AI will scout you in the first minute with an aerial scout, then send Doxes if you have air defence and bombers if you don't, you just make sure your first Stinger doesn't roll off the assembly line until after the scout has visited.

    The best AI will be unpredictable in its decision making, because otherwise we'll solve the AI game.

    If the AI is always an economist expansionist, we'll master the rush build. If it always rushes we'll master the defensive build.

    The strongest AI possible would express and rank multiple choices that are available to it and choose the better ones.

    It's then pretty easy to make it choose less than ideal choices more often to lower the difficulty. Which means instead of explicitly telling it that at lower difficulties it can't build more than x fabbers, you just tweak it so that it places less importance on having multiple fabbers per project. It can still "panic build" fabbers, or it still has the potential to realise it needs more fabbers and build them to expand aggressively, but it is less likely to do either of those, and will make mistakes in other areas too.
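    A hypothetical sketch of that kind of downscaling (not how PA's AI actually works): rank the available options, then sample with a difficulty-controlled softmax, so low difficulties make plausible mistakes everywhere rather than hitting hard caps:

```python
import math
import random

def choose_action(scored_actions, difficulty):
    """scored_actions: list of (action, score) pairs, higher score = better.
    difficulty in (0, 1]: at 1.0 the distribution is sharp and the best
    action almost always wins; lower values flatten it, so the AI still
    *can* panic-build fabbers or expand aggressively, just less often."""
    sharpness = 10.0 * difficulty
    weights = [math.exp(score * sharpness) for _, score in scored_actions]
    actions = [action for action, _ in scored_actions]
    # random.choices normalises the weights for us.
    return random.choices(actions, weights=weights)[0]
```

    At difficulty 1.0 the weight ratio between a score-1.0 and a score-0.0 option is e^10, roughly 22,000:1; at difficulty 0.05 it's only about 1.6:1, so suboptimal picks become routine.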

    Novice players can build too few fabbers and fail to expand. But they can also expand too much and not defend their gains. Or do everything perfectly and over-commit themselves. If you just make the novice AI really bad at everything, you don't actually model novice human behaviour.

    The AI does.

    It's a very basic calculation. You do it in your head without conscious thought. The AI knows exactly what the ranges of weapons are, what the DPS of combatants is, what the health is. It can calculate the exact time that one combatant comes into range of another's guns and vice versa, and ultimately who wins the match-up. That entire calculation will probably be updated several times during the engagement.
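    Written out, that calculation is a closed-form sketch (ignoring projectile travel, overkill and micro, all of which matter in practice): the longer-ranged unit gets free shots while the gap closes, then it's a race of time-to-kill = hp / dps:

```python
def wins_engagement(my_hp, my_dps, my_range,
                    foe_hp, foe_dps, foe_range, closing_speed):
    """Back-of-envelope duel maths. Returns True if my unit wins."""
    # Free-fire phase: the longer-ranged unit shoots while the gap closes.
    free_fire = abs(my_range - foe_range) / closing_speed
    if my_range >= foe_range:
        foe_hp -= my_dps * free_fire
    else:
        my_hp -= foe_dps * free_fire
    if foe_hp <= 0:
        return True
    if my_hp <= 0:
        return False
    # Both in range: whoever outlasts the other wins.
    return my_hp / foe_dps >= foe_hp / my_dps

# Outranged by 10 at closing speed 5 -> 2 s of free fire; tankier unit wins.
print(wins_engagement(300, 20, 30, 150, 25, 40, closing_speed=5.0))  # True
```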
    Last edited: November 6, 2013
    popededi and Quitch like this.
  5. arsene

    arsene Active Member

    Messages:
    166
    Likes Received:
    114
    Well, fights are a bit random and you'll have difficulty modeling human micro behavior in your simulation. But you could indeed get a good feeling for whether a fight is in your favor by running simulations. I think it's a bit daunting to face an AI like this, however, presuming it won't make any mistakes. If it's ever posturing aggressively you better start panicking and run like hell, because the outcome of the fight is predetermined and it won't be in your favor. Quite scary, really.
  6. Quitch

    Quitch Post Master General

    Messages:
    5,885
    Likes Received:
    6,045
    Once the neural net is in, the AI will have run simulations. That's the whole point. It won't be quite the same as Overmind because this type of game plays quite differently from Starcraft, but the AI will already have an idea going in (like a player) whether it's going to win that battle or not.

    APM is not meaningless, the AI is making changes to the gamestate the same as any player.
  7. arsene

    arsene Active Member

    Messages:
    166
    Likes Received:
    114
    Neural networks used like that are quite different from the type of pre-fight simulation I described.
  8. Quitch

    Quitch Post Master General

    Messages:
    5,885
    Likes Received:
    6,045
    You're talking about calculating whether you'll win a fight, so no, in terms of output they're not that different. One is based off a historical model and one is based off live calculation, and the latter should be more accurate, but output wise they shouldn't be that different.
  9. arsene

    arsene Active Member

    Messages:
    166
    Likes Received:
    114
    Please, are you trying to be wrong about everything today? The latter is not 'more accurate', it is accurate period. It's like traveling into the future to get perfect information about present events.

    What's in this wooden box? Well historically it's a red ribbon 70% of the time and a blue ribbon 30% of the time, let's make a guess.

    What's in this wooden box? Let's open the box to find out and then use that to inform our choice.

    Which one do you think is more useful?

    I don't think the latter is even feasible (if it's to be 100% trustworthy), since there are some dubious assumptions you have to make, but I think if you meet those assumptions it's an order of magnitude more powerful than your neural network approach. They're not in the same class.
  10. Quitch

    Quitch Post Master General

    Messages:
    5,885
    Likes Received:
    6,045
    You said yourself that you can't model micro 100%, so no.
    godde likes this.
  11. godde

    godde Well-Known Member

    Messages:
    1,425
    Likes Received:
    499
    I have read the article. Arguably, the AI only has a single strategy: win through superior Mutalisk micro. It basically only uses 2 mobile fighting units (3 if you include workers).
    What if there actually were a stronger long term Mutalisk counter?

    An AI can basically be integrated into the game giving it zero reaction time.
    A human has big problems fighting the UI in Starcraft. Even making workers optimally requires the player to hit the button with simulation tick/frame rate precision. A perfect worker split at the start of a game is basically impossible for a human to perform.
    I think that the PA UI should allow the player to make optimal decisions without requiring frame rate precision.


    I think you should try to make an optimal AI that is able to beat any human; from there it should be relatively easy to downscale it and choose which areas the AI will perform worse in as you lower the difficulty.

    If you can exploit it, it isn't the best decision.

    That really does not sound like an optimal solution or good play from the AI.

    Here we arrive at an interesting conclusion. There might be a move that is considered the best move and it can even be calculated to be the most versatile move. However if there is a counter to this move then we can't be sure that performing that move is the best thing to do. This is a vital part of any strategy game. Trying to dominate the map with Mutalisks should have a counter for example.
    Last edited: November 6, 2013
    Quitch likes this.
  12. Quitch

    Quitch Post Master General

    Messages:
    5,885
    Likes Received:
    6,045
    AIs not always making the "best" (i.e. predictable) choice is pretty standard AI coding as far as I know. I think programmers refer to it as introducing fuzziness, or fuzzy logic, or something.

    It does, but the point of the article was that the AI's micro could counter the counter, because it could handle the Mutas in ways a human could not due to speed.

    I concur, though doing so in a way that feels natural is surprisingly challenging. You see this in things like chess AIs, where some, when set to a lower Elo, will play great and then make the occasional terrible move, and it doesn't feel natural at all. It feels less like you're beating a low-Elo opponent and more like it's intentionally throwing the game.

    I think real-time actually helps here, because you already have key levers like APM which you can limit.

    I definitely hope we see this form of handicapping available for everything below non-cheating, because I think it creates a far more natural feeling of poor play than economic handicaps, where the AI is playing a different game from you and the lessons learned at that level are not necessarily applicable to the next one.
    Last edited: November 6, 2013
  13. godde

    godde Well-Known Member

    Messages:
    1,425
    Likes Received:
    499
    If the AI always makes the strongest move and there is a counter to that move, then yeah, fuzzy logic is quite important in those cases.

    Relevant read: http://www.sirlin.net/articles/yomi-layer-3-knowing-the-mind-of-the-opponent.html

    I'd say that the game balance simply breaks at that point. It should be possible to balance the game for all skill levels, although it is a bit harder. However, I'd argue that if we can exclude high or massive APM usage as a major advantage, then we can make the game easier to balance at all skill levels, and it becomes more strategic as well.

    Edit:
    http://www.sirlin.net/blog/2012/7/16/execution-in-fighting-games.html
    Last edited: November 6, 2013
  14. stormingkiwi

    stormingkiwi Post Master General

    Messages:
    3,266
    Likes Received:
    1,355
    I read the article to imply that the AI will happily build up more than 2 mobile fighting units to be deployed defensively. I agree it's a shame they didn't make it a master of the air force; it could have done a lot using more units than just Mutalisks.

    The long term Mutalisk counter would be the other air superiority fighters. I guess it's one of those times where you fight fire with fire. Presumably no other agent used them effectively (they cost slightly more: 150/100 instead of 100/100).
    Ok. When I say reaction time, I mean how fast the code processes on the computer. For a human, 0.02 seconds may be inconceivably short, but for the CPU it's pretty long: that's only 50 hertz, and your computer's clock speed is several gigahertz.
    Absolutely agreed!

    I'm actually partially in favor of AI for each unit, so that your fabbers find something useful to do (e.g. repairing buildings and so on). Sins of a Solar Empire was actually pretty good in that regard; I think they designed the game to reduce micro for the players, so ships do make semi-autonomous decisions by themselves. The problem is the actual AI controlling the AI agents is extremely limited.
    Sorry, those paragraphs got kind of woolly. "If you can exploit it, it isn't the best decision" was the point I was trying to make. The issue with a non-decision-making AI is that it always retreats its fleet when it's going to lose the fleet.

    The problem is that it still loses the entire fleet, so retreating it was pointless.

    Thank you Quitch - I am indeed talking about fuzzy logic. I forgot the exact terminology.

    And yes, those examples were all examples of poor RTS AI. (Sparky the Wonder Drone takes advantage of the core game's bad programming.) The AI needs to have decision making so that it reduces the number of exploits available to the player.

    Dominating the map with fast air has a combined counter: you fight fire with fire using the equivalent air-superiority fighters and contain the situation. Don't forget the strategic value of the Overlords - if the agent is blind, its threat map is reduced in accuracy.

    Think about what the runner-up could have done if it had used fighters to counter the Mutalisks instead of building tanks - it could have maintained much more map control and won the battle of attrition.

    Exactly.

    Now I'm going to attack the AI. I was happily winning 1 vs 3 AIs in FFAs. Let's see what happens going back to 1v1.
  15. l3tuce

    l3tuce Active Member

    Messages:
    318
    Likes Received:
    76
    Just tried a round with the new AI. It surprised me but I still beat it fairly easily without getting to T2. It's learned a few more tricks but still seems vulnerable to raiding.

    My air patrols found some buildings early; however, when I sent a raiding force, it turned out to be a proxy base. What made things really interesting was that this proxy base was on a peninsula, with the only land route being to my base. The proxy base had been built by air fabbers dispatched from an ocean base just offshore.

    Unfortunately the AI didn't know how to build naval units to protect its base. I was able to severely damage it with lobbers and boats. However, with the enemy commander underwater, I was unable to take him out; even submarines were unable to target him unless he was standing in really shallow water.

    I got a few land raids from some other proxy bases the AI had built on land, but my static defenses took care of them, and with its main base in ruins, the AI was unable to mount a real offensive. He eventually did start trying to rebuild another main base away from my territory, but his eco was apparently eaten up by his land factories, which were just throwing units away on pointless raids. If he had shut them off he might have been able to rebuild faster.

    Fortunately for me this new base was on land, and my bombers were able to swarm him just as my T2 factories were starting to come on line.
  16. zaphodx

    zaphodx Post Master General

    Messages:
    2,350
    Likes Received:
    2,409
    That's unusual; I can target the commander in deep water using submarines - not the T2 ones, though. Perhaps that was it?

    Btw I don't think the AI even recognises naval units at the moment so don't expect any resistance on water maps.
  17. maxpowerz

    maxpowerz Post Master General

    Messages:
    2,208
    Likes Received:
    885
    The commander seems to recognise DEEP water as cover; after I tried to snipe it, it appeared to run for deep water cover (it may have been a coincidence).
  18. l3tuce

    l3tuce Active Member

    Messages:
    318
    Likes Received:
    76
    Hmm, it was kind of buggy really. I think at one point bombers were able to target him even though he was completely submerged. I never got T2 subs out because IIRC they are currently useless. I think this was actually the first time I ever built submarines at all, just because I thought they might be able to attack a submerged commander.

    More oddities: the AI seems to be capable of building mex on coastlines or rough terrain even when the player can't. Even worse, the player can't target or destroy these either. Also, some of the AI's water buildings were half submerged.

    By the way, I like the idea of submerged vs floating for water bases. I hope construction subs exist to do that.
    stormingkiwi likes this.
  19. stormingkiwi

    stormingkiwi Post Master General

    Messages:
    3,266
    Likes Received:
    1,355
    I think in that clip, the commander actually goes to build a mex at the point he's approaching. The AI doesn't understand that if it loses its commander, it loses the game.

    That seems like a bug. Whenever the commander is underwater for me I break out the t1 submarines and destroy him.

    I've beaten the AI without trouble every time I've spawned close to him. I've been caught by surprise by the full aggressiveness of its expansions on larger maps, but so far I haven't actually had a game where I've managed my economy or expansion properly (it's generally running at 40-50%), and I've played badly in general to see what it will do.

    Just a point on oddly positioned buildings - sometimes they are built using aerial fabbers. Sometimes they can only be built by submarines or a submerged commander. I also have a suspicion that the game believes any metal spot is valid build placement for the metal extractor. I think the AI spawns the metal extractor at the exact right coordinate, which is unavailable to the player using a cursor.
    It's not just simulations - the AI will adjust the values for attack priority based on the simulations. And it should also still be running a "how much damage can I do to this unit? How much damage can it do to me? Is it guarded? Is it an economic target?" check. The difference is it understands how much range, speed and numbers are a factor.

    Exactly. The disadvantage of the mutalisk swarm is that it doesn't appear to split the swarm up and run multiple swarms at once.

    A query I have about the neural net is its platoon structure. If you manage to get one of your first raiding platoons into the enemy's base, you start splitting the platoon up to attack multiple targets (assuming the base isn't well scouted). If one group runs into a platoon of tanks or the commander, you still have the second group wreaking havoc.

    So I'm wondering if the intelligence is "per unit", with the platoon structure allocating the units in that platoon to be most effective.
    APM is not meaningless, the AI is making changes to the gamestate the same as any player.
    For the AI, the actions per second are so much higher than the player's that it is meaningless. The computer shouldn't be limited to making actions on a human timescale. If you want to win the micro game, play a human.
    Last edited: November 8, 2013
  20. sqweek

    sqweek New Member

    Messages:
    6
    Likes Received:
    1
    No, it gets scary when the AI *understands* that you'll panic if it postures aggressively and starts bluffing you. ;)
    stormingkiwi likes this.
