AI questions

Discussion in 'Backers Lounge (Read-only)' started by Aelreth, February 27, 2013.

  1. Aelreth

    Aelreth New Member

    Messages:
    18
    Likes Received:
    0
    I recall learning that our AI opponents will largely be AIs that learned from other AIs. Will these AIs also have more distinct personalities, or will they be so random that we are unlikely to notice?

    Also, could we have an AI that learns from me, the player, so that it always exploits my weaknesses and teaches me to be more adaptable? We could have it be an observer, much like the first AI that was used to demolish that chess world champion. This would be a persistent AI that evolves after every engagement we have.

    Yes I am creating a monster.
  2. AusSkiller

    AusSkiller Member

    Messages:
    218
    Likes Received:
    0
    The problem with training an AI is that it requires a lot of iteration and a lot of trial and error on its part. If you were to train the AI against yourself, you would likely need to play tens of games before you began to notice any real difference, and hundreds, perhaps thousands, of games before the AI knew you well enough to be a real increased challenge. This is one of the reasons an AI is usually trained against other AIs or against itself, since that can be done as fast as a CPU can handle it. The AI is also likely to try a lot of stuff that causes it to lose before it tries anything that helps it win, which would actually make it much easier to beat while you were training it, and that would make the training matches a lot less fun too.

    However, having it be an observer might be an interesting idea. If top-ranked match replays were sent back to Uber and somehow used to train the AI (compare what the AI's actions would have been to those of the winning player?), it could lead to a real monster of an opponent.
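
    Roughly, the "compare to the winning player" part could look something like this Python sketch (every name here is made up, nothing to do with Uber's actual code):

    from dataclasses import dataclass

    @dataclass
    class Snapshot:
        game_state: dict     # whatever the engine exposes at this tick
        winner_action: str   # what the winning player actually did here

    def agreement_score(policy, replay):
        """Fraction of ticks where the policy picks the same action as the winner."""
        if not replay:
            return 0.0
        matches = sum(1 for snap in replay
                      if policy(snap.game_state) == snap.winner_action)
        return matches / len(replay)

    # A trainer would then tweak the policy's parameters offline to push this
    # score up (hill climbing, gradients, whatever), with no human in the loop.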
  3. Pluisjen

    Pluisjen Member

    Messages:
    701
    Likes Received:
    3
    Ultimately an AI would need to play an enormous number of games for this to work, and I don't know whether replays would work for that.

    What would be cool is giving the mod community a hook into the learning tool, so they can upload their own AIs running certain specific strategies and tactics and arm the primary AI against them. That way, when new strategies come up, the AI can learn to adapt to them faster if someone creates an AI that constantly performs that kind of play.
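
    As a very rough sketch of what I mean by a hook (all of these names are invented, obviously):

    REGISTERED_BOTS = {}

    def register_strategy_bot(name, act_fn):
        """Mod-facing hook: act_fn(game_state) -> an action string."""
        REGISTERED_BOTS[name] = act_fn

    # Example mod upload: a bot that relentlessly plays one specific opening.
    register_strategy_bot("early_bomber_rush",
                          lambda state: "build_bombers" if state["minute"] < 4 else "attack")

    def training_round(learning_ai_act, simulate, games_per_bot=100):
        """Pit the learning AI against every registered bot in fast headless games."""
        wins = {name: 0 for name in REGISTERED_BOTS}
        for name, bot in REGISTERED_BOTS.items():
            for _ in range(games_per_bot):
                if simulate(learning_ai_act, bot):  # True if the learning AI won
                    wins[name] += 1
        return wins  # strategies with low win rates are what the AI trains on next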
  4. Joefesok

    Joefesok Member

    Messages:
    88
    Likes Received:
    19
    Grab a tournament replay.
    Put it in-game.
    Set up the replay so that it's driven by an AI that will ATTEMPT to do everything the original players did.
    Put in other random AIs to fight them.
    goodbye universe
  5. radistmorse

    radistmorse Member

    Messages:
    59
    Likes Received:
    1
    You make it sound so easy: grab a replay, feed it to some magical software and the ultimate AI is ready for action.

    Just how do you think this "analyzing software" should work, anyway? Even an experienced human will have a hard time understanding why player 1 won and player 2 lost based solely on a replay. It takes a lot of cognitive work to figure out which of player 1's actions, and at what time, were the key to his later success, or which of player 2's actions, again at what time, were so devastating for him.

    So this magical "analyzing software" would have to be an AI in its own right, and moreover a far more complex one, capable of self-learning at a high level (which is only a couple of steps away from writing a true AI), compared to a regular RTS AI capable of decent play.
  6. garatgh

    garatgh Active Member

    Messages:
    805
    Likes Received:
    34
    *Wishing so hard for the day when true AI is created (if ever)*
  7. menchfrest

    menchfrest Active Member

    Messages:
    476
    Likes Received:
    55
    My limited understanding of AI, and machine learning in general, is that most of the techniques do not involve the AI understanding what happens, but rather knowing what worked well and what did not work well, given some definition of "well".

    So typically you throw random inputs at it, or known-good inputs (depending on the approach), and it settles on some "formula" that best turns a set of inputs (variables you've set up, like time, number of known enemy units, whatever you can think of) into what to do next.

    You might be able to use replays in the "throw good cases at it" versions, as long as the AI can access all the information it needs for its variables and to quantify its "worked well" formula. That may even be the primary training method in that case. In the "throw random things" versions, your replay is only going to be slightly helpful.
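
    Something like this toy Python sketch is the level I'm imagining (the actions, features and numbers are all made up):

    import random

    ACTIONS = ["expand", "build_army", "attack"]
    FEATURES = ["minutes_elapsed", "known_enemy_units", "own_factories"]

    # one weight per (action, feature); start random
    weights = {a: {f: random.uniform(-1, 1) for f in FEATURES} for a in ACTIONS}

    def choose(state):
        """The "formula": pick the action whose weighted feature sum is highest."""
        return max(ACTIONS, key=lambda a: sum(weights[a][f] * state[f] for f in FEATURES))

    def learn_from_example(state, good_action, lr=0.01):
        """Nudge the weights so a known-good action scores a bit higher next time."""
        for f in FEATURES:
            weights[good_action][f] += lr * state[f]

    # one "known good" case pulled out of a winning replay:
    learn_from_example({"minutes_elapsed": 5, "known_enemy_units": 0, "own_factories": 2},
                       good_action="expand")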

    *Yes I am simplifying my limited technical knowledge of this subject, for layman's clarity, now flame away*
  8. radistmorse

    radistmorse Member

    Messages:
    59
    Likes Received:
    1
    "understanding" is a vague term, which doesn't go well when we are speaking about software. Software never "understands" what it's doing, it's just programmed to do so.

    Generally, you got it right, but the devil is in the details. Learning what worked well and what didn't is indeed the target, but how can you describe it if you don't even know what this "what" is? The player constructed a bunch of robots, ordered them to go somewhere and destroy something, and in the end won. What was it that he did? Was it a scout mission? A diversion? A precisely timed attack from behind? And what is the definition of a "scout mission", a "diversion" or an "attack from behind" in the first place? Without a means to recognise these, the only thing the computer can learn is this: to win, you must produce the exact same bunch of robots and give them the exact same commands at exactly the same times, which of course won't work.
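
    To make that concrete: someone has to hand-write definitions like these before the learner can even talk in those terms, and the categories and thresholds here are completely arbitrary:

    def label_maneuver(units_sent, damage_dealt, enemy_base_seen, attacked_from_rear):
        """Map raw replay facts about one sortie onto a coarse, hand-defined category."""
        if units_sent <= 2 and enemy_base_seen and damage_dealt == 0:
            return "scout_mission"
        if attacked_from_rear and damage_dealt > 0:
            return "attack_from_behind"
        if units_sent > 2 and damage_dealt == 0:
            return "diversion"
        return "unclassified"

    # Without labels like these, the only pattern left in a replay is the literal
    # command stream, which is exactly the "repeat the same clicks" trap above.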
  9. menchfrest

    menchfrest Active Member

    Messages:
    476
    Likes Received:
    55
    My impression of neural nets is that you can make them responsive to many inputs, timing of purchases being one of them. The input to the net can be almost anything, I think: for example, do I know the location of the enemy commander, how many air units do I see, how much radar coverage do I have, are there any free metal spots, etc.

    You set up the inputs, the measurement function, the framework of the net, and start running it through training.
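
    In toy form, something like this (inputs and actions invented, numpy just for convenience; the measurement function and training are the whole hard bit and are left out):

    import numpy as np

    INPUTS = ["commander_located", "enemy_air_seen", "radar_coverage", "free_metal_spots"]
    ACTIONS = ["build_air_defence", "expand", "scout"]

    np.random.seed(0)
    W1 = np.random.randn(len(INPUTS), 8)   # input layer  -> hidden layer
    W2 = np.random.randn(8, len(ACTIONS))  # hidden layer -> action scores

    def decide(state):
        """Feed the hand-picked inputs through a tiny fixed net, pick the top-scoring action."""
        x = np.array([state[k] for k in INPUTS], dtype=float)
        hidden = np.tanh(x @ W1)
        return ACTIONS[int(np.argmax(hidden @ W2))]

    print(decide({"commander_located": 1, "enemy_air_seen": 3,
                  "radar_coverage": 0.4, "free_metal_spots": 2}))
    # Training (backprop, or evolving W1/W2 over thousands of games) sits on top
    # of this and decides how the weights change after each measured outcome.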
  10. syox

    syox Member

    Messages:
    859
    Likes Received:
    3
    The problem is not the software, it's the hardware. The Von Neumann architecture is imo not suited for AI.
  11. Aelreth

    Aelreth New Member

    Messages:
    18
    Likes Received:
    0
    My apologies, then. Thank you all for your input.
