TITANS PTE build 88043-pte is now live!

Discussion in 'PA: TITANS: General Discussion' started by mkrater, September 18, 2015.

  1. tatsujb

    tatsujb Post Master General

    Messages:
    12,878
    Likes Received:
    5,374
    You mean epic, right? I'm sure you mean epic.

  2. Sorian

    Sorian Official PA

    Messages:
    998
    Likes Received:
    3,843
    In addition to that, any games played while training would be crap. The AI trains by never retreating and by making random tactical decisions. For reference, the current neural networks have 300 games played each, so 1,800 games across all six (3 in PA, 3 in PA:T) neural networks.
    Remy561, tatsujb and Quitch like this.
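Sorian's description above (whole games played with random tactical decisions, results evaluated afterwards) can be sketched as a data-collection loop. This is a purely hypothetical mock-up of that scheme, not Uber's actual code; all names (`play_training_game`, the toy feature vector and outcome model) are invented for illustration.

```python
import random

def play_training_game(decisions_per_game=20, seed=None):
    """Play one mock game making purely random tactical decisions.

    Each decision is logged with the game's final result, so a network
    can later be fit to the choices that led to wins.
    """
    rng = random.Random(seed)
    samples = []
    score = 0.0
    for _ in range(decisions_per_game):
        # toy feature vector: (our local threat, enemy local threat)
        features = (rng.random(), rng.random())
        decision = rng.choice((0, 1))  # 0 = hold, 1 = attack, chosen at random
        # toy outcome model: attacking pays off only when we outgun the enemy
        score += 1.0 if (decision == 1) == (features[0] > features[1]) else -1.0
        samples.append((features, decision))
    won = score > 0
    return [(f, d, won) for f, d in samples]

def collect_training_set(games=300):
    """Mirror the '300 games per network' figure with mock games."""
    data = []
    for g in range(games):
        data.extend(play_training_game(seed=g))
    return data
```

After collection, each `(features, decision, won)` triple can be fed to a supervised learner so that decisions made in winning games are reinforced; this is why games played *during* training are throwaway, as Sorian says.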
  3. emraldis

    emraldis Post Master General

    Messages:
    2,641
    Likes Received:
    1,843
    Does it make smarter tactical decisions as time goes on during training? Because if so, couldn't you just train it for the first 300 games against an AI, and then let it run loose and keep learning?
  4. cybrankrogoth

    cybrankrogoth Active Member

    Messages:
    191
    Likes Received:
    57
    @Sorian What about the reverse kind of thinking,
    where it learns by being forced to defend?
    For example, if you set it up with an attitude of "expand quickly and see when I get attacked"?
    That way, if I play aggressively from the start it defends itself, but if it hasn't been attacked, it goes back to expanding once it has "a" defence/attack force?

    Could that work at all? I think some people (including me) probably wouldn't mind training the AI to make life easier, if there were a way to do it that wasn't playing 300 games doing the same thing every game. It would also teach players how to respond to the AI and to each other by looking for weaknesses more efficiently, and games would organically become better quality.
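The "expand until attacked, then defend" posture proposed above amounts to a tiny state machine. A toy sketch, purely illustrative and not how PA's AI is actually structured:

```python
def next_posture(posture, under_attack, has_defence_force):
    """Return the next posture for a simple two-state expand/defend AI."""
    if under_attack:
        return "defend"   # aggression from the player forces a defensive stance
    if posture == "defend" and has_defence_force:
        return "expand"   # the attack has passed and we kept "a" defence force
    return posture        # otherwise keep doing what we were doing
```

Note this is strategic-level behaviour; per Sorian's earlier reply, the actual neural networks only handle tactical decisions.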
  5. Sorian

    Sorian Official PA

    Messages:
    998
    Likes Received:
    3,843
    Nope. It makes random decisions during training to evaluate the results.

    As was mentioned earlier in the thread, the neural networks are used for tactical purposes only, not strategic.
    tatsujb likes this.
  6. emraldis

    emraldis Post Master General

    Messages:
    2,641
    Likes Received:
    1,843
    Huh. That makes more sense then, I guess. Oh well.
  7. cybrankrogoth

    cybrankrogoth Active Member

    Messages:
    191
    Likes Received:
    57
    Okay, so I'm watching the video @Quitch linked me, and I have a couple of questions.

    Zero) I couldn't get onto that AI thesis summary the Havok guy put up; do you happen to have it or something similar?

    One) Is it possible to let players play normally but leave training permanently on in the background, for example logging all inputs and syncing them back to a database for Sorian?
    For example (going off the Sup Com AI neural net), permanently turn training on but leave the output decisions off? Then set it up so that with every PTE or live update released, the game also syncs feedback from each game played back to Uber's servers.

    Two) If this is hypothetically possible, couldn't you then set up some kind of algorithm to automatically decide which decisions were better or worse based on all of those samples, and pick accordingly (as if you were doing regular training)?

    Three) For a defensive posture (when being attacked), could you set some threat weights so that the AI evaluates its current point defence and military strength and compares that to the enemy attacking force?

    Four) Speaking as someone who does not understand the literal side of coding: is it practical and possible to give strategic weights/goals to the AI? Or is there a neural network or some other kind of AI hub for strategic goals?

    If it is, is there any reason you can't simply set weights as:
    1) Protecting own commander
    2) Killing enemy commander
    3) If an (air) scout finds nothing on the local planet, prioritise orbital (finding the planet the enemy is on) and advanced technology?
    4) If scouts find enemies on the local planet, keep some defence and build an army for the highest-threat enemy
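Questions Three and Four above could be sketched as a threat comparison plus a fixed priority list. This is entirely hypothetical (invented names, invented 1.2 safety margin), not Uber's actual AI:

```python
def outgunned(point_defence, army_strength, enemy_force, margin=1.2):
    """True if local strength doesn't cover the attacking force with a margin."""
    return (point_defence + army_strength) < enemy_force * margin

def pick_goal(commander_threatened, enemy_commander_found, enemies_on_planet):
    """Walk the strategic priority list from the post, highest priority first."""
    if commander_threatened:
        return "protect own commander"
    if enemy_commander_found:
        return "kill enemy commander"
    if not enemies_on_planet:
        return "go orbital / advanced tech"   # scouts found nothing locally
    return "defend and counter highest threat"
```

In practice a priority list like this is closer to a utility system or behaviour tree than to the neural networks discussed in this thread, which Sorian says handle only tactical decisions.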
    Last edited: September 22, 2015
