New neural networks coming to PA

Discussion in 'Planetary Annihilation General Discussion' started by Sorian, February 13, 2015.

  1. Sorian

    Sorian Official PA

    Messages:
    998
    Likes Received:
    3,844
    fuzzels, slocke, goofyz3 and 47 others like this.
  2. lapsedpacifist

    lapsedpacifist Post Master General

    Messages:
    1,068
    Likes Received:
    877
    I understand... Nothing!

    What I gathered: AI gets more cleverz. Yay!
  3. cptconundrum

    cptconundrum Post Master General

    Messages:
    4,186
    Likes Received:
    4,900
    So... you took the day off work so that you could work on PA? :D

    I like you.
    carn1x, slocke, tatsujb and 24 others like this.
  4. felipec

    felipec Active Member

    Messages:
    465
    Likes Received:
    190
    Really nice to see that things like this are still being considered at this stage of the game! Also, finally a blog update!
    Awesome!!
    Nicb1 likes this.
  5. zihuatanejo

    zihuatanejo Well-Known Member

    Messages:
    798
    Likes Received:
    577
    Fascinating! Thanks for sharing.

    How much CPU time is the AI currently allocated? Is it even capped at all?
    tatsujb likes this.
  6. plink

    plink Active Member

    Messages:
    176
    Likes Received:
    89
    Really great read, thank you for sharing Sorian!
  7. Sorian

    Sorian Official PA

    Messages:
    998
    Likes Received:
    3,844
    It is not capped, but the AI does not typically use much. The neural networks are a tiny fraction of the CPU used. Most of the CPU goes into finding buildable locations and finding places to attack, because those rely on more expensive pathfinding and spatial queries.
  8. Remy561

    Remy561 Post Master General

    Messages:
    1,016
    Likes Received:
    641
    Awesome!! Will this allow the AI to kill us even more efficiently?
  9. exterminans

    exterminans Post Master General

    Messages:
    1,881
    Likes Received:
    986
    Seems reasonable so far, except maybe for the part where you bind the activation function to layer types rather than to individual layers.

    How do you scale the hidden layers? Same number of nodes for each layer, or configurable width?

    input -> rectifier -> rectifier -> sigmoid -> output, or what does your current test setup look like?
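
    (A minimal sketch of the distinction being raised here: binding the activation to a layer type versus to each layer instance. All names are hypothetical, for illustration only, and not PA's actual code.)

```python
import numpy as np

rng = np.random.default_rng(0)

def rectifier(x):
    return np.maximum(0.0, x)          # ReLU

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# (a) Activation bound to the layer *type*: every hidden layer is forced
# to share one activation, chosen here by a module-level constant.
HIDDEN_ACTIVATION = rectifier

class TypeBoundHidden:
    def __init__(self, weights):
        self.weights = weights

    def forward(self, x):
        return HIDDEN_ACTIVATION(x @ self.weights)

# (b) Activation bound to the layer *instance*: each layer carries its own,
# so mixed stacks like rectifier -> rectifier -> sigmoid fall out naturally.
class Layer:
    def __init__(self, weights, activation):
        self.weights = weights
        self.activation = activation

    def forward(self, x):
        return self.activation(x @ self.weights)

# With (b), the stack asked about above is just configuration:
net = [Layer(rng.normal(size=(8, 8)), rectifier),
       Layer(rng.normal(size=(8, 8)), rectifier),
       Layer(rng.normal(size=(8, 4)), sigmoid)]
x = rng.normal(size=8)
for layer in net:
    x = layer.forward(x)
```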
    Last edited: February 13, 2015
    jtibble and philoscience like this.
  10. Tontow

    Tontow Active Member

    Messages:
    459
    Likes Received:
    64
    Say hello to your new robot overlords.
    #skynet
  11. davostheblack

    davostheblack Well-Known Member

    Messages:
    364
    Likes Received:
    313
    For the intellectually challenged among us (i.e. me), what tangible impact can we expect to see from your redesign of the neural networks?
  12. superouman

    superouman Post Master General

    Messages:
    1,007
    Likes Received:
    1,139
    The AI will be the next King of the Planet. You heard it here first!
    DalekDan, Remy561 and stuart98 like this.
  13. philoscience

    philoscience Post Master General

    Messages:
    1,022
    Likes Received:
    1,048
    Great stuff! I'm just starting out in machine learning and network modelling, but I work at the institute where Demis Hassabis did his PhD and founded DeepMind. I take it that it requires more than just multiple layers for this approach to qualify as so-called 'deep learning'? My basic understanding is that the recent movement towards deep hierarchical networks isn't fundamentally different from previous neural net implementations; the networks themselves are just more complex, with more layers (and associated changes in the algorithms to handle that complexity)?

    It's interesting to know that learning becomes less sensitive to input as you move upwards in the hierarchy (sidenote: my research suggests this sensory insulation is also true of the brain's deepest, most central networks!). As I understand it in the context of decoding, this allows the higher levels to encode gradually more complex features that integrate over multiple lower-level inputs? E.g. in a neural net with enough layers, one should expect the highest levels to code the most general categories. It makes sense that these should be insulated from the random noise of regular input; I'm curious to hear more about how your changes will affect the interaction between the high and low levels.

    Anyway, I'm just curious how this plays out for PA's AI. Does the highest level encode something like meta-strategies? Could you improve the AI's ability to respond to discrete events (i.e. snipes) by adding more layers? Also, I won't ask why the network isn't Bayesian, as Demis recently revealed that even approximate Bayesian inference is too costly for deep nets. But damn if it wouldn't be cool if PA's brain were Bayesian!

    So cool when my work and play collide :)

    edit: confusingly, I just realized that 'top'/'bottom' and 'input'/'output' are reversed in your parlance from their typical use in neuroscience (bottom = input). Also, the sidebar on your blog is jittering intensely in Chrome for me!
    Last edited: February 13, 2015
    drz1, theseeker2 and jtibble like this.
  14. crizmess

    crizmess Well-Known Member

    Messages:
    434
    Likes Received:
    317
    Oh my. Who does that? Everybody knows that information flows downward (always!).
    ;)
    philoscience likes this.
  15. philoscience

    philoscience Post Master General

    Messages:
    1,022
    Likes Received:
    1,048
    Haha, basically all of neuroscience ;)

    Well, maybe not if you are really into the Bayesian brain; then you would at least view information processing as a reciprocal process. On the classical view, however (otherwise known as 'feed-forward'), information flows 'upwards' or 'forwards' from the sensory epithelium to the cortex, is manipulated, and is then transformed into an action signal (backwards/output). Really it's all a perception-action circle, though!
    crizmess and drz1 like this.
  16. crizmess

    crizmess Well-Known Member

    Messages:
    434
    Likes Received:
    317
    This is why I studied computer science and never turned to neuroscience: so many weird ideas about information.
    Now I'll go back to my desk and draw more trees.
  17. Sorian

    Sorian Official PA

    Messages:
    998
    Likes Received:
    3,844
    I am most definitely not a neuroscientist :) Interesting that the typical mental layout puts the input on the bottom. There are certainly more strategies for deep learning networks (and I would not call PA's neural net a deep one): there are various ways to calculate error, you can employ weight normalization, you can use batching, and that doesn't even get into the different neural network structures.

    [Edit] One thing to add, for someone who gets it: weight initialization is a bitch. I went through at least three rounds of failed neural network training just because of the initial weights. [/Edit]
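
    (Sorian doesn't say which scheme finally worked; as an illustrative guess at why this step is so touchy, compare a naive fixed-variance init with a fan-in-scaled one such as He initialization, a common fix for rectifier layers.)

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_init(fan_in, fan_out):
    # Unit-variance weights regardless of layer width: with rectifier
    # hidden layers this easily leaves units dead or blows up the
    # activations, matching the failed-training symptom described above.
    return rng.normal(0.0, 1.0, size=(fan_in, fan_out))

def he_init(fan_in, fan_out):
    # Scale the variance to the fan-in (2/fan_in is the usual choice for
    # rectifier layers) so the activation scale stays roughly constant
    # from layer to layer.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
```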

    It means the AI should use its units differently and possibly better. I am finally seeing the AI wiggle its Dox, for instance.

    Screwing around with the AI is both work and not work, if that makes any sense. Yes, it is my job, but it is also a passion.

    Each hidden layer has the same number of nodes. Sure, it is a limitation I could remove, but it does not appear that I need to quite yet. The sparsity from the Rectified Linear hidden layers seems to make up for the lack of configurable width for the moment.

    Current neural net hierarchy is: Input -> Hidden(rectifier) -> Hidden(rectifier) -> Output(sigmoid)
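
    (A minimal NumPy sketch of this hierarchy: two equal-width rectifier hidden layers feeding a sigmoid output. The layer sizes, biases, and init scales are assumptions for illustration, not PA's actual implementation.)

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Both hidden layers share one width, as described above.
n_in, n_hidden, n_out = 16, 24, 4

# Fan-in-scaled init for the rectifier layers, a smaller scale into the sigmoid.
W1 = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_hidden))
W2 = rng.normal(0.0, np.sqrt(2.0 / n_hidden), (n_hidden, n_hidden))
W3 = rng.normal(0.0, np.sqrt(1.0 / n_hidden), (n_hidden, n_out))
b1, b2, b3 = np.zeros(n_hidden), np.zeros(n_hidden), np.zeros(n_out)

def forward(x):
    h1 = relu(x @ W1 + b1)         # Hidden(rectifier)
    h2 = relu(h1 @ W2 + b2)        # Hidden(rectifier)
    return sigmoid(h2 @ W3 + b3)   # Output(sigmoid)

print(forward(rng.normal(size=n_in)))  # e.g. four scores in (0, 1)
```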
  18. philoscience

    philoscience Post Master General

    Messages:
    1,022
    Likes Received:
    1,048
    You rule anyway! If you want to pretend to be a (theoretical) neuroscientist, you could hook up the entire C. elegans connectome to PA and see if it is a better player than @elodea. I'm sure there is a Nature paper in that ;)

    Sounds like some awesome work, I look forward to seeing that wiggle and pondering the network behind it!
    stuart98 and crizmess like this.
  19. Sorian

    Sorian Official PA

    Messages:
    998
    Likes Received:
    3,844
    Is this something for visualizing the neural network with all of its connections?

    [Edit] Never mind, I finally found a page that describes the project. [/Edit]
  20. philoscience

    philoscience Post Master General

    Messages:
    1,022
    Likes Received:
    1,048
    Essentially, but also much more than that. Interestingly, we've known the complete 'connectome' (the map of the entire network and all its connections) of C. elegans for nearly 50 years, but it hasn't led to many breakthroughs. This group is trying to get the whole thing (it's only about 300 neurons in total) simulated online and open source so that people can hook it up to funny things like worm robots. In practice I doubt it would be very useful for your application (which I think is orders of magnitude simpler than even this network?), but it sure would be funny (and buzzworthy).

    edit: the future of PA? :p

    stuart98 likes this.
