How much can the game be Optimized?

Discussion in 'Planetary Annihilation General Discussion' started by ghoner666, August 21, 2014.

  1. cdrkf

    cdrkf Post Master General

    Messages:
    5,721
    Likes Received:
    4,793
    Thanks for the replies guys, my programming knowledge is limited to old-fashioned C for embedded microcontrollers, so I'm a bit lost when it comes to 'proper' software for PC...
  2. exterminans

    exterminans Post Master General

    Messages:
    1,881
    Likes Received:
    986
    Don't count on it. FPGAs may be faster than performing the operation in software, but they're still much slower than actually having a specialized instruction for that task - and they also cost way more energy.

    Just to put it into perspective, using Bitcoin as an example (just because there is a fair amount of data on it):
    CPU up to 70Mhashes/s at 190W
    GPU up to 800Mhashes/s at 200W
    FPGA up to 800Mhashes/s at 80W
    ASIC up to 1000Mhashes/s at 2W

    Yes, the FPGA is roughly 20x more efficient than the CPU (and 10x faster), but it's still beaten by far by a pure ASIC. Modern CPUs have quite a lot of such ASICs embedded; Intel CPUs, for example, even have one for encoding h264 video streams.

    Also note how the FPGA only manages to beat the GPU by a factor of 2-3x in terms of energy consumption?
    Now consider that compiling SystemC, Verilog, VHDL or similar code for use on an FPGA is a quite time-consuming task, and that FPGAs only excel at bit operations and sometimes integer arithmetic, but fall completely behind as soon as floating point arithmetic is required.
    For the stuff PA is doing, going for GPU acceleration is most likely more efficient than trying to squeeze the logic onto an FPGA image.
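To make the efficiency gap concrete, here is a quick Python sketch that turns the rough figures above into hashes per joule. The numbers are the estimates quoted above, not benchmarks:

```python
# Energy efficiency from the rough Bitcoin figures quoted above.
# (hashes per second, watts) per platform - forum estimates, not measurements.
figures = {
    "CPU":  (70e6, 190.0),
    "GPU":  (800e6, 200.0),
    "FPGA": (800e6, 80.0),
    "ASIC": (1000e6, 2.0),
}

# hashes/s divided by watts (= joules/s) gives hashes per joule
for name, (rate, watts) in figures.items():
    print(f"{name}: {rate / watts / 1e6:.1f} Mhashes per joule")
```

Run this and the FPGA comes out only 2.5x more efficient than the GPU, while the ASIC is orders of magnitude ahead of everything else - which is the point being made here.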
    Last edited: August 22, 2014
  3. ghoner666

    ghoner666 Member

    Messages:
    38
    Likes Received:
    21
    Nice video, but all I could think of while listening to the guy was "256k should be enough for everybody". I'm not a programmer or a quantum scientist, but my common sense just tells me quantum computers are gonna put all the other stuff in the trash. Not this decade, but it's gonna happen. Like the "watching a video" example he gave. Do you need 7 cores with hyperthreading to watch YouTube? No. Does having 7 cores with hyperthreading start your video much faster than an average laptop? No. Quantum might not end up doing classic math faster than regular computers, but it's gonna be the next thing anyway, because it has more possibilities. And scientists are still developing the thing, and they're developing it for a particular purpose, just like the military developed the Internet for military purposes, but it became so much more. Same for quantum computers. Once companies and scientists get their hands on one, they'll develop all kinds of tech and software and improve it beyond what we can even theorize right now.
  4. exterminans

    exterminans Post Master General

    Messages:
    1,881
    Likes Received:
    986
    That's not how it works. That's not how it works at all.

    Quantum computers allow you (in theory!) to move certain problems from a nasty complexity class (NP := all problems which may need up to exponential time to find a solution, but where a found solution can be verified in polynomial time) into a less problematic complexity class (P := all problems where the solution can both be found and verified in polynomial time).

    For problems which are already in P, quantum computers can't gain anything. On the contrary, they are actually going to be slower(!) for these types of problems.
    For problems which are in NP, there may (or may not, like I said, it's theory) be algorithms which can only be executed on quantum computers and which solve the problem several orders of magnitude faster than a classic computer.

    And then there is also the nasty stuff in EXP (:= all problems which need exponential time to find a solution - and also exponential time to verify a given solution), which neither quantum nor regular computers can solve efficiently.

    So what's left to speed up classic computing?
    Not much. More parallelism, higher clock speeds, shrinking structure sizes and wider integrated circuits for smaller delays. However, silicon-based circuits are hitting hard physical limits. One more size reduction will be possible with carbon nanotubes, and that's about it. From that point on, the only step left is to move even further into 3D when building integrated circuits, until heat dissipation eventually becomes impossible.
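    A tiny illustration of why the complexity class matters more than raw hardware speed: compare the step counts of a hypothetical polynomial-time (n^3) algorithm and a hypothetical exponential-time (2^n) one as the input grows. No amount of clock speed rescues the exponential one:

```python
# Hypothetical step counts: a P-class algorithm taking n^3 steps vs.
# an exponential one taking 2^n steps (e.g. brute force over all subsets).
for n in (10, 20, 40, 80):
    poly = n ** 3
    expo = 2 ** n
    print(f"n={n:3d}  n^3={poly:>10,}  2^n={expo:>28,}")
```

    By n=80 the exponential algorithm needs ~10^24 steps - a million times faster hardware only buys you a handful of extra input elements.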
  5. maxcomander

    maxcomander Active Member

    Messages:
    328
    Likes Received:
    129
    Looking at my system resource monitor, I still see 1 core at 95%, with 7 other cores used between 20% and 60%, and 4 cores unused....

    Guessing either the client is optimised for quad-core hyperthreaded CPUs, or the server is limiting things before the client can fully utilize all CPU assets.

    I wonder if we'll ever get to the point where workloads are spread equally across all cores in a game!!
  6. exterminans

    exterminans Post Master General

    Messages:
    1,881
    Likes Received:
    986
    This. Network, GPU and server are all limiting.

    Once dedicated servers are released, single-player games can run on your local CPU, so it will be fully utilized.
  7. ghoner666

    ghoner666 Member

    Messages:
    38
    Likes Received:
    21

    I found an article that explained my point better than I could, so I snipped this part:

    Therefore, if we run the same algorithm on a quantum and on a classical computer, the classical one will usually win. Quantum computers will only be better if an algorithm exists where the presence of entangled quantum states can be exploited to reduce the number of steps required in a calculation.


    At this stage we don’t know of any quantum algorithm to reduce the complexity of, say, web browsing or text editing, but the search is on.


    You're right. Classic computers currently beat quantum ones at basic problems, I know that. My point is that with so much theory and so many unknowns ahead, I just can't believe that something that advanced can't have more advantages than cracking only certain types of problems. We're not using it at its full potential because it's still brand new and in development. And I'm certain I'm not alone in thinking this, or there wouldn't be millions poured into the project.
  8. cwarner7264

    cwarner7264 Moderator Alumni

    Messages:
    4,460
    Likes Received:
    5,390
    Quite frankly, if PA can't run at 60FPS on a Raspberry Pi before release, I think we can all consider that the game has been an abject failure.
    cdrkf, Gorbles and optimi like this.
  9. exterminans

    exterminans Post Master General

    Messages:
    1,881
    Likes Received:
    986
    Actually, even cracking only this limited class of problems is already worth the millions.

    I don't think you understand what cracking these problems actually means: these problems are considered impractical to solve with classic computers. Impractical means algorithms do exist, but those algorithms can only be used for very, very small datasets, because their runtime grows exponentially with the size of the input.

    And it's not only theoretical problems - even "basic" real-life applications like route planning for forwarding agencies and the like fall into this category. Plus, there's also cryptography, which mostly relies on the assumption that these problems can't be solved using traditional computers.

    It might be possible to speed up other, much simpler problems too using quantum computers, but the possible gains are negligible compared to slaying these more complex problems.

    Also note that quantum computers have a huge disadvantage compared to classical computers: in order to solve a problem of a specific input size, you also need a quantum computer of at least a corresponding size. You can't just say "I will trade speed for cheaper hardware". Either there are enough entangled quantum states to handle a specific input, or there are not. Unlike in classical computing, where you can always use a much, much simpler computer to solve the same problem - just at the expense of additional computation time.

    These are limitations you can't overcome; they are inherent to the idea behind quantum computers.
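    As a sketch of why route planning gets impractical so quickly: exact brute-force planning over n stops has to consider on the order of n! orderings. The distances below are made up purely for illustration:

```python
# Brute-force shortest round trip (TSP-style): check every ordering of
# the stops. Feasible for a handful of stops, hopeless beyond that.
from itertools import permutations
from math import factorial

def best_route_length(dist):
    """Exhaustively find the shortest round trip. dist[i][j] is the
    distance from stop i to stop j."""
    n = len(dist)
    best = float("inf")
    for order in permutations(range(1, n)):      # fix stop 0 as the start
        route = (0,) + order + (0,)
        length = sum(dist[a][b] for a, b in zip(route, route[1:]))
        best = min(best, length)
    return best

# A tiny symmetric example with 4 stops (made-up distances):
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(best_route_length(dist))   # → 18
print(factorial(20))             # orderings for just 20 stops: ~2.4 * 10^18
```

    Four stops means checking 6 routes; twenty stops already means ~10^18 orderings, which is why real forwarding agencies rely on approximations instead of exact answers.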
    liquius likes this.
  10. cdrkf

    cdrkf Post Master General

    Messages:
    5,721
    Likes Received:
    4,793
    We just CAN'T expect people to have to INVEST the kind of money that a Pi costs... I mean £30 on top of the price of the game... Ridiculous! PA, when properly optimised, should easily run on my 128K +2 ZX Spectrum (the DELUXE model with *built-in tape drive* no less).....
    Last edited: August 22, 2014
    cwarner7264 likes this.
  11. cdrkf

    cdrkf Post Master General

    Messages:
    5,721
    Likes Received:
    4,793
    This is similar to the issue of accelerating highly parallel code using a dGPU. The smaller the problem, the smaller the gain. Eventually the time it takes to copy data between main memory and the GPU makes the process *less efficient*, so there is a practical limit to the problems that can be accelerated (although AMD's HSA APUs are looking to change that by removing the memory copy operation altogether).
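    A back-of-envelope model of that break-even point. All the constants here - per-item compute cost, PCIe bandwidth, kernel launch overhead, GPU speedup - are made-up placeholders, not measurements:

```python
# Toy model: offloading to a discrete GPU only pays off once the compute
# saved outweighs the fixed launch overhead plus the PCIe copy cost.
def cpu_time(n, per_item=10e-9):
    """Seconds to process n items on the CPU (10 ns/item, assumed)."""
    return n * per_item

def gpu_time(n, per_item=10e-9, speedup=20, bytes_per_item=8,
             pcie_bw=8e9, launch=50e-6):
    """Seconds on the GPU: launch overhead + copy there and back + compute."""
    copy = 2 * n * bytes_per_item / pcie_bw
    return launch + copy + n * per_item / speedup

for n in (1_000, 100_000, 10_000_000):
    print(f"n={n:>10,}  cpu={cpu_time(n):.6f}s  gpu={gpu_time(n):.6f}s")
```

    With these (assumed) numbers the GPU loses at n=1,000 and wins from roughly n=100,000 upward - and for very large n the copy itself starts to dominate, which is exactly the overhead HSA-style shared memory removes.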
    Last edited: August 22, 2014
  12. maxcomander

    maxcomander Active Member

    Messages:
    328
    Likes Received:
    129
    Had a hunch this was the case; that's why I haven't sold on my 930-based system. I'm hoping to use it as a server for my PA games ; )
    cdrkf likes this.
  13. cdrkf

    cdrkf Post Master General

    Messages:
    5,721
    Likes Received:
    4,793
    I'm not sure there's anything worthwhile upgrading to from a 930, really (especially running at 4.1 GHz)... I mean, a 4770K is what, 20% faster? Hardly seems worth it to me :p
  14. ghoner666

    ghoner666 Member

    Messages:
    38
    Likes Received:
    21
    Whatever limit we THINK there may be, it can be removed or worked around. We can't go faster than light? We'll just bend space then. Input too big for your home quantum computer? No biggy, you'll be wirelessly connected to a quantum internet cloud made of much bigger quantum computers that will break it down for you and allow your machine to process it. But more down to earth: just like you have weak computers that can't run certain games or programs, if your quantum computer can't run a certain type of input, you'll need to get a bigger one, or accept you can't process that type of input. Everything will become structured like it is now; quantum computers will have units of measure that tell the average consumer what they can or can't do with them.

    We can't say it's impossible or impractical on theory alone; you have to experiment and observe, that's the core principle of science. Maybe for a time we'll have hybrid computers that have a "classic" part along with the quantum one, similar to a GPU working together with a CPU, but I believe quantum computers will eventually become the only thing, and classic computers will be just a relic, like the typewriter and tape recorder. We'll probably know who's right in the next few decades :)
  15. garat

    garat Cat Herder Uber Alumni

    Messages:
    3,344
    Likes Received:
    5,376
    My 4770K at home absolutely destroys any of the 9x0 series chips. But my use case is different. Compiling, photo editing and video compression all make use of the huge improvements on the 4770K. The game will probably play faster on a 4770K too, but maybe not by much, if you have plenty of memory and a decent GPU.

    And regarding optimization: there are always more optimizations you can do. Basically you can work on them forever. The general rule of thumb, though, is that you stop when any individual optimization falls below a certain threshold - be that 5%, 3%, 1%, or 0.5%. Of course they add up, and where that threshold lives changes - later in the project, when more optimization is done, you accept a lower threshold, as you've already got most of the low-hanging fruit that provides the big wins.

    But then, some engineer that's way too smart will come along and find "Oh my goodness, we shouldn't be doing that" somewhere in the code - and we have a LOT of lines of code - and out of the blue you might see another huge win. We've already had that happen a number of times.

    Fairly certain there are more, and we already have months of work lined up for PA.
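    As a rough sketch of how those sub-threshold wins add up, assuming the optimizations are independent so their speedup factors multiply:

```python
# If each optimization speeds the program up by a small percentage and
# the optimizations are independent, the speedup factors multiply.
def combined_speedup(gains):
    """gains: per-optimization fractional wins (a 3% win is 0.03)."""
    total = 1.0
    for g in gains:
        total *= 1.0 + g
    return total

print(combined_speedup([0.03] * 10))   # ten "small" 3% wins compound
```

    Ten individually unremarkable 3% wins compound into roughly a 34% overall speedup - which is why the threshold keeps dropping as a project matures.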
    Remy561, ace63, drz1 and 7 others like this.
  16. doud

    doud Well-Known Member

    Messages:
    922
    Likes Received:
    568
    When optimizing, aren't you taking the risk of preventing additional features from being easily added? Or are you only optimizing parts of the code that for sure will not have to be modified if you need to add a new feature? By new feature I don't mean a "game" feature, but more a "core server" feature. Is there a way to find a good balance?
  17. vackillers

    vackillers Well-Known Member

    Messages:
    838
    Likes Received:
    360
    If a lot more optimization can be done, then that would be brilliant!! We could all use a lot more performance, especially in the late game. But I do wonder whether offline matches will perform a LOT better than using Uber's online servers? This is something I've been toying with in my thoughts over the last couple of months, and I'm thinking that whenever the game allows us to go offline, there should be a lot of gain, because there are no more restrictions except your own machine's specs.
  18. garat

    garat Cat Herder Uber Alumni

    Messages:
    3,344
    Likes Received:
    5,376
    ... Yes? :) It's rarely either or. Few optimizations I'm aware of cause significant "do this or do that, but never both" feature decisions. It can happen, but more often it just necessitates designing something a bit different, rather than removing the feature.
    Remy561, drz1 and tatsujb like this.
  19. japporo

    japporo Active Member

    Messages:
    224
    Likes Received:
    118
    Good summary, but it completely misses the forest for the trees. Of course a custom ASIC will be fastest for a given task, but it costs millions to tens of millions from design to tape-out and requires a heck of a lot more expertise than knowing the HDLs you listed. Moreover, if you need to update it significantly, you get to pay again. Worthwhile if you're doing Bitcoin or video encoding/decoding or something else well-defined and fixed, but bad from a flexibility standpoint.

    I mean, the whole point of the general-purpose computer is that it can be configured to do nearly anything just by switching out the software and, while it might not be optimal, can do those tasks efficiently enough to make the tradeoff worthwhile. Embedding reconfigurable logic onto a processor just extends that concept to its logical conclusion, and we're just about at the point in technology where FPGAs are getting capable enough, and progress in processor development has slowed down enough, that it's going to happen, in my opinion. Intel's offering is evidence of that. I'm aware of several projects already in preparation to deploy FPGA accelerator cards for their services, and I've heard others are highly interested.

    (And aside from all that technobabble, I was simply making a joke anyway. :) )
    Last edited: August 22, 2014
  20. thetrophysystem

    thetrophysystem Post Master General

    Messages:
    7,050
    Likes Received:
    2,874
    I had better be able to install it on my wristwatch.

    I don't care if its analog, not digital.

    Drag and drop Uber, drag and drop!

    #namethereference
