Just wondering when the devs are planning on focusing on game optimization. I know that atm it's not a major focus (even though the game seems to run smoother with each update). I'll just be happy when we don't have lag issues on a single planet. Not too worried about final system requirements since my PC can steamroll most games, but the best hardware in the world won't help you if the code still isn't optimized.
As you said, each update tends to run better than the last. Optimizations aren't the kind of thing you always leave for last; a lot of them can be slotted in along the way. I expect early-to-mid beta will be where lots of content and other small quality-of-life features are focused on, with some balance thrown in. Mid-to-late beta will likely focus comparatively more on bug fixing and optimization. Mike
Agreed with knight; if you compare performance now to the start of alpha, we're leaps and bounds ahead (with more content and scaling to boot).
Not necessarily. If, as some people have said, each planet runs on its own thread, performance should stay very similar as long as enough cores are available.
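Purely for illustration, here's roughly what "one thread per planet" could look like (a speculative sketch, not actual engine code): a tick then only takes as long as the slowest planet, provided there are enough cores.

    #include <thread>
    #include <vector>

    // Hypothetical sketch only -- not PA's actual engine code.
    // Each planet advances its own simulation on a dedicated thread,
    // so more planets use more cores instead of slowing one thread down.
    struct Planet {
        void stepSimulation(double dt) {
            // ... advance pathing, physics and combat for this planet ...
            (void)dt;
        }
    };

    void runTick(std::vector<Planet>& planets, double dt) {
        std::vector<std::thread> workers;
        workers.reserve(planets.size());
        for (Planet& p : planets)
            workers.emplace_back([&p, dt] { p.stepSimulation(dt); });
        for (std::thread& t : workers)
            t.join(); // every planet finishes the tick before the next begins
    }

    int main() {
        std::vector<Planet> planets(8); // eight planets, eight sim threads
        runTick(planets, 1.0 / 10.0);   // one 100 ms server tick
    }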
Unless they're already working on optimization at the same time: since the very last update I've noticed that many planets can move together without FPS drops.
I assume we'll get a LOD system so different planets on display don't show the full polys of all units at the same time. Being zoomed out far enough should actually give the highest graphical FPS, as the game would only need to render the strat icons.
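Something along these lines, I'd imagine (a toy sketch with made-up distance thresholds, not the real renderer):

    #include <iostream>

    // Hypothetical LOD selection -- the distance thresholds are invented.
    // The further the camera, the cheaper the representation, ending at
    // the strategic icon, which is nearly free to draw.
    enum class UnitLod { FullMesh, LowPolyMesh, StratIconOnly };

    UnitLod pickLod(float cameraDistance) {
        if (cameraDistance < 500.0f)  return UnitLod::FullMesh;
        if (cameraDistance < 5000.0f) return UnitLod::LowPolyMesh;
        return UnitLod::StratIconOnly;
    }

    int main() {
        std::cout << static_cast<int>(pickLod(12000.0f)) << '\n'; // prints 2: icons only
    }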
If they only render one planet at a time (the one you are currently viewing), then graphics performance should be fine. I'm just thinking the processor will get a workout when you have thousands of units across the galaxy performing all sorts of tasks. Might be best if they support at least 4 cores.
I believe the intention is to have multi-window support at some point. I imagine that's going to be pretty heavy on the ol' graphics!
Optimization comes in kinda late, but the guys are constantly crunching on various bugs and stuff that come up in the alpha.
From my limited knowledge of game development, optimisation is usually focused on towards the end because you want to limit the new bugs you will probably introduce. Part of crunch-time at the end, when devs will be in sleeping bags under their desks. But some of the more obvious issues will be fixed along the way.
We actually have a guy whose job it is to spend all of his time optimizing. He's currently coming up to speed as he didn't write the engine, but he's good at this kind of stuff.
Well the perf improvements from the last few builds have been massive, so he (and whoever else is looking at it as part of their role) is doing a great job!
And how is the optimization of the netcode going? I'm asking because the game is pretty much unplayable on a dedicated 3 Mbit connection right now: with merely 1,000 active(!) units in the game, it's already down to about 0.5-1 gamestate FPS, which matches the available bandwidth. Rendering still runs smoothly at that point, but that hardly matters, since the game already feels laggy with units and projectiles just jumping around.

What happened to the promised extrapolation(!) of entity paths on the client? Right now, units just "jump" around unpredictably when the bandwidth is used up, and there appears to be no extrapolation whatsoever. If I were to sniff the traffic, I would probably find that you only transmit location and orientation, but not the movement vector, timestamp, or any other attribute that would be required for that task.

The frames themselves aren't even consistent. The flocking behavior appears to work properly when watching the replay, but in a live game it looks more like the server transmits only the most recent position of each unit at the point of transmission instead of consistent frames, and the client only renders units at confirmed positions with no extrapolation of the movement path. This leads to a strange picture where the movement animation is playing, but the unit itself only jumps around, colliding with other unit models all the time. Interpolation is apparently not used either once the locations are no longer "continuous", and interpolation of past positions alone is just not sufficient. With projectiles like rockets this gets even weirder, since there is only a SINGLE reported location per projectile (where it has hit), and the destruction event of the projectile is omitted completely, leading to projectiles which just stop midair at the edge of the collision box until the entity has decayed on the server. (The decay event does seem to be transmitted; only the explosion animation is missing.)

Connection speeds of 2-6 Mbit downstream are actually quite common in Germany, with an average connection bandwidth of merely 6 Mbit, and that number is highly distorted by the 50-200 Mbit connections in the larger cities and by our government's odd definition of "broadband", which considers 1 Mbit sufficient for the countryside and smaller villages.

In a short test, PA already used up my bandwidth with only ~100 units moving; interpolation failed at ~300 units moving. With only 500-600 (stationary) units attacking, projectiles no longer exploded but stopped dead midair until decayed.
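To illustrate what I mean by extrapolation, a minimal dead-reckoning sketch in C++ (the snapshot layout and numbers are my assumptions, not PA's actual wire format):

    #include <cstdint>
    #include <iostream>

    // Besides position, the server would also send a velocity vector and
    // a timestamp, so the client can keep units gliding along their last
    // known path between updates instead of snapping to each new position.
    struct UnitSnapshot {
        float    px, py, pz;    // last confirmed position
        float    vx, vy, vz;    // movement vector at that moment
        uint32_t serverTimeMs;  // server time the state was sampled at
    };

    struct Vec3 { float x, y, z; };

    // Linear extrapolation: estimate where the unit is "now" on the client.
    Vec3 extrapolate(const UnitSnapshot& s, uint32_t nowServerTimeMs) {
        float dt = (nowServerTimeMs - s.serverTimeMs) / 1000.0f;
        return { s.px + s.vx * dt, s.py + s.vy * dt, s.pz + s.vz * dt };
    }

    int main() {
        UnitSnapshot s{0.f, 0.f, 0.f, 10.f, 0.f, 0.f, 1000}; // moving +x at 10 u/s
        Vec3 p = extrapolate(s, 1200); // 200 ms after the last update
        std::cout << p.x << '\n';      // 2: smooth motion, no jump
    }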
I've also wondered about how the server communicates with the client. I haven't done the exact math, but if we consider the theoretical '1 million units' that has been mentioned a few times, back of the envelope says that if each one were updated once per second at one BYTE per unit, that would be 1 megabyte/s. Even ingeniously encoded basic location/direction data would surely be more than a few bytes, and assuming updates must happen, say, 5 times a second, it adds up: 3 bytes * 1,000,000 units * 5 updates/s = 15 megabytes per second. Ooooh! Some kind of grouping or clumping by the engine will be needed there!
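For the curious, the same arithmetic as a tiny program (the figures are just my generous guesses from above):

    #include <iostream>

    // Redoing the back-of-the-envelope math from the post above.
    int main() {
        const double units          = 1'000'000; // the oft-quoted figure
        const double bytesPerUpdate = 3;         // deliberately generous minimum
        const double updatesPerSec  = 5;
        std::cout << units * bytesPerUpdate * updatesPerSec / 1e6
                  << " MB/s\n"; // 15 MB/s -- hopeless for home connections
    }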
"1 million units" is type of unrealistic. Not only is the number itself "a bit" exaggerated, but you will also never see that number of units at a time. That's the target unit count for the whole simulation, spread over all planets. The idea is, to only communicate units and their properties for objects which currently are in any of your viewports and only the properties which are actually required for the current level of detail. You can't express a unit in a single byte though. It's more like a minimum of about 40-100 bytes (already assuming optimal compression) per unit and frame / event. So in theory, you could get at most ~50.000 unit updates per second over an 3Mbit connection, if you consider smart extrapolation you can do with ~5 updates/events per unit and second, so you SHOULD be able to display about 10,000 units at the same time across all viewports at the same time without noticeable lags. Well, let's keep it real and say that half the number is about reasonable, that leaves us still with 5,000 units which the netcode SHOULD be able to handle. Right now, it can't even handle 1/10th.
Good point - as you say, you would only need to update things in the current view. My '3 byte' example was a very generous minimum; I figured as soon as I chose something a bit higher, people would appear saying 'noooo, it could be 4 bytes less by doing such and such', which wasn't central to my point.