Graphics Update, V1.2

Discussion in 'Planetary Annihilation General Discussion' started by varrak, March 15, 2014.

  1. pcnx

    pcnx New Member

    Messages:
    13
    Likes Received:
    12
    Okay, so you have the GBuffer of the whole scene, and for each light you drew a frustum, sphere, etc., and in the shader you then used the GBuffer to compute the lighting for each fragment? (I haven't yet worked on a fully deferred pipeline, so I'm just being curious :) )

    Are you now using hardware instancing with multiple transformation matrices or a big vertex buffer with each light geometry in it?

    Oh, and since dynamic shadows are (besides the sun, of course) a no-go when there are that many lights involved, are you considering using (or already using) ambient occlusion?
    Thanks for the answers. As someone who is almost finished with his studies and wants to get a foot in the door in computer graphics, I find it very interesting to talk about this stuff on a game in the works!
  2. byrnghaer

    byrnghaer Active Member

    Messages:
    109
    Likes Received:
    55
    Very interesting. Can't wait to see and feel the improvements :)
  3. doud

    doud Well-Known Member

    Messages:
    922
    Likes Received:
    568
    20ms => 7ms :eek: Hopefully most people here realize what a huge improvement this is, and that it can only be achieved by Uber coders :) We're definitely back to those old days when a saved CPU cycle was a victory and made all the difference. I really like this.
    Varrak, with 1.0/1.1/1.2 you know what it means: we need 1.3 and above :)

    Was just wondering: are SLI or CrossFire good candidates for scaling even further?
    varrak and drz1 like this.
  4. vrishnak92

    vrishnak92 Active Member

    Messages:
    365
    Likes Received:
    118
    Would love to see user-controlled graphics settings; with my computer I worry less about how pretty it is and more about how playable it is. Preloading was one of those things that REALLY helped.
  5. lazlopsylus

    lazlopsylus New Member

    Messages:
    22
    Likes Received:
    20
    These are always great to read. Keep them coming for those of us who like hearing about some of the magic behind the curtain.
    varrak likes this.
  6. bgolus

    bgolus Uber Alumni

    Messages:
    1,481
    Likes Received:
    2,299
    Yes, this is fairly accurate.
    Our GBuffer layout currently looks like this:
    1. RGBA8: Diffuse R, Diffuse G, Diffuse B, Specular Mask
    2. RGBA8: Normal X, Normal Y, Normal Z, Specular Exp
    3. D24S8: Depth Z (Stencil unused currently)
    We then also have an accumulation buffer that's either RGBA8 or RGBA16, depending on whether you're rendering HDR or not.
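    If it helps to see that layout as code, here's a rough sketch of creating matching render targets in raw OpenGL. It's a simplified illustration rather than actual engine code: the names are made up, it assumes a loader such as glad, and it bundles the accumulation buffer into the same FBO as the GBuffers.
    ```cpp
    // Sketch: render targets matching the layout above, in raw OpenGL.
    // Assumes a glad-style loader has provided the GL function pointers
    // and a context is current. Not actual engine code.
    #include <glad/glad.h>

    struct GBuffer {
        GLuint diffuseSpec;   // RGBA8: diffuse RGB + specular mask
        GLuint normalExp;     // RGBA8: normal XYZ + specular exponent
        GLuint depthStencil;  // D24S8: depth + (currently unused) stencil
        GLuint accumulation;  // RGBA16 when rendering HDR, RGBA8 otherwise
        GLuint fbo;
    };

    static GLuint makeTexture(GLenum internalFmt, GLenum fmt, GLenum type, int w, int h) {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, internalFmt, w, h, 0, fmt, type, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        return tex;
    }

    GBuffer makeGBuffer(int w, int h, bool hdr) {
        GBuffer g{};
        g.diffuseSpec  = makeTexture(GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE, w, h);
        g.normalExp    = makeTexture(GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE, w, h);
        g.accumulation = makeTexture(hdr ? GL_RGBA16 : GL_RGBA8, GL_RGBA,
                                     hdr ? GL_UNSIGNED_SHORT : GL_UNSIGNED_BYTE, w, h);
        g.depthStencil = makeTexture(GL_DEPTH24_STENCIL8, GL_DEPTH_STENCIL,
                                     GL_UNSIGNED_INT_24_8, w, h);

        glGenFramebuffers(1, &g.fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, g.fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, g.diffuseSpec, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, g.normalExp, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, g.accumulation, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, g.depthStencil, 0);

        const GLenum drawBufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
        glDrawBuffers(3, drawBufs);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return g;
    }
    ```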

    Here's a breakdown of our rendering steps.
    1. Render out the GBuffers for opaque geometry, and render ambient light and emissive (self-illuminating) surfaces into the accumulation buffer. The alpha of the accumulation buffer is used for storing exposure contribution; this lets us mask out or reduce elements like the sun's contribution to the exposure level when rendering HDR.
    2. Lights are rendered as low-poly spheres (stretched in the vertex shader for capsules) or boxes (really frusta). The fragment shader reads from the depth GBuffer and calculates the distance from that light instance's center, discarding if it's outside the radius; otherwise it reads the normal and diffuse color buffers and does the lighting calculations (see the sketch after this list). Fog of war and range rings are currently rendered as part of the lighting passes as well.
    3. Transparencies that are affected by or affect the HDR exposure are rendered as a traditional forward rendering pass, mostly unlit. This is mostly particle effects, explosions, and the like. The water shader gets the planet light, shadow, and ambient light information to calculate its lighting.
    4. Optional HDR pass. Accumulation buffer is shrunk down and converted to luminance, then shrunk down further to a single pixel over multiple steps to find the average luminance of the scene. This is blended with a persistent luminance texture to control exposure over time. The accumulation buffer is then adjusted to this final luminance level.
    5. Optional bloom pass, if HDR is active. The exposed accumulation buffer is copied to a smaller buffer, has 1.0 subtracted, and is clamped, so we only keep color data that's brighter than can be displayed. A fast Gaussian blur is applied in two passes, then added back to the scene. Technically steps 4 and 5 are interleaved and happen in a slightly different order than I'm writing here, but I split them like this to make it easier to explain.
    6. Non-HDR affecting transparencies are then rendered as another set of forward passes. This is for "UI" elements, like mouse clicks or orders.
    7. Optional resolution scaling / FXAA pass. As we allow rendering the GBuffers and accumulation buffer at arbitrary resolutions, we have a pass to scale the result back up to the actual display resolution. With FXAA turned off this is just done with normal texture filtering; with FXAA turned on it's done during the FXAA pass.
    8. The HTML5 UI elements are then rendered over this.
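    To make step 2 a bit more concrete, here's a CPU-side sketch (using glm) of the kind of math the light-volume fragment shader ends up doing per pixel. It's illustrative only: the names, the Blinn-Phong model, and the linear falloff are stand-ins, not the actual shader.
    ```cpp
    // Sketch: per-pixel math of a deferred point-light pass (step 2).
    // Illustrative only; the lighting model and names are stand-ins.
    #include <glm/glm.hpp>
    #include <cmath>

    struct GBufferSample {
        glm::vec3 worldPos;   // reconstructed from the depth GBuffer
        glm::vec3 normal;     // normal GBuffer RGB
        glm::vec3 diffuse;    // diffuse GBuffer RGB
        float specMask;       // diffuse GBuffer alpha
        float specExp;        // normal GBuffer alpha
    };

    struct PointLight {
        glm::vec3 center;
        glm::vec3 color;
        float radius;
    };

    // Returns the light's additive contribution for one pixel, or zero if the
    // pixel is outside the light's radius (where the shader would discard).
    glm::vec3 pointLightContribution(const GBufferSample& g, const PointLight& l,
                                     const glm::vec3& eyePos) {
        glm::vec3 toLight = l.center - g.worldPos;
        float dist = glm::length(toLight);
        if (dist > l.radius)
            return glm::vec3(0.0f);                    // "discard"

        glm::vec3 L = toLight / dist;
        glm::vec3 N = glm::normalize(g.normal);
        glm::vec3 V = glm::normalize(eyePos - g.worldPos);
        glm::vec3 H = glm::normalize(L + V);           // Blinn-Phong half vector

        float atten = 1.0f - dist / l.radius;          // simple linear falloff (assumption)
        float ndotl = glm::max(glm::dot(N, L), 0.0f);
        float spec  = g.specMask * std::pow(glm::max(glm::dot(N, H), 0.0f), g.specExp);

        return l.color * atten * (g.diffuse * ndotl + glm::vec3(spec));
    }
    ```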
    Last edited: March 15, 2014
  7. glinkot

    glinkot Active Member

    Messages:
    250
    Likes Received:
    28
    Very interesting posts from varrak and bgolus. Thanks guys!
  8. pcnx

    pcnx New Member

    Messages:
    13
    Likes Received:
    12
    Oh wow, I'm amazed at how similar this is to something I would use as a rendering system :D
    You guys don't happen to need a student fresh from university to assist with programming, do you? ;)

    You didn't mention shadow mapping; I assume you have it in a pre-pass and it's already in the accumulation buffer by the time the "real" rendering starts?

    Luminance calculation is done by mipmapping, I assume?

    When I think about how beautiful it could look if you guys implement a realistic atmosphere (e.g. Bruneton's approach or the GPU Gems approach :D) and the sun rises with light shafts along the buildings and correct HDR/bloom :)

    Anyway, thanks very much for the insight. As someone who codes stuff like this in his free time or for university, any game-related graphics work is of huge interest to me, and your insight is much appreciated!

    Keep up the amazing work, can't wait to read/see/play more :)
  9. bgolus

    bgolus Uber Alumni

    Messages:
    1,481
    Likes Received:
    2,299
    Shadows are done during the lighting passes. Each planet has its own box light and shadow map for the sun lighting.
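    For illustration, the shadow part of a sun (box) light pass boils down to something like this; the names and the simple depth compare are stand-ins, not the actual shader.
    ```cpp
    // Sketch: sun shadow test inside a lighting pass. Project the pixel's world
    // position into the light's clip space and compare against the shadow map.
    // Illustrative only; 'sampleShadowDepth' stands in for a shadow-map fetch.
    #include <glm/glm.hpp>

    float shadowTerm(const glm::vec3& worldPos, const glm::mat4& lightViewProj,
                     float (*sampleShadowDepth)(glm::vec2 uv), float bias = 0.002f) {
        glm::vec4 lightClip = lightViewProj * glm::vec4(worldPos, 1.0f);
        glm::vec3 ndc = glm::vec3(lightClip) / lightClip.w;   // [-1, 1]
        glm::vec3 uvz = ndc * 0.5f + 0.5f;                    // [0, 1] uv + depth
        // Fully shadowed if a closer occluder is stored in the map at this uv.
        return sampleShadowDepth(glm::vec2(uvz)) + bias < uvz.z ? 0.0f : 1.0f;
    }
    ```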

    You can actually see the GBuffers in game by bringing down the console (~) and typing "viz gbuffer".
    Top left is the depth buffer, top right is the normals buffer, bottom right is the diffuse (material) buffer, and bottom left is the final image.
    [image: in-game "viz gbuffer" view]

    Luminance downscaling is not done with mipmapping. We have non-constant weighting on pixel luminance and a multipass downsample resolve.

    First we very naively scale the accumulation buffer down to a fixed buffer size (I forget the resolution) by just sampling 4 points within each pixel region and using the average; we also render out a weight map to give the center of the screen greater influence. After that we do a few more passes, scaling down by 3x each pass by sampling 9 pixels at a time. We weight each pixel by multiplying its luminance by its weighting and dividing by the total summed weight of all 9 pixels.
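    Written out CPU-side, one of those weighted 3x reduction passes computes roughly the following (illustrative only; how the weight carries over between passes here is an assumption):
    ```cpp
    // Sketch: one weighted 3x downsample pass over (luminance, weight) pairs.
    // Each output pixel = sum(lum * weight) / sum(weight) over a 3x3 block.
    // How the weight carries into the next pass is an assumption here.
    #include <vector>

    struct LumSample { float luminance; float weight; };

    // 'src' is w x h with w and h multiples of 3; returns a (w/3) x (h/3) buffer.
    std::vector<LumSample> downsample3x(const std::vector<LumSample>& src, int w, int h) {
        const int ow = w / 3, oh = h / 3;
        std::vector<LumSample> dst(ow * oh);
        for (int oy = 0; oy < oh; ++oy) {
            for (int ox = 0; ox < ow; ++ox) {
                float weightedLum = 0.0f, totalWeight = 0.0f;
                for (int dy = 0; dy < 3; ++dy) {
                    for (int dx = 0; dx < 3; ++dx) {
                        const LumSample& s = src[(oy * 3 + dy) * w + (ox * 3 + dx)];
                        weightedLum += s.luminance * s.weight;
                        totalWeight += s.weight;
                    }
                }
                // Zero-weight pixels (the sun, empty space) contribute nothing.
                dst[oy * ow + ox] = { totalWeight > 0.0f ? weightedLum / totalWeight : 0.0f,
                                      totalWeight / 9.0f };
            }
        }
        return dst;
    }
    ```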
    You can get a similar view of the luminance passes with "viz luminance".
    [image: "viz luminance" view]

    edit: As for why we use the non-constant weighting to begin with:
    [image: annotated view of the sun's luminance weighting]
    In this situation the sun would normally blow out the brightness and the planet would just go black. The fringing on the outside edge is a product of the initial naive downsampling pass. We can't just black out the area with the sun either, because that would artificially darken the exposure with straight mipmapping.

    edit: Updated the sun luminance image with notes.
    Last edited: March 16, 2014
  10. chronosoul

    chronosoul Well-Known Member

    Messages:
    941
    Likes Received:
    618
    I'm slightly confused by the red and slightly browned circles... I can't make sense of them. Maybe I'm not tech-savvy enough. Interesting read nonetheless.
  11. acey195

    acey195 Member

    Messages:
    396
    Likes Received:
    16
    Instancing lights, and here I was with my silly head thinking it was only possible with meshes :p

    Ahh, so that explains my GPU fan going berserk when I zoom in very close. I already found it quite strange that it was less of a problem further away, even though I am rendering more triangles (I don't know how far the LODding of the units goes).

    The red sun map is there to "normalize" (like with audio files) the brightness for bloom and HDR effects. Basically, a monitor can only display brightness or darkness within a certain range; HDR (High Dynamic Range) works around this by making stuff darker around bright objects, or at least that's the simple version. It tries to mimic the way your pupils adjust to sunlight.
    Last edited: March 16, 2014
  12. bgolus

    bgolus Uber Alumni

    Messages:
    1,481
    Likes Received:
    2,299
    People think about HDR in games in far too complex ways. Sure, there can be some complex parts to calculate, but the idea is pretty simple.

    Take the average brightness of the view and divide the view by that average. Done. You have now exposed the view.

    The complex parts come from how you choose to calculate an average brightness, properly handling gamma conversions (which is a whole other post I won't be writing), and any tone mapping you may want to do in the end.

    How you calculate the average brightness can be simple or complex.

    Exposure can be calculated by just taking an average of the whole image and shrinking it down to a single pixel. This is a perfectly acceptable method to use, and is fairly accurate to how many real life cameras work. This is called "full frame" auto exposure.

    Another method cameras use is to sample a single point or an average of a small circle in the center of the view. This is popular for more point & shoot style cameras as generally you want the object at the center of the screen exposed and in focus. It would be really easy to do this by just taking a cropped section from the center of your view and doing mipmaps on that. This is called "spot" auto exposure.

    What we do is a bit more complex. We know the sun and space aren't important to the exposure; if you're playing, you want to be able to see what's on the planet you're looking at, and your units. When we calculate the luminance for each pixel we also store an "importance" value we use for weighting, which we originally rendered as part of the GBuffer pass into the accumulation buffer's alpha. Then, when downsampling the image, we throw away any pixels with a weight of zero. We additionally have a weight map that gives the center of the screen more importance than the sides, but it still uses the luminance from the entire view, unlike the spot auto exposure example I gave above. This second part is "center weighted" auto exposure, which cameras can do as well.
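    Boiled down to code, the whole idea looks roughly like this (illustrative only; the real version runs on the GPU across the downsample passes described earlier, and tone mapping and gamma are left out):
    ```cpp
    // Sketch: weighted average luminance, then divide the view by it.
    // Tone mapping and gamma handling are deliberately left out.
    #include <glm/glm.hpp>
    #include <vector>

    // Pixels flagged unimportant (weight == 0), like the sun or empty space,
    // contribute nothing to the average.
    float averageLuminance(const std::vector<glm::vec3>& pixels,
                           const std::vector<float>& weights) {
        float sum = 0.0f, weightSum = 0.0f;
        for (size_t i = 0; i < pixels.size(); ++i) {
            // Rec. 709 luma weights for per-pixel luminance.
            float lum = glm::dot(pixels[i], glm::vec3(0.2126f, 0.7152f, 0.0722f));
            sum       += lum * weights[i];
            weightSum += weights[i];
        }
        return weightSum > 0.0f ? sum / weightSum : 1.0f;
    }

    // "Exposing" the view: divide each pixel by the average brightness.
    glm::vec3 expose(const glm::vec3& color, float avgLum) {
        return color / glm::max(avgLum, 1e-4f);   // guard against divide-by-zero
    }
    ```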
    Last edited: March 16, 2014
  13. bgolus

    bgolus Uber Alumni

    Messages:
    1,481
    Likes Received:
    2,299
    So think about it this way: lights are only meshes, so you're entirely correct. The only difference between a tree, a particle, and a light is the shader and mesh they use. Even things people don't think of as rendering a mesh are rendering a mesh, like ambient occlusion or motion blur. To do either, you have to render a mesh that covers the entire view, and then all the complex work happens in the fragment shader.
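    A typical full-screen pass of that kind is really just this on the CPU side (illustrative, with hypothetical names); the "mesh" is a single triangle that covers the view, and the fragment shader does everything else:
    ```cpp
    // Sketch: a generic full-screen pass (e.g. ambient occlusion, motion blur).
    // Bind the effect's shader and GBuffer textures, then draw one triangle
    // that covers the whole view; the fragment shader does the real work.
    // Names are hypothetical; assumes a glad-style loader and a bound context.
    #include <glad/glad.h>

    void drawFullscreenPass(GLuint program, GLuint emptyVao,
                            GLuint depthTex, GLuint normalTex) {
        glUseProgram(program);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, normalTex);

        // No vertex buffer needed: the vertex shader can build a screen-covering
        // triangle from gl_VertexID, so the "mesh" is just three vertices.
        glBindVertexArray(emptyVao);
        glDrawArrays(GL_TRIANGLES, 0, 3);
    }
    ```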
  14. plink

    plink Active Member

    Messages:
    176
    Likes Received:
    89
    Seriously awesome stuff. I really love that you guys share all these details.
    aggie2016 likes this.
  15. vackillers

    vackillers Well-Known Member

    Messages:
    838
    Likes Received:
    360
    At the end of the day, if we can all expect this sort of performance, I think everyone is going to be mighty happy with that. When you played at home, were you still using the Ubernet servers on a fast connection, or were you testing this on just a local machine? Because the FPS will vary greatly depending on the speed of your net, which I think is important to clarify when talking about frame rates. I'm guessing that when the game isn't server-side restricted we can expect much greater performance running on our own local hardware than on a server?
  16. ozonexo3

    ozonexo3 Active Member

    Messages:
    418
    Likes Received:
    196
    It was really nice to read. Good that we can see stuff like this from Uber.
  17. bgolus

    bgolus Uber Alumni

    Messages:
    1,481
    Likes Received:
    2,299
    Network performance should have no impact on your framerate. Unit movement may be choppy, which is caused by network performance, or appear to move in rhythmic pulses, which is caused by server-side sim performance. Neither of these is client framerate, though.
    Methlodis and Quitch like this.
  18. pcnx

    pcnx New Member

    Messages:
    13
    Likes Received:
    12
    The point about weighted luminance makes perfect sense :)
    Have you measured the performance difference between a "naive" mipmap downsample and the weighted method? Since mipmapping is, as I understand it, implemented very efficiently on the GPU, the custom way of doing it (on the CPU?) must have some performance impact.
    Last edited: March 16, 2014
  19. Remy561

    Remy561 Post Master General

    Messages:
    1,016
    Likes Received:
    641
    Lovely and interesting post!!!
    Looking forward to the improvements!!! :D
  20. cdrkf

    cdrkf Post Master General

    Messages:
    5,721
    Likes Received:
    4,793
    One thing I'm curious about: are there any plans to multi-thread the rendering engine on the client side? Given that almost everyone is using multi-core CPUs, it could provide a big speedup (or at least push the bottleneck onto the GPU).
