Unlimited Detail - The NextGen graphic technology

Discussion in 'Unrelated Discussion' started by Col_Jessep, August 2, 2011.

  1. Col_Jessep

    Col_Jessep Moderator Alumni

    Messages:
    4,227
    Likes Received:
    257
  2. Vlane

    Vlane New Member

    Messages:
    1,602
    Likes Received:
    0
    Old.

    Has been bad, will always be bad.
  3. x Zatchmo

    x Zatchmo New Member

    Messages:
    894
    Likes Received:
    0
  4. Col_Jessep

    Col_Jessep Moderator Alumni

    Messages:
    4,227
    Likes Received:
    257
    I posted the first vid a while back. This one is 3 days old.

    Anyway, anything that can get rid of shitty ground/flora polygon approximation is highly welcome imo. If they can really do it all on the CPU, the GPU is free for other stuff: shaders, physics, light simulation...
  5. Vlane

    Vlane New Member

    Messages:
    1,602
    Likes Received:
    0
    Storage. That is all.
  6. Col_Jessep

    Col_Jessep Moderator Alumni

    Messages:
    4,227
    Likes Received:
    257
    Compression algorithms and procedural content generation.
  7. Vlane

    Vlane New Member

    Messages:
    1,602
    Likes Received:
    0
    Alright, here goes (no guarantee the math is correct, it's 1:50 am):

    Let's say you have a point (or did they call it an atom? Can't remember) with x, y and z information (no colors, shading, etc.). That point will take about 4-6 bits with compression and the like.
    With unlimited detail (I'm just gonna go with two billion points), that would be 1 billion bytes, i.e. 0.93 GB (1024, not 1000), just for looking at a landscape without moving. If you move, you have to stream this several times a second. How the hell do you want to do that?
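    The byte count in the post can be checked with quick arithmetic. This sketch simply reuses the post's own assumptions (4 bits per point, two billion visible points); both figures are the poster's guesses, not measured values:

```python
points = 2_000_000_000      # "unlimited detail" view, per the post above
bits_per_point = 4          # the post's optimistic compressed x/y/z estimate
total_bytes = points * bits_per_point // 8
gib = total_bytes / 1024**3
print(f"{total_bytes:,} bytes = {gib:.2f} GiB per static view")
# Streaming that several times per second would need multiple GB/s of
# sustained bandwidth -- far beyond 2011-era disks and buses.
```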

    Sure you could have something called "rock_a" but that wouldn't be unlimited detail, it would be unlimited sameyness.

    This was the problem I ran into several months ago, and no matter where I asked, nobody could tell me how they intend to achieve it.

    Apply physics, AI and all that fancy stuff and you've got yourself a pretty sh*tty-running game. And that's not even mentioning the fact that they haven't made any progress in about a year, and that this is basically the same thing as fricking voxels.

    If these guys don't actually present something worth mentioning, this remains a scam.

    Edit: Actually Notch does it way better than me.
  8. Col_Jessep

    Col_Jessep Moderator Alumni

    Messages:
    4,227
    Likes Received:
    257
    Right. No offense to Notch (after saying that I can offend him all I want :D) but he is not exceptional at managing computing resources or optimizing his product. Want an example? His day/night switch so far required EVERY SINGLE BLOCK to be updated; he is only now redesigning the way the lighting engine works. He also thinks that mip mapping is an STD... *sigh*

    I think the idea behind Unlimited Detail is not to build everything out of point cloud data but only the things where it is useful. There is no reason why a graphics engine can't combine different systems. It has been done before; see Red Alert 2, Crysis, Worms 4...

    The limitation is that you can only repeat a couple of million points of data over and over again? Good thing that you can make a lawn by repeating the same 10 blades of grass a couple of million times. I'm pretty sure there are some severe limitations in handling those huge amounts of point cloud data, but it might be incredibly useful for some applications.

    And Notch's island example that leads to 512 petabytes of data is so ridiculous that I can hardly believe he posted it. Well, he is Notch, so I wasn't that surprised. Have you seen the model of the monster (demon, whatever...) in the video? It's not a massive volume of blocks, it's hollow. You don't need 8 meters' depth of data when you can only see the upper two millimeters. I just improved Notch's system by 400000%. We are now looking at ~130 TB of uncompressed, unique raw data that can be optimized further.
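    The shell-versus-volume saving is easy to sanity-check. The 8 m depth and 2 mm visible-shell figures below are illustrative numbers taken from the post, not properties of any real engine:

```python
depth_m = 8.0               # solid depth assumed in the 512-petabyte estimate
shell_m = 0.002             # only the visible ~2 mm surface layer is stored
factor = depth_m / shell_m  # 4000x less data, i.e. a 400000% improvement
solid_pb = 512              # petabytes, per Notch's island example
shell_tb = solid_pb * 1024 / factor
print(f"reduction factor {factor:.0f}x -> about {shell_tb:.0f} TB of shell data")
```

    Note that dividing 512 PB by a factor of 4000 lands in the terabyte range, not gigabytes.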

    You might not want to ask the guy who made this game
    [image]
    for his opinion on cutting-edge graphics engines... ;)
  9. killien

    killien Active Member

    Messages:
    979
    Likes Received:
    4
    Wasn't he doing all the development himself for the longest time?
  10. Col_Jessep

    Col_Jessep Moderator Alumni

    Messages:
    4,227
    Likes Received:
    257
    Yop. To be fair, he probably never expected Minecraft to become anywhere near as popular and cut a lot of corners. That's okay when you're doing a small indie game, but now it's biting him in the ***. It shows in how badly Minecraft handles exceptions: they usually lead to a CTD when they should just log an error in the debug log.
  11. LIVE 3RUPI

    LIVE 3RUPI New Member

    Messages:
    479
    Likes Received:
    0
    That's a pretty cool vid. Are they planning on using that tech in any games coming out in the future? Maybe when the 720 comes out.
  12. JON10395

    JON10395 New Member

    Messages:
    3,652
    Likes Received:
    1
    Is that guy related in any way to this guy?
    [image]
    [image]
  13. LIVE 3RUPI

    LIVE 3RUPI New Member

    Messages:
    479
    Likes Received:
    0
    He reminds me of the guy from Quantum Leap; maybe it's the cigar.
  14. Col_Jessep

    Col_Jessep Moderator Alumni

    Messages:
    4,227
    Likes Received:
    257
    They are still working on the SDK and early tech demos. I would not expect this technology in any games for at least 3 or 4 years. What they could make is a small benchmark program. Are you familiar with Futuremark's 3DMark? They had a voxel-based test to determine CPU speed IIRC.
  15. L-Spiro

    L-Spiro New Member

    Messages:
    424
    Likes Received:
    0
    It is not really a scam, but it is not as great as the guy says it is.
    If it were possible to animate a point cloud, he would have something to show for it. He showed ugly static images and justified them by saying they aren't artists, so why would he be afraid of showing an ugly animation, at least just to prove it can be done?

    It can't. He has found a way to perform ray casts into point-cloud data fast enough to render it, but that is just one cast per screen pixel with enough early-outs to avoid ever touching 99.99% of the points. That is an entirely different ballgame from updating point-cloud data.
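    The one-cast-per-pixel-with-early-outs idea can be sketched in miniature. This is a toy illustration under assumed data structures, not Euclideon's actual (unpublished) algorithm: points sit in an octree, and any subtree whose bounding box the ray misses is skipped wholesale, so almost none of the stored points are ever touched:

```python
# Toy sketch: one ray cast into an octree of points, with AABB early-outs.
class Node:
    def __init__(self, lo, hi, points=None, children=None):
        self.lo, self.hi = lo, hi          # corners of this cell's bounding box
        self.points = points or []         # points stored at a leaf
        self.children = children or []     # up to 8 child cells

def ray_hits_box(origin, direction, lo, hi):
    """Slab test: does the ray intersect the axis-aligned box at all?"""
    tmin, tmax = 0.0, float("inf")
    for a in range(3):
        if abs(direction[a]) < 1e-12:      # ray parallel to this slab pair
            if not (lo[a] <= origin[a] <= hi[a]):
                return False
        else:
            t1 = (lo[a] - origin[a]) / direction[a]
            t2 = (hi[a] - origin[a]) / direction[a]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def cast(node, origin, direction, stats):
    """Return a point the ray reaches, counting how many points get touched.
    (A real renderer would also visit children in front-to-back order.)"""
    if not ray_hits_box(origin, direction, node.lo, node.hi):
        return None                        # early-out: skip this whole subtree
    if node.points:
        stats["touched"] += len(node.points)
        return node.points[0]
    for child in node.children:
        hit = cast(child, origin, direction, stats)
        if hit is not None:
            return hit
    return None

# Tiny demo: two leaf cells, only one on the ray's path.
left  = Node((0, 0, 0), (1, 1, 1), points=[(0.5, 0.5, 0.5)])
right = Node((1, 0, 0), (2, 1, 1), points=[(1.5, 0.5, 0.5)] * 1000)
root  = Node((0, 0, 0), (2, 1, 1), children=[left, right])

stats = {"touched": 0}
hit = cast(root, (0.5, 0.5, -1.0), (0.0, 0.0, 1.0), stats)
print(hit, stats["touched"])   # touches 1 point; the 1000 on the right are skipped
```

    Rendering only ever reads the tree this way; editing or animating it means rewriting the tree itself, which is the different ballgame the post describes.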

    Since he has a way of instancing data (most likely by changing how the ray is cast rather than by transforming the geometry; and keep in mind it might be instanced offline, not in real time, though I doubt there is enough memory for that), you could make primitive animations by giving joints a lot of static objects that move at different speeds to create the effect of stretching, but it would look crappy at best.

    Destructible worlds are impossible. The system only works because the points have been stored so specifically. You can't just remove a point, especially under compression.


    There is no future in games for this technology, but it is useful for showing static objects to people, for example car models, house interiors, etc. It could serve the medical, industrial, archaeological, and similar industries just fine.


    L. Spiro
  16. Col_Jessep

    Col_Jessep Moderator Alumni

    Messages:
    4,227
    Likes Received:
    257
    Or you could use it to create a giant, beautiful world for an RPG/FPS and just make all the NPCs and monsters out of polygons. Best of both worlds.

    I remember how they switched sprites for low-poly models back in the day. The result was less detailed and blocky; its only saving grace was that it was real 3D and could be rotated. That was totally enough at the time, though. Maybe it's time to leave the comfort zone of polygons for building a landscape and try something new. Crysis' island was already built with voxels that were later transformed back into polygons; they were able to make a more realistic-looking landscape that way.

    Maybe they can't use it for stuff that needs animations and all that, but if they can just use it to make the ground look more realistic, I'd be happy already.
  17. L-Spiro

    L-Spiro New Member

    Messages:
    424
    Likes Received:
    0
    Where is your reference for how CryEngine 2 handled the terrain in Crysis? I need this.


    As for mixing, although technically possible, next-generation technology won't be fast enough.
    To get mixing:
    #1: Start off with the current system at 20 FPS (I assume they could afford a computer almost as fast as mine with their budget; let's assume 20 FPS in the best case).
    #2: Depth is not currently stored. Once the point in the cloud is found, a depth component must additionally be computed. Since this is done on the CPU, it will drop the FPS by at least 2.
    #3: Copy the depth information into a depth buffer and send it to the GPU. This type of transfer is so slow that doing it twice per frame is noticeably slower than doing it once per frame. Take off 1 FPS.
    #4: Perform the rest of the render with polygons. If you want the models to look anywhere near as detailed as the environment, we can safely assume a lot of polygons and high-resolution textures will be needed. FPS -= 10?
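    Subtracting FPS hides how the cost grows: a lost frame-per-second is cheap at 60 FPS but expensive at 20. Converting the steps above into frame times makes the budget clearer. All the FPS figures here are the post's own guesses, not measurements:

```python
def ms(fps):
    """Frame time in milliseconds at a given frames-per-second rate."""
    return 1000.0 / fps

# The post's estimates: 20 FPS baseline, then -2, -1 and -10 FPS per step.
steps = [
    ("point-cloud render baseline", 20),
    ("+ CPU depth computation",     18),
    ("+ CPU->GPU depth upload",     17),
    ("+ polygon pass for models",    7),
]
for label, fps in steps:
    print(f"{label:30s} {fps:3d} FPS = {ms(fps):6.1f} ms/frame")
```

    The last step alone nearly triples the frame time, from 50 ms to roughly 143 ms.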


    This may be viable to a degree in the future, but not for the next generation or two; sooner if it catches on so much that hardware support is built.


    But the problem is you won't ever get animated polygonal meshes to look like the super-detailed environment, and although that may not have been a problem back in the days of Doom, people today have options such as Battlefield 3 and Crysis 2. The standards are higher these days, and they won't get anywhere unless they can make the two mix seamlessly.


    Real-time shadowing is possible with mixed technology, but again it will take a few generations to perform well enough.


    What about physics? This is one of the main points I consider when kicking it out of the gaming industry. Yes, it can do ray casts extremely quickly, but how will it check whether two objects are intersecting?
    Assuming a polygon player vs. the ground, the first thing you want to do is exit early through an AABB or OBB test (a bounding-box test). How can you do that against point-cloud data? Cast a ray down at fixed intervals along the bottom of the player's bounding box? Too slow, and that was supposed to be the early-out.

    Wrap the level in minimalistic polygonal data?
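    The contrast the post draws can be made concrete. The AABB early-out polygons give you is a handful of comparisons; the ray-sampling substitute multiplies the expensive traversal instead. The boxes and the sample count below are made-up illustrative numbers:

```python
def aabb_overlap(a_lo, a_hi, b_lo, b_hi):
    """Classic axis-aligned bounding-box early-out: six comparisons, O(1)."""
    return all(a_lo[i] <= b_hi[i] and b_lo[i] <= a_hi[i] for i in range(3))

# Polygon world: player box vs. a rock's box, answered instantly.
player = ((0.0, 0.0, 0.0), (1.0, 2.0, 1.0))
rock   = ((0.5, 1.5, 0.5), (3.0, 3.0, 3.0))
print(aabb_overlap(*player, *rock))          # True: the boxes intersect

# Point-cloud substitute from the post: rays sampled downward at fixed
# intervals under the player. Each sample is a full cloud traversal, so
# the supposed "early-out" itself costs dozens of expensive operations.
samples_per_side = 8
ray_casts_needed = samples_per_side ** 2     # 64 traversals for one test
print(ray_casts_needed)
```

    Wrapping the level in coarse collision polygons, as the next line suggests, sidesteps this by keeping physics in the polygon world entirely.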


    Basically, by the time we have the speed to make this usable in games, that same speed could be used on newer polygon technology to get nearly as impressive results.

    But for now it is perfectly fine for static scenes.


    [EDIT]
    He says they have support for animation (and called me grumpy).
    While I have an idea as to how this could be made possible, it would look horrible as I mentioned before. I want to see this.
    [/EDIT]


    L. Spiro
  18. Col_Jessep

    Col_Jessep Moderator Alumni

    Messages:
    4,227
    Likes Received:
    257
    I read it in a German magazine, but I found a reference here:
    http://en.wikipedia.org/wiki/Voxel#Computer_gaming
    Sorry, no details about how and what exactly they used it for. From my experience with map generation I would assume they made a detailed heightmap, used algorithms to add erosion and such, and polished it using voxels. That's just a guess though.

    What you can do is use the CPU to process some of the data that would usually have to go to the GPU. Modern multicore CPUs have free resources in many games, so why not use them? The graphics card can then take over other tasks like physics calculations.

    Good point, that's probably something that needs to be resolved.

    Well, you could start using point cloud data only for stuff that adds polish: trees, rocks, grass. That would already look a hell of a lot better than a low-poly surface with textures. I think it's too early to say it's impossible. Those demos were not made by artists; they were meant to show an incredible level of detail, but they are completely unpolished.

    I'm not willing to give up on the technology before it is proven that you can't get it to play nicely. And it will take several years before it can be used, no doubt about that. By that time the hardware might be good enough to make it blend more or less seamlessly.
  19. L-Spiro

    L-Spiro New Member

    Messages:
    424
    Likes Received:
    0
    That is what I meant. The depth buffer will have to be generated on the CPU and transferred to the card. Both are pretty slow.


    L. Spiro
  20. LIVE 3RUPI

    LIVE 3RUPI New Member

    Messages:
    479
    Likes Received:
    0

    Hell no, I'm not up on computer tech and stuff, but the graphics in the vid you posted look fantastic. Just hope I'm still playing Xbox in 4 years.
