Minimum requirements: 56 gigaqubit processor. Seriously though, as impressive as it is, I don't care what they show me until I see it in a game, running at a good framerate. And how are they going to handle dynamic lighting? Character models? Weird stuff that doesn't exist in the real world? I guess you can build sets and scan those, which would be a whole new way of making games, but jesus hell that sounds like a pricey way of doing things. And frankly, do we want photorealistic games? For some genres, sure. But playing a photorealistic Battlefield or something is probably just going to give me PTSD.
Apparently not at all. Apparently what they're showing can be navigated on a laptop. But when they say they can do animation in all of that "easily", I do get a bit skeptical. Still, as is: great Myst game. And the applications of this aren't limited to video games.
Well, this video is from 2014, and they say they will announce the 2 games they are working on "soon". I guess that shows how seriously we should take this announcement. They have a good algorithm for representing hierarchical point clouds, and even if I don't believe their O(1) rendering time, they show impressive results. But they gloss over a whole lot of issues for computer games, just a few from this video:
* dynamic lighting, and volumetric lighting effects that can't be represented by point clouds
* 3D assets that aren't real - aliens can't be scanned, you need artists for that
* 3D-scanning animations isn't that easy
And my personal guess is that a terrain of, say, 40 km^2 at the high resolution shown in the video would breach any limit on current mass storage (rough estimate below). And some games have shown that 40 km^2 isn't completely off the table in modern titles. But, yes, we can give them a bit more time.
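For what it's worth, here's the back-of-envelope I'm basing that on. Every number (point density, bytes per point) is my own guess, not anything from the video:

```python
# Back-of-envelope storage estimate for a laser-scanned terrain.
# Every number here is my own assumption, not from the video.

area_km2 = 40
points_per_mm2 = 1            # guessed surface sampling density
bytes_per_point = 12          # guessed: packed position + colour per point

mm2_per_km2 = (1_000_000) ** 2   # 1 km = 1e6 mm, so 1 km^2 = 1e12 mm^2
total_points = area_km2 * mm2_per_km2 * points_per_mm2
total_bytes = total_points * bytes_per_point

print(f"points:   {total_points:.2e}")             # ~4.0e13 points
print(f"raw size: {total_bytes / 1e15:.2f} PB")    # ~0.48 PB before compression
```

With those guesses you're already around half a petabyte of raw dots before any compression or level-of-detail trickery, which is why I don't see something like this shipping as a normal download.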
Oh cool, I was kinda half-guessing about whether or not dynamic/fancy lighting would be doable with laser scanning. EGO BOOST AWAAAAAAY! As for non-existing assets, they can just make those and scan them. 3D printers could help with that. No reason you can't just scan a guy in an Alien costume. Though I have to wonder what happens with objects that don't reflect the light from the laser in a conducive manner...
I am just as skeptical as I was in 2011 or whenever it was when they first showed that other video. There are just a bunch of questions that they've not explained well enough:
- How big is the dataset? Would such a game have a download size of a few TB?! Would it be online only, since 99.9999999% of the game data consists of petabytes of dots stored on some server?
- How do they handle animations? Rotating a huge cloud of dots turns out to be pretty expensive. An alternative might be to store the data in 4 dimensions, so time is just another dimension and they move through the 4th dimension to look up different dots (toy sketch of what I mean below). This however means that every movement in the game has to already exist as data somewhere. No data is generated while playing at all; they are only showing data that already exists. So every possible thing that might need to be displayed needs to be available in their database.
- How exactly does the lighting work? Is it static? If it is not static, do they calculate it in real time, or do they have every possible lighting condition ready and prepared? If it is calculated, then how do they get perfect-looking lighting in real time? If it is not calculated but ready-made, then what about a game that wants to display a scene in lighting that will never happen in reality?
So yeah, still waiting on waaaay more details on how it works, and even more importantly: a working game that uses this and actually looks that good. For static images or completely prepared animations via 4D I guess I can imagine that it makes sense. But arbitrary changes to the world? Completely dynamic, with bazillions of different possibilities for the "moving parts" in the scene to interact with each other? Yeah... no, I don't believe that until I see a working demo running on my own computer.
Also I had the feeling the shown video was still not completely real. Can't say what was off, but I *think* I knew immediately that I was looking at a rendered image, albeit a very good-looking one. Though not 100% sure on that. Might need some "is it real?" tests with both rendered and real videos.
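Just to make that 4D idea concrete, here's a toy sketch of what I mean. This is purely my own illustration (names and structure made up), not anything they've described:

```python
# Toy sketch of "time as a 4th dimension": animation is just a lookup
# into pre-stored point data per frame. Entirely my own illustration,
# not how the demo actually works.
from collections import defaultdict

class TimeSlicedCloud:
    def __init__(self):
        # frame index -> list of (x, y, z, colour) tuples
        self.frames = defaultdict(list)

    def add_point(self, frame, x, y, z, colour):
        self.frames[frame].append((x, y, z, colour))

    def slice_at(self, frame):
        # "Playing" the animation is just reading a different slice;
        # nothing is computed, every pose must already exist as data.
        return self.frames[frame]

cloud = TimeSlicedCloud()
cloud.add_point(0, 1.0, 2.0, 0.5, (200, 180, 160))
cloud.add_point(1, 1.0, 2.1, 0.5, (200, 180, 160))  # same dot, next pose
print(len(cloud.slice_at(1)))  # -> 1
```

The obvious downside is exactly what I said above: every pose of every animated object has to exist as stored dots before the game ever runs.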
Yeah, there's no way consumer computers would be able to handle this dot environment. I mean, they've pretty much made the equivalent of full 3D textures, and that's already impractical for consumer hardware. If a normal environment with polygons and models were rendered at this detail, let's just say there would be an insane number of polys.
Oh, I do believe they have some form of highly effective lookup that can answer "which dot of the world relates to this pixel?" That's basically all they need: a gigantic database they can query in real time for every pixel on the screen. I can imagine that may be possible.
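Something along these lines, maybe. This is a toy sketch of a hierarchical lookup, entirely my own guess at the idea rather than their actual algorithm: descend a tree of cells and stop as soon as a cell covers no more than one pixel, so the per-pixel cost grows with tree depth instead of with the total number of dots:

```python
# Toy sketch of a per-pixel hierarchical point lookup: descend an octree
# until the current cell is smaller than one pixel, then return its
# average colour. My own illustration, not their actual algorithm.

class OctreeNode:
    def __init__(self, center, size, colour, children=None):
        self.center = center            # (x, y, z) of the cell centre
        self.size = size                # edge length of the cubic cell
        self.colour = colour            # average colour of points inside
        self.children = children or []  # up to 8 child cells

def projected_size(node, distance, focal_px):
    # Rough pinhole projection: how many pixels the cell covers on screen.
    return focal_px * node.size / max(distance, 1e-6)

def lookup_colour(node, distance, focal_px, pixel_size=1.0):
    # Stop descending once the cell covers at most one pixel, or is a leaf.
    if projected_size(node, distance, focal_px) <= pixel_size or not node.children:
        return node.colour
    # A real traversal would pick the child hit by the view ray; here we
    # just take the first child to keep the sketch short.
    return lookup_colour(node.children[0], distance, focal_px, pixel_size)

leaf = OctreeNode((0, 0, 10), 0.01, (120, 110, 100))
root = OctreeNode((0, 0, 10), 10.0, (128, 128, 128), [leaf])
print(lookup_colour(root, distance=10.0, focal_px=1000))  # -> (120, 110, 100)
```

If something like that is what they mean, the cost per pixel would be roughly logarithmic in the scene size rather than truly O(1), which might be why that particular claim raises eyebrows.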
So did I. It's the visible polys and the lack of reflections and other light changes, as well as the perfectly smooth non-linear traveling camera. They would have had to build a tailored camera rail with turns and slopes for these shots; it's unrealistic that anyone would invest this much just for that result (plus the editing to remove said rail (or crane) from the final footage as they backtracked).
My guess is that they can do some kind of dynamic lighting. Basically they can - at least - do the same thing that is done in ray tracing: for every pixel, trace one ray from the position on the surface to every light source, e.g. the sun (sketch below). The problem is that this does not scale very well with the number of light sources, and it is just a 1st-order approximation.
It's in the shadows! Look at the video footage again and, instead of looking at the lit things, try to look out for the colours and luminance (?) in the unlit parts of the scene. The scenes are far too evenly lit to feel realistic. You can see this in the scenes where you do not expect many shadows, like the cobblestone and the road segment at the end, or the architecture in harsh sunlight; those are very "visually believable". Funnily enough, the church at 1:28 is really good. It seems that they use different tools to capture and process scenes, which results in more or less "prebaked" shadows.
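Roughly what I mean by that 1st-order approximation, as my own toy sketch (the occlusion test is a placeholder standing in for whatever ray/point-cloud intersection they would use, not a real API):

```python
# Minimal sketch of first-order direct lighting with one shadow ray per
# light source, to show why the cost scales with the number of lights.

def occluded(point, light_pos):
    # Placeholder: trace a ray from the surface point towards the light
    # and report whether anything blocks it. Always "no" in this toy.
    return False

def direct_light(point, normal, lights):
    total = 0.0
    for light in lights:          # one shadow ray per light -> O(#lights) per pixel
        if occluded(point, light["pos"]):
            continue
        # Lambertian term only; no bounces, so shadows stay hard and
        # fully unlit areas go black (unlike the evenly lit scans).
        lx, ly, lz = light["dir"]   # unit direction from point towards the light
        nx, ny, nz = normal
        total += light["intensity"] * max(0.0, lx * nx + ly * ny + lz * nz)
    return total

sun = {"pos": (0, 100, 0), "dir": (0, 1, 0), "intensity": 1.0}
print(direct_light((0, 0, 0), (0, 1, 0), [sun]))  # -> 1.0
```

One shadow ray per light per pixel and no bounces: that's why it gets expensive with many lights, and why it can't by itself explain the soft, even illumination in the unlit parts of their scenes.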
Which is really unfortunate. Due to the way ray tracing is set up, you get a lot of the effects you try to "fake" nowadays on rasterizers (aka our graphics cards) for free. It starts with all the reflections, refractions and correct transmittance, and goes up to the higher-order approximations like soft shadows, ambient occlusion and so forth. To be honest, I'm looking forward to the day we get so much GPU power that ray tracing can be done in a practical way within the limit of, say, 16.7 ms, and we can finally do away with all those hacky render passes that clog up current render pipelines.
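Quick sanity check on that 16.7 ms budget, with a sample count and GPU throughput that are pure guesses on my part:

```python
# Rough ray budget for ray-traced frames at 60 fps (16.7 ms per frame).
# Both the rays-per-pixel count and the GPU throughput are assumptions
# for illustration, not measured numbers.

width, height = 1920, 1080
fps = 60
rays_per_pixel = 16        # assumed: primary + a few bounces + several shadow samples

rays_needed_per_second = width * height * rays_per_pixel * fps
assumed_gpu_rays_per_second = 1e9   # order-of-magnitude guess

print(f"needed: {rays_needed_per_second:.2e} rays/s")   # ~2.0e9
print(f"fits in budget: {rays_needed_per_second <= assumed_gpu_rays_per_second}")
```

With those guesses we're still off by a factor of a few, which matches my feeling that the hacky raster passes aren't going anywhere just yet.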