nipsen: :) We'll find out for certain in a couple of hours.
micktiegs_8: My understanding was that the PS4's unified GDDR5 works quite a bit differently from the DDR3/4 of today's machines. Considering that the PS4 has 8GB, and it's not even the same memory type, I'd want to start with 8GB as a minimum.
Mm. Actually, gddr5 ram is pretty slow. The internal bus that gddr5 ram operates on (on external cards) has a broader potential addressing space, so a lot of the asynchronous/limited-simd type operations on a graphics card can run faster. Potentially higher memory frequencies also let these specific operations run somewhat faster than on a ddr3 setup. But this only shows up in isolation, in scene-dependent operations like anti-aliasing and the simple math operations the "shader units" can compute (these are really just simple processors with limited instruction sets).
In the same way, the ddr3 variants of these graphics cards typically run at much lower timings than system ram of the same type, etc. So the ddr3 ram runs at lower timings than it was designed for, and with fewer features than it offers. They're more expensive ram pieces running with fairly expensive features disabled, in other words. So essentially, the reason gddr5 exists at all is that it's cheaper to produce generic ram modules with only the required feature set than to add ddr3 ram to the cards separately. It's.. a cost-efficiency solution that has been hyped up a lot, but it doesn't genuinely score massively higher than the ddr3 variants, because of the internal bus designs. Any real differences turn up in the extremely specialized functions mentioned above, and they come from the bus design and addressing width rather than from the clock speeds. Basically, new cards with better chips ship with gddr5 rather than ddr3, and score better because of the better architecture. Another issue is that the internal memory clocks have to run higher on gddr5 to reach the same external speed, so gddr5 variants generally produce more heat and draw more power (look this up: the internal clocks run at double speed to achieve the same bandwidth externally).
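To put rough numbers on the bandwidth side of this: peak memory bandwidth is just transfer rate times bus width, which is why the bus matters as much as the clock. A toy calculation (my own illustration; the figures are commonly quoted ballpark numbers, not specs taken from this thread):

```python
# Back-of-envelope peak-bandwidth arithmetic (illustrative figures only).
def peak_bandwidth_gbs(transfer_rate_mtps, bus_width_bits):
    """Peak bandwidth in GB/s: (transfers per second) * (bytes per transfer)."""
    return transfer_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

# PS4-class GDDR5: 5500 MT/s on a 256-bit bus
gddr5 = peak_bandwidth_gbs(5500, 256)

# Typical dual-channel DDR3-2133 on a 128-bit (2 x 64-bit) bus
ddr3 = peak_bandwidth_gbs(2133, 128)

print(f"GDDR5: {gddr5:.0f} GB/s, DDR3: {ddr3:.0f} GB/s")
```

Doubling the bus width doubles the result just as surely as doubling the transfer rate does, without the heat penalty of higher internal clocks.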
Anyway. So what the ps4 has is a graphics card solution that uses system ram as graphics card ram, over the APU design AMD made. It's all kinds of efficient, and for graphics-type operations that require resubmits to main ram this is a lot faster than any external bus design. So the limitation lies in the number of shader units/simd-type processors it was possible to fit on the cpu die at the time. And I think the philosophy was that because the most needed functions were "graphics card type operations", the slower/cheaper/less feature-rich gddr5 ram was a good choice. Which it is, from a certain point of view, since it means that if the internal bus is configured properly, the addressing space can be broader, meaning higher theoretical transfer speeds, as well as more efficient runs on the non-resubmit type operations like AA. In practice, however, the difference isn't as... *cough* amazing as hyped up.
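The win from unified memory is easiest to see as the copy you no longer pay for. A toy model (my own sketch, with a hypothetical per-frame traffic figure, not anything from a console SDK):

```python
# Toy model: time to get data in front of the GPU, discrete card vs
# unified (APU-style) memory pool. All figures are illustrative.
def transfer_time_ms(size_gb, bandwidth_gbs):
    """Milliseconds to move size_gb across a bus of bandwidth_gbs."""
    return size_gb / bandwidth_gbs * 1000

frame_data_gb = 0.5   # hypothetical per-frame resubmit traffic
pcie_gbs = 16         # roughly PCIe 3.0 x16 effective bandwidth

# Discrete card: every resubmit has to cross the external bus first.
discrete = transfer_time_ms(frame_data_gb, pcie_gbs)

# Unified pool: cpu and gpu address the same ram, so the staging copy
# disappears (the data still gets read, just not copied across PCIe).
unified = 0.0

print(f"discrete staging copy: {discrete:.2f} ms/frame, unified: {unified} ms")
```

Even at these made-up sizes, the staging copy alone eats a big chunk of a 16 ms frame budget, which is the "resubmit" cost the APU design sidesteps.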
For comparison, the ps3 was said to be very meager in terms of bandwidth to the graphics card module. Which was the case if you used a traditional submit/resubmit method. But the memory bus (the EIB, with concurrent reads and writes to smaller memory areas.. unique design, very interesting stuff) was configured so that the total theoretical back-and-forth speeds could reach targets that dwarf anything we can ever hope to draw out of a linear bus on any silicon construction.
But gddr5 and ddr3 are essentially the same from an abstract point of view when addressed as system ram by the cpu (or the APU in this case). The only real differences are that throughput at peak speeds can be slower, and that parity checks are done in a different way. Meanwhile, the specific graphics card operations can in theory run more efficiently on a gddr5 /bus/ construction than on a ddr3 bus, purely because of the increased bandwidth from the larger concurrent addressing space.
In any case, the actual size of the memory pool holding the graphics context in NMS should be relatively small, because of the way they supposedly only render visible objects, etc. They've demonstrated that already, so we can probably hold that as true. Meanwhile the system-ram pool containing the nodes needed to look up the information has to be comparatively larger.
Like.. to take a made-up and not entirely correct example: say you hold only a bunch of geometry and very few textures, duplicated on top of each other, for the visible graphics context at any one time (and you can toss everything you don't see, everything right behind you, etc). Then the graphics card only holds a tiny fraction of the actual "universe" nearby, while the data describing everything around you, which the complex (cpu-type) math needs to look up, has to be much larger than what you actually see.
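The procedural side of that can be sketched in a few lines. This is purely my own illustration of the general technique (deriving regions deterministically from a seed), not Hello Games' actual code:

```python
import hashlib

# Hypothetical sketch: the "universe" is a deterministic function of
# coordinates + seed, so nothing is stored up front. Regions are
# regenerated on demand, and only visible ones get expanded for rendering.
SEED = b"galaxy-42"  # made-up seed

def region_data(x, y, z):
    """Deterministically derive a region's contents from its coordinates."""
    digest = hashlib.sha256(SEED + f"{x},{y},{z}".encode()).digest()
    return digest  # stands in for terrain/creature/texture parameters

# Same coordinates always yield the same region, so nothing needs saving.
assert region_data(1, 2, 3) == region_data(1, 2, 3)

# Only the handful of regions near the player get expanded for rendering.
visible = [region_data(x, 0, 0) for x in range(-2, 3)]
print(len(visible), "regions expanded, out of an effectively unlimited universe")
```

The install stays tiny because the function above *is* the data; the big memory cost at runtime is the working set of lookups around the player, not the rendered slice.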
So that's why the install is barely 3GB large (and supposedly most of that is the soundtrack), while the processed data the engine generates, which is then looked up in galaxy-sized pieces, has to be much bigger.
I'm sure someone will do a demonstration of how the engine addresses ram and gpu ram eventually, so people can see exactly what's going on. But that's the core of how it works: the data the search algorithms need available to function is massively bigger than what's actually pulled into the graphics context for rendering.
(..1.7GB left.. come on! :p I want to play!)