mistermumbles: Well, that was quick. PC Gamer already has a review of the GTX 1080.

From their games tests:
- On average 30-40% faster than the Titan X/980 Ti
- On par or faster than GTX 980 SLI

Add-on:
Ars Technica review
Tom's Hardware review
Well, I guess that card outclasses my GPU, which is the Nvidia GT 220, LOL. I bought the Galaxy GT 220 and it still works, though it runs loud and hot now after all these years, lol. Time for a new computer; this Sempron LE-1300 and rig are on their last legs.
I cackled.
Proud owner of a Geforce 7600GS 256 MB. :P :D
nipsen: ...
I don't completely understand what you're saying, so I'll put what I think in my own words and see if we're on the same wavelength.

Indies vs. studios: I think that for indies it makes sense to get to actual game development as quickly as possible, and that means using an existing game engine. AAA devs who create their own engine (and there are few of them) want as much control and flexibility as possible, which means something as low-level as possible.

Direct3D has been getting more general and less conceptual over time. DX10 dropped all notion of specific vertex transformations from the API (world and view matrices, etc.). Some version of DX11 dropped the D3DX library completely, so shader organisation and any other handholding went away. DX12 treats resources as memory, and the notion of textures whose memory is handled by the API is gone. In short, Direct3D becomes more and more complex to use, but provides more control. It's aimed at engine writers, not hobbyists and indies. The barrier to entry keeps rising.
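To illustrate the kind of handholding that disappeared: since DX10, composing world/view transforms is entirely the application's job. A minimal pure-Python sketch (illustration only, not real engine code, which would live in SIMD or shader land) of building and combining matrices yourself:

```python
# What the old fixed-function / D3DX helpers used to do for you:
# build the transform matrices yourself and multiply them together.

def mat_mul(a, b):
    """Multiply two 4x4 matrices (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def scale(s):
    return [[s, 0, 0, 0],
            [0, s, 0, 0],
            [0, 0, s, 0],
            [0, 0, 0, 1]]

def transform(m, v):
    """Apply a 4x4 matrix to a point (x, y, z) with w = 1."""
    x, y, z = v
    p = [x, y, z, 1]
    return tuple(sum(m[i][k] * p[k] for k in range(4)) for i in range(3))

# "World" matrix: scale by 2, then translate by (1, 0, 0).
world = mat_mul(translation(1, 0, 0), scale(2))
print(transform(world, (1, 1, 1)))  # → (3, 2, 2)
```

Note the order matters: `mat_mul(translation(...), scale(...))` scales first, then translates, which is exactly the kind of detail the API no longer sorts out for you.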

GPUs are already geared towards compute performance. Any game that doesn't do heavy calculations is using just a fraction of the chip's potential power.

The way I see it, things will likely continue this way: the base APIs will be geared towards general computing, with some graphics functionality thrown in (depth buffers, texture samplers, ...), while growing ever more low-level and general. On top of these, engines will be built, and that's what the vast majority of developers will use.
ET3D: The way I see it, things will likely continue this way: the base APIs will be geared towards general computing, with some graphics functionality thrown in (depth buffers, texture samplers, ...), while growing ever more low-level and general. On top of these, engines will be built, and that's what the vast majority of developers will use.
This is all because we've hit a wall with CPUs and no longer need them to compute much of anything in a game.

The new DOOM is a prime example. Rocking Vulkan (OpenGL Next, or whatever), it's doing things with textures and shaders we never could have seen on Mantle, let alone the early versions of OpenGL... and DX can't even come close to this "bare metal" style of programming yet.

Whenever there's a change in paradigm it takes time for the changes to come full circle for normal users... I was barely able to send math to a GPU just 5 years ago... now I can offload all my Python calculations to CUDA and spit data back directly to any input I want.
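For anyone curious what that Python-to-CUDA offload pattern looks like, here's a rough sketch using Numba's `cuda.jit` (hypothetical example: the GPU part requires the `numba` package and an NVIDIA card, so it's shown in comments; the plain-Python reference shows what the kernel computes):

```python
# Offloading an element-wise calculation from Python to the GPU.
# CPU reference: out = a*x + y for every element (classic "saxpy").
def saxpy_cpu(a, xs, ys):
    return [a * x + y for x, y in zip(xs, ys)]

# The CUDA version via Numba would look like this:
#
#   from numba import cuda
#   import numpy as np
#
#   @cuda.jit
#   def saxpy_gpu(a, x, y, out):
#       i = cuda.grid(1)          # this thread's global index
#       if i < x.size:
#           out[i] = a * x[i] + y[i]
#
#   x = np.arange(1_000_000, dtype=np.float32)
#   y = np.ones_like(x)
#   out = np.empty_like(x)
#   threads = 256
#   blocks = (x.size + threads - 1) // threads
#   saxpy_gpu[blocks, threads](2.0, x, y, out)
#   # the arrays are copied to the GPU, crunched, and the results
#   # come back as a NumPy array you can feed into anything.

print(saxpy_cpu(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))  # → [12.0, 14.0, 16.0]
```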
mistermumbles: Well, that was quick. PC Gamer already has a review of the GTX 1080.

From their games tests:
- On average 30-40% faster than the Titan X/980 Ti
- On par or faster than GTX 980 SLI

Add-on:
Ars Technica review
Tom's Hardware review
As I suspected, not as big a game changer as the press releases would have us believe. It falls a little short of the 2x performance. It's your basic next-gen card update.
I have a question that I can't find an answer to with Google: what is Ti?

I often read about Ti cards (such as the GTX 750 Ti, which used to be the fastest card that didn't need an extra power cable; the GTX 980 Ti, which isn't as fast as the new GTX 1080; and people waiting for a GTX 1080 Ti).

The only thing I can deduce is that a Ti version of a card is faster than the regular version, but why? What is it that is enhanced when nVidia releases a Ti card?

Also, what do the letters Ti stand for?
Post edited May 19, 2016 by DubConqueror
DubConqueror: I have a question that I can't find an answer to with Google: what is Ti?

I often read about Ti cards (such as the GTX 750 Ti, which used to be the fastest card that didn't need an extra power cable; the GTX 980 Ti, which isn't as fast as the new GTX 1080; and people waiting for a GTX 1080 Ti).

The only thing I can deduce is that a Ti version of a card is faster than the regular version, but why? What is it that is enhanced when nVidia releases a Ti card?

Also, what do the letters Ti stand for?
Check out this page: a Geforce 750 and 750 Ti review.
Looks like the normal version is the same chip as the Ti, but with some blocks turned off, as you can see in the link I posted (the Geforce 750 has 512 CUDA cores, while the 750 Ti has 640). The GPU maker can also add more memory to the Ti version, like with the 980 Ti (6GB, while the 980 has only 4GB of VRAM), but usually it just means a more powerful GPU.
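Quick back-of-envelope on those numbers (illustrative arithmetic only; real-world performance rarely scales this linearly with core count):

```python
# CUDA core counts quoted from the linked 750 / 750 Ti review.
cores_750 = 512
cores_750ti = 640

# Relative increase in enabled cores on the Ti variant.
uplift = cores_750ti / cores_750 - 1
print(f"750 Ti has {uplift:.0%} more CUDA cores than the 750")  # → 25%
```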

Ti stands for Titanium, I believe. Just a marketing term, a la "Gold edition".
The 750's been out for 2 years now and is only about £10-20 cheaper since its release.
Got your pennies saved? Ouch!

https://www.overclockers.co.uk/pc-components/graphics-cards/nvidia/geforce-gtx-1080?__ckdesktop=2
I'm getting one as soon as that monitor comes out. Will probably order them together.
Looks like the preliminaries for the 1070 are in now, too. It's on par with or slightly better performing than the Titan X for about a third of the cost. Not bad.
mistermumbles: Looks like the preliminaries for the 1070 are in now, too. It's on par or slightly better performing than the Titan X for about the third of the cost. Not bad.
Looks good so far. The hype is saying that the 1070 is even more energy efficient than the 970, which means upgrading my 760 is going to be very attractive, as the 970/1070 tend to use less power than the Keplers. Even better, I won't have to upgrade my PSU to get one.
NVIDIA is back to piss some more into AMD's cereal. The GTX 1060 will be out later this month, and according to NVIDIA it will be faster than the GTX 980 (probably not too much so though)... for only $250 (for the non-fancy versions). Seeing as their claims about the other cards were fairly accurate, I'd expect this to hold true as well.
Post edited July 07, 2016 by mistermumbles
Well, sounds like the GTX 1060 is more on par with the GTX 980 instead of being much faster (by a frame or two). Still, that's a pretty damn big leap from the GTX 960.
Post edited July 19, 2016 by mistermumbles