Gonna buy 4.
Pascal, in short: fewer transistors. Better heat management, along with cut-down CUDA and OpenCL features, fewer SMX modules, a software/firmware pipeline focused exclusively on single-precision calculation, and shallow command queues (read: games with simple graphics and lots of post-filters). Put together, this allows for higher burst clocks and 1000 fps out of the box on all your games from 2010.
Atlantico: But really, for a person who claims no particular knowledge of "technical gobbledy-gook", you're certainly not shy about parading out "price-heat/power-performance ratio" proclamations, which would be entirely based on "technical gobbledy-gook".
mistermumbles: How many FPS can I generally expect? How much heat does it create? How much power does it draw? How much does it cost (power draw may have an impact as well)? ... Those aren't exactly what I'd consider highly technical terms; they're basic information for a buying decision.

What I was talking about is the whole 'look at them teraflops, bandwidth pipeline, type of memory used, architecture' ... blah blah blah. All I need to know is how it compares to other cards in performance terms. All the nitty-gritty detail I couldn't give a rat's ass about.
So you prefer to think of GPUs like cars and be fed marketing slides from press releases. Good luck with that.

Meanwhile in the real world, all the "nitty gritty detail you couldn't give a rat's ass about" is what affects and dictates how many FPS you can generally expect, how much heat the GPU dissipates, how much power the GPU draws and how much it costs to run.

You go out with no information (or just stupid information) and give sales people your money. And thank them for taking it.

GPUs are *all* about the "nitty gritty detail you couldn't give a rat's ass about" and nothing else.
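As a rough illustration of the running-cost point above, here's a minimal back-of-the-envelope sketch in Python; the electricity price, hours of use and board-power figures are assumptions for illustration, not measurements of any specific card.

```python
# Rough running-cost estimate from typical board power.
# All numbers below are assumptions for illustration, not measured values.

KWH_PRICE_EUR = 0.22   # assumed electricity price per kWh
HOURS_PER_DAY = 3      # assumed average gaming hours per day

def annual_cost(board_power_watts: float) -> float:
    """Yearly electricity cost of a GPU drawing board_power_watts while gaming."""
    kwh_per_year = board_power_watts / 1000 * HOURS_PER_DAY * 365
    return kwh_per_year * KWH_PRICE_EUR

# Hypothetical 75 W card vs. a hypothetical 250 W high-end card.
for name, watts in [("75 W card", 75), ("250 W card", 250)]:
    print(f"{name}: ~{annual_cost(watts):.0f} EUR/year at {HOURS_PER_DAY} h/day")
```

Under these assumptions the gap works out to roughly 40 EUR a year: real money, but usually small next to the price difference between the cards themselves.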
anothername: Too bad. They managed to get me excited for the versions with the new HBM2 RAM before it turned out they still have too much stock of old GDDR5 to get rid of.

On the bright side, my GTX 750 (non-Ti) with a lousy 1 GB, serving as a placeholder until the newest-gen card, is surprisingly good with everything I've thrown at it.
The consumer-grade GP100s are rumored to come out later (in the year?) with HBM2 RAM. I imagine this will be the Pascal version of the Titan and possibly a GTX 1080 Ti.
Seems like the 1080 is a pure 4K gaming card from the specs and data given. I might want to sell my two 980s and pick up two of those so I can run 4K at a full 60 fps without dips. Very tempted, or I might skip this run of cards and get the series after this one.
nipsen: Pascal, in short: fewer transistors. Better heat management, along with cut-down CUDA and OpenCL features, fewer SMX modules, a software/firmware pipeline focused exclusively on single-precision calculation, and shallow command queues (read: games with simple graphics and lots of post-filters). Put together, this allows for higher burst clocks and 1000 fps out of the box on all your games from 2010.
This should run those 1990 Dosbox games amazingly.
I wish I could afford it. My 640 is falling short. :(
Atlantico: Meanwhile in the real world, all the "nitty gritty detail you couldn't give a rat's ass about" is what affects and dictates how many FPS you can generally expect, how much heat the GPU dissipates, how much power the GPU draws and how much it costs to run.
And, as he said, the performance and power draw are what matter. The architecture itself doesn't matter one bit to the end user; it's just a means to an end.

Edit: The more I think about it, the more I realise how worthless knowing the architecture is and how little most people know about it. Enthusiasts might know how many compute units there are, and the features of the underlying architecture (Maxwell, GCN 1.2, etc.), but they know little of the actual architecture and how it affects performance at the low level. All they know is how to extrapolate from one chip to another (and usually not that well).
Post edited May 10, 2016 by ET3D
Atlantico: Meanwhile in the real world, all the "nitty gritty detail you couldn't give a rat's ass about" is what affects and dictates how many FPS you can generally expect, how much heat the GPU dissipates, how much power the GPU draws and how much it costs to run.
ET3D: And, as he said, the performance and power draw are what matter. The architecture itself doesn't matter one bit to the end user; it's just a means to an end.

Edit: The more I think about it, the more I realise how worthless knowing the architecture is and how little most people know about it. Enthusiasts might know how many compute units there are, and the features of the underlying architecture (Maxwell, GCN 1.2, etc.), but they know little of the actual architecture and how it affects performance at the low level. All they know is how to extrapolate from one chip to another (and usually not that well).
The architecture has nothing to do with compute units or power draw; the features of the architecture directly affect the FPS one can expect to get... forget it, why even bother. Just go check the slides from the manufacturer with all those fancy lines and easy-to-understand graphs. Apparently all their GPUs are amazing, super mega fast and use no electricity. Yours for 600 bucks now.

No need to worry your pretty little head with any other details.
I'm following the news on upcoming cards on the Dutch tech forum Tweakers.net (I don't have an account there, but I read the fora for up-to-date news).

Personally I'm waiting for the release of mid-range cards that can run without an extra power cable for the GPU. I want to upgrade my PC by replacing my old Radeon HD 6670. It's 4 years old, is starting to make noises, and can't play Far Cry Primal (even though I don't like Ubisoft, I would like wandering about in the Mesolithic). At the same time I want to spare the natural environment, so my compromise is to upgrade, but to a card that asks at most 75 Watt and doesn't require replacing my power supply, which has no extra GPU power connector.

The current highest performance at 75 Watt comes from some new GTX 950 models: the ASUS GTX950-2G, and the MSI GTX 950 2GD5 OCV2 and GTX 950 2GD5T OCV3. But I'm curious what the new AMD Polaris models will bring. I'm a bit disappointed that nVidia is only releasing high-end Pascal cards now. It'll probably be autumn before we see cheaper, less energy-intensive cards from nVidia.

My hopes for now are more focused on AMD and its Polaris cards, though with Polaris 11 thought to consume 50 Watt and target notebooks, and Polaris 10 aimed at mainstream and high-end GPUs capable of VR, I'm not sure AMD will hit my sweet spot of maximum performance at 75 Watt.

http://arstechnica.co.uk/gadgets/2016/04/amd-polaris-will-be-a-mainstream-gpu/

Here are the rumours as summarized in Tweakers.net's AMD news discussion topic:
http://gathering.tweakers.net/forum/list_messages/1662228/0#gerucht
Atlantico: forget it, why even bother.
You didn't bother. You didn't give even a single example. That's a classic line of someone clueless.

Edit: What I mean is: please give an example where knowing of a specific architectural feature should make a significant difference to a buying decision based purely on benchmarks.
Post edited May 10, 2016 by ET3D
Atlantico: forget it, why even bother.
ET3D: You didn't bother. You didn't give even a single example. That's a classic line of someone clueless.

Edit: What I mean is: please give an example where knowing of a specific architectural feature should make a significant difference to a buying decision based purely on benchmarks.
"should" or "would"? :p

Example: compute performance on Nvidia Quadro cards tends to be a lot higher than on GeForce cards (or, in certain scenarios, the GTX cards tilt and crash), and that's the case even when the two models actually use the same chipset. The reason is simply that the instruction queue scheduler on the GTX cards doesn't exist. So with current chipset production, the basic difference between a 1000-dollar card and a 100-dollar card is a software/firmware layer and a laser cut through some of the lanes.

Does this make a difference to someone playing Call of Duty with FSAAx16 on top of the same blurry texture supersampled for precision 32 times? No. Should it make a difference considering how relevant OpenCL and other compute-type platforms are going to be soon? Maybe.
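As a rough illustration of how far apart the compute numbers can be on the same silicon, here's a small Python sketch that derives peak theoretical throughput from core count, clock and the FP64:FP32 ratio. The GTX 780 Ti / Tesla K40 figures are approximate public spec-sheet values, and the exact clocks used here are assumptions, not measurements.

```python
# Peak theoretical throughput from spec-sheet numbers (approximate, for illustration).
# Peak FP32 GFLOPS ~= cores * clock_GHz * 2 (one fused multiply-add per core per cycle).

def peak_gflops(cores: int, clock_ghz: float, fp64_ratio: float):
    fp32 = cores * clock_ghz * 2
    return fp32, fp32 * fp64_ratio

# Both products below use the same GK110 chip; figures are approximations/assumptions.
cards = [
    ("GeForce GTX 780 Ti (GK110, FP64 capped)", 2880, 0.88, 1 / 24),
    ("Tesla K40 (GK110, FP64 uncapped)",        2880, 0.75, 1 / 3),
]

for name, cores, clock, ratio in cards:
    fp32, fp64 = peak_gflops(cores, clock, ratio)
    print(f"{name}: ~{fp32:.0f} GFLOPS FP32, ~{fp64:.0f} GFLOPS FP64")
```

Same chip, wildly different double-precision throughput: that gap is exactly the kind of product segmentation the paragraph above is describing.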

Another example: it turns out that the RAM "speed" on graphics cards is actually very, very low, and that the SIMD principles graphics-card operations run on essentially make that comparatively low access speed both optimal and necessary. If people knew a lot about GDDR5 and ddrx4awesome standards, and so on, then they'd know that going from "DDR3" to GDDR4 and gddr5xsuperawesome... really was a complete PR blitz. It had nothing to do with higher speed, and a lot to do with manufacturers wanting to get rid of GDDR memory modules manufactured in bulk for applications other than graphics cards. They saved money on manufacturing, and sold the card for more money! And it used a lot more power, and memory-bus architecture optimisations made it difficult to compare with the previous cards, so everything's great. Except for us, who are paying for this crap and in so doing essentially stalling already scheduled architecture improvements. Because yay, gddr5awesome!
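Whatever one makes of the marketing names, the headline memory bandwidth on the box comes from one simple formula: effective data rate per pin times bus width, divided by eight. A minimal Python sketch with commonly quoted, approximate configurations (exact rates vary by board and are assumptions here):

```python
# Theoretical peak memory bandwidth from data rate and bus width.
# bandwidth (GB/s) = effective rate (Gbps per pin) * bus width (bits) / 8
# The configurations below are commonly quoted approximations, used for illustration.

def bandwidth_gb_s(rate_gbps: float, bus_bits: int) -> float:
    return rate_gbps * bus_bits / 8

configs = [
    ("Low-end card, DDR3, 128-bit @ 1.8 Gbps", 1.8, 128),
    ("GTX 980-class, GDDR5, 256-bit @ 7 Gbps", 7.0, 256),
    ("GTX 1080, GDDR5X, 256-bit @ 10 Gbps",   10.0, 256),
]

for name, rate, bus in configs:
    print(f"{name}: ~{bandwidth_gb_s(rate, bus):.0f} GB/s")
```

In other words, the memory type on its own says little; it's the data rate and bus width together (and how the architecture uses them) that set the ceiling.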

Should stuff like that matter? AnandTech and Gawker don't think so, so I guess not.
Atlantico: forget it, why even bother.
ET3D: You didn't bother. You didn't give even a single example. That's a classic line of someone clueless.

Edit: What I mean is: please give an example where knowing of a specific architectural feature should make a significant difference to a buying decision based purely on benchmarks.
So your basic premise is that no specific "architectural" feature (which is a rather nebulous term) makes a significant difference to a buying decision. All right.

An example:

GCN is an "architectural" superset that features simultaneous execution of graphics and compute commands. This makes it incredibly efficient in games that use DX12, meaning less stress on the CPU and higher FPS (up to 40% higher than without), and thus much improved game responsiveness on a typical 4-core CPU.

This is something that drivers can't enable by themselves and is unavailable to "architectures" that don't have this feature built in. It is very important for DX12 games, unless you have a top-of-the-line CPU, in which case it's less important, but not insignificant.
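As a toy illustration of why overlapping graphics and compute work can lift the frame rate, here's a back-of-the-envelope frame-time model in Python. Every timing and the overlap fraction are made up purely for illustration; this is not a benchmark of any real card, and it only models GPU-side overlap, not the DX12 CPU-overhead savings.

```python
# Toy frame-time model for asynchronous compute (all timings are made-up assumptions).
# Serial GPU: graphics and compute passes run one after the other.
# Async GPU:  part of the compute work overlaps with (hides behind) the graphics work.

def fps(frame_time_ms: float) -> float:
    return 1000.0 / frame_time_ms

cpu_ms     = 10.0   # hypothetical CPU time to prepare a frame
gfx_ms     = 14.0   # hypothetical GPU graphics work per frame
compute_ms =  5.0   # hypothetical GPU compute work (lighting, post-processing, ...)
overlap    =  0.8   # assumed fraction of compute that can hide behind graphics

serial_gpu_ms = gfx_ms + compute_ms
async_gpu_ms  = gfx_ms + compute_ms * (1 - overlap)

# The frame is paced by whichever side is slower, CPU or GPU.
serial_frame = max(cpu_ms, serial_gpu_ms)
async_frame  = max(cpu_ms, async_gpu_ms)

print(f"serial: {fps(serial_frame):.0f} fps, async: {fps(async_frame):.0f} fps")
```

With these invented numbers the overlap is worth roughly 25-30% more FPS; how much a real game gains depends entirely on how much compute work it has to hide and on the hardware's ability to run both queues at once.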
Atlantico: So your basic premise is that no specific "architectural" feature (which is a rather nebulous term) makes a significant difference to a buying decision.
I was paraphrasing your terms. My basic premise is that benchmarks provide a pretty good measure of a card's 'worth', and that understanding the underlying technology better does little to change how desirable a card should be.

For example, a GCN card that's more power hungry and slower than a Maxwell 2 card in most games isn't suddenly a great choice just because it's better in DX12. I mean, sure, under certain circumstances it might make a difference, but all in all buying an NVIDIA card tends to be a better choice for a slower CPU due to much more extensive driver optimisations.
Atlantico: So your basic premise is that no specific "architectural" feature (which is a rather nebulous term) makes a significant difference to a buying decision.
ET3D: My basic premise is that benchmarks provide a pretty good measure of a card's 'worth', and that understanding the underlying technology better does little to change how desirable a card should be.
My basic premise is that benchmarks are artificial constructions that really only demonstrate what the benchmarker wants them to demonstrate. For instance, take the HD 7970 vs. the GTX 680: back in the day, all benchmarks declared the 680 to be the far superior GPU.

Testing these two today yields different results, which I think is a very important thing to consider when judging *value* and thus how desirable a GPU is. Today the HD 7970 is not only a better GPU than the GTX 680, it's also able to play modern games at decent quality. This is because of better architecture, more VRAM and continuously improved drivers from AMD. That's value.
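To make the "value over time" point concrete, here's a tiny Python sketch showing how a ranking can flip when the game suite changes. Every FPS number in it is invented for illustration and is not taken from the linked review.

```python
# Hypothetical illustration of a benchmark ranking flipping over time.
# All FPS numbers are invented, not taken from any review.

launch_suite_2012 = {"HD 7970": [62, 55, 70], "GTX 680": [66, 58, 74]}
modern_suite_2016 = {"HD 7970": [41, 38, 44], "GTX 680": [35, 29, 40]}  # 2 GB VRAM starts to hurt

def average(fps_list):
    return sum(fps_list) / len(fps_list)

for label, suite in [("2012 suite", launch_suite_2012), ("2016 suite", modern_suite_2016)]:
    winner = max(suite, key=lambda card: average(suite[card]))
    scores = {card: round(average(suite[card]), 1) for card in suite}
    print(f"{label}: {scores} -> {winner} ahead")
```

The numbers are made up, but the mechanism is the one described above: which card "wins" depends on which games you measure and when you measure them.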

"There is no doubt that the HD 7970 with its 3GB of vRAM, is a generally faster GPU for todays games than the 2GB vRAM-equipped GTX 680 although they are still competing in the same class. There are many games that ran poorly on the GTX 680 only because of the current 2GB vRAM limitation – which was non-existent in 2012/2013 – and that tweaking a few settings downward generally make the newest games run decently. Of course, if we replayed our original 2012 benchmark suite, the GTX 680 would still be generally faster than the HD 7970. "

http://www.babeltechreviews.com/hd-7970-vs-gtx-680-2013-revisited/
Post edited May 11, 2016 by Atlantico