Given that the only difference between the manufacturers is their factory settings, I'm looking for some kind of comparison chart, kinda like one you'd get from 3DMark or something, that would show me prices (first and foremost) and then benchmark speeds for whatever the most resource-taxing game is today. Lastly, I'd need to know how compatible each would be with a 5770 in a CrossFire setup (I don't really fancy paying another $400 for 2x graphics power, considering I can get 1.5x power for an extra $135 or so, and that's me just assuming a 5770 is only half as powerful as a 5870, which may or may not be the case).

Normally I'd go with MSI, since they're usually the first licensed manufacturer to build an OC'd card right off the bat, but the problem there is that they don't ever build any updated boards after that and are usually surpassed by another manufacturer like XFX or Sapphire months later.

Speaking of which, the only setting I'd be likely to change on a video card is the fan setting (my 4890 runs at a constant 60% for an idle temp of 42°C, rather than its default "auto" speed and idle temp of around 60°C). Has anyone any experience with the Sapphire Vapor-X card? Is it surpassed by a cheaper card in the same class?
This question / problem has been solved by Delixe
Not really what you are asking for, but I would suggest going nVidia instead.
Hear me out before you dismiss this as fanboy rambling. I actually used to prefer ATI (admittedly, before they became AMD). Back then, the only difference was performance (and they were pretty equal, more often than not).
Now, you have the unbalancing factor of nVidia having won the GPGPU race. With things like PhysX and all the other shinies nVidia are releasing, it is not a competition.
To be honest, I don't think there's any difference between manufacturers these days unless they do something bolder than adding a sticker to the stock model. The addition of their own heatsink is usually a sign of quality (although Leadtek ruins that theory).
I'd recommend finding the one that has the best bundle.
predcon: I'd need to know how compatible each would be with a 5770 in a CrossFire setup (I don't really fancy paying another $400 for 2x graphics power, considering I can get 1.5x power for an extra $135 or so, and that's me just assuming a 5770 is only half as powerful as a 5870, which may or may not be the case).

Note that CrossFire works only between cards from the same model group. According to ATI's chart you cannot pair a 5770 with a 5870.
Also note that connecting multiple cards via CrossFire will slow all connected cards to the speed of the lowest one. This is on top of the inherent problem of performance gains not scaling in a linear manner (depending on the game a pairing may not even equal 1.5x the power of the weaker single card used), and for some games a CrossFire configuration actually reduces performance (although it can always be temporarily disabled if necessary).
My personal recommendation would be to go with a single card for now and worry about extra power later; by the time your 5870 starts needing more power you could add in another 5870 for a good price or just swap it with an 8870 or whatever exists by then.
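To put toy numbers on that scaling point, here's a tiny calculation. Every value in it is an assumption for the sake of the arithmetic, not a benchmark result:
[code]
/* Toy illustration of non-linear CrossFire scaling.
 * All numbers are made-up assumptions, not benchmarks. */
#include <stdio.h>

int main(void) {
    double r5770 = 0.60;      /* assume a 5770 is ~0.6x a 5870 */
    double efficiency = 0.70; /* assume the 2nd card adds only ~70% */

    double pair = r5770 * (1.0 + efficiency);
    printf("Two 5770s vs one 5870: %.2fx\n", pair); /* ~1.02x */
    return 0;
}
[/code]
Under those assumptions a 5770 pair barely matches a single 5870, which is why the per-game scaling numbers matter more than the theoretical doubling.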
Gundato: Now, you have the unbalancing factor of nVidia having won the GPGPU race.

Nvidia has won the physics battle for now, but they may well lose in the future as DirectX 11 hardware becomes mainstream. Nvidia's PhysX is the most common physics system, used by 26.8% of all titles, but Havok is close behind at 22.7%. Havok, Bullet Physics and Pixelux DMM have current or upcoming hardware acceleration support, and the open nature of compute shaders means a developer can even program their own custom physics system in DirectCompute or OpenCL and still take advantage of full hardware acceleration from both vendors. I will be interested to see whether Crytek does this for CryEngine 3.
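To give an idea of what that looks like in practice, here is a minimal, vendor-neutral OpenCL sketch of a "custom physics" step. The kernel name, buffer sizes and timestep are invented for illustration, and error checking is omitted, so treat it as a sketch rather than anything from a real engine:
[code]
/* Minimal sketch of a custom physics step in OpenCL.
 * All names/values are invented for illustration; no error checking.
 * Build (Linux): gcc particles.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

#define N 1024

static const char *src =
    "__kernel void integrate(__global float *pos,\n"
    "                        __global const float *vel,\n"
    "                        const float dt) {\n"
    "    size_t i = get_global_id(0);\n"
    "    pos[i] += vel[i] * dt;  /* simple Euler step */\n"
    "}\n";

int main(void) {
    float pos[N], vel[N], dt = 0.016f;
    for (int i = 0; i < N; i++) { pos[i] = 0.0f; vel[i] = 1.0f; }

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    /* CL_DEVICE_TYPE_GPU resolves to whatever GPU driver is
     * installed, ATI or Nvidia alike -- the whole point of OpenCL. */
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Copy particle state to the device */
    cl_mem dpos = clCreateBuffer(ctx, CL_MEM_COPY_HOST_PTR, sizeof pos, pos, NULL);
    cl_mem dvel = clCreateBuffer(ctx, CL_MEM_COPY_HOST_PTR, sizeof vel, vel, NULL);

    /* Compile the kernel at runtime, launch one work-item per particle */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "integrate", NULL);

    clSetKernelArg(k, 0, sizeof dpos, &dpos);
    clSetKernelArg(k, 1, sizeof dvel, &dvel);
    clSetKernelArg(k, 2, sizeof dt, &dt);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dpos, CL_TRUE, 0, sizeof pos, pos, 0, NULL, NULL);

    printf("pos[0] after one step: %f\n", pos[0]); /* expect ~0.016 */
    return 0;
}
[/code]
The same host code runs unchanged on ATI and Nvidia hardware as long as an OpenCL driver is installed, which is what makes it vendor-neutral in a way PhysX is not.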
Gundato: Not really what you are asking for, but I would suggest going nVidia instead.
Hear me out before you dismiss this as fanboy rambling. I actually used to prefer ATI (admittedly, before they became AMD). Back then, the only difference was performance (and they were pretty equal, more often than not).
Now, you have the unbalancing factor of nVidia having won the GPGPU race. With things like PhysX and all the other shinies nVidia are releasing, it is not a competition.

If you're basing the victory of a "GPGPU race" on the ability to make a virtual blade of grass move independently of another, then I'm pretty much going to dismiss this as fanboy rambling, especially since you really haven't cited any example comparisons, benchmarks, or other such evaluations. All you've given me is zazz, for lack of a better word, at the moment. Also, I don't care for blue screens anymore. That's why I switched FROM nVidia.
@Arkose
I'd just assumed that since the ATI pages group their cards based on their number in the thousands place, any 5xxx would be compatible with any other 5xxx.
@Aliasalpha
IF I can find one, I'm going to spring for an MSI R5870 Lightning. Otherwise, I figure a card bundled with a good game and a tech support staff that speaks fluent English and has an education beyond A+ certification is just as good.
Probably the best 5870 on the market is Sapphire's Toxic with the Vapor-X cooler. For your money you get a cool-running card that's ready for overclocking. Even if you have no interest in overclocking, less heat in the system is always a good thing.
As for the Nvidia vs. ATI thing, Nvidia's cards are more expensive (considerably so with the GTX480), they consume more power and they run a lot hotter. This also means their coolers run a lot louder as well. Is this really worth PhysX? We have all seen the benchmarks, and with PhysX turned off the GTX480 cannot justify the additional price over a 5870.
As for CrossFireX and a 5770, I haven't seen any benchmarks for that setup. I know you can do it, but I can't see much of a benefit to using it. Worst case, the system would run as if it were two 5770s. This has happened in the past: when people paired 4870s with 4850s, the two would run at the clock speed of the 4850.
Do Gainward make Radeons? I'd definitely recommend them; the GeForce 6800 Ultra came with a factory warranty on overclocking, that's how confident they were, and it was a hell of a good card.
For comparisons, [url=http://www.tomshardware.com/reviews/radeon-hd-5870,2422-13.html]this[/url] might help.
Edit: I use both ATI and Nvidia and like both cards. I prefer the ease of install for the Nvidia driver updates, but other than that . . . =)
I just got mine: a Sapphire HD5870 Vapor-X. Really good card at stock frequency, and with one of the best cooling systems.
predcon: @Arkose
I'd just assumed that since the ATI pages group their cards based on their number in the thousands place, any 5xxx would be compatible with any other 5xxx.

Actually, the 58xx and 57xx share close to nothing in common; the actual die is different (Cypress for the 58xx and Juniper for the 57xx), and until Hybrid CrossFireX hits, pairing them together will actually lower your frame rate, if they work together at all.
My advice: buy the best card you can afford, or wait and see how Southern Islands performs in a few months.
Gundato: Not really what you are asking for, but I would suggest going nVidia instead.
Hear me out before you dismiss this as fanboy rambling. I actually used to prefer ATI (admittedly, before they became AMD). Back then, the only difference was performance (and they were pretty equal, more often than not).
Now, you have the unbalancing factor of nVidia having won the GPGPU race. With things like PhysX and all the other shinies nVidia are releasing, it is not a competition.
predcon: If you're basing the victory of a "GPGPU race" on the ability to make a virtual blade of grass move independently of another, then I'm pretty much going to dismiss this as fanboy rambling, especially since you really haven't cited any example comparisons, benchmarks, or other such evaluations. All you've given me is zazz, for lack of a better word, at the moment. Also, I don't care for blue screens anymore. That's why I switched FROM nVidia.

Was trying to keep it non-technical:
nVidia has CUDA. ATI has "Close to Metal" or whatever the hell they call it (there is a reason I don't even know :p). CUDA is downright usable compared to the garbage ATI put out, and that is saying a lot.
Now we are getting OpenCL (maybe). Take a look at OpenCL. It could EASILY stand for "Open CUDA Language".
So nVidia has the head start. Maybe ATI/AMD can catch up, but that is a pretty good head start. Give it a few years and they'll probably be even again, but until then, nVidia has the advantage.
Thus, with nVidia setting the "standard", as it were, people are more likely to support it. Which keeps feeding the cycle.
That better than "fanboy ramblings" for you?
No, it's not. You're just citing more "features" and trashing ATI without giving any real information like hard numbers. I mean, "garbage ATI put out"? The hell is that supposed to mean?
I don't want a frickin' nVidia card. The title of this thread is "Need help selecting a Radeon HD 5870". So stop defending and advocating your precious Team Green; I don't want to hear about it.
Gundato: Now we are getting OpenCL (maybe). Take a look at OpenCL. It could EASILY stand for "Open CUDA Language".
So nVidia has the head start. Maybe ATI/AMD can catch up, but that is a pretty good head start. Give it a few years and they'll probably be even again, but until then, nVidia has the advantage.

It stands for Open Computing Language and is only similar to CUDA. OpenCL is actually more powerful, as it allows programmers to use individual CPU cores and GPU stream processors. Nvidia may have an advantage here in that they were the first to open their GPUs to non-graphics applications, but the downside is that CUDA is only compatible with Nvidia, which annoyed Intel enough to develop Larrabee. Bear in mind that while Nvidia has PhysX, ATI will have Havok acceleration, which will also be supported by Larrabee. PhysX could well find itself phased out over the next few years simply because Havok works with both ATI and Nvidia; PhysX only works with Nvidia.
For gamers, however, PhysX offers very little actual benefit, and CUDA is only useful for folding. Again, it does not justify the difference in cost between a 5870 and a GTX480.
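To make the CPU-plus-GPU point concrete, here is a minimal sketch (assuming nothing beyond a standard OpenCL runtime and headers) showing that one API enumerates CPUs and GPUs alike, whatever the vendor:
[code]
/* Sketch: the same OpenCL host API lists CPU and GPU devices
 * in one pass. Error checking is mostly omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id plats[4];
    cl_uint nplat = 0;
    clGetPlatformIDs(4, plats, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[8];
        cl_uint ndev = 0;
        /* CL_DEVICE_TYPE_ALL returns CPUs and GPUs in one list */
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 8, devs, &ndev)
                != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < ndev; d++) {
            char name[256];
            cl_device_type type;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_TYPE, sizeof type, &type, NULL);
            printf("%s [%s]\n", name,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU");
        }
    }
    return 0;
}
[/code]
CUDA, by contrast, only ever reports Nvidia GPUs, which is the lock-in being described above.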
Gundato: Now we are getting OpenCL (maybe). Take a look at OpenCL. It could EASILY stand for "Open CUDA Language".
So nVidia has the head start. Maybe ATI/AMD can catch up, but that is a pretty good head start. Give it a few years and they'll probably be even again, but until then, nVidia has the advantage.
Delixe: It stands for Open Computing Language and is only similar to CUDA. OpenCL is actually more powerful, as it allows programmers to use individual CPU cores and GPU stream processors. Nvidia may have an advantage here in that they were the first to open their GPUs to non-graphics applications, but the downside is that CUDA is only compatible with Nvidia, which annoyed Intel enough to develop Larrabee. Bear in mind that while Nvidia has PhysX, ATI will have Havok acceleration, which will also be supported by Larrabee. PhysX could well find itself phased out over the next few years simply because Havok works with both ATI and Nvidia; PhysX only works with Nvidia.
For gamers, however, PhysX offers very little actual benefit, and CUDA is only useful for folding. Again, it does not justify the difference in cost between a 5870 and a GTX480.

Wow, I need to start putting tags around my jokes. I was making a joke that it could be called "Open CUDA Language" because of how similar they are. Which still gives nVidia an advantage.
To the topic creator: Whatever, I said what I needed to say. I am going to wander off before you continue to scream that anyone who doesn't like what you like is a fanboy :p
Gundato: Wow, I need to start putting tags around my jokes. I was making a joke that it could be called "Open CUDA Language" because of how similar they are. Which still gives nVidia an advantage.

It's a completely different language. They work on the same principles, but then so do Linux and Windows. You may have thought it was a joke, but the implication was that OpenCL and CUDA are practically the same and that this therefore gives Nvidia an advantage. Untrue, hence my post.