1sealion: So there is a game on Steam I want called Blossom Tales, and the min spec is an Intel Pentium G3250, yet the above computer I was looking at boasts a Pentium 7700? Does this mean the game will work, because 3250 is a smaller number than 7700?
A good place to start looking at performance levels for CPUs, video cards and other components is the PassMark site. Just open the chart (there are multiple tiers) for the component you are looking for and search (Ctrl+F) for its model number.
Do I need a smart TV for the Link? My downstairs TV is like 8 years old, but it has HDMI. It's a Sharp 52-inch thin-screen LCD.
No smart TV needed; it connects over HDMI, like a Blu-ray player or receiver. Don't expect good performance over Wi-Fi, though; the Steam forum is full of people who can't play properly over a wireless connection. Wired is fine for most games.

Also: don't buy the Steam Link at full price; I bought it at 95 percent off in the last sale. 3 bucks well spent ;)
Post edited August 22, 2018 by ignisferroque
1sealion: I want a new computer that can play most Steam and GOG games. Can anyone suggest a good gaming computer, since I don't understand all the specs?
Yeah, for a desktop, buy a used one with good specs and you'll get a decent machine for ~$600-$700 instead of $1500-$2000. Make sure the seller is reputable and provides some sort of warranty.

Not sure I'd do the same for a laptop, because of their lower life expectancy and the greater difficulty of replacing faulty parts, but I would totally do it at this point for a desktop.

At this point, the main bottleneck for games is the GPU anyway. The CPU/RAM requirements have not budged significantly in the last decade. Take a machine that was cutting-edge 8 years ago (well, OK, maybe more like 6-7 years ago), and the main thing you'd need to upgrade to run today's games would be the GPU.
Post edited August 22, 2018 by Magnitus
Magnitus: At this point, the main bottleneck for games is the GPU anyway. The CPU/RAM requirements have not budged significantly in the last decade. Take a machine that was cutting-edge 8 years ago (well, OK, maybe more like 6-7 years ago), and the main thing you'd need to upgrade to run today's games would be the GPU.
You're cute. If that were the case, I'd be able to play TW3 on this thing. CPU requirements are always going up, but they go up in funny ways. RAM requirements have been climbing constantly. The trick with CPUs is that they sometimes require specific instruction sets or clock speeds. Companies have been trying to cut down on this, but in reality, the more indirection and abstraction they use, the worse this gets. Fortunately, we do have programming languages like Rust trying to turn this around, but theory and practice are two different things, especially if people aren't using the right tools.

To make matters worse, more GPU code than you realize actually gets offloaded to the CPU, which is why GPUs get away with such low clock rates.
Magnitus: At this point, the main bottleneck for games is the GPU anyway. The CPU/RAM requirements have not budged significantly in the last decade. Take a machine that was cutting-edge 8 years ago (well, OK, maybe more like 6-7 years ago), and the main thing you'd need to upgrade to run today's games would be the GPU.
kohlrak: You're cute. If that were the case, I'd be able to play TW3 on this thing. CPU requirements are always going up, but they go up in funny ways. RAM requirements have been climbing constantly. The trick with CPUs is that they sometimes require specific instruction sets or clock speeds. Companies have been trying to cut down on this, but in reality, the more indirection and abstraction they use, the worse this gets. Fortunately, we do have programming languages like Rust trying to turn this around, but theory and practice are two different things, especially if people aren't using the right tools.

To make matters worse, more GPU code than you realize actually gets offloaded to the CPU, which is why GPUs get away with such low clock rates.
Holy cow, you're right, the Witcher 3 has some pretty high CPU requirements (RAM still low though).

Guess I don't play enough recent AAA games.

So, maybe not too old. But still, at $400 (maybe add an extra $300 for a GPU upgrade), I bet that quad-core wonder would fit the bill, so I was spot-on for the price point at least: http://www.deltaserverstore.com/dell-t3600.html

You can find many such offers in stores selling second-hand hardware if you look. I still say it's worth it if you're only looking for a gaming machine.

EDIT: Actually, I looked, and this CPU line is like 6 years old, so if you go for top of the line back then, it might still be OK now.

Not sure about the instruction set, though, but I've written fairly cross-platform (Windows/Linux, any Intel CPU, bring it on, baby... plus many consoles when I was doing middleware for gaming) C++ code for years without going too much out of my way to do so. I haven't really touched the language in half a decade, but I can't believe things would have fallen by the wayside this much since then.

How much of a klutz would you have to be to write code that is locked down to a very narrow set of processors in the Intel family? I mean, with assembly, when you go ultra low level, I'd get it, but with a relatively abstract programming language like C++? C'mon!
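
For illustration, here's a minimal sketch (not from the post) of how portable C++ usually sidesteps that lock-in: detect the CPU feature at runtime and branch. __builtin_cpu_supports is a real GCC/Clang builtin on x86; the render functions are just made-up placeholders.

#include <cstdio>

void render_scalar() { std::puts("scalar fallback path"); }   // placeholder
void render_sse42()  { std::puts("SSE4.2-optimized path"); }  // placeholder

int main() {
    // Ask the CPU what it can do instead of assuming a specific model.
    if (__builtin_cpu_supports("sse4.2"))
        render_sse42();
    else
        render_scalar();
    return 0;
}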
Post edited August 23, 2018 by Magnitus
kohlrak: You're cute. If that were the case, I'd be able to play TW3 on this thing. CPU requirements are always going up, but they go up in funny ways. RAM requirements have been climbing constantly. The trick with CPUs is that they sometimes require specific instruction sets or clock speeds. Companies have been trying to cut down on this, but in reality, the more indirection and abstraction they use, the worse this gets. Fortunately, we do have programming languages like Rust trying to turn this around, but theory and practice are two different things, especially if people aren't using the right tools.

To make matters worse, more GPU code than you realize actually gets offloaded to the CPU, which is why GPUs get away with such low clock rates.
Magnitus: Holy cow, you're right, the Witcher 3 has some pretty high CPU requirements (RAM still low though).

Guess I don't play enough recent AAA games.

So, maybe not too old. But still, at $400 (maybe add an extra $300 for a GPU upgrade), I bet that quad-core wonder would fit the bill, so I was spot-on for the price point at least: http://www.deltaserverstore.com/dell-t3600.html

You can find many such offers in stores selling second-hand hardware if you look. I still say it's worth it if you're only looking for a gaming machine.

EDIT: Actually, I looked, and this CPU line is like 6 years old, so if you go for top of the line back then, it might still be OK now.

Not sure about the instruction set, though, but I've written fairly cross-platform (Windows/Linux, any Intel CPU, bring it on, baby... plus many consoles when I was doing middleware for gaming) C++ code for years without going too much out of my way to do so. I haven't really touched the language in half a decade, but I can't believe things would have fallen by the wayside this much since then.

How much of a klutz would you have to be to write code that is locked down to a very narrow set of processors in the Intel family? I mean, with assembly, I'd get it, but with a relatively abstract programming language like C++? C'mon!
CPU instructions on x86 now include encryption, which also gives you a great built-in random number generator. For legal reasons (gambling) and speed reasons (it spares you the code for running a huge PRNG), I could see why someone would use that, but I'm not sure if it's available in ring 3.
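
For illustration, a minimal sketch of tapping that hardware RNG from C++, using the real _rdrand64_step intrinsic (compile with something like g++ -mrdrnd; it only runs on x86-64 CPUs that actually have RDRAND):

#include <immintrin.h>
#include <cstdio>

int main() {
    unsigned long long value = 0;
    // RDRAND can transiently fail, so the usual convention is a short retry loop.
    for (int tries = 0; tries < 10; ++tries) {
        if (_rdrand64_step(&value)) {  // returns 1 on success
            std::printf("hardware random: %llu\n", value);
            return 0;
        }
    }
    std::puts("RDRAND unavailable or exhausted");
    return 1;
}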

SSE is a perfect example. You can work on huge chunks of data using only a few instructions, which means you can cut down a lot on clock requirements. You can more than double the number of calculations per second by using SSE over the FPU (floating-point unit), which is the default for C++. And this is ignoring that, a while ago, x86 did floating-point arithmetic without a floating-point unit (so strings and/or BCD numbers, which are really, really slow). And I'm not even going to get into scenarios where certain ARM and Atmel CPUs don't even have a "div" instruction.
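
A small sketch of that SSE point (assuming an x86 target; SSE1 intrinsics only): the scalar loop below issues four separate additions, while _mm_add_ps does the same four in a single instruction.

#include <xmmintrin.h>  // SSE intrinsics
#include <cstdio>

int main() {
    alignas(16) float a[4]   = {1.0f, 2.0f, 3.0f, 4.0f};
    alignas(16) float b[4]   = {10.0f, 20.0f, 30.0f, 40.0f};
    alignas(16) float out[4];

    // Scalar version: four separate additions, one element at a time.
    for (int i = 0; i < 4; ++i)
        out[i] = a[i] + b[i];

    // SSE version: the same four additions in one instruction.
    __m128 va = _mm_load_ps(a);
    __m128 vb = _mm_load_ps(b);
    _mm_store_ps(out, _mm_add_ps(va, vb));

    std::printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}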

Also, the truth is, instruction-set requirements are quite common, but we don't really see it, because most of the time a particular gamer already has an up-to-date CPU anyway, and new instructions aren't all that common (but they're a huge reason to upgrade). What kills me the most, though, is when a company uses cross-platform coding even when it has absolutely no intention of shipping cross-platform. Moreover, most target machines are the same processors anyway, so there's no reason not to mix the C++ with assembly (which you can do easily, either inline, with separate sources that link together at the linker stage, or using intrinsics [the most common method]).
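
As a sketch of the inline option (GCC/Clang extended asm on x86-64; the intrinsics route shown in the SSE example above is the more common choice):

#include <cstdio>

long add_asm(long x, long y) {
    long result;
    asm("addq %1, %0"        // AT&T syntax: %0 += %1
        : "=r"(result)       // %0: output register
        : "r"(y), "0"(x));   // %1: y; x starts in the same register as %0
    return result;
}

int main() {
    std::printf("%ld\n", add_asm(40, 2));  // prints 42
    return 0;
}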

EDIT: Another thing is that "cross-CPU assembly" actually is a thing, too. Assembly has a bad rep not only for its "complexity" (it's the simplest programming language, if you take the time to think about it) but also for the fact that it doesn't hold your hand (no type checking and all those other annoying compiler errors). If you screw up, it's your own fault.
Post edited August 23, 2018 by kohlrak
kohlrak: CPU instructions on x86 now include encryption, which also gives you a great built-in random number generator. For legal reasons (gambling) and speed reasons (it spares you the code for running a huge PRNG), I could see why someone would use that, but I'm not sure if it's available in ring 3.

SSE is a perfect example. You can work on huge chunks of data using only a few instructions, which means you can cut down a lot on clock requirements. You can more than double the number of calculations per second by using SSE over the FPU (floating-point unit), which is the default for C++. And this is ignoring that, a while ago, x86 did floating-point arithmetic without a floating-point unit (so strings and/or BCD numbers, which are really, really slow). And I'm not even going to get into scenarios where certain ARM and Atmel CPUs don't even have a "div" instruction.
I'll defer to your obvious CPU expertise on those matters. I'm a bit disappointed, though; I was really hoping to get a decent secondary gaming rig, for when my friends come over, for $500-$700 on used hardware.

kohlrak: Also, the truth is, instruction-set requirements are quite common, but we don't really see it, because most of the time a particular gamer already has an up-to-date CPU anyway, and new instructions aren't all that common (but they're a huge reason to upgrade). What kills me the most, though, is when a company uses cross-platform coding even when it has absolutely no intention of shipping cross-platform. Moreover, most target machines are the same processors anyway, so there's no reason not to mix the C++ with assembly (which you can do easily, either inline, with separate sources that link together at the linker stage, or using intrinsics [the most common method]).
Well, I'll put a damper on this one, because now we're touching on some of my expertise (full-stack web developer here, specializing in scalable cloud applications).

Increasingly, an explosion of clients are running outside the desktop (hello smartphones, Raspberry Pis, consoles, and smart or embedded devices of all kinds).

And let's not even talk about the backend running in the cloud (on some abstract Intel architecture, and in some cases ARM as well nowadays).

Add to this the fact that the bottleneck of 95%+ of apps out there is not the CPU but I/O, and the case for code that is CPU-optimized to the point of being hardware-restrictive is pretty thin.

kohlrak: EDIT: Another thing is that "cross-CPU assembly" actually is a thing, too. Assembly has a bad rep not only for its "complexity" (it's the simplest programming language, if you take the time to think about it) but also for the fact that it doesn't hold your hand (no type checking and all those other annoying compiler errors). If you screw up, it's your own fault.
I think people are not doing assembly for the same reason most people are not doing C/C++ outside of gaming and system-level programming.

I'll toot my own horn a bit here and say I'm one of those few developers who can actually write flawless C++ without memory leaks, segfaults, and the weird undefined behavior (usually caused by some race condition in a multi-threaded application) that pops up once in a while. But the reality is that it will take me three times as long to write something in C++ as it would in Python, and most developers out there simply are not meticulous enough to manage it.
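
For a concrete picture of that race-condition case, a tiny sketch (illustration only, not from the post): two threads bump a shared counter, and without the mutex, increments get lost and the final value is anyone's guess.

#include <cstdio>
#include <mutex>
#include <thread>

int counter = 0;
std::mutex counter_lock;

// Data race: ++counter is a read-modify-write, so concurrent calls lose updates.
void bump_unsafe() { for (int i = 0; i < 100000; ++i) ++counter; }

// Fixed: the lock serializes each increment.
void bump_safe() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> guard(counter_lock);
        ++counter;
    }
}

int main() {
    std::thread t1(bump_safe), t2(bump_safe);  // swap in bump_unsafe to see lost updates
    t1.join();
    t2.join();
    std::printf("counter = %d (expected 200000)\n", counter);
    return 0;
}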

The CPU is not the bottleneck for most things, time to market is a big factor, and when you can do something three times as fast with a more abstract language, it's a worthwhile tradeoff most of the time.

Not saying there isn't <5% of your app that you'll want to do in C++, because you'll want to squeeze all the CPU performance you can out of that part, but it's exactly that: less than 5% of your app.
Post edited August 24, 2018 by Magnitus