For some games, modders have improved support for low-spec computers; Skyrim, for example, has mods and tools enabling ultra-low graphics settings and low-resolution textures beyond what the game normally supports.
Integrated GPUs are more like general-purpose adapters/accelerators, and over the last 10 years they have become fairly good at that role. Also, "integrated graphics" is a very broad category, ranging from the Intel HD/UHD 620 so commonly found in laptops paired with slow, single-channel RAM, to the latest Ryzen 4700G (which is a bit faster than a desktop GTX 750 Ti/GTX 660).
I'm not even mentioning (well, I am) lower-spec parts like Celerons and Atoms.
Note that GPU work is far broader than playing 3D games; generally, the newer the GPU, the better it is at decoding video codecs and accelerating web components, regardless of its raw "power".

I have no trouble playing The Falconeer, Grid Autosport, X-Com or Shadow Tactics on an Intel HD 620. What you cannot expect is an iGPU with a TDP of 15W to deliver the same performance as 150W dedicated cards with much faster dedicated memory. I can play those games, but obviously at the lowest settings and reduced resolution (compared to native 1920x1080).
That said, I kind of agree with many posts that games should not need the latest and greatest to be fun to play, but then again, most popular games today are built on really heavy cross-platform engines. Fell Seal (a stellar example) runs on pretty much any modern hardware using Unity 2018, while Beat Cop, using the older Unity 5, is far more taxing on the system. Actually, I'm kind of amazed a pixelated game like Beat Cop can run that poorly even at a very low resolution (800x600); sure, simulating a "whole" city does tax the CPU quite a bit, but there are far better-looking games that run much better...

X-Com runs well using the Unreal Engine, while Rime can only be played at low settings at 1080p on a GTX 1650 (comparable to a $250-300 card at the time the game launched).

Mobile games are a good example of what can be done with few resources available.

Random interesting facts:
The integrated GPU of a laptop i5-8250U is far more power hungry than an Nvidia MX 150 while decoding video and during general web browsing, meaning that always using the Nvidia card can actually save battery. (This CPU is very power hungry in general.)
Very old AMD/ATI 3000-series (non-HD, 2009) motherboard-integrated graphics (as found in old Phenom systems) can help decode some modern video codecs and play some games, like Into the Breach and SteamWorld Heist, at full speed, while comparable Intel iGPUs are pretty much worthless nowadays.
Integrated graphics with RAM running in dual-channel mode can be 20 to 40% faster in games, at little to no cost in power consumption, compared to single channel. Even Microsoft's Surface 3, with only 2 GB of RAM, is set up with dual channel.
Modern laptop SKU names are not indicative of performance. The same i5 CPU can be configured by the manufacturer in a number of ways and perform drastically differently in each configuration. This means an i5 can be power limited and perform worse than an i3, while still costing more.
New Intel CPUs run very HOT, and being still stuck on an old process node probably means x86 will slowly die.
Anyone looking for a desktop mini PC, take a look at the i5-8259U (NUC8i5BEK) and i3-8109U (NUC8i3BEK); those use Iris Graphics, which is a considerable step up from Intel's UHD offerings, and both are quite cheap (220-280 Euros, barebones).
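As an aside on the dual-channel point: on Linux you can check how many DIMMs are actually populated with `sudo dmidecode -t memory`. A small Python sketch of parsing that output (the sample text here is fabricated; real output varies by vendor):

```python
def count_populated_dimms(dmidecode_output: str) -> int:
    """Count memory slots reporting a real size (populated DIMMs).
    Two or more matched DIMMs usually means dual channel is possible."""
    count = 0
    for line in dmidecode_output.splitlines():
        line = line.strip()
        # Empty slots show up as "Size: No Module Installed"
        if line.startswith("Size:") and "No Module Installed" not in line:
            count += 1
    return count

# Fabricated sample resembling `sudo dmidecode -t memory` output
sample = """
        Size: 8 GB
        Locator: ChannelA-DIMM0
        Size: No Module Installed
        Locator: ChannelB-DIMM0
"""
print(count_populated_dimms(sample))  # 1 -> only one DIMM, so single channel
```

One populated slot means single channel no matter what the motherboard supports, which is exactly the laptop configuration mentioned above.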
dtgreene: Idea: Develop a game on a Raspberry Pi 4, then port it to more mainstream hardware. The game should run well on low-end devices if done this way, right?

If compile times are taking too long, develop on a more powerful system, but occasionally test on the Pi.

Bonus point: You're guaranteed that your game will run on Linux, unless you do something Pi or ARM specific, and I don't see that being likely to happen here.

(Before anybody thinks this is a joke, I'm contemplating doing exactly this.)
Wonderful idea, wonder why nobody does this...
Running on Linux is a must-have, especially if you want to reach those special 2% of gamers.
Is it possible to run the onboard GPU and a discrete GPU card at the same time on modern hardware?
thegreyshadow: What do you think?
Plenty of older games and crappy indie titles should easily be playable with onboard GPUs.
I think some of the problems come from the lack of certain hardware features, as well as from devs who can't code.
§pectre: Is it possible to run the onboard GPU and a discrete GPU card at the same time on modern hardware?
It has been tried in the past:
Hybrid CrossFire
LucidLogix Virtu

Results were disappointing and both technologies sank without a trace.

Nvidia came closest to having something passable: dedicated PhysX cards.
One GPU for graphics, the other for physics. Naturally both must be Nvidia cards.
Mortius1: Nvidia came closest to having something passable: dedicated PhysX cards.
One GPU for graphics, the other for physics. Naturally both must be Nvidia cards.
I think that was a hangover from buying Ageia, who produced the early PhysX cards. I remember that they also allowed (around the time of the GTX275) the ability to use a PhysX capable GPU as a secondary card, just running PhysX. As the capability of the GPU increased, I think they quietly shelved the concept of running a GPU as a dedicated PhysX card.

Either way, regarding the OP's opening remarks, I'm not convinced that laptops are dominating the gaming market at the expense of desktops. I think that most people in developed economies who wish to game either have a desktop or a number of different devices, depending on what they want to do. While it would be great if a game could be turned down to run on an integrated GPU, broadly, this is going to be too challenging for a development team: the additional time and resources spent on increasing team sizes to optimise for outdated hardware (which iGPUs are) would increase prices. The current mix we have (games with high graphical fidelity but perhaps too-high system requirements, alongside indie-type titles with "worse" graphical fidelity but low system requirements) seems to be a reasonable balance.
Dark_art_: Fell Seal (stellar example) runs on pretty much any modern hardware using Unity 2018, while Beat Cop is way more taxing on the system, using older Unity 5.
Also, since Fell Seal is strictly turn based, having some slowdown or lag spikes won't cause major issues. Sure, it might be annoying, but it's unlikely to get you killed; even if you have to escape an area quickly, time will not pass if you don't provide any input.

(Well, there are the missions added by the DLC, but they're not a problem, for a couple of reasons:
* The system clock is used for them, and they progress even when the game is not running (and hence they need no system resources).
* The clock is not a time limit; rather, it shows the amount of time the mission takes, so there's no need to hurry; in fact, just the opposite is the case here.)

§pectre: Is it possible to run the onboard GPU and a discrete GPU card at the same time on modern hardware?
With virtual machines and GPU passthrough, you can. You won't be using them for the same game, but you could be running a demanding game with one GPU and a *different* demanding game with a different GPU (assuming the CPU is powerful enough to handle both, and every relevant piece of hardware supports this (which means no consumer-level nvidia card for the guest)).

This is, however, a rather unusual configuration, unless we're talking about a cloud hosting provider that offers GPUs (for an extra fee, of course).
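For the curious, the passthrough setup described above usually means binding the guest GPU to the vfio-pci driver and adding a hostdev entry to the VM's libvirt domain XML. A minimal sketch of that entry (the PCI address shown is a placeholder; substitute the address of your second GPU as reported by `lspci`):

```xml
<!-- Hypothetical PCI address (0000:01:00.0) - replace with the address
     of the GPU you want to hand to the guest -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

The host keeps rendering on its own GPU (often the iGPU) while the guest gets exclusive use of the passed-through card.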
Post edited March 08, 2021 by dtgreene
I'll chime in with everyone else. Integrated GPUs are built for and aimed at business use. They are cheap and nasty, and not meant for anything more than very light gaming if you need to game on them.
thegreyshadow: While my paltry Intel UHD 620 onboard GPU can play some very good games with great results, there are others which make it struggle (such as The Witcher 2, a game already 10 years old!).

Making games which are unplayable (even at the lowest settings) on systems without discrete GPUs is counterproductive for game companies.

First of all, GPU production is experiencing (and has experienced since the global cryptocurrency explosion) chronic shortages and problems in availability.

Secondly, there is an absurdly large market of PCs with onboard GPUs.

Excluding such a computer base from your latest game simply imposes an unreasonable limit on your potential audience.

Games can have Crysis-like, ray-tracing-level GPU demands at their highest tiers; I get that. But developers should also ensure that their games are playable at the lowest settings on onboard GPUs such as Intel UHD chipsets.

What do you think?
I think that hardcore gamers with disposable incomes are the target audience. People who have computers on the cheap aren't likely to spend much on games, or be enthusiasts. It's like going to a restaurant. If a waiter has a table where they ordered beers vs a table where they ordered water, the waiter will spend more time with the table that ordered beers to try and get a bigger tip from the bigger spenders.
Games have their specs listed as minimum and recommended specs, and that's really all they need. If whatever PC you play on meets the specs, then you can play the game, and if it doesn't, you can't.

Honesty and transparency are all that's needed here.
jepsen1977: Games have their specs listed as minimum and recommended specs, and that's really all they need. If whatever PC you play on meets the specs, then you can play the game, and if it doesn't, you can't.

Honesty and transparency are all that's needed here.
Except that the progression is not linear.

You can have a situation where game A can be played on system X but not system Y, while B can be played on Y but not X.

With GPUs, for example, older GPUs might not support standards supported by more recent ones, even if the older one was high end back in its day and the more recent one is the iGPU of a Celeron or Atom.

Also, the naming conventions can be confusing. For example, 4000 > 620, but in terms of GPU performance, Intel HD 4000 < Intel UHD 620 (I *think*, someone correct me if I'm wrong).
paladin181: I think that hardcore gamers with disposable incomes are the target audience. People who have computers on the cheap aren't likely to spend much on games
I share the same opinion, and let me add that "those" people have moved to mobile.
I can't recall a single person in my friend circle who plays primarily on an iGPU laptop. They either have a desktop or a beefy laptop, or they play on mobile and use the computer for other stuff.
paladin181: I think that hardcore gamers with disposable incomes are the target audience. People who have computers on the cheap aren't likely to spend much on games
Dark_art_: I share the same opinion, and let me add that "those" people have moved to mobile.
I can't recall a single person in my friend circle who plays primarily on an iGPU laptop. They either have a desktop or a beefy laptop, or they play on mobile and use the computer for other stuff.
I'd go so far as to argue that, except in limited circumstances where they have no money (people who aren't going to be a target for AAA developers), the only people gaming primarily on iGPUs are those who solely target old emulated systems (such as Playstation, Megadrive) or "retro" indie titles.
Mortius1: Nvidia came closest to having something passable: dedicated PhysX cards.
One GPU for graphics, the other for physics. Naturally both must be Nvidia cards.
pds41: I think that was a hangover from buying Ageia, who produced the early PhysX cards. I remember that they also allowed (around the time of the GTX275) the ability to use a PhysX capable GPU as a secondary card, just running PhysX. As the capability of the GPU increased, I think they quietly shelved the concept of running a GPU as a dedicated PhysX card.

[...]
It was actually counterproductive, since the additional geometry generated for the particles added to the workload; in other words, more geometry had to be rendered, which lowered framerates.
Then there was the need to pair the existing card with a second expensive GPU dedicated just to physics; basically, people who already ran SLI could get some benefit in some games.

Lastly, it was a proprietary middleware technology which could run on the CPU but, typical Nvidia, was locked to their hardware and purposely performed like sh*t on anything else.
PhysX was used for physics calculations in Unreal Engine up until 4.23 and ran on the CPU with no problem whatsoever.
dtgreene: Also, the naming conventions can be confusing. For example, 4000 > 620, but in terms of GPU performance, Intel HD 4000 < Intel UHD 620 (I *think*, someone correct me if I'm wrong).
Naming is just that, and has more to do with feature set than power. (The HD 620 is the same as the UHD 620 regarding 3D performance. The HD 4000 series was used on older CPUs, like Ivy Bridge, Haswell and Broadwell, and can vary a lot performance-wise.)

That said, minimum requirements should list the required feature set, not specific devices.
Also, they should list the amount of RAM the game itself uses, not total system RAM. Some games pretty much require an open browser for info, to say nothing of modern users who play while watching streams, running Discord and the many resource-hog launchers that auto-start with the system.
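A feature-set check could be as simple as parsing `glxinfo` output for the supported OpenGL version, instead of matching device names. A rough Python sketch (the required version and the sample string are made up for illustration; in real use the output would come from running `glxinfo` via `subprocess`):

```python
import re

def supports_opengl(glxinfo_output: str, required: tuple[int, int]) -> bool:
    """Return True if the OpenGL core profile version reported in
    `glxinfo` output meets the required (major, minor) version."""
    match = re.search(
        r"OpenGL core profile version string: (\d+)\.(\d+)", glxinfo_output
    )
    if not match:
        return False  # no parsable version -> assume unsupported
    version = (int(match.group(1)), int(match.group(2)))
    return version >= required

# Fabricated sample line resembling real glxinfo output
sample = "OpenGL core profile version string: 4.6 (Core Profile) Mesa 20.3.5"
print(supports_opengl(sample, (4, 3)))  # True: 4.6 meets a 4.3 requirement
```

Checking capabilities this way sidesteps the confusing SKU naming entirely: it doesn't matter whether the chip is called HD 4000 or UHD 620, only what it can actually do.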
Post edited March 08, 2021 by Dark_art_