Posted November 24, 2015
JayC667: Okay, I've got the same problem (virtual dual-core at 50%), so I took a look.
There is one rather simple fact of programming that I think is responsible for this phenomenon of "old" games using up all the CPU.
Most processors effectively run at different speeds: even when the clock rate (MHz) is the same, the actual speed can differ due to CPU architecture, bus types, the operating system and other factors.
So in order to run at a proper speed, games have to find a way of determining how fast their processor is running. Most do that by measuring how many simulation cycles can be performed within a certain time. Since most games' logic processing is directly coupled to their rendering (they use only one main loop, not multiple threads), this is what you experience, more or less, as FPS, i.e. frames per second.
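A minimal sketch of that kind of measurement, assuming C++ and std::chrono (runOneSimulationCycle is a made-up placeholder, not any real engine function):

#include <chrono>
#include <iostream>

// Hypothetical stand-in for one logic + rendering cycle of the game.
void runOneSimulationCycle() { /* ... game work would go here ... */ }

int main() {
    using clock = std::chrono::steady_clock;

    // Count how many full cycles fit into one second of wall-clock time.
    const auto start = clock::now();
    long cycles = 0;
    while (clock::now() - start < std::chrono::seconds(1)) {
        runOneSimulationCycle();
        ++cycles;
    }

    // "cycles" is effectively this machine's maximum FPS for this loop;
    // the game can scale its logic speed based on that number.
    std::cout << "Measured " << cycles << " cycles per second\n";
}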
Due to restrictions in the graphics framework (DirectX, Glide, OpenGL), rendering is usually "capped" at 60 or 100 FPS; you won't get more FPS unless you trick your system. This is actually a good thing, and we will see why later.
Now let's say the game could do 80 simulation (logic & rendering) cycles per second. You would actually get 80 pictures/frames per second, but your CPU and GPU would run at maximum power. Our human eyes and brains can only process about 12 distinct images per second, but we still notice the transitions between those images up to around 24.7 (I think) images per second (so the movie/game feels not quite smooth). This is the reason why most video standards settle somewhere around 25 FPS (PAL etc.).
[Old CRT monitors ran at higher refresh rates (60/70/72/90/100 Hz) because of visible flicker, interference with other light sources, and the way CRTs worked in general. That's a big topic in itself, but it has become more or less obsolete with newer display technologies like plasma and LCD screens.]
So back to the topic: the game could actually run at 80 FPS, but our eyes and brains already accept 25 or 30 FPS as perfectly smooth. So those additional 50 FPS are just wasted energy (yes, your PC/laptop uses more energy at higher performance; as a rule of thumb, heat production grows roughly with the square of the speed, so twice the speed gives about four times the heat, i.e. about four times the energy. This doesn't apply to the whole PC system in general, but smaller effects of it are still observable).
So to avoid wasting that energy, and to allow other applications to run in the meantime, games nowadays limit themselves to a certain FPS rate. In fairly modern games there is usually even an in-game option for this.
The problem is: to limit FPS, the program needs a way to suspend itself for a very short amount of time (milliseconds). Most people know this as sleeping; in multi-threaded environments it can also be implemented by yielding or waiting.
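In C++ terms, those three options look roughly like this (just a sketch of the standard-library calls, not a recommendation of one over the other):

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

int main() {
    using namespace std::chrono;

    // 1. Sleeping: give up the CPU for (at least) a fixed amount of time.
    std::this_thread::sleep_for(milliseconds(10));

    // 2. Yielding: offer the rest of the current time slice to other threads.
    std::this_thread::yield();

    // 3. Waiting: block on a condition variable until notified (or until the
    //    timeout expires, or a spurious wakeup occurs) - useful when another
    //    thread decides when to continue.
    std::mutex m;
    std::condition_variable cv;
    std::unique_lock<std::mutex> lock(m);
    cv.wait_for(lock, milliseconds(10));
}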
So back to the early days of computers and computer games: there simply were no proper sleep functions around (and nobody cared about multithreading). Either the sleep function was not accurate enough, so your game stuttered, or the sleep intervals were too big, with the effect that the FPS dropped significantly below 30.
[By the way: even today, plain C/C++ standard-library sleep functions can have poor accuracy, somewhere between 20 and 40 ms, so you would get a lot of stuttering. This can be remedied by calling operating system (API) timing functions, which usually offer resolutions in the microsecond range (around 0.004 - 0.04 ms), making them much better suited for this kind of sleeping - but as you can see, little timing problems exist even today.]
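As an illustration, one common workaround in modern C++ is to sleep for most of the interval and then busy-wait for the last bit; the exact accuracy still depends on the OS scheduler, so this is a sketch, not a guarantee:

#include <chrono>
#include <thread>

// Wait until the given deadline: coarse sleep first, then a short busy-wait
// to compensate for the limited accuracy of sleep itself.
void preciseWaitUntil(std::chrono::steady_clock::time_point deadline) {
    using namespace std::chrono;

    // Sleep until ~2 ms before the deadline (the margin is an assumption,
    // chosen to cover typical scheduler granularity).
    const auto coarseDeadline = deadline - milliseconds(2);
    if (steady_clock::now() < coarseDeadline) {
        std::this_thread::sleep_until(coarseDeadline);
    }

    // Spin for the remaining fraction of a millisecond.
    while (steady_clock::now() < deadline) {
        std::this_thread::yield();
    }
}

int main() {
    // Example: wait roughly 33 ms (one 30 FPS frame) as precisely as possible.
    preciseWaitUntil(std::chrono::steady_clock::now() + std::chrono::milliseconds(33));
}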
So most games just measured how many FPS they could simulate at most, and corrected their logic speed to account for that. That way the gamer experienced more or less the same speed on every PC, even though the actual FPS differed greatly from machine to machine.
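A minimal sketch of that kind of correction in C++ (positionX and speedPerSecond are made-up example values): the logic is scaled by the measured frame time, so the perceived speed is the same whether the loop manages 30 or 300 FPS.

#include <chrono>

int main() {
    using clock = std::chrono::steady_clock;

    double positionX = 0.0;            // hypothetical object position
    const double speedPerSecond = 5.0; // units the object should move per real second

    auto previous = clock::now();
    for (int frame = 0; frame < 1000; ++frame) {
        const auto current = clock::now();
        const std::chrono::duration<double> delta = current - previous; // seconds since last frame
        previous = current;

        // Scale the movement by the real elapsed time, not by "one frame".
        positionX += speedPerSecond * delta.count();

        // ... rendering, input handling, etc. would go here ...
    }
}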
One example that did not take this into account was Command and Conquer, where you noticed the difference especially in multiplayer: if your system ran twice as fast as your friend's, your bases produced at twice the speed, your units moved at twice the speed, and so on.
Back to "why does it run at 100/50/25 percent of my CPU"?
Simple: because the game simulates as fast as possible, but adjusts its logic so the user does not feel it running faster. And because it can only use one processor core, only that core runs at maximum speed, so you get an overall CPU usage of 1/c * 100% (where c is the number of cores in your system) - 50% on a dual core, 25% on a quad core.
Actually, old games on modern systems could run at 3000 FPS or more. Due to the intentional limitations of the graphics frameworks mentioned above, it is unusual to see rates higher than 100 FPS, but some games do reach frame rates of 250 or more.
Now modern games do something else: they usually leave the timing completely to the framework and concentrate only on their game logic, because both timing and graphics are handled by the framework.
What the framework basically does is run a main loop and call the game's logic procedure for every frame. From system calls it knows the time pretty accurately, down to nanoseconds. A game that limits itself to 30 FPS would look something like this:
timePerCycle = 1000ms / 30 FPS; // so around 33 ms per cycle
mainloop {
    startTime = now();
    // logic and graphics
    processLogic(timePerCycle);
    renderGraphics();
    // timing
    lastCycleLength = now() - startTime; // how long the simulation of the last cycle took, say 6 ms
    restWaitTime = timePerCycle - lastCycleLength; // 33 - 6 = 27 ms
    if (restWaitTime > 0)
        sleep(restWaitTime); // now sleep those 27 ms (skip if the cycle already took longer than 33 ms)
} // go back to the start of the main loop
Modern games then only implement the processLogic() part themselves.
So the game actually only spent 6 ms on processing and rendering, but slept for 27 ms. 6/(6+27) = 6/33 ≈ 0.18, which means the game process runs at roughly 18% CPU usage on that core.
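For concreteness, here is a minimal runnable C++ version of that 30 FPS loop under the same assumptions (processLogic and renderGraphics are placeholders, not any real engine API):

#include <chrono>
#include <thread>

// Placeholder hooks for the game's own code.
void processLogic(std::chrono::milliseconds /*timePerCycle*/) { /* ... */ }
void renderGraphics() { /* ... */ }

int main() {
    using namespace std::chrono;
    const milliseconds timePerCycle(1000 / 30); // ~33 ms per cycle for 30 FPS

    while (true) {
        const auto startTime = steady_clock::now();

        processLogic(timePerCycle);
        renderGraphics();

        // Sleep away whatever is left of the ~33 ms budget (but never a negative amount).
        const auto lastCycleLength = duration_cast<milliseconds>(steady_clock::now() - startTime);
        const auto restWaitTime = timePerCycle - lastCycleLength;
        if (restWaitTime > milliseconds(0)) {
            std::this_thread::sleep_for(restWaitTime);
        }
    }
}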
Comparing this to an old game:
Assume that the old game also needs 6 ms for logic and graphics.
Then it would run at (1 second = 1000 ms) / 6 ms ≈ 166.7 FPS, but using all the processing power it can get. That's what usually happens with older games.
Knowing WHY is all very well, thanks very much for that, but the question is how to *change* it so that it doesn't take up 100% of CPU?