hedwards: Nintendo used to do the right thing in that regard with their GB line. The later revisions would play the same games, but had sharper graphics anyway. It's not something I foresee being reasonable with current generation consoles though. They were able to do that with the GB line because of the way the games were coded; the improvements didn't require that the games be recoded.
rtcvb32: All in all, the CPU/hardware didn't change except perhaps for the screen. As for the GBC, newer games included palette data saying which color each of the four grayscale shades should map to, making things more pleasing, and the GBC itself had built-in color palette choices for all previously released games. It's not really a hard scheme to come up with, unless the colors need to change mid-game for some reason.
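The scheme amounts to a lookup table. Here's a minimal sketch in C, assuming the platform picks one of its built-in palettes per cartridge; the palette values, table, and selection logic are all invented for illustration, not the actual GBC implementation:

```c
#include <stdint.h>
#include <stdio.h>

/* One RGB color per grayscale shade (0-3). Values invented for illustration. */
typedef struct {
    uint32_t shade_to_rgb[4];
} Palette;

/* Hypothetical built-in palette table; the real GBC selected palettes
   based on data read from the cartridge header. */
static const Palette builtin_palettes[] = {
    { { 0xFFFFFF, 0xAAAAAA, 0x555555, 0x000000 } }, /* plain grayscale */
    { { 0xFFFFE0, 0xF0A060, 0x905020, 0x301000 } }, /* warm browns */
    { { 0xE0FFE0, 0x60C060, 0x208020, 0x003000 } }, /* greens */
};

/* The original game only ever outputs shades 0-3, so recoloring is a
   pure lookup -- no change to the game code is needed. */
static uint32_t colorize(uint8_t shade, const Palette *p)
{
    return p->shade_to_rgb[shade & 3];
}

int main(void)
{
    const Palette *p = &builtin_palettes[1]; /* pretend the cartridge picked this */
    for (unsigned s = 0; s < 4; s++)
        printf("shade %u -> #%06X\n", s, (unsigned)colorize((uint8_t)s, p));
    return 0;
}
```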
Reminds me a bit of the scheme used by Apple computers, where the exact same code looked good whether you had a black & white monitor or a color one. As I recall, it had to do with how the video signal was output and encoded to be interpreted.
But we're not talking about a difference of screens, we're talking about larger, more impactful changes. Higher CPU speeds could greatly affect how the physics work, making the game easier or harder. More RAM could expose bugs that went totally unnoticed with less memory and tighter memory management. More GPU cores might mean nothing, since code written against the fixed core count known at the hardware's release could leave the extra GPU power sitting idle, or could become glitchier due to race conditions.
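To make the physics point concrete, here's a minimal C sketch assuming a game that steps its simulation once per rendered frame with a hard-coded increment; the names and numbers are invented. On hardware that renders twice as many frames, the whole game world runs twice as fast:

```c
#include <stdio.h>

/* Per-frame physics with a hard-coded step: only correct if the
   hardware delivers exactly the frame rate the developers tuned for. */
static const double VELOCITY   = 5.0;      /* units per second */
static const double TUNED_STEP = 1.0 / 30; /* devs assumed 30 fps */

static double run_one_second(int fps)
{
    double position = 0.0;
    for (int frame = 0; frame < fps; frame++)
        position += VELOCITY * TUNED_STEP; /* same step regardless of fps */
    return position;
}

int main(void)
{
    /* The same one second of gameplay on two console revisions. */
    printf("original hardware (30 fps): %.1f units\n", run_one_second(30));
    printf("faster hardware   (60 fps): %.1f units\n", run_one_second(60));
    /* Prints 5.0 vs 10.0 -- the game world literally runs twice as fast. */
    return 0;
}
```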
CPU speed could outright break the game. True, this isn't back when we had 20 MHz systems and the 66 MHz machines came out with a Turbo button to limit the CPU so the older programs wouldn't zoom by. It's said that for every 10 lines of code there's a bug, and games and OSes made today are hundreds of millions of lines of code. Even tiny changes could make a huge difference. How much, I'm really not sure. It depends on how much everything relies on standards, standards we aren't told about, be they hardware, software, API, OS, or whatnot. We are totally in the dark.
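One classic way that breakage happened: delays implemented as busy loops whose iteration count was calibrated to a single clock speed. A hedged sketch, with the calibration constant invented for illustration:

```c
#include <stdio.h>

/* A busy-wait "timer" of the kind old software used. The loop count was
   calibrated on one CPU and is meaningless on a faster one; the constant
   below is invented for illustration. */
#define LOOPS_PER_MS_ON_ORIGINAL_CPU 2000UL

static void delay_ms(unsigned long ms)
{
    /* volatile stops the compiler from optimizing the loop away */
    for (volatile unsigned long i = 0; i < ms * LOOPS_PER_MS_ON_ORIGINAL_CPU; i++)
        ; /* burn cycles */
}

int main(void)
{
    /* On the CPU this was tuned for, roughly 1000 ms. On a CPU three
       times as fast, the same count burns through in roughly 300 ms, so
       pauses, animations and input windows all run too fast -- hence the
       Turbo button, which slowed the CPU back down so loops like this
       behaved again. */
    delay_ms(1000);
    printf("delay done\n");
    return 0;
}
```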
With larger changes the compatibility challenges become significantly greater. Why would anybody upgrade if the games didn't require it? And if the games did require it, then the console has taken on the same complications PC gamers deal with, without the benefits of using a computer.
I'm sure it's possible to make this work in a way that makes sense, but I can't see it being a profitable route to take. The profitable portion of this was already established in the past: charge for controllers, HDDs and the like.
And you're more or less completely right about the GB. I think the later revisions did come with some extra memory or something like that, but the changes visible to developers were limited, so the adjustments were mostly in the hardware itself, allowing any GB to play any GB game without issues.
hedwards: And those are all things that would require massive restructuring of the hardware, and/or create a situation where owners have to worry about which revision they've got to see if a program will run on it. That's really not the market consoles are in. People who don't mind that buy computers.
Also, nobody who isn't rich is going to pay for most of those things. For instance, 4K is for theaters and people producing video, not for consumers. By the time you get your nose close enough to see the pixels, you're already losing sight of the edges of the screen. I remember the first time I saw a big-screen HDTV: I had to get pretty damn close before I could make out the pixels, and I've got excellent eyesight.
Tallima: And as with the Kinect, 3MB video RAM cartridges and Nunchuks, MS will be able to clearly distinguish what is runnable on which hardware. Xbox One VR games will require the VR add-on, which you'll know you have because you'll have a VR add-on. The PS4 is doing it and nobody seems to have a problem.
Vertical and horizontal mounting options, home networking capabilities and multi-TV outputs would be simple to add without people going nuts that something's not working. If it's compatible, send different images. If it's not, send the same image to all TVs. Easy peasy.
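That fallback is easy to express in code. A minimal sketch, assuming a capability query the platform could expose; the function name and behavior are hypothetical, not any real console API:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical capability query -- the name and behavior are invented;
   no real console is known to expose an API like this. */
static bool supports_independent_outputs(void)
{
    return false; /* pretend we're running on the base model */
}

static void render_frame(int num_displays)
{
    if (supports_independent_outputs()) {
        /* compatible hardware: send a different image to each TV */
        for (int d = 0; d < num_displays; d++)
            printf("display %d: unique view\n", d);
    } else {
        /* base hardware: mirror one image to every output */
        printf("all %d displays: same mirrored image\n", num_displays);
    }
}

int main(void)
{
    render_frame(2);
    return 0;
}
```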
The point MS is making is that hardware innovations can happen more easily when you have software that is malleable. When software is locked onto the hardware, it's nearly impossible to make changes and keep compatibility.
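In other words, the difference is between games poking the hardware directly and games calling through a driver layer. A hedged C sketch of the contrast; the register address and the gpu_submit() API are both invented for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* "Locked onto the hardware": the game writes a magic register address
   directly. If a revision moves or removes that register, every shipped
   game breaks. (Address invented for illustration.) */
#define GPU_BLIT_REG ((volatile uint32_t *)0x1F801810)

static void blit_direct(uint32_t cmd)
{
    *GPU_BLIT_REG = cmd; /* valid only on one exact hardware layout */
}

/* "Malleable software": the game calls an API and the system software
   translates it for whatever silicon is underneath. gpu_submit() is a
   hypothetical stand-in for such an API. */
static void gpu_submit(uint32_t cmd)
{
    printf("driver maps command 0x%08X to the current GPU revision\n",
           (unsigned)cmd);
}

int main(void)
{
    gpu_submit(0xA0DEADu);
    /* blit_direct(0xA0DEADu); -- would fault anywhere but the original HW */
    (void)blit_direct; /* silence the unused-function warning */
    return 0;
}
```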
It's not a dumb idea to retain backwards compatibility while changing mounting, power or size options. It's brilliant to me. And no other console generation has ever pulled it off. Even the PS4 can't do it; it relies on cloud-based computing for backwards compatibility.
So, yes, there are add-ons and changes they can make. But it's not just about add-ons. It's about keeping the console running everything very stably across hardware changes that make the system cheaper, stabler, smaller, less power hungry and, to the great fear of everyone, possibly even more feature-rich.
P.S. The PS4 has 4K and soon VR, and it's already finding a market. So this is all marketable stuff.
That sounds terribly confusing. MS might know, but the people buying things would then have to do a lot more research about whether or not a game is going to work with their console. The main benefit of having a console is that you don't have to think about things like compatibility. Any PS3 game should work with any PS3 console. Same goes for the XB360, PS4, XBONE, etc.
Introducing those kinds of upgrades just fragments the market and requires customers to do more research before buying games.