Lionel212008: I am not saying that an M1-like design approach is a panacea.
However, it is a design that does make a lot of sense.
Does the future look more ARM for the mainstream?
The biggest issue with x86 vs. ARM CPUs is legacy support. That's the reason why x86 is still around, why Windows 10 still has a floppy disk driver, and why x86 is so power (electricity) hungry. Microsoft tried hard in years past to get x86 into smaller and smaller form factors and failed due to power requirements and user interface design. (UMPCs anyone?) All of that failure came down to legacy support: power because of the CPU requirements, UI because of expectations and assumptions made by applications and their developers. (Mouse? Must be present. Display? At least 800x600. Full keyboard? Absolutely. CPU power? Full throttle constantly. Memory? As much as we want. Fat-finger touch input? What is that?)
As another legacy support example, it took decades to get rid of the 16-bit BIOS infrastructure, and in some cases it's still present in today's systems as UEFI CSM mode. That deprecated infrastructure required x86 CPUs to boot in 16-bit mode, and the CPU is still expected to switch to 16-bit mode whenever UEFI CSM code is executed today. That requirement alone means 16-bit mode can't be removed from the CPU design. Removing it would make OSes like WinXP, Vista, and to a lesser extent Win7 and Win8 unbootable on systems without the mode. (Even though all of those operating systems only use the BIOS support during initial startup.)
It's also not a simple case to get rid of the legacy support either. Many places, governments, companies, etc. rely on that legacy support to run their operations. There was a US state that recently said their COVID stimulus checks would be delayed due to computer programming issues. It turned out their unemployment system is still written in COBOL, a programming language that is 60 years old. Upgrades are not cheap, and in some cases an "upgrade" would require either complete replacement of entire infrastructure or complete rewrites / recreation of source code, all of which is usually cost prohibitive.
Another problem is ARM's lack of the standards that are taken for granted on x86. An example of this is hardware detection. On x86 you have things like ACPI tables in the firmware that tell the OS what hardware is present and where in memory it is located. On ARM that information typically has to be compiled into the OS image or handed over by the device's bootloader, usually as a device tree (see the sketch below). That is one of the reasons why upgrading things like Android devices is so painful and typically not done by manufacturers.
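To illustrate, here is a minimal device tree source fragment for a hypothetical ARM board (loosely modeled on QEMU's virt machine; the board name and addresses are made up for the example). The kernel only knows this UART exists, and where it lives in memory, because the file says so; on x86 the equivalent information would be discovered from ACPI tables supplied by the firmware.

    /dts-v1/;
    / {
        model = "Example ARM board";            /* hypothetical board */
        compatible = "example,arm-board";
        #address-cells = <2>;
        #size-cells = <2>;

        /* The OS has no way to probe for this UART; the device tree must describe it. */
        uart0: serial@9000000 {
            compatible = "arm,pl011", "arm,primecell";
            reg = <0x0 0x09000000 0x0 0x1000>;  /* MMIO base address and size */
            interrupts = <0 1 4>;               /* interrupt routing, board specific */
            clocks = <&apb_clk>, <&apb_clk>;
            clock-names = "uartclk", "apb_pclk";
        };

        apb_clk: clock {
            compatible = "fixed-clock";
            #clock-cells = <0>;
            clock-frequency = <24000000>;
        };
    };

Every board ships its own version of this file with the kernel or bootloader, which is exactly the per-device maintenance burden described above.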
None of this is to say ARM doesn't have utility or that x86 cannot be fixed, but that each architecture has its uses and advantages. You probably won't be running the next up-and-coming AAA game on an ARM device, but if you just want some Facebook / YouTube on the go, ARM is a pretty good choice. Don't care about power consumption, and need raw compute power / legacy application support? x86 has you covered. It's about using the right tool for the right job. In that regard, the future remains bright for both platforms.
dtgreene: Case in point: Try creating a file with the name "con" (with *any* extension) on a Windows system and see what happens.
osm: I'm sure there are a lot of quirks like that with the mammoth crappile that is Win, but I don't even have Crapdose on my desktop. Will try on a work system. Will it kill kittens?
It's a reserved file name for legacy reasons. (Back in the early days of DOS, devices were exposed to programs as special file names: CON for the console, PRN for the printer, AUX for the auxiliary serial device, and COM1, COM2, etc. for individual serial ports. Windows was originally a GUI frontend application for DOS and as such had to abide by DOS's restrictions. Those limitations were kept in Win95, Win98, and WinME as they were still DOS under the hood. They were then kept in the transition to WinNT, Windows 2000, and WinXP for compatibility reasons, and they still exist today as old legacy code kept around from back then.)
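For anyone curious without a Windows box handy, here's a minimal sketch in Python of what the quirk looks like (assuming classic Win32 path handling; the exact behavior can vary by Windows version):

    # Sketch: trying to create "con.txt" on Windows.
    # Any name whose base part is a reserved DOS device (CON, PRN, AUX, NUL,
    # COM1..COM9, LPT1..LPT9) is mapped to that device, extension or not.
    with open("con.txt", "w") as f:
        # No con.txt appears on disk; the text goes to the console device instead.
        f.write("hello from the console device\n")

File Explorer and most applications typically refuse the name outright, reporting that the device name is invalid.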