How come the CPU is the "new target" now? So far AMD's CPUs are sufficiently energy efficient and true all-rounders, which is important for the tasks asked of them.
The main focus for improvements in the next few years is clearly the GPU segment; AMD especially will need to improve a lot there.
ARM might be a thing, but mainly in the portable market, where every saved watt counts... and compatibility is less of an issue.
Especially on powerful gaming hardware, we just need 8 (maybe up to 16 in the future) very powerful cores... not a sea of low-powered, ultra-high-efficiency "mini cores" (which is an ARM weakness as well)... as games have certain calculations that do not scale well with massive parallelization across CPU cores. Sure, some tasks are friendly to this, even in games, but a game still mainly needs very powerful cores, and fast cache is very useful as well, which is why X3D is such a great thing.
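That scaling limit can be sketched with Amdahl's law: if only part of a frame's work can run in parallel, adding cores quickly stops helping. A minimal sketch (the 0.6 parallel fraction is an assumed, illustrative number, not a measurement of any real game):

```python
# Amdahl's law: speedup on n cores when only a fraction p
# of the work can be parallelized.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Assumed, illustrative parallel fraction of a game's per-frame work.
p = 0.6
for cores in (4, 8, 16, 64):
    print(cores, round(amdahl_speedup(p, cores), 2))
# 4 1.82
# 8 2.11
# 16 2.29
# 64 2.44
```

With these assumed numbers even 64 cores stay under the 2.5x ceiling (1 / 0.4), which is why a handful of very fast cores beats a swarm of efficient slow ones for this kind of workload.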
One of the biggest things is indeed very quick cache, even on the GPU. This is a matter Mark Cerny brought up at some point: AI can produce a huge storm of operations every second, but without a cache architecture of sufficient speed it can barely be fully utilized. So, for gaming and AI use, memory can never be too fast; this is a special matter that ARM does not sufficiently address. Although I guess Nvidia will have to work on it, if they want to create their own ARM chips in combination with their "AI hardware".
The 5090 will provide nearly 2 TB/s of bandwidth, which looks crazy but is totally required for a GPU of this performance level. Still not fast enough for certain AI calculations, it seems... so it may need some specialized cache architecture.
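A rough back-of-the-envelope calculation shows why even ~2 TB/s is not enough on its own and why on-chip cache has to carry most of the traffic (the ~100 TFLOP/s compute figure here is an assumed round number for illustration, not a spec):

```python
# Arithmetic-intensity sketch with assumed round numbers, not specs.
flops_per_s = 100e12   # assumed ~100 TFLOP/s of GPU compute
bandwidth = 2e12       # ~2 TB/s of VRAM bandwidth

# Bytes the memory system can deliver per floating-point operation:
bytes_per_flop = bandwidth / flops_per_s
print(bytes_per_flop)          # 0.02 bytes per FLOP

# A naive FP32 operation reading two 4-byte operands straight from
# VRAM would need 8 bytes per FLOP -- far more than available:
print(8 / bytes_per_flop)      # 400.0
```

So under these assumptions, VRAM can feed only about 1 in 400 operand bytes; everything else has to be reused out of registers and cache, which is the point about cache architecture above.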
This is not a GPU-only matter, as we already know that most games enjoy fast cache there as well. Gaming hardware simply has different needs; it is not all about efficiency and nothing else. As well... AI and gaming demands are pretty close... so good gaming hardware is pretty much good AI hardware too.
timppu: Maybe we should at least leave x86 behind and start using more energy-efficient ARM-based CPUs, like Snapdragon?
Palestine: I completely agree that there should be a dramatic shift in favor of increasing the adoption (and availability) of desktop and laptop RISC processors.
x86 is an architecture able to support CISC and RISC, as RISC is basically implemented inside the CISC architecture. However, no matter how it's done, x86 will always lose some efficiency in exchange for higher compatibility and an in general much more complex instruction set. Sure, if you use a car with an engine several times the size, so it can handle any demand in every situation, it will never reach the same efficiency as an engine several times smaller and specialized for one very specific task. That's not what x86 is here for; it simply needs high flexibility without too much loss of efficiency, which is a challenge AMD/Intel will simply have to tackle. I do not think x86 is an "obsolescent" architecture, as it has another focus... that focus is high performance, even if it may cost some efficiency, and a very wide range of resource management that is never "the most efficient thing" in general.
Sure, it is always possible to build something much more efficient... but it would mean losing a lot of flexibility in resources, and it would always mean using pretty specialized software solutions, a matter Apple has been working on for many years already. The compatibility of Apple hardware is in general very limited, but using this "specialized approach" Apple indeed was able to become very energy efficient.