If you have even a passing interest in IT and computer technology, it can hardly have escaped you that a slow landslide is under way at the heart of our computers. Intel’s time-honored x86 architecture is clearly reaching its limits, while the ARM architecture still offers more than enough breathing room. Apple is the first major computer manufacturer to consider the Intel chapter closed.
Let’s start with the basics, in very simple form. CISC stands for Complex Instruction Set Computer and RISC for Reduced Instruction Set Computer, and those names describe the two approaches rather well. A CISC processor can execute a complex operation in a single instruction (although, as you will read later, that is a bit of an illusion), while a RISC processor needs several instructions to achieve the same thing. In other words: CISC puts the emphasis on hardware, RISC on software.
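To make that contrast concrete, here is a toy sketch in Python. The instruction names in the comments (ADD_MEM, LOAD, ADD, STORE) are invented for illustration; they are not real x86 or ARM mnemonics.

```python
# Toy model: a memory cell plus an instruction count for each style.
# Mnemonics in the comments are invented, not real x86/ARM opcodes.

def cisc_add(memory, addr, value):
    """CISC style: one complex instruction does load + add + store."""
    memory[addr] += value          # ADD_MEM addr, value
    return 1                       # 1 instruction fetched

def risc_add(memory, addr, value):
    """RISC style: the same work as three simple instructions."""
    reg = memory[addr]             # LOAD  reg, addr
    reg = reg + value              # ADD   reg, value
    memory[addr] = reg             # STORE reg, addr
    return 3                       # 3 instructions fetched

mem_a = {0x10: 5}
mem_b = {0x10: 5}
cisc_count = cisc_add(mem_a, 0x10, 7)
risc_count = risc_add(mem_b, 0x10, 7)
print(mem_a[0x10], cisc_count)   # 12 1
print(mem_b[0x10], risc_count)   # 12 3
```

Both paths end with the same result in memory; the only difference is how many instructions had to be fetched to get there.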
In the early days of the PC (and of almost every computer), that focus on hardware was a necessity. RAM was expensive, so software had to be written as compactly as possible. The downside of this approach is that a CISC processor needs many more transistors: only by defining complex instructions in the processor hardware could the number of instructions in a program be reduced. Those switching elements could always be built somehow, if necessary with vacuum tubes. Not cheap, but still cheaper, easier and more reliable to realize than working memory.
Microcode
Moreover, modern CISC processors are not entirely honest in their CISC approach. Under the hood, virtually every complex instruction is broken down into a series of smaller instructions, micro-operations to be exact. So you could actually speak of a RISC ‘core’ inside a CISC processor. You will also see that a complex CISC instruction simply takes more time (more clock ticks) than an instruction on a RISC CPU. In the latter case, in principle, one instruction is processed per clock pulse, and that’s it. On CISC this varies considerably per instruction. So a complex CISC instruction does not automatically translate into a time saving.
Cheap RAM and storage decisive
The nice thing these days is that memory – both storage and RAM – is extremely cheap. That eliminates the disadvantages of limited space. It is also lightning fast, so in practice it makes little difference whether one or ten instructions have to be fetched. Moreover, a modern CPU reads instructions ahead of time, so everything is already lined up inside the CPU. In short: the hardware-side need for CISC has decreased significantly. Furthermore, almost no one programs in assembly language anymore, but in a higher-level programming language. As a programmer you therefore need very little knowledge of the processor architecture. If you write a piece of software for a CISC CPU, it can just as well be compiled for RISC. What code that generates under the hood is no longer very interesting.
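That separation between source code and target architecture can be sketched with a toy compiler: one front end, interchangeable back ends. The ‘x86-ish’ and ‘ARM-ish’ output below is invented pseudo-assembly, not real instruction syntax.

```python
# Toy sketch: the front end produces one architecture-neutral intermediate
# form; a back end per architecture turns it into target code. The emitted
# 'assembly' is invented for illustration.

def frontend(expr):
    """'a + b' -> a tiny architecture-neutral intermediate form."""
    left, right = expr.split("+")
    return ("add", left.strip(), right.strip())

def backend_cisc(ir):
    op, a, b = ir
    return [f"ADD {a}, {b}"]                      # one complex instruction

def backend_risc(ir):
    op, a, b = ir
    return [f"LDR r0, {a}", f"LDR r1, {b}",
            "ADD r0, r0, r1"]                     # several simple ones

ir = frontend("x + y")
print(backend_cisc(ir))   # ['ADD x, y']
print(backend_risc(ir))   # ['LDR r0, x', 'LDR r1, y', 'ADD r0, r0, r1']
```

The programmer only ever sees `x + y`; which back end runs is a compiler flag, not a design decision.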
Fewer transistors
RISC now offers quite a few advantages over CISC. Because a RISC CPU has far fewer transistors on board (also per core, in the case of a multi-core CPU), a manufacturer can opt for a much more compact chip surface, or for many more cores. In both cases, energy consumption and heat development work out in favor of RISC. It is therefore not surprising that RISC is the standard in portable devices (smartphones, tablets, portable game consoles and so on). Devices that are on day and night, or that simply need a simple and energy-efficient control system, also have RISC CPUs. Think of your router, coffee maker, mouse and basically everything with a microprocessor on board.
Intel Atom as an attempt
Intel has tried to make an energy-efficient version of its x86 processor. Everyone with even a remote interest in computer technology knows the result: the Atom, in essence a downclocked x86 CPU with as little extra on board as possible. The chip is (still) proving itself in things like NAS devices, the odd single-board computer (especially in industrial applications where highly optimized software has been developed for a certain architecture and is too expensive to rebuild) and budget netbooks and tablets. That these chips are not really suitable for that is clearly noticeable from how slowly such devices run. But hey, they can run Windows, and so they sell. It is the tragedy of the Wintel hegemony on the desktop.
Apple and RISC: the future
Nevertheless, Intel is no longer the big player of the past when it comes to CPU market share. Countless RISC processors are hiding in an equally countless number of smart devices, so in sheer numbers Intel is being overwhelmed. No drama in itself, because Intel made its big money on the desktop and server market. But a shift is under way. One of the most notable ‘switchers’ is Apple, which developed its own CPU based on a RISC core from ARM. ARM is a ‘fabless’ CPU designer, and you can currently think of the company as a direct competitor to Intel. Fabless means that ARM does not manufacture processors itself: it simply licenses a design, after which the buyer can let their imagination run wild.
That is exactly what Apple has done with the new M1, found in both recent notebooks and desktops. Through extreme optimization and tweaking, Apple has turned the M1 into a CPU that can compete with its Intel counterparts on virtually all fronts. Only in very specific applications is an i7 sometimes just a bit faster, but hardly anyone pays attention to that; 99% of end users will not notice any difference. A nice detail is that Apple’s newer versions of macOS also include an emulation layer for x86 code (Rosetta 2). This allows you to effortlessly keep using older programs whose developer has not (yet) released an ARM version.
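Conceptually, such an emulation layer rewrites each foreign instruction into one or more native ones before execution. The sketch below is a deliberately crude illustration of that idea; the mnemonics and the translation table are invented, and a real translator like Rosetta 2 works on actual machine code with far more sophistication.

```python
# Toy sketch of binary translation: each 'x86-ish' instruction maps to a
# short sequence of 'ARM-ish' instructions. All mnemonics are invented.

TRANSLATION = {
    "MOV a, 0":     ["MOV r0, 0", "STORE r0, a"],
    "ADD_MEM a, 1": ["LOAD r0, a", "ADD r0, r0, 1", "STORE r0, a"],
}

def translate(foreign_program):
    """Expand a foreign instruction list into a native one."""
    native_program = []
    for instr in foreign_program:
        native_program.extend(TRANSLATION[instr])
    return native_program

native = translate(["MOV a, 0", "ADD_MEM a, 1"])
print(len(native))  # 5 native instructions for 2 foreign ones
```

The translated program is a bit longer than the original, which is why emulated x86 software runs with some overhead compared to a native ARM build.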
Compiler is key
However, most developers will soon offer their software compiled natively for ARM with a next update. That process is well under way and moving quickly. Even something as heavy as Adobe’s software (think Photoshop & co) is now available for both the x86 and the ARM architecture. This avoids unnecessary translation and makes everything even more efficient. You immediately see all the benefits of RISC in Apple’s new computers. The MacBooks have an unprecedentedly long battery life of up to 20 hours. The iMacs – the desktop versions – are thinner than ever, and heat management is much easier than with the x86 iMacs.
Even more important is the enormous breathing space that RISC offers. The desktop CPU M1 (actually an SoC, or System on a Chip: a complete system on a single chip) is physically small, so there are still more than enough growth opportunities. Apple also already benefits from the process node: 5 nm means very small individual transistors on the chip surface. This structure is expected to shrink further in the coming years, which means more transistors on the same surface. With RISC that is feasible, because the number of transistors per core remains low. Adding a core does not necessarily imply much greater heat development.
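As a rough, idealized back-of-the-envelope calculation (real process nodes do not scale this perfectly, and the nanometer figures are marketing names as much as physical sizes): if features shrink from 5 nm to 3 nm, density would grow with the square of the ratio.

```python
# Idealized density scaling: halving linear feature size would quadruple
# the number of transistors per unit area. Here: 5 nm -> 3 nm.

old_nm, new_nm = 5, 3
density_gain = (old_nm / new_nm) ** 2
print(round(density_gain, 2))  # 2.78
```

So even one further node step would, in this idealized picture, leave room for almost three times as many transistors on the same surface.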
Limit of x86 approaching
Intel, with its aging x86 architecture, has much bigger problems in terms of both heat generation and expandability. The sheer number of transistors in x86 CPUs imposes an increasingly hard limit that is now eerily close. It is no coincidence that Intel has made little progress in recent years. Granted, each new generation of CPUs is slightly faster than its predecessor, but you cannot keep adding cores without making concessions. One option to realize more cores is to reduce the clock speed, for example; other tricks are conceivable too, but it will be a lot of work. The complexity of x86 also raises security concerns. Spectre and Meltdown are a direct result of trying to squeeze as much speed as possible out of an architecture that is no longer really suited to it – and of deliberate concessions to security, made in the hope that things would not go wrong…
Windows, UNIX, Linux and scalability
Strangely enough, neither Microsoft nor Intel has taken RISC (particularly ARM) seriously in recent years; Intel even sold off its ARM division at the time. Windows is an operating system fully optimized for the x86 CISC architecture. If Microsoft wanted to break through that barrier, it would have to develop a completely new operating system. It may still be called Windows, of course, but under the hood it would have to do away with the old code entirely. You might think of a Microsoft distro of Linux called Windows. Sounds crazy, but it would be a much simpler solution than muddling on with an outdated concept. Old Windows programs could then keep running via an emulation layer – the same thing Apple is now doing in macOS for old x86 code. By the way, remember that macOS itself is also just a layer over an old acquaintance in the form of UNIX. The big difference between UNIX (and Linux) and Windows is that UNIX & co are scalable: from watch to mega-server, this OS does it all. Microsoft can no longer use Windows for large server projects; the operating system simply can’t handle that anymore. Even for its own cloud services, Microsoft has been using Linux servers for years.
On to the future!
The rise of ARM is not just the transition to a new processor architecture. It most likely also heralds the demise of Windows as the de facto operating system for desktops. The system requirements for Windows 11 and the stubbornness of its desktop design will only speed up that process. Add to that the fact that end users’ needs have changed a lot in recent years – needs that are not really compatible with a top-heavy OS like Windows, but are much better suited to scalable operating systems – and you understand that a lot of interesting things are going to happen in the coming years!