For the last ten years, Intel has built remote management technology into various motherboards and processors. The Intel Active Management Technology (AMT) system gives system administrators a way to remotely control and secure PCs that functions independently of the operating system, hard drive, or boot state. It's even capable of running when the system is off, provided the computer is still connected to line power and a network. AMT doesn't run on the x86 processor itself; instead, it's implemented on a 32-bit Argonaut RISC Core (ARC) microcontroller integrated into the chipset. This microcontroller is part of the Intel Management Engine (ME) and is present on all Intel platforms with vPro technology.
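Because AMT operates below the OS, its presence can sometimes be observed directly from the network: provisioned systems answer on a set of well-known TCP ports (16992 for HTTP, 16993 for HTTPS, 16994/16995 for redirection) regardless of what the host operating system is doing. As a rough illustration, not a formal detection tool, a simple TCP probe of those ports might look like this:

```python
import socket

# Well-known Intel AMT service ports: 16992 (HTTP), 16993 (HTTPS),
# 16994/16995 (redirection). A listener on any of these is a strong
# hint that the Management Engine is provisioned and reachable.
AMT_PORTS = (16992, 16993, 16994, 16995)

def probe_amt(host, timeout=2.0):
    """Return the subset of AMT ports accepting TCP connections on host."""
    open_ports = []
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # refused, filtered, or host unreachable
    return open_ports
```

Calling `probe_amt("10.0.0.5")` against a hypothetical managed workstation would return whichever of those ports answered; an empty list means nothing AMT-like was reachable. Note that a filtered network or an unprovisioned ME will also produce an empty list, so absence of evidence here is not evidence of absence.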
A new article on BoingBoing argues that Intel's implementation of the IME and the microcontroller that runs it are fundamentally insecure, cannot be trusted, and could be used to perform potentially devastating exploits. Intel has publicly revealed very little about the precise function of its onboard microcontroller or the security model that guards it, which means the company is essentially relying on security through obscurity to protect its own standard.
Concerns about IME and AMT are nothing new; Joanna Rutkowska discussed vulnerabilities found in a much earlier version of the standard in 2009, and research into exactly how the Intel Management Engine secures data and maintains a trusted environment has been ongoing for years.
Although the ME firmware is cryptographically signed with RSA-2048, researchers have been able to exploit weaknesses in that firmware and take partial control of the ME on early models. This makes the ME a serious security liability, and it has been described as a very powerful rootkit mechanism. Once a system is compromised by such a rootkit, attackers can gain administrative access and attack the computer undetectably.
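The protection scheme at issue can be sketched in miniature: the ME's boot ROM holds a hard-coded public key and refuses to run a firmware image whose RSA signature over the image hash doesn't verify. The toy below shows the general shape of such a check; it is emphatically not Intel's implementation (the real scheme uses RSA-2048 with proper padding, while the tiny textbook RSA and Mersenne-prime key here are for illustration only and are not cryptographically sound):

```python
import hashlib

# Demo key material: Mersenne primes, far too small for real use.
P = 2**127 - 1
Q = 2**89 - 1
N = P * Q
E = 65537                            # public exponent
D = pow(E, -1, (P - 1) * (Q - 1))    # private exponent (Python 3.8+)

def _digest_int(image: bytes) -> int:
    """Hash the firmware image and reduce it into the RSA modulus."""
    return int.from_bytes(hashlib.sha256(image).digest(), "big") % N

def sign_firmware(image: bytes) -> int:
    """Vendor side: sign the image hash with the private key."""
    return pow(_digest_int(image), D, N)

def verify_firmware(image: bytes, signature: int) -> bool:
    """Boot ROM side: run the image only if the signature checks out."""
    return pow(signature, E, N) == _digest_int(image)

firmware = b"stand-in firmware image bytes"
sig = sign_firmware(firmware)
print(verify_firmware(firmware, sig))                   # True
print(verify_firmware(firmware + b"\x00patched", sig))  # False
```

The point of the exercise is that a single flipped byte anywhere in the image invalidates the signature, so an attacker can't simply patch the firmware; the exploits referenced above worked around the signature check rather than breaking the cryptography itself.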
On systems newer than the Core 2 series, the ME cannot be disabled. Intel systems that are designed to have an ME but lack ME firmware (or whose ME firmware is corrupted) will refuse to boot, or will shut down shortly after booting.
To be clear, not every CPU supports vPro, and not every system with a vPro-enabled CPU also implements IME. Intel’s enthusiast-class “K” processors, for example, typically lack vPro support. Still, there are millions of systems with both IME and vPro enabled, particularly business systems that are designed to be remotely managed.
BoingBoing and the security researchers who have weighed in on this topic argue that the IME is fundamentally insecure because its code has never been open-sourced or reviewed by independent security researchers (at least none who weren't silenced via NDA). BoingBoing is correct when it writes: “There is no way for the x86 firmware or operating system to disable ME permanently. Intel keeps most details about ME absolutely secret. There is absolutely no way for the main CPU to tell if the ME on a system has been compromised, and no way to ‘heal’ a compromised ME. There is also no way to know if malicious entities have been able to compromise ME and infect systems.”
How you read this situation probably depends on how you view the tension between various corporations, the NSA, and Intel’s own commitment to providing secure operating environments. The Intel Management Engine isn’t some dastardly concept that only Santa Clara supports — AMD has implemented its own security coprocessor based on ARM’s TrustZone technology. Hardened security co-processors are a common feature of modern SoCs, in both the ARM and x86 ecosystems.
There is, however, another side to this argument. For years, it was argued that open source software is intrinsically more secure than its closed source counterpart because anyone can inspect and improve the code. In the past few years, we’ve seen some major flaws in open source software, including GnuTLS, Heartbleed, Shellshock, and Stagefright. Now, the reason these bugs were found and fixed at all is that the code was available for inspection — but in some cases, major flaws in mission-critical packages like OpenSSL and bash persisted for years, and in Shellshock’s case decades, before finally being caught.
Having access to source code, in other words, isn’t enough on its own to determine whether or not something is secure. A thorough security audit of a piece of software can take months, and it’s often hard, thankless work that’s nowhere near as glamorous as implementing new features or capabilities.
It’s not clear how much of a threat Intel’s ME actually represents — which, of course, is the researchers’ entire point. For now, Intel seems content to carry on as it has, which in turn implies the company is confident it has secured the IME against black hats. Hopefully it’s right, because if Intel CPUs turned out to have security vulnerabilities that the NSA or other state actors (be they Chinese, Russian, or Britain’s GCHQ) had exploited, the fallout for Chipzilla would be catastrophic.