No, and you can't remove the PSP without rearchitecting the processor.
However, they could release a PSP firmware that just loads the x86 cores with your chosen BIOS and then halts. Businesses that want their systems to have management backdoors[0] can use the PSP firmware that implements that functionality, while users that care about Free boot chains can use the minimized firmware.
[0] Is it a backdoor if the owner put it there deliberately to be able to detect regular OS-level viruses?
> Is it a backdoor if the owner put it there deliberately to be able to detect regular OS-level viruses?
Yes. Unless the owner has a perfect guarantee that they will be the only ones who can use it and that they will be aware of it every time it is used. The likelihood of this being true in the given environment is next to zero.
They own it, though. They're free to fool themselves about it if they like.
PSP is a trusted execution environment, not a remote management platform. There is no reason anyone should want to remove it from a privacy perspective. In fact, privacy and security software may be hindered without something like PSP or SGX.
Well, you don't have that much choice of CPU manufacturers, if you want a 65W+ performance class CPU, so you do the best you can.
But anyway, there are many manufacturers that were happy to sell widgets without internet-connected software, and to compete on the quality and price of those widgets in a fairly honest way. But once it became possible to embed software that can send data to the internet somehow, various departments just could not resist demanding their engineers do so. Support costs can be massively reduced if you can just see what the stupid user did, remote in with the backdoor key, and fix their misconfigured router/VPN/appliance (true story). And your marketing and product-planning teams are severely hobbled without all the telemetry they can dream of; there are just millions wasted on team salaries and marketing spend if they can't get that user data sent to the SaaS marketing/CRM tool they want. But when a car didn't have internet-connected computers in it, they just had to do without.
We trust CPU manufacturers to act like reasonable companies. They make good stuff; that's how they make money. They're not fly-by-night ventures; they can't risk the huge investments in their brand and their business. And they're not malicious spies trying to screw with you via very tricky microcode. But if they can conveniently put some value-add in the hidden support CPU because the marketing or support team really needed it ...
It's about the continued leverage they have by holding the key after you bought the CPU. They could be forced by secret court to sign backdoored firmware. And of course you can't load your own firmware, only whatever they release and sign.
If they're generous they'll release a stripped-down version, sure, but that's still their choice, not yours.
Backdoored firmware isn't about updates. It's about rootkit or evil maid attacks that install backdoored firmware that has been signed by the vendor. If you're using your own trust root then a 3rd party can't create a signature, even under duress. Thus there would be less of an incentive to pressure the vendor.
Updates are a separate concern since you'll want them for bugfixes. So they should be reviewable, open source. And then you check the vendor's signature and replace it with your own if you want. At least that's how things should work.
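The "own trust root" idea can be illustrated with a toy sketch. Real firmware verification uses asymmetric signatures; here a pinned SHA-256 measurement stands in for that, and every name (`OwnerTrustRoot`, `enroll`, etc.) is hypothetical, not any vendor's actual API:

```python
import hashlib

def measure(firmware: bytes) -> str:
    """Return the SHA-256 measurement of a firmware image."""
    return hashlib.sha256(firmware).hexdigest()

class OwnerTrustRoot:
    """Toy owner-controlled trust root: only images the owner has
    explicitly enrolled will verify. A vendor signature (even one
    produced under duress) is not sufficient on its own."""

    def __init__(self):
        self.enrolled = set()

    def enroll(self, firmware: bytes) -> None:
        # Owner reviews the (ideally open source) image, then pins it.
        self.enrolled.add(measure(firmware))

    def verify(self, firmware: bytes) -> bool:
        return measure(firmware) in self.enrolled

root = OwnerTrustRoot()
good = b"reviewed open-source PSP firmware v1.2"
root.enroll(good)

print(root.verify(good))                             # True
print(root.verify(b"vendor-signed but unenrolled"))  # False
```

The point of the sketch is the second check: a third party who can coerce the vendor's key still can't produce an image the owner's root accepts.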
A TEE has nothing to do with any of that. A TEE is a CPU feature that provides a physically separate region of memory that no other process can access, protecting sensitive data even from the OS (in case of compromise).
If the end user was "in control" of the PSP that'd defeat the purpose. These features are supposed to remain a secure enclave even if the machine is compromised.
Yeah, that's clearly not even fit for purpose by design.
At least from the perspective of end users. :)
Thinking about it a bit more, it also seems to be at odds with how security in the world works.
Is there any case of equipment being in the hands of end users for unlimited time (and without supervision) where the security doesn't get cracked?
Once a single exploitable bug in the implementation is discovered, the whole thing becomes useless. Perhaps worse than useless, once malware starts to embed itself in the PSP.
So, if I remember correctly AMD PSP started out as a response to Intel ME, whose main selling point at the time was that businesses could remote into a compromised system and wipe malware without the malware being able to fight back. PSP also enables a few other things like encrypted virtualization - being able to lock AWS's staff out of the contents of my VMs' memory. That sort of feature isn't really useful to individual consumers.
SGX is an interesting case since Intel actually axed the feature a year ago. The only thing it did was enable more DRM for 4K Blu-Rays, and Hollywood's response to that feature going away has been to just refuse to let you play 4K Blu-Rays on PCs.
I'm not sure what other feature is involved here that would make sense in this use case. I mean, yes, PSP also can be used to enforce BIOS signing requirements, but the whole point of Coreboot is to be able to have a Free-as-in-freedom BIOS that you can legally and technically modify. If you wanted to do Apple-style secure local signing[0] of the modified BIOS, that'd be cool, but as far as I'm aware that's not on offer. So the PSP is a security deadweight for the kinds of techies that would care about Free BIOS.
Furthermore, there IS a very large customer that has wanted to remove these kinds of secure enclaves from their systems: the US government, specifically the National Security Agency[1]. There's a special configuration option in Intel ME to turn off everything but basic system bring-up. Why would the NSA want to turn off "privacy and security software" on their own machines? Well, again, the whole "wipe a compromised system without the malware being able to resist" thing implies that this isn't merely a security enclave, but a backdoor that could be compromised by a third party. If you aren't specifically using that remote management stuff, you want the ME to be as brain-dead as possible.
[0] On Apple silicon that is fused for Macs, the bootloader allows booting operating systems that are signed with the Secure Enclave's local key. There is no unsigned boot; instead the Owner account on macOS has to authorize signing a new third-party OS in a special recovery mode in order to install, say, Linux or whatever.
Oh, and Apple doesn't actually give the Secure Enclave the ability to mess with the main application processor. It's more akin to a TPM, where it can withhold encryption keys from the regular OS but it can't actively snoop on it. So they also understand why PSP/ME were bad ideas.
> So, if I remember correctly AMD PSP started out as a response to Intel ME, whose main selling point at the time was that businesses could remote into a compromised system and wipe malware without the malware being able to fight back.
Do you have a source for this?
> SGX is an interesting case since Intel actually axed the feature a year ago. The only thing it did was enable more DRM for 4K Blu-Rays, and Hollywood's response to that feature going away has been to just refuse to let you play 4K Blu-Rays on PCs.
Signal used Intel SGX for remote attestation of their servers and is also using it for their upcoming username/password features to generate keys without Signal's knowledge (since it's a private enclave).
> Furthermore, there IS a very large customer that has wanted to remove these kinds of secure enclaves from their systems: the US government, specifically the National Security Administration[1]. There's a special configuration option in Intel ME to turn off everything but basic system bring-up. Why would the NSA want to turn off "privacy and security software" on their own machines? Well, again, the whole "wipe a compromised system without the malware being able to resist" thing implies that this isn't merely a security enclave, but a backdoor that could be compromised by a third-party. If you aren't specifically using that remote management stuff, you want the ME to be as brain-dead as possible.
I need a source saying PSP has remote management capability.
> Oh, and Apple doesn't actually give the Secure Enclave the ability to mess with the main application processor. It's more akin to a TPM, where it can withhold encryption keys from the regular OS but it can't actively snoop on it. So they also understand why PSP/ME were bad ideas.
What does "snoop" mean here? How can PSP "snoop" without remote management capability?
It'd be better to take control of it, and run it as a chunk of your hypervisor.
(This is only possible if there's support for custom signing of device control. Cloud environments need to know that the security core is under the control of vendor-signed firmware. That's what makes cloud service providers unable to read their customers' data, and gives the customers security.)
I don't think so - seems more likely that something like OpenSIL would need to talk to the PSP. The difference here [hopefully] is that those interfaces are documented.
edit: ie. from the blog post, on the xSIM part:
> "Provides a set of API services that initialize the platform host silicon. Most of the silicon initialization on AMD-based platforms is performed by embedded µControllers prior to x86 reset de-assertion."
Probably not, both because the PSP can be part of the SIL bringup which you statically link anyway, and because plenty of business customers want a Secure Enclave and the legal world is having a hard time getting paperwork to suggest it's both secure and open at the same time.
Why would you want to do away with AMD PSP? It's a trusted execution environment that privacy and security software (e.g. Signal) makes use of for its confidentiality and integrity guarantees.
Because I don't need uncontrolled backdoors in my PC. By the way, my laptop has an option to disable PSP in BIOS but I don't understand how it works because PSP is still visible on the PCI bus as "encryption controller".
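For reference, this is how the PSP typically shows up on Linux. Device names vary by CPU generation, so the sample output below is illustrative, filtered through the same grep you'd use on real hardware:

```shell
# On real hardware you'd just run:  lspci | grep -i 'encryption controller'
# Sample lspci output (illustrative; exact strings vary by platform):
sample='00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Root Complex
04:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Platform Security Processor'

printf '%s\n' "$sample" | grep -i 'encryption controller'
```

If the BIOS "disable" option only hides the PSP from the OS rather than powering it off, the device may still appear here, which would be consistent with what the parent describes.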
> It's a trusted execution environment
It is "trusted" only by manufacturers and cannot be controlled by the user. Why cannot it be controlled by the user? Probably because it is intented to be used as a backdoor or to report users who install pirated software, or download unapproved materials etc.
> Why can't it be controlled by the user? Probably because it is intended to be used as a backdoor, or to report users who install pirated software, download unapproved materials, etc.
I have a simpler answer that doesn't require inherent malice: To keep out hostiles that have control over the computer, be that virtually or physically. You can't compromise what you can't access.
Personally, I understand the concerns behind ME and PSP and am not particularly concerned. I trust Intel and AMD to not fuck with me, else why would I buy their processors? If I don't trust the ME/PSP because I don't trust Intel/AMD, I certainly can't trust the rest of their processors either.
There are several points that make all this look suspicious:
- first, ME/PSP do not follow the principle of least privilege. They have access to DRAM and network interfaces, so they can bypass restrictions set by the OS and firewall. Does that make the system more secure? I would say the opposite: if there is a vulnerability in those modules, the whole system can be compromised, and that will be difficult to detect using antivirus products.
- second, the firmware for ME/PSP is encrypted. Why is that? To prevent the user from knowing what it does. Why am I not allowed to know how my computer works?
Based on this, I can assume that the intended purpose of these "trusted" modules is to implement user-hostile features: DRM, software license checking, reporting illegal content, device fingerprinting, providing unauthorized remote access, and so on.
I was wondering how Oxide would be able to follow through on their claims to be open sourcing even the firmware for their AMD systems. This makes it a little more clear.
Oxide's phbl https://github.com/oxidecomputer/phbl doesn't use OpenSIL. I assume it's written from scratch based on documentation and maybe looking at AGESA. Why can't other server/firmware vendors do the same? Because they're just not willing to.
On AMD's amd64 platforms there is not much for the firmware to do in terms of platform bring-up, as everything inside the CPU MCM gets initialized to some sane state (by a PSP-like additional management core, though AFAIK it is distinct hardware from the PSP) before the actual CPU cores come out of reset. Most of what BIOS/UEFI firmware does on these platforms is about providing the right PC-like abstractions to the running OS.
If you control the OS and do not have to support backward compatibility, you can shift a lot of that logic into the OS itself, and you end up with the firmware being essentially an ELF loader, which is not a bad description of what phbl actually does.
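To make "essentially an ELF loader" concrete: the core job is parsing the ELF header, locating the entry point, copying the loadable segments, and jumping. Here is a toy sketch of just the header-parsing step, against a synthetic 64-bit little-endian header built with Python's struct (phbl itself is written in Rust; the constants and layout below follow the standard ELF64 header, but this is an illustration, not phbl's code):

```python
import struct

ELF_MAGIC = b"\x7fELF"

# Build a synthetic 64-byte ELF64 header, little-endian.
header = struct.pack(
    "<4sBBBB8xHHIQQQIHHHHHH",
    ELF_MAGIC,           # e_ident[0:4]: magic
    2,                   # EI_CLASS: 64-bit
    1,                   # EI_DATA: little-endian
    1,                   # EI_VERSION
    0,                   # EI_OSABI: System V (rest of e_ident padded)
    2,                   # e_type: ET_EXEC
    0x3E,                # e_machine: x86-64
    1,                   # e_version
    0xFFFF800000100000,  # e_entry: where the loader jumps
    64,                  # e_phoff: program headers follow the header
    0,                   # e_shoff
    0,                   # e_flags
    64,                  # e_ehsize
    56,                  # e_phentsize
    1,                   # e_phnum
    0, 0, 0,             # e_shentsize, e_shnum, e_shstrndx
)

def parse_entry(image: bytes) -> int:
    """The heart of an ELF-loader firmware: validate the magic, read
    e_entry (then a real loader copies PT_LOAD segments and jumps)."""
    assert image[:4] == ELF_MAGIC, "not an ELF image"
    (entry,) = struct.unpack_from("<Q", image, 24)  # e_entry at offset 24
    return entry

print(hex(parse_entry(header)))  # 0xffff800000100000
```

Everything hardware-specific has already been done by the management core before this code would run, which is why such a small loader can be the whole firmware.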
On the Intel side of things, the FSP is a somewhat complex piece of software that does various low-level poking at the hardware in order to make running sane code possible.
There is probably a historical reason for that: AMD started with the K8 Opterons, which were ccNUMA with a packet-switched interconnect from the inception, and relegating the initial power-on configuration of that to the amd64-side firmware is not exactly a practical design (at least without breaking various assumptions inherent in the "PC" platform). Contrast that with Intel, who had something like 15 years' worth of products that used variations of the same Pentium Pro FSB (which, as a shared bus, removes some of the complexity involved in platform initialization).
This is great. It's hopefully slightly better on the technical side than the FSP we usually get, while also being less of an NDA-encumbered mess than what you currently get with any of the big hardware and silicon vendors.