I still hope that one of these days people in general will realize that executable signing and SecureBoot are specifically designed for controlling what a normal person can run, rather than for anything resembling real security. The premises of either of those "mitigations" make absolutely no sense for personal computers.
I strongly disagree on the Secure Boot front. It's necessary for FDE to have any sort of practical security, it reduces malicious/vulnerable driver abuse (making it nontrivial), bootkits are a security nightmare and would otherwise be much more common in malware typical users encounter, and ultimately the user can control their secure boot setup and enroll their own keys if they wish.
Does that mean that Microsoft doesn't also use it as a form of control? Of course not. But conflating "Secure Boot can be used for platform control" with "Secure Boot provides no security" is a non-sequitur.
Full disk encryption protects against somebody yanking a hard drive from a running server (it actually happens) or stealing a laptop. Calling it useless because it doesn't match your threat model... I hate today's security people; they can't threat model for shit.
> Full disk encryption protects against somebody yanking a hard drive from a running server (it actually happens) or stealing a laptop.
Both of these are super easy to solve without secure boot: the device uses FDE and the key is provided over the network during boot, in the laptop case after the user provides a password. Doing it this way is significantly more secure than using a TPM: the network can stop providing the key as soon as the device is stolen, the key was never in non-volatile storage anywhere on the device, and it can't be extracted from a powered-off device even with physical access and specialized equipment.
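To sketch the idea (illustrative Python only; the function names and the KDF construction are my own invention, not any real network-bound-disk-encryption protocol): the disk key is derived from a server-provided share plus the user's passphrase, so neither piece alone unlocks anything and nothing usable ever sits in the device's non-volatile storage.

```python
import hashlib
import hmac
import os

def derive_unlock_key(network_share: bytes, passphrase: str, salt: bytes) -> bytes:
    """Combine a server-provided key share with a user passphrase.

    Hypothetical scheme: the share is fetched over TLS at boot and held
    only in RAM; the passphrase is stretched so offline guessing against
    a captured share is expensive.
    """
    # Stretch the passphrase with PBKDF2 (iteration count kept modest
    # here for illustration; real deployments would tune this up).
    stretched = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    # Bind the two inputs together; the result is the 32-byte disk key.
    return hmac.new(network_share, stretched, hashlib.sha256).digest()

salt = b"per-device-salt-0001"  # stored on the device, not secret
share = os.urandom(32)          # in practice: delivered over the network
key = derive_unlock_key(share, "correct horse battery staple", salt)
```

The point of the construction: stealing the powered-off laptop gets you the salt and nothing else, and the server can simply refuse to serve the share once the theft is reported.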
> The device uses FDE and the key is provided over the network during boot, in the laptop case after the user provides a password.
Sounds nice on paper, has issues in practice:
1. no internet (e.g. something like Iran)? Your device is effectively bricked.
2. heavily monitored internet (e.g. China, USA)? It's probably easy enough for the government to snoop your connection metadata and seize the physical server.
3. no security at all against hardware implants / base firmware modification. Secure Boot can cryptographically prove to the OS that your BIOS, your ACPI tables and your bootloader didn't get manipulated.
> no internet (e.g. something like Iran)? Your device is effectively bricked.
If your threat model is Iran and you want the device to boot with no internet then you memorize the long passphrase.
> heavily monitored internet (e.g. China, USA)? It's probably easy enough for the government to snoop your connection metadata and seize the physical server.
The server doesn't have to be in their jurisdiction. It can also use FDE itself and then the key for that is stored offline in an undisclosed location.
> no security at all against hardware implants / base firmware modification. Secure Boot can cryptographically prove to the OS that your BIOS, your ACPI tables and your bootloader didn't get manipulated.
If your BIOS or bootloader is compromised then so is your OS.
Well... they wouldn't be the first ones to black out the Internet either. And I'm not just talking about threats specific to oneself here because that is a much different threat model, but the effects of being collateral damage as well. Say, your country's leader says something that makes the US President cry - who's to say he doesn't order SpaceX to disable Starlink for your country? Or that Russia decides to invade yet another country and disables internet satellites [1]?
And it doesn't have to be politically related either, say that a natural disaster in your area takes out everything smarter than a toaster for days if not weeks [2].
> If your BIOS or bootloader is compromised then so is your OS.
well, that's the point of the TPM design and Secure Boot: that is not true any more. The OS can verify everything being executed prior to its startup back to a trusted root. You'd need 0-day exploits - while these are available including unpatchable hardware issues (iOS checkm8 [3]), they are incredibly rare and expensive.
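The measurement chain works roughly like a TPM PCR "extend" operation: each boot component's hash is folded into a running value, so the final value depends on everything measured, in order. A toy Python model of the idea (not real TPM code; component names are placeholders):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new PCR = H(old PCR || H(measurement)).
    # The final value commits to every component measured, in order,
    # and there is no operation to "unmeasure" anything.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# PCRs start zeroed at reset; each boot stage measures the next.
pcr = bytes(32)
for component in [b"firmware", b"acpi-tables", b"bootloader", b"kernel"]:
    pcr = extend(pcr, component)

# A tampered bootloader yields a different chain, so a disk key sealed
# to the clean PCR value would simply never be released by the TPM.
tampered = bytes(32)
for component in [b"firmware", b"acpi-tables", b"evil-bootloader", b"kernel"]:
    tampered = extend(tampered, component)
```

This is why "return true;" patching doesn't work against measured boot: the verifier isn't code the attacker can replace, it's a hash chain the hardware accumulated before the attacker's code could run.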
> Say, your country's leader says something that makes the US President cry - who's to say he doesn't order SpaceX to disable Starlink for your country?
Then you tether to your phone or visit the local library or coffee shop and use the WiFi, or call into the system using an acoustic coupler on an analog phone line or find a radio or build a telegraph or stand on a tall hill and use flag semaphore in your country that has zero cell towers or libraries, because you only have to transfer a few hundred bytes of protocol overhead and 32 bytes of actual data.
At which point you could unlock your laptop, assuming it wasn't already on when you lost internet, but it still wouldn't have internet.
> The OS can verify everything being executed prior to its startup back to a trusted root.
Code that asks for the hashes and verifies them can do that, but that part of your OS was replaced with "return true;" by the attacker's compromised firmware.
I (the commenter you responded to) am a security engineer by trade and I'm arguing that SB is useful. I'm not sure if the parent commenter is or isn't a security person but my interactions with other people in the security field have given me the impression that most of them think it's good, too.
So I'm a little confused about the "can't threat model for shit" part; I think these sorts of attacks are definitely within most security folks' threat models, haha
>It's necessary for FDE to have any sort of practical security
why? do you mean because evil maid attacks exist? anyone who cared enough about that specific vector just put their bootloader on removable media. FDE wasn't somehow enabled by secure boot.
>bootkits are a security nightmare and would otherwise be much more common in malware
why weren't they more common before?
serious question. Back in the 90s viruses were huge business, BIOS was about as unprotected as it would ever possibly be, and lots of chips came with extra unused memory. We still barely ever saw that kind of malware.
> anyone who cared enough about that specific vector just put their bootloader on removable media. FDE wasn't somehow enabled by secure boot.
Sure, but an attacker could still overwrite your kernel which your untouched bootloader would then happily run. With SB at least in theory you have a way to validate the entire boot chain.
> why weren't they more common before?
Because security of the rest of the system was not at the point where they made sense. CIH could wipe system firmware and physically brick your PC - why write a bootkit then? Malware then was also less financially motivated.
When malware moved from notoriety-driven to financially-driven in the 2000s, bootkits did become more common with things like Mebroot & TDL/Alureon. More recently, still before Secure Boot was widespread, we had things like the Classic Shell/Audacity trojan which overwrote your MBR: https://www.youtube.com/watch?v=DD9CvHVU7B4 and Petya ransomware. With SB this is an attack vector that has been largely rendered useless.
It's also a lot more difficult to write a malicious bootloader than it is to write a usermode app that runs itself at startup and pings a C2 or whatever.
> Sure, but an attacker could still overwrite your kernel which your untouched bootloader would then happily run.
Except that it's on the encrypted partition and the attacker doesn't have the key to unlock it since that's on the removable media with the boot loader.
They could write garbage to it, but then it's just going to crash, and if all they want is to destroy the data they could just use a hammer.
> The attacker does this when the drive is already unlocked & the OS is running.
But then you're screwed regardless. They could extract the FDE key from memory, re-encrypt the unlocked drive with a new one, disable secureboot and replace the kernel with one that doesn't care about it, copy all the data to another machine of the same model with compromised firmware, etc.
> serious question. Back in the 90s viruses were huge business,
No, they were not. They were toys written for fun and/or mischief. The virus authors did not receive any monetary reward from writing them, so they were not even a _business_. So they were the work of individuals, not large teams.
The turning point was Bitcoin. Suddenly it provided all those nice new business models that can be scaled up: mining, stealing cryptowallets, ransomware, etc.
Secure Boot provides no useful security for an individual user on the machine they own, and as such should be disabled by default.
If you want to enable it for enterprise/business situations, that's fine, but one should be clear about that. Otherwise you get the exact Microsoft situation you mentioned, and no one even knows about it.
So everyday users should be vulnerable to bootkits and kernel-mode malware...why, exactly? That is useful security. The fact that people do not pursue this type of malware very frequently is an effect of SB proliferation. If it were not the default then these attacks would be more popular.
You're arguing against wearing seatbelts because no evidence has been presented that anyone was ever saved by wearing one. That's just refusing ubiquitously understood data and facts.
SecureBoot ensures a valid, signed OS is installed and that the boot process generally hasn't been completely compromised in a difficult-to-mitigate manner. It provides a specific guarantee rather than universal security. Talking about "many vectors" has nothing to do with SecureBoot or boot-time malware.
Instead of proprietary SecureBoot controlled by megacorps, you can use TPM with Heads based entirely on FLOSS with a hardware key like Librem Key. Works for me and protects from the Evil Maid attack.
You can also use SB with your own keys (or even just hashes)...just because Microsoft is the default included with most commercially sold PCs—since most people use Windows on their PCs—doesn't mean SB is controlled by them. You can remove their signing cert entirely if you want. I have done this and used my own.
Plus they signed the shim loader for Linux anyways so they almost immediately gave up any "control" they might have had through SB.
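The "or even just hashes" variant is easy to picture. A toy sketch of hash-based enrollment (hypothetical image bytes and a plain Python set standing in for the firmware's signature database; not a real `db` format):

```python
import hashlib

# Hypothetical enrolled allowlist: SHA-256 digests of the exact boot
# binaries you permit, analogous to enrolling image hashes in the
# firmware's signature database instead of signing certificates.
allowed_hashes = {
    hashlib.sha256(b"my-trusted-bootloader-image-bytes").hexdigest(),
}

def firmware_would_boot(image: bytes) -> bool:
    """Return True if this exact image's hash is enrolled."""
    return hashlib.sha256(image).hexdigest() in allowed_hashes
```

The trade-off versus enrolling a key: with bare hashes you must re-enroll after every bootloader update, but no third party, Microsoft included, holds any signing authority over your machine.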
The OP is describing the status quo on mobile phones and tablets. On mobile, Secure Boot and systems like it are used to lock out the user. If boot-path integrity is altered, some apps won't work or will provide degraded experiences.
What's happening in the article is what has already happened on mobile: running anything on a mobile OS requires vendor signing, and the vendor locks third-party drivers out of their OS entirely.
It's yet another step toward desktop computing converging with mobile when it comes to software/firmware/boot integrity attestation, app distribution and signing, and the ability to use your own bootloader and system drivers. When Secure Boot was first rolled out on laptops, Microsoft used it to lock the user out of the boot process before it was adapted to let users enroll their own keys; it can always be reverted to its original purpose, which is how it's currently used on mobile.
What's the improved security argument for terminating VeraCrypt's account though? SB does have clear benefits but what is unclear is the motivation for the account termination.
What's the likelihood that this account ban provides zero security benefit to users and was instead a requirement from the government because VeraCrypt was too hard to crack/bypass?
Isn't demanding that users become experts at providing their own security against more advanced actors significantly worse? The control part is unfortunate, but the defaults should let users focus on sharing pictures of cats without fear or any need for advanced cybersecurity knowledge.
I don't know about executable signing, but in the embedded world SecureBoot is also used to serve the customer; id est provide guarantees to the customer that the firmware of the device they receive has not been tampered with at some point in the supply chain.
If you install Windows first, Microsoft takes control (but it graciously allows Linux distros to use their key). If you install Linux first, you take control.
It's perfectly possible for you to maintain your own fully secure trust chain, including a TPM setup which e.g. lets you keep a 4-digit PIN while keeping your system secure against brute-force attacks. You can't do that with the 1990s "encryption is all you need" style of system security.
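A toy model of why the short PIN stays safe: the anti-hammering logic lives in the (simulated) hardware, so every guess costs a try against a small budget, unlike software-only encryption where guesses are free. Pure-Python sketch; the class, limits, and method names are invented, not real TPM behavior:

```python
import hashlib
import hmac
import secrets

class TpmPinSketch:
    """Toy dictionary-attack protection: lock out after a few failures."""

    MAX_TRIES = 5  # real TPMs use tunable lockout counters and timers

    def __init__(self, pin: str):
        self._salt = secrets.token_bytes(16)
        self._pin_digest = hashlib.sha256(self._salt + pin.encode()).digest()
        self._secret = secrets.token_bytes(32)  # the sealed disk key
        self._failures = 0

    def unseal(self, pin: str) -> bytes:
        # Once the failure budget is spent, even the correct PIN is
        # refused until an out-of-band reset, so a 4-digit PIN can't
        # be brute-forced despite only 10,000 possibilities.
        if self._failures >= self.MAX_TRIES:
            raise PermissionError("locked out: too many failed attempts")
        guess = hashlib.sha256(self._salt + pin.encode()).digest()
        if not hmac.compare_digest(guess, self._pin_digest):
            self._failures += 1
            raise ValueError("wrong PIN")
        self._failures = 0
        return self._secret
```

The security rests entirely on the attacker being unable to bypass the counter, which is exactly what the measured/secure boot chain protects: if they could boot arbitrary code, they'd attack the disk image directly instead.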
It's funny, but I just encountered this for the first time the other day. It felt like it took a lot of digging to find out how to add my LUKS key to my TPM... it really took some doing on the HP all-in-one I was trying to put Debian on. Maybe that was just Debian being Debian.
I make the analogy with a company because, on that front, ownership seems to matter a lot in the Western world. It's as if a business had to accept unfaithful management, appointed by another company it buys from, as a condition of using that company's products. Worse, said provider also supplies every other business, and its products are not interoperable. How long before courts step in to stop this and give control back to the business owner?
This gets tricky. If I click on a link intending to view a picture of a cat, but instead it installs ransomware, is that abiding by its owner or not? It did what I told it to do, but not at all what I wanted.
If you connect your computer to the Internet, it can get hacked. If you leave it logged in unattended or don't use authentication, someone else can use it without your permission.
This isn't rocket science and it has nothing to do with artificially locking down a computer to serve the vendor instead of the owner.
Edit: I'd like to add that no amount of extra warranty from the vendors are going to cover the risk of a malware infection.
We don't need to get philosophical here. You (the admin) can require you (the user) to input a password to signify to you (the admin) that you want to install the ransomware when a link is clicked. That way no control is lost.
What if the cat pictures are an app too? The computer can't require a password specifically for ransomware, just for software in general. The UI flow for cat pictures apps and ransomware will be identical.
A computer that can run arbitrary programs can necessarily run malicious ones. Useful operations are often dangerous, and a completely safe computer isn't very useful.
Some sandboxing and a little friction to reduce mistakes is usually wise, but a general-purpose computer that can't be broken through sufficiently determined misuse by its owner is broken as designed.
And what if that customer wants to run their own firmware, e.g. after the manufacturer goes out of business? "Security" in this case conveniently prevents that.
It's fair to think of secure boot in only the PC context but the model very much extends to phones. It seems ridiculous to me that to use a coupon for a big mac I have to compromise on what features my phone can run (either by turning on secure boot and limiting myself to stock os or limiting myself to the features and pricing of the 1 or 2 phones that allow re-locking).
Where is this "extremely well documented process" to enroll new signing keys on an embedded device? I don't see one for any of these embedded processors with secure boot.
2. Someone malicious close to the customer, an angry ex, tampers with their device, and uses the lack of Secure Boot to modify the OS to hide all trace of a tracker's existence, or
3. A malicious piece of firmware uses the lack of Secure Boot to modify the boot partition to ensure the malware loads before the OS, thereby permanently disabling all ability for the system to repair itself from within itself
Apple uses #2 and #3 in their own arguments. If your Mac gets hacked, that's bad. If your iPhone gets hacked, that's your life, and your precise location, at all times.
As an embedded programmer in my former life, the number of customers that had the capability of running their own firmware, let alone the number that actually would, rapidly approaches zero. Like it or not, what customers bought was an appliance, not a general purpose computer.
(Even if, in some cases, it was just a custom-built SBC running BusyBox, customers still aren't going to go digging through a custom network stack.)
The argument is that P(customer wants to run their own firmware) cancels out and 2,3 are just the raw probability of you on the receiving end of an evil maid attack. If you think this is a high probability, a locked bootloader won’t save you.
Very neat, but 1) is not really P(customer wants to run their own firmware), but P(customer wants to run their own firmware on their own device).
So, the first term in 1) and 2) are NOT the same, and it is quite conceivable that the probability of 2) is indeed higher than the one in 1) (which your pseudo-statistical argument aimed to refute, unsuccessfully).
I encourage you to re-evaluate this. How many devices do you own (or have you owned) which have a microcontroller? (This includes all your appliances, your clocks, and many other things you own which use electricity.) How many of these have you reflashed with custom firmware?
Imagine any of your friends, family, or colleagues. (Including some non-programmers/hackers/embedded-engineers) What would their answers be?
As if the monetary gain of 2 and 3 never entered the picture. Malicious actors want 2 and 3 to make money off you! No one can make reasonable amounts of money off 1.
On Android, according to the Coalition Against Stalkerware, there are over 1 million victims of deliberately placed spyware on an unlocked device by a malicious user close to the victim every year.
#2 is WAY more likely than #1. And that's on Android which still has some protections even with a sideloaded APK (deeply nested, but still detectable if you look at the right settings panels).
As for #3; the point is that it's a virus. You start with a webkit bug, you get into kernel from there (sometimes happens); but this time, instead of a software update fixing it, your device is owned forever. Literally cannot be trusted again without a full DFU wipe.
And where are the stats for people running their own firmware and are not running stalkerware for comparison? You don’t need firmware access to install malware on Android, so how many of stalkerware victims actually would have been saved by a locked bootloader?
The entirety of GrapheneOS is about 200K downloads per update. Malicious use therefore outnumbers legitimate custom firmware roughly 5 to 1.
> You don’t need firmware access to install malware on Android, so how many of stalkerware victims actually would have been saved by a locked bootloader?
With a locked bootloader, the underlying OS is intact, meaning that the privileges of the spyware (if you look in the right settings panel) can easily be detected, revoked, and removed. If the OS could be tampered with, you bet your wallet the spyware would immediately patch the settings system, and the OS as a whole, to hide all traces.
Assuming that we accept your premise that the most popular custom firmware for Android is stalkerware (I don’t). This is of course, a firmware level malware, which of course acts as a rootkit and is fully undetectable. How did the coalition against stalkerware, pray tell, manage to detect such an undetectable firmware level rootkit on over 1 million Android devices?
This assumes a high level of technical skill and effort on the part of the stalkerware author, and ignores the unlocked bootloader scare screen most devices display.
If someone brought me a device they suspected was compromised and it had an unlocked bootloader and they didn't know what an unlocked bootloader, custom ROM, or root was, I'd assume a high probability the OS is malicious.
Yes. They're making the point that your flashing yellow warning is a good thing, and that it's helpful to the customer that a mechanism is in place to prevent it from being disabled by an attacker.
What happens when there are no more devices that allow for that use case? This is already pretty much the case for phones, it's only a matter of time until Microsoft catches up.
Waydroid lets you run Android apps that don't require SafetyNet. If your bank forces you into the duopoly with no workaround, that's a good reason to switch.
I don't know about executable signing, but in the embedded world SecureBoot is also used to serve the PRODUCER; id est provide guarantees to the PRODUCER that the firmware of the device they SELL has not been tampered with at some point in the PROFIT chain.
You can set up custom SecureBoot keys on your firmware and configure Linux to boot using it.
There's also plenty of folks combining this with TPM and boot measurements.
The ugly part of SecureBoot is that all hardware comes with MS's keys, and lots of software assume that you'll want MS in charge of your hardware security, but SecureBoot _can_ be used to serve the user.
Obviously there's hardware that's the exception to this, and I totally share your dislike of it.
> You can set up custom SecureBoot keys on your firmware and configure Linux to boot using it.
Right, but as engineers, we should resist the temptation to equate _possible_ with _practical_.
The mere fact that even the most business-oriented Linux distributions have issues playing along with SecureBoot is worrying. Essentially, SB has become a Windows-only technology.
The promise of what SB could be useful for is even muddier. I would argue that the chances of being victim of firmware tampering are pretty thin compared to other attack vectors, yet somehow we end up all having SB and its most significant achievement is training people that disabling it is totally fine.
A plain unsigned hash is plenty to guard against tampering. The supply chain and any secret sauce that went into that firmware is just trust: trust that the blob is well intentioned, trust that you downloaded it from the right URL and checked the right SHA, trust that the organization running the URL is sanctioned to do so by Microsoft...
Once all of that trust for every piece of software is concentrated in one organization (Microsoft, Apple or Google), it has become totally meaningless.
I happen to like knowing that my mobile device did not have a ring 0 backdoor installed before it left the factory in Asia. SecureBoot gives me that confidence.
The public keys are provided by the developer. Google, or Apple, for example. It's how they know that nothing was tampered with before it left the factory.
Apple or Google know what the cryptographic signature of the boot should be. They provide the keys. It's how they know that "factory reset" does not include covert code installed by the factory. That's what we're talking about.
At one time at our university we had those desktop-dancer pranks installed on machines everywhere. It was kind of funny when one popped up just as a student was about to defend their work in a lab.
> I still hope that one of these days people in general will realize that executable signing and SecureBoot are specifically designed for controlling what a normal person can run, rather than for anything resembling real security
For home/business users I'd agree. But in Embedded / money-handling then it's a life-saver and a really important technology.
Apple is also somewhat responsible for the attitude shift with the introduction of iOS. 20-25 years ago a locked down bootloader and only permitting signed code would have been seen by techies as dystopian. It's now quite normalized. They say it's about security but it's always been about control.