
I think you're right that it would be difficult for the FCC to define precisely when security updates are required. This is a problem in law generally, one that is usually resolved by imposing a reasonableness standard. Maybe here, a vulnerability needs to be patched if it might reasonably be expected to allow an attacker to take control of a device, or to do so when combined with other known or unknown vulnerabilities. Or maybe a different standard. Then when enforcement/lawsuits come around, the judge/jury/regulator has to evaluate the reasonableness of the manufacturer's actions in light of that standard. We'd love to see commentary on the record as to what the right legal standard might be.


> This is a problem in law generally, one that is usually resolved by imposing a reasonableness standard.

Exactly this. Here in the UK we have "merchantable quality" as the standard for the required quality of any goods sold. How "merchantable" is defined is a matter for the courts to decide on a case-by-case basis. In practice, the courts take into account general market expectations as well as the marketed price to determine the expected quality standard, and it seems to work just fine. If my chair falls apart after a few years of ordinary use by ordinary people, then it wasn't of merchantable quality and the seller is in breach of the law.

In the case of security vulnerabilities, I think a similar approach would work well. The key thing is to ensure that sellers of IoT products cannot disclaim responsibility for security vulnerabilities altogether, which is exactly the problem today. If an IoT product can be subverted by an adversary after a few years of ordinary use by ordinary people, then the seller should be in breach of the law.


This sounds like a reasonable approach (sorry for the pun). One question - reasonable to whom? (who? - English is my first language, sorry).

I ask because when I was doing security research, we'd often present issues and get responses like "but who is going to think of that?" or "No one could find that", only for someone to think of or find it later and take over a system. I still occasionally hear this from software developers (even though the industry as a whole has gotten much better over the years), but quite often from people who work in "cyberphysical" systems (e.g. IoT).

Part of the tension seems to come from the fact that some infosec people can be equally unreasonable, declaring something utterly useless if there's a remote theoretical chance of a problem.

Unrelated to the above:

> Maybe here, a vulnerability needs to be patched if it might reasonably be expected to allow an attacker to take control of a device...

I suspect you know this and short-cutted it for conversation, or maybe these are all the same legally, but "take control of a device" isn't the only win condition - DoS, info leaks, and so on also exist. I note this because I'm kind of curious whether the law considers those the same or vastly different scenarios, and whether any sort of FCC regulations would include them.


A cyber vulnerability is a defect in a product that is expected to function correctly. In other domains (cars, apartments, pharmaceuticals), if there is a defect, the manufacturer is responsible for ensuring it is fixed.

It seems pretty simple. The standard should be the same as used in other industries where vendors need to recall, repair, or refund products in case of defects.


One way to mitigate this is to require introspection into what the update is. This has two implicit requirements: that the firmware is source-available and that its builds are reproducible. With those two requirements you would be able to see what is being updated, and prove that the update your device receives is actually the update the manufacturer said they created.

The second requirement is something that is really overlooked in the software supply chain, partly because of the difficulty in achieving it. But it's a goal that the proper push from regulators could help us reach.
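
As a rough sketch of what that verification could look like (the build command, file paths, and hash choice here are all hypothetical illustrations, not any real vendor's process): rebuild the firmware from the published source and check that it is bit-identical to the update the device received.

    # Sketch: check a received OTA update against a reproducible build
    # of the vendor's published source. Paths/commands are hypothetical.
    import hashlib
    import subprocess
    import sys

    def sha256_of(path):
        # Hash the file in chunks so large firmware images fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Rebuild from source; a reproducible build must yield exactly the
    # same bytes as the image the vendor distributed.
    subprocess.run(["make", "firmware.bin"], cwd="vendor-source", check=True)

    local = sha256_of("vendor-source/firmware.bin")
    received = sha256_of("received-update/firmware.bin")

    if local != received:
        sys.exit("MISMATCH: update does not correspond to the published source")
    print("OK: update matches the reproducible build (" + local + ")")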

A knock-on benefit is that this helps secure the update channel: if you are requiring firmware updates, you must also require a way to make sure those updates themselves are delivered securely (since an update mechanism inherently creates more attack surface).
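
For the update-channel side, the usual building block is a signature check against a key pinned on the device. A minimal sketch using the Python cryptography library and Ed25519 (the key format and file names are assumptions for illustration):

    # Sketch: device-side verification that an update was signed by the
    # vendor's pinned key before installing. File names are hypothetical.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # Public key baked into the device at manufacture (32 raw bytes).
    with open("vendor_pubkey.raw", "rb") as f:
        vendor_key = Ed25519PublicKey.from_public_bytes(f.read())

    with open("update.bin", "rb") as f:
        image = f.read()
    with open("update.bin.sig", "rb") as f:
        signature = f.read()

    try:
        vendor_key.verify(signature, image)  # raises InvalidSignature on mismatch
        print("signature valid: safe to install")
    except InvalidSignature:
        print("signature invalid: rejecting update")

The two checks are complementary: the signature proves who shipped the update, while the reproducible-build comparison above proves what is actually in it.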


As a very broad starting point, we should be sure to address the fundamentals of security:

CIA: Confidentiality, Integrity, Availability.



