Hacker News | goblin89's comments

In the context of eIDAS, your phone starts to be used for much more sensitive matters than typing comments or even logging in to your bank. The repercussions of having a secretly patched bootloader can include another person assuming your identity, including for large B2B transactions.

Requiring citizens to have (buy) some device to simply prove they are who they are seems hostile and dystopian to me. Some say it’s the future; I’m not convinced.

However, if you were to allow me to use my pocket computer (and nothing else) to prove I am who I say I am, you would want to trust that I am not pretending to be somebody else after extracting private keys from their phone or whatnot. I.e., you would want to require some sort of trusted computing.

Currently, that seems to only be provided by closed ecosystem phones.

Even so, I think it’s a mistake to be rolling out eIDAS as a mobile app first. The specification allows for this to be a dedicated hardware key (maybe even something YubiKey-like, and the EU already requires all phone manufacturers to support USB-C), so why not start with that?


> Requiring citizens to have (buy) some device to simply prove they are who they are seems hostile and dystopian to me.

Actually, that is not what’s happening. Based on further research, the use of eIDAS is required to be left up to the citizen’s decision.


The reason (or, depending on your inclinations, the excuse) for trusted computing to exist is not to guarantee that I didn’t patch the bootloader of the phone on which I type my comment; it’s to guarantee I didn’t patch the bootloader of the phone on which your grandma logs in to her bank without her knowledge.

No, the reason is to let application providers decide which platforms you can run their software on. The reasons why they need that are diverse: DRM, preventing reverse engineering, shifting liability, "cheating" prevention - to name a few, but ultimately they're all about asserting control over the user, just motivated differently in various use cases. "Think of the grandmas".

What's the problem with the current status quo, or the status quo 5 or 10 years ago? 20 years ago there was basically no cheating prevention, but nobody cared. We just didn't play with cheaters. There are still cheaters in all games. No matter what kind of DRM streaming platforms use, their movies are on torrents immediately. The only difference compared to 5-20 years ago is that the user experience is worse. I need to install a lot of intrusive bullshit, and I cannot watch movies at proper resolution. For literally nothing.

It's not just that "user experience is worse", it's an existential threat to Free Software.

In the past, when you had a proprietary tool you needed to use to do something, people could analyze and reimplement it. The reasons to do that varied - someone needed "muh freedomz", someone else wanted to do the thing on an unsupported platform, someone else wanted to change something about the way the tool worked (perhaps annoyed by paper jams)... Eventually you could end up with an interoperable FLOSS reimplementation. This has happened with all sorts of things - IMs, network service clients, appliance drivers, even operating systems - and this is how people like me could switch away from Windows and have their computers (and later phones) remain fully functional in the society around us, perhaps with minor annoyances, but without real showstoppers.

Remote attestation changes this dynamic drastically. Gaim (Pidgin) or Kadu couldn't have been made if service providers like AIM, ICQ, Gadu-Gadu, etc. could determine whether you're using the Official App™ from the Official Store™ on the Official OS™ and just refuse to handle requests from your reimplementation. They could still try to be hostile to you without it, and often did, but it wasn't an uneven fight. Currently we're still in the early days and you can still get by in society by defaulting to using services on the Web, using a plastic card instead of a phone for payments, etc., but this is already changing. And it's not just a matter of networked services either - I bet we're going to see peripheral devices refusing to be driven by non-attested implementations too.

Secure boot chains have some value and are worth having, but not when they don't let the user be in charge (or let the user delegate that to someone else) and when they prioritize the security of "apps" rather than of users. The ability for us as users to lie to the apps is actually essential to preserving our agency. Without that we're screwed, as then, to connect ourselves to the fabric of society, we'll need to find and exploit vulnerabilities that are going to be patched as soon as they become public.


> The ability for us as users to lie to the apps is actually essential to preserving our agency. Without that we're screwed, as then, to connect ourselves to the fabric of society, we'll need to find and exploit vulnerabilities that are going to be patched as soon as they become public.

The same freedom is being abused by malicious actors: even on Windows (BlackLotus, for example), but also on pre-infected phones emptying people's bank accounts. This is an incredibly unfortunate outcome, but what's the solution?

I see no other potential outcome than that free computing and trusted computing are going to be totally separate. Possibly even on the same device, but not in a way that lets anyone tamper with it.


A lot of other freedoms are being abused and always have been, but somehow we don't go and ban kitchen knives, as having them around is valuable. This is a false dichotomy. Systems can be secure and trusted by the user without having to cede control, and some risks are just not worth eliminating.

Most importantly - it's the user who needs to know whether their system has been tampered with, not apps.


> somehow we don't go and ban kitchen knives

False analogy. You can’t have your kitchen knife exploited by a hacker team in North Korea, who shotgun attacks half of the public Internet infrastructure and uses the proceeds to fund the national nuclear program, can you? (I somewhat exaggerate, but you get the idea.)

> Systems can be secure and trusted by the user without having to cede control

In an ideal world where users have infinite information and infinite capability to process and internalize it to become an infosec expert, sure. I don’t know about you, but most of us don’t live in that world.

I agree it’s not perfect. Having to use Liquid Glass and being unable to install custom watch faces is ridiculous. There’s probably an opportunity for a hardened OS which can be trusted by interested parties not to be maliciously altered, and which also doesn’t force as many constraints onto users as current walled gardens do. But a fully open OS, plus an ordinary user who has no time or willingness to casually become a tptacek on the side, in addition to a completely unrelated full-time job that’s getting more competitive due to LLMs and whatnot, seems more like a disaster than a utopia.


> You can’t have your kitchen knife exploited by a hacker team in North Korea, who shotgun attacks half of the public Internet infrastructure and uses the proceeds to fund the national nuclear program, can you? (I somewhat exaggerate, but you get the idea.)

Isn’t the status quo that you need to intentionally choose to allow this?


Yes (well, kinda - attested systems can be and are vulnerable too), and remote attestation is completely orthogonal to that threat anyway. Securing the boot chain does not involve letting apps verify the environment they run in; that's an extra (anti-)feature built on top of secure boot chains.

It's also really incredible how people can see "user being in control" and immediately jump to "user having to be an infosec expert", as if one implied the other. You can't really discuss things in good faith in such a climate :(


Bootloader patching is just what you chose to use in your original false analogy. Letting apps verify the environment they run in is just as critical for the purposes of guaranteeing the digital identity. It’s all pieces of the puzzle.

It's not. I can guarantee my identity by e.g. scanning my ID card on a system with absolutely no secure boot chain. I can also guarantee a secure boot chain with my patched bootloader. Neither of these things require apps to verify the environment they run in.

> I can guarantee my identity by e.g. scanning my ID card on a system with absolutely no secure boot chain.

Your ID card is on your phone. Go ahead: guarantee that you’re not using a duplicate of someone else’s ID card, and that no one could duplicate your card, with a mainstream, widely available consumer phone.

> I can also guarantee a secure boot chain with my patched bootloader.

Go ahead: show how your grandma automatically guarantees to interested parties that I or whoever else didn’t patch her bootloader to run a backdoored OS, while using a mainstream, widely available consumer phone.

> Neither of these things require apps to verify the environment they run in.

Demonstrate a mainstream, widely available consumer phone that does these things without requiring apps to verify the environment they run in.

We can continue this infinitely, but if you keep making sweeping contrarian statements without contributing the proof required then it’s just not worth it.


> Your ID card is on your phone.

No, it's not. It lies on the desk next to me right now. I can communicate with it over NFC and I can't duplicate it. There's a debit card next to it and the same applies there - though it can also be communicated with using a smartcard reader, which can't be done with my ID.

> guarantees to interested parties

The only interested party is my grandma, and she'll come to me for help because her phone will stop working when the boot chain gets compromised (as it should).

> Demonstrate a mainstream, widely available consumer phone that does these things without requiring apps to verify the environment they run in.

Pretty much all of them today? Letting apps verify the environment is an extra feature built on top of secure boot chains, not the other way around. We're only having this discussion because having secure boot chains enables app attestation to work in the first place, and letting the user patch things is just a matter of key management policies. If you think these are "sweeping contrarian statements", you may want to spend some time learning how these things work.

This is not a technical problem; the technical aspects were solved a long time ago. This is a social/political problem of who holds power over whom.
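
To make that concrete, here's a toy sketch of one verified-boot step (nothing vendor-specific, all names made up, and an HMAC stands in for real public-key signatures just to keep it self-contained): each stage refuses to boot the next one unless it was signed with a key enrolled on the device. Whether the owner is allowed to enroll their own key is purely a policy decision layered on top of the same cryptography.

    # Toy model of one verified-boot step. Hypothetical names; HMAC stands in
    # for real public-key signatures so the sketch stays self-contained.
    import hashlib, hmac

    # Keys the device will trust. Whether "owner" is allowed to appear here at
    # all is the policy question; the verification logic below never changes.
    ENROLLED_KEYS = {"vendor": b"vendor-key-material", "owner": b"owner-key-material"}

    def sign(key: bytes, image: bytes) -> bytes:
        return hmac.new(key, image, hashlib.sha256).digest()

    def boot_next_stage(image: bytes, signature: bytes) -> bool:
        # Boot continues only if some enrolled key vouches for the image;
        # otherwise the device refuses to go further (no app involved).
        return any(hmac.compare_digest(sign(k, image), signature)
                   for k in ENROLLED_KEYS.values())

    patched = b"owner-patched bootloader + OS"
    print(boot_next_stage(patched, sign(ENROLLED_KEYS["owner"], patched)))  # True
    print(boot_next_stage(b"tampered image", b"\x00" * 32))                 # False: refuse to boot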


On iOS, the worst you can do is not update your OS and thus be vulnerable to exploits. There is no setting that a casual user could be socially engineered into enabling that would allow the OS to be patched.

> but somehow we don't go and ban kitchen knives, as having them around is valuable

Some countries do :) Though I think physical analogies are misleading in a lot of ways here.

> Systems can be secure and trusted by the user without having to cede control, and some risks are just not worth eliminating.

Secure, yes, trustworthy to a random developer looking at your device, no. They're entirely separate concepts.

> Most importantly - it's the user who needs to know whether their system has been tampered with, not apps.

Expecting users to know things does a lot of heavy lifting here.


I never mentioned users having to know things (what you quoted was about the user getting informed whether their system is compromised, which is the job of a secure boot chain). The user being in control means that the user can decide who to trust. The user may end up choosing Google, Apple, Microsoft etc. and it's fine as long as they have a choice. Most users won't even be bothered to choose and that's fine too, but with remote attestation, it's not the user who decides even if they want to. And we don't need random developers looking at our devices to consider them trustworthy, it's none of their business and it's a big mistake to let them.

> what you quoted was about the user getting informed whether their system is compromised, which is the job of a secure boot chain

The user being informed means they have to know what a compromised system would entail. That alone is a huge and frankly impossible thing to expect from regular people.

> Most users won't even be bothered to choose and that's fine too, but with remote attestation, it's not the user who decides even if they want to.

> And we don't need random developers looking at our devices to consider them trustworthy, it's none of their business and it's a big mistake to let them.

Then you can't demand those developers trust your device.


> That alone is a huge and frankly impossible thing to expect from regular people.

The systems used by regular people could just refuse to boot further when detecting a compromise, so I'm not sure where this comes from. We have prior art for that too. This is still orthogonal to letting users who want to patch things patch them, and not letting the apps verify what environment they run in. It's all compatible with each other, and with both regular and power users.

> Then you can't demand those developers trust your device.

Somehow we could for decades. Whether we'll still be able to in the future depends only on how much noise and friction we'll make about it now.


> This is still orthogonal to letting users who want to patch things patch them, and not letting the apps verify what environment they run in. It's all compatible with each other, and with both regular and power users.

No, they're fundamentally opposed to each other. The entire point is that developers don't want their apps patched by just anyone, especially not malicious actors. A small minority of power users will inevitably get caught in the crossfire.

> Somehow we could for decades. Whether we'll still be able to in the future depends only on how much noise and friction we'll make about it now.

No, you really couldn't. Past lack of technical means doesn't mean anyone trusted your device, or that we had use cases where this was important. (It was also usually solved with external hardware, physical dongles and whatnot.)


> The entire point is that developers don't want their apps patched

That's exactly what I'm trying to say. The entire point is not to secure the user, it's to secure the apps. It's working against the user's interest, as letting the user lie to apps is essential to the user's agency. The technical means used to achieve this could also be used to work for the user and ensure their security without compromising their agency, but that's not what happens on mainstream platforms.

> No, you really couldn't.

Yes, you could. Exactly how you describe, so it was used only where it mattered, and in other cases they just had no choice. Today the friction is so low that even McDonald's app will refuse to work on a device it considers untrustworthy. The user does not benefit from that at all.


> as letting the user lie to apps is essential to the user's agency.

You do understand that in this case the user's agency has a very clear line?

Tampering with electronic identity software is not a fundamental right, the same way tampering with your ID card or passport isn't.

> [...] and in other cases they just had no choice.

QED. Not that they wouldn't or didn't want to.


App attestation does not stop at legally binding identity software, and legally binding identity software can be provided without app attestation. I accept not being able to tamper with my ID card, I may say it's "mine" but it ultimately belongs to the government; I don't accept not being able to tamper with my computers, they wouldn't belong to me anymore if that was the case.

> Not that they wouldn't or didn't want to.

Of course, but my devices' purpose isn't to grant wishes to corporations. In the ideal world they would still have no other choice. Unfortunately the more people use platforms that let them attest the execution environment the less leverage we have against them.


> I accept not being able to tamper with my ID card, I may say it's "mine" but it ultimately belongs to the government; I don't accept not being able to tamper with my computers, they wouldn't belong to me anymore if that was the case.

So where does a digital ID card fit in your model? It's the government's but on your computer.


I have a digital ID card on my desk right now. It does not need to be stored on the phone, which has all the means necessary to communicate with the card. In fact, if it were in a slightly different form factor I could even put it physically into my phone, as it happens to have a built-in smartcard reader. That would still be a more reasonable solution than apps, since then it wouldn't be strongly coupled to a complex device that can break or be compromised in various ways (some of which can't be solved with attestation), and it would maintain a clear separation between what's mine and what's the government's. What exactly would I, as a user, gain by muddling that distinction?

How large is this pre-infected phones problem? Is it large enough to justify sacrificing freedom?

We have had major discoveries of pre-installed malware every year for the past decade. Seems like a fairly big problem.

And how exactly did attestation help there?

Securing apps from the user does not secure the user from malware.


Now you can't bundle malware deep within the system "ROM" unless you want to break SafetyNet's attestation. It's a big change in that respect.

Custom ROMs tell you that this is not true at all.

Custom ROMs no longer pass SafetyNet attestation, which apps such as banking ones (or streaming service ones) check.

I hope you mean Play Integrity, since there is no SafetyNet attestation anymore. And for that: https://github.com/osm0sis/PlayIntegrityFork

But there were similar things for SafetyNet attestation while it still existed.


Product rebrandings are kinda irrelevant.

Your link nicely says "as a general rule you can't use values from recent devices due to them only being allowed with full hardware backed attestation". These attestation workarounds have been rendered increasingly obsolete.
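
For anyone curious what the gating actually looks like on the app side, here's a rough sketch of a backend check. It assumes the Play Integrity token has already been verified and decrypted into JSON; the field names are from my memory of the documented verdict format, so treat them as illustrative rather than authoritative.

    # Hedged sketch: gate a sensitive action on the device integrity verdict.
    # Assumes the integrity token was already verified/decrypted server-side;
    # field and verdict names are illustrative.
    def device_allowed(verdict: dict) -> bool:
        device = verdict.get("deviceIntegrity", {})
        labels = set(device.get("deviceRecognitionVerdict", []))
        # Banking-style apps tend to require the hardware-backed verdict,
        # which unlocked or patched devices generally cannot produce.
        return "MEETS_STRONG_INTEGRITY" in labels

    # A custom ROM typically yields at most basic integrity (or nothing):
    print(device_allowed({"deviceIntegrity":
                          {"deviceRecognitionVerdict": ["MEETS_BASIC_INTEGRITY"]}}))  # False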


Typical studio-grade cans need studio-grade equipment to drive them. No surprise if decent-sounding headphones that already ship with a tailored DAC, amplifier, and ANC cost more than decent headphones for which you need to buy all of that separately (and lug it around if you travel).

Yet, with that taken into account, today the latest DT 1770 Pro still costs over 20% more than the latest AirPods Max.

Considering Apple markets the Max for audio work, they compete on the same turf. This makes Apple’s offer unusually cost-effective, not the other way around. I think this can be attributed to their fragility and inferior sound quality relative to the DT 1770 Pro (at the end of a decent signal chain).


> Yet, with that taken into account, today the latest DT 1770 Pro still costs over 20% more than the latest AirPods Max.

Not sure where you're looking, but it seems I paid 535 EUR for my Beyerdynamics (and that's what Amazon sells them for right now too), meanwhile these Apple headphones cost 579 EUR. So it seems it's the opposite, really: studio-grade headphones being cheaper than the consumer-grade hardware Apple sells.

> Considering Apple markets Max for audio work

They might be marketed like that, because it influences what wealthy consumers choose to buy, but AFAIK no one is sitting with AirPods Max in their studios for work, at least from what I've been able to tell.


> Not sure where you're looking

Both products in the US, on the sites of the respective manufacturers. Maybe you bought the older model (which, by the way, has higher impedance, so a dedicated amplifier is a must; take that into account when you calculate the price).

> no one is sitting with AirPods Max in their studios for work

People absolutely use them for serious work. They are much more of a personal product though, and there are other factors that would make an average studio disinclined to invest in them, like fragility, cost of repair, and a whole bunch of features unnecessary for a studio.

Of course, when the studio already has all the rest of the hardware, a soundproofed room, etc., it could actually be cheaper to buy cans that do not in fact include ANC, DAC, Dolby, amplifier, etc., and maybe even enjoy a bump in audio quality while at it. For someone who does not have all that, it is often simply not a practical choice.


It would be interesting to see the cost breakdown of the BOM for the two headphones.

I wouldn't be surprised if Beyerdynamic has a similar, if not higher, margin.


My number two complaint about AirPods Pro is that decreasing the volume of system sounds doesn’t seem to do much. Every time, the low-battery message makes me jolt and is a bit deafening. It is nice that it has no vocal component, but it’s still quite annoying. Curious if anyone has compared them to the Max in this regard.


The new AirPods Max finally have lossless wired audio, which is pretty nice and makes them catch up with the Pros.

Does anyone have experience with obtaining a flatter frequency response from any AirPods, though? While maintaining the full power of noise cancellation.

My experience with Pros has always been that they exaggerate the bass. The EQ settings available in Music are coarse, and I don’t know of any other way to control frequency response independently of the app that plays the sound.

I know they are not really best for critical audio work, but they are damn convenient.


Apple added wired lossless audio last year when they moved to USB-C iirc


I see, I remember checking that they didn't support a high-definition wireless codec but missed the part where they gained lossless over the wire last year.

Why can’t they squeeze in that codec, considering the Pros have had it for years and are a lot smaller?

Edit: apparently I was confusing AirPods Pros with Sony WH models, which have LDAC. I guess there is no chance Apple adopts LDAC, even in their large heavy cans.


LDAC is also not lossless btw.


At nearly 1000 kbps I think it is close enough for my ear.


Based on the wording, the AirPods Max 2 look to have the same limitation as the AirPods Max (USB-C), where using wired audio means the mic is not usable.

Really quite annoying from the "damn convenient" aspect as well.


> My experience with Pros has always been that they exaggerate the bass

Based on my experience, almost all consumer-grade headphones (in-ear and over-ear) seem to suffer from this; I'm guessing people tend to prefer bass-heavy over "not enough bass". Not until you start looking at headphones meant for studio use does the bass seem to get closer to what you'd expect.


If you're using Android, there's a global EQ available (mostly). I use an app called Wavelet that lets you search for your headphone model and download a pre-made profile.

iPhone users are kinda out of luck, but the AutoEq database can show you how to set Music's equalizer to approximate a flat response.


Research by Harman suggests almost everyone, musicians and pros included, prefers exaggerated lows and highs over a flat response. Check the "Harman curve".

And there is certainly a way for you to set a system-wide EQ; see what AutoEq recommends.


With an ordinary fretted guitar, you can more or less perfectly tune it to what you play, but not perfectly tune it in a global sense.

That’s an issue with tuning instruments in general, and why pianos are generally slightly out of tune as a compromise.

As you get used to a particular guitar and strings, and as you train your ear, you can also learn to work around the imperfections by adjusting how you hold down the strings (even with a fretted guitar, you can slightly repitch a string by holding it differently).
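
For a rough sense of why no global tuning compromise can be perfect, here’s a quick back-of-the-envelope sketch (nothing guitar-specific; it just compares equal-tempered intervals to pure ratios):

    # How far 12-tone equal temperament drifts from pure (just) intervals.
    import math

    def cents(ratio: float) -> float:
        # 1200 cents per octave, logarithmic
        return 1200 * math.log2(ratio)

    just = {"major third": 5 / 4, "perfect fifth": 3 / 2}
    tet = {"major third": 2 ** (4 / 12), "perfect fifth": 2 ** (7 / 12)}

    for name in just:
        drift = cents(tet[name] / just[name])
        print(f"{name}: equal temperament is {drift:+.1f} cents off the pure ratio")
    # The major third ends up about +13.7 cents sharp, the fifth only about -2 cents off,
    # and the error lands in a different place in every key - hence the compromise.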


Classical guitarists are used to pushing nylon strings into consonance by compressing the string either towards the nut or the bridge. Not so easy with steel, where players will just preemptively retune to whatever chords are most prominent in the song.


I generally play with lighter strings: 8.5-40 Mighty Slinky, Fender scale. I noticed that when I switched, my fingers pay much more attention to pressure and to staying in tune with microbends.

Been thinking of going a bit lighter recently, and also getting a classical.


Streaming is no replacement for a physical music library.

It’s not only that a lot of good music is not on streaming: music also gets removed. I have a smart playlist that gets automatically populated with songs in my library that are not available (evidently pulled from Apple Music), and it is growing with tunes that I like and that are sometimes impossible to find elsewhere. If I had had the foresight to get actual copies, I could still listen to them.


My "Random Singles" Youtube playlist created in 2006 is approaching 1500 tracks. ~200 of them are hidden, because the video is no longer available.

It's not that I can't find other copies (although in some cases I literally can't), it's that the information has been deleted from my records. My exocortex has had a scalpel remove something, and no amount of backtracking and process of elimination is going to restore it at this large-N corpus size.

I was able to back up what was still up a few years ago, so there's a hard drive in my closet with some of it. But if I tried that at this point, Youtube is pretty determined to fight me with IP blocks.


You can stream from your own non-physical library though, which is a great replacement for a physical music library.


True.


The marketing move of offering an unlimited plan reveals that storage and traffic are not that expensive, and that someone made a choice that light users will subsidize heavy users. With that, hiding your data from you and subsequently deleting it, at least without first encouraging you to download it within some post-downgrade grace period, would be a choice, not a necessity, and is user-hostile.

If it is an actual necessity, the picture is this: the service chose to market an unlimited plan to attract more users, then realized it was losing so much money on storage and traffic that it would, with the above move, unapologetically burn bridges with existing users who had shown themselves willing to pay (and who maybe only needed to downgrade temporarily for whatever reason), and yet its strategy is apparently to keep offering that plan (in hopes of turning things around with more light users joining?). In that case I would question whether that service is capable of even medium-term planning.


No matter their actual costs to provide the service, I'm struggling to see why they should not immediately delete all of your stored files upon cancellation of the storage service.

They are a European company, so you are the customer, not the product and recipient of subsidies. They use less manipulation and fewer dark patterns than an equivalent American company would.

You pay, you get service. You don't pay, you don't get service. If they can't bill you, they should try to communicate with you for a few months before treating it as a cancellation. If you cancel, then your choice is clear and you should expect your service to be immediately terminated at the end of the current billing period. If their service is storing files for you, termination of the service means deletion of the files.

There is no need for a grace period when you knowingly and voluntarily make the decision to terminate a file storage service.


> you are the customer, not the product and recipient of subsidies

They also do advertisement (promoted tracks and audio ads), but this is irrelevant to my point; what I described applies regardless, including the fact that heavy users of the unlimited plan and free users definitely receive subsidies, both from light users and from the ad revenue of the platform.

> You pay, you get service. You don't pay, you don't get service

The definition of the service you receive, and how good it is, includes what happens when you decide to off-ramp from receiving it. Changing your service plan is your indication that you want to change the service; what happens after that is how they handle it. There is no stipulation whatsoever that things stop being available to you immediately.

In fact, in the case of SoundCloud, they themselves prove this, because they did not delete the data but instead continued to keep it for free, which means providing you a service that you presumably stopped paying for. The silly move on their part was to do that while not allowing you to download the data, and then emailing the victim urging them to pay to access it, which makes it 100% a dark pattern and means they are effectively blackmailing customers with proven ability and willingness to pay.

If I remember right, Apple (an American company) handles it better and gives you a month to download excess data if you downgrade, but sure, “dark patterns”.

> There is no need for a grace period when you knowingly and voluntarily make the decision to terminate a file storage service.

If you terminated your use of a file storage service, you would expect your personal data to be deleted. However, no one terminated their use of a service; somebody apparently downgraded their payment plan (temporarily or not).


Sounds like they will warn you about your storage limit for a while, so you can choose which data to delete to be under the limit, before deleting your data at random to force you under the limit. Quite reasonable.


You mean Apple? I don’t think they actually delete any minor excess data that may occur incidentally due to a race condition or eventual consistency. It’s just that if you actually downgrade, they do… after a month or so, during which you can still download.


SoundCloud used to be good prior to the redesign.

Recently I decided to evaluate it for serious use and start posting there again, only for their new uploader to tell me I needed to switch to a paid plan, even though I triple-checked that I was well within the free limits, and under my old, now unused username I had uploaded a lot more (mostly experimental things I am not that proud of anymore).

It looks like their microservices architecture is in chaos and some system overrides the limits outlined in the docs with stricter ones. How can I be sure they will respect the new limits once I do pay, instead of upselling me the next plan in line?

Add to that things like the general jankiness, the never-ending spam from “get more fake listeners for $$$” accounts (which seem to be in obvious symbiosis with the platform, boosting the numbers for optics), and last year’s ambiguous change in the ToS allowing them to train ML systems on your work, and it was enough for me to drop it. Thankfully, it was a trial run and I did not publish any pending releases.

If you still publish on SoundCloud, and you do original music (as opposed to publishing, say, DJ sets, where dealing with IP is problematic), ask yourself whether it is time to grow up and do proper publishing!


This sounds like a classic consistency vs latency trade-off. Enforcing strict quotas across distributed services usually requires coordination that kills performance. They likely rely on asynchronous counters that drift, meaning the frontend check passes but the backend reconciliation fails later. It is surprisingly hard to solve this without making the uploader feel sluggish.


That would explain why the front-end would allow you to attempt something that goes over your limits, but not why the back-end would reject something that doesn't go over your limits.


My bet at the time was that they have a bunch of hidden extra limits based on account age, IP/user agent information, etc. If that is true, their problem is that they advertise the larger limits instead of the smaller limits (to get more users signed up), and that they do not communicate when their extra limits apply and instead straight up upsell you, which are both dark patterns.


That sounds plausible. I've had to implement similar reputation-based limits on my own backend just to keep inference costs from exploding, so I sympathize with the fraud prevention angle. Masking that as a generic quota issue to push an upsell is pretty hostile though.


The feeling of being gaslit, when I calculated and recalculated the length of my tracks and compared it with the limits on their pricing page, was quite unpleasant.

Another possibility is that maybe they reduced their limits from 3 to 2 hours of audio around the same time. I don’t know if it happened before or after my experience; I did not read their blogs or press releases, only made sure I was well under whatever limits were listed on their pricing & plans page at the time (I was probably under 2 hours as well, but at this point I can’t be bothered to check). Perhaps that transition was chaotic and for some time their left hand did not know what the right hand was doing.


Fair point. I suspect it comes down to ghost reservations or stale caches. If a previous upload failed mid-flight but didn't roll back the quota reservation immediately, the backend thinks you're over the limit until a TTL expires. Or you delete something to free up space, but the decrement hasn't propagated to the replica checking your quota yet.
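
Something like this toy sketch (all names and numbers made up) is what I have in mind: a failed upload leaves a reservation behind, and until its TTL expires a perfectly legitimate upload gets bounced as over-quota.

    # Toy model of a "ghost reservation"; hypothetical names and limits.
    import time

    LIMIT = 3 * 3600            # advertised quota: 3 hours of audio, in seconds
    RESERVATION_TTL = 15 * 60   # how long a stale reservation lingers

    reservations = {}           # upload_id -> (seconds, created_at)

    def reserved_now() -> int:
        now = time.time()
        return sum(sec for sec, t in reservations.values() if now - t < RESERVATION_TTL)

    def try_upload(upload_id: str, seconds: int, fail_midway: bool = False) -> bool:
        if reserved_now() + seconds > LIMIT:
            return False                 # surfaced to the user as "upgrade your plan"
        reservations[upload_id] = (seconds, time.time())
        if fail_midway:
            return False                 # bug: the reservation is never rolled back
        return True

    try_upload("a", 2 * 3600, fail_midway=True)  # upload fails, but 2 h stays reserved
    print(try_upload("b", 90 * 60))              # False, even though real usage is zero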


Fair point. I suspect it comes down to how they handle retries. If an upload times out but the counter already incremented, the system sees the space as used until an async cleanup job runs. It is really common to have ghost usage in eventually consistent systems.


That’s a possibility.


Yes, TCAS II warns all the way down to 100m AGL (around 320ft above the ground), and they were already between 1000ft and 1500ft (~400m).

It may or may not have advised what to do (to climb/descend/etc.) because that is turned off below 1000ft, and they were approximately at that altitude at the time.

