Hacker News | mholt's comments

Even better IMO is this status page: https://mrshu.github.io/github-statuses/

"The Missing GitHub Status Page" with overall aggregate percentages. Currently at 90.84% over the last 90 days. It was at 90.00% a couple days ago.


It has been pretty rough. Their own numbers report just a single `9` for Actions in Feb 2026 with 98% uptime. But that said -- I don't get the 90% number.

Anecdotally, it seems believable that Actions barfed 1 in 50 times (2%) in Feb. Which is not very nice, but it wasn't 1 in 10 times (10%).


It looks like the aggregate stat is more of a Venn diagram than an average: if any 1 of N services is down, the aggregate is considered down. I don't think this is an accurate way to calculate it. It should be weighted, or in some way show partial outages. This belief is derived from the Google SRE book, in particular chapters 3 (Embracing Risk) and 4 (Service Level Objectives):

https://sre.google/sre-book/embracing-risk/

https://sre.google/sre-book/service-level-objectives/
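If the page indeed counts a minute as down whenever any service is down, the gap between that and a weighted view can be sketched in a few lines. All service names, uptime figures, and weights below are hypothetical, not GitHub's actual stats:

```python
# Two ways to aggregate per-service uptimes (all numbers invented).
# "Strict" treats the platform as down if ANY service is down;
# "weighted" averages uptimes by how much each service matters to you.

services = {          # hypothetical per-service uptime fractions
    "git":     0.999,
    "actions": 0.98,
    "pages":   0.97,
    "api":     0.995,
}

# Strict aggregate: assuming outages are independent, the chance that
# every service is up at once is the product of the individual uptimes.
strict = 1.0
for u in services.values():
    strict *= u

# Weighted average: weight each service by its share of your workflow.
weights = {"git": 0.5, "actions": 0.3, "pages": 0.05, "api": 0.15}
weighted = sum(weights[s] * u for s, u in services.items())

print(f"strict (all-up): {strict:.4f}")    # noticeably lower
print(f"weighted:        {weighted:.4f}")  # dominated by the big services
```

Even with every individual service above 97%, the strict product lands well below the weighted figure, which is roughly the effect the status page shows.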


If you're using all services, then any partial outage is essentially a full outage. Of course, you can massage the numbers to make it look nicer in the way you described but the conservative approach is better for the customers. If you insist, one could create this metric for selected services only to "better reflect users".

That being said, even when looking at the split uptimes, you'd have to do a very skewed weighting to achieve a number with more than one 9.


> That being said, even when looking at the split uptimes, you'd have to do a very skewed weighting to achieve a number with more than one 9.

It's definitely bad no matter how you slice the pie.

If GH pages is not serving content, my work is not blocked. (I don't use GH pages for anything personally)


That's how you count uptime. Your system is not up if it keeps failing when the user does something.

The problem here is the specification of what the system is. It's a bit unfair to call GH a single service, but it's how Microsoft sells it.


As a “customer”, I consider github down if I can’t push, but not down if I can’t update my profile photo (literally did this today, sending out my github to potential employers for the first time in a long time). This stuff is notoriously hard to define


> That's how you count uptime.

It's not how I and many others calculate uptime. There is no uniformity, especially when you look at contracts.


Thinking back to when I was hosting, I think telling a customer "your web server was running fine, it's just that the database was down" would not have been received well.


I mean I think it's useful. It answers the question, "what percentage of the time can I rely on every part of GitHub to work correctly?". The answer seems to be roughly 90% of the time.


I don't use half of the services, so the answer is not straightforward:

https://mrshu.github.io/github-statuses/


Nobody cares about every part of GitHub working correctly. I mean, ok, their SREs are supposed to, but tabling the question of whether that's true: if tomorrow they announced a distributed no-op service with 100% downtime, you should not have the intuition that the overall availability of the platform is now worse.


In a nutshell, why would the consumer care (for the SLO) about how the vendor sliced the solution into microservices?


It will depend on the contract.

When I was at IBM, they didn't meet their SLOs for Watson and customers got a refund for that portion of their spend.


An aggregate number like that doesn’t seem to be a reasonable measure. Should OpenAI models being unavailable in Copilot because OpenAI has an outage be considered GitHub “downtime”?


As long as they brand it as a part of GitHub by calling it "GitHub Copilot" and integrate it into the GitHub UI, I think it's fair game.


The third-party aspect is irrelevant, but while high downtime on any product looks bad for the company and the division, I consider GitHub Copilot an entirely separate product from GitHub, and GitHub Copilot downtime doesn't interfere with my use of GitHub repos or vice versa, so I'd consider its downtime separately.

GitHub Actions, on the other hand, is frequently used in the same workflows as the base GitHub product, so it's worth considering both separately and together, much like various Azure services, whereas I see no reason at all to consider an aggregate "Microsoft" downtime metric that includes GitHub, Azure, Office 365, Xbox Live, etc.

The most useful metric, actually, is "downtimes for the various collections of GitHub services I regularly use together", but that would obviously require effort to collect the data myself.


My use of GitHub is like yours; I depend on Actions, but I couldn't give less of a damn about Copilot. However, Microsoft has tried to get people to adopt Copilot-heavy workflows, where Copilot plays an integral part in the pull request review process. If your process is as Microsoft pushes for -- wait for Copilot to comment, then review and resolve the stuff Copilot points out -- then Copilot being down means you can't really handle pull requests, at least not in accordance with your standard process. For people who embrace Copilot in the way Microsoft wants them to, a GitHub Copilot outage has a serious impact on their GitHub experience.


What is Google's uptime (including every single little thing with Google in the name)?


I don't think that's a fair comparison. Google Maps, Google Calendar, Google Drive, Google Search, Google Chrome, Google Ads, etc. are all clearly completely different products which have very little to do with each other; they're just made by the same company called Google.

GitHub is a different situation. There's one "thing" users interact with, github.com, and it does a bunch of related things. Git operations, web hooks, the GitHub API (and thus their CLI tool), issues, pull requests, Actions; it's all part of the one product users think of as "GitHub", even if they happen to be implemented as different services which can fail separately.

EDIT: To illustrate the analogy: Google Code, Google Search and Google Drive are to Google what Microsoft GitHub, Microsoft Bing and Microsoft SharePoint are to Microsoft.


Completely agree; it makes it worse, actually, as GitHub's secondary functions, so to speak, are things we implicitly rely on.

When I merge to master I expect a deploy to follow. This goes through git, webhooks and Actions. Especially the latter two can fail silently if you haven't invested time in observability tools.

If Maps is down I notice it and can immediately pivot. No such option with GitHub.


It depends. For example, I would consider Google Drive uptime part of Google Docs' overall uptime: if I can't access my stored documents, or can't save a document I've been working on for the past 3 hours because Drive is down, I would be very pissed and wouldn't care whether it's Drive or Docs that is the problem underneath. I still can't use Google Docs as a service at that point.


I think reasonable people can disagree on this.

From the point of view of an individual developer, it may be "fraction of tasks affected by downtime" - which would lie between the average and the aggregate, as many tasks use multiple (but not all) features.

But if you take the point of view of a customer, it might not matter as much 'which' part is broken. To use a bad analogy, if my car is in the shop 10% of the time, it's not much comfort if each individual component is only broken 0.1% of the time.


> But if you take the point of view of a customer, it might not matter as much 'which' part is broken. To use a bad analogy, if my car is in the shop 10% of the time, it's not much comfort if each individual component is only broken 0.1% of the time.

Not to go too far out of my way to defend GH's uptime, because it's obviously pretty patchy, but I think this is a bad analogy. Most customers won't have a hard reliance on every user-facing GH feature. Or to put it another way, only a tiny fraction of users will actually have experienced something like the 90% uptime reported by the site. Most people in practice are probably experiencing something like 97-98%.
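A toy calculation makes this point concrete. All service names and uptime numbers below are invented for illustration, not GitHub's real figures:

```python
# The uptime an individual user observes depends only on the services
# in their own workflow, not on every service the platform offers.

uptimes = {               # hypothetical per-service uptime fractions
    "git":      0.999,
    "actions":  0.98,
    "issues":   0.995,
    "pages":    0.97,
    "copilot":  0.96,
    "packages": 0.97,
}

def observed(my_services):
    """Fraction of time every service *I* use is up (assumes independence)."""
    p = 1.0
    for s in my_services:
        p *= uptimes[s]
    return p

everything = observed(uptimes)                    # strict "all of GitHub" view
typical = observed(["git", "actions", "issues"])  # a common CI-centric workflow

print(f"all services: {everything:.3f}")  # close to the grim headline number
print(f"my subset:    {typical:.3f}")     # what most users actually feel
```

With these made-up figures the all-services number lands near 88% while the three-service subset sits around 97%, roughly the spread discussed above.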


Sorry, by 'customer' I meant to say something like a large corporate customer - you're buying the whole package, and across your org, you're likely to be a little affected by even minor outages of niche services.

But yeah, totally agree that at the individual level, the observed reliability is between 90% and 99%, and probably toward the upper end of that range.


A better analogy is if one bulb in the right rear brake light group is burnt out. Technically the car is broken. But realistically you will be able to do all the things you want to do unless the thing you want to do is measure that all the bulbs in your brake lights are working.


That's an awful analogy, because "realistically you will be able to do all the things you want to do" doesn't hold. If a random GitHub service goes down there's a significant chance it breaks your workflow. Not always, but far from never.

One bulb in the cluster going out is like a single server at GitHub going down, not a whole service.


Or if your kettle is not working, is the house considered not working?


I've been on a flight that was late leaving the gate because the coffeemaker wasn't working.


These are two pages telling two different things, albeit with the same stats. The information is presented by OP in a way to show the results of the Microsoft acquisition.


Holy shit, that's nearly five weeks of downtime.

Well, I mean, I guess that's fair really. How long has github been around? Surely it's got five weeks of paid time off by now...


Yeah but if you need Internet failover, cell phone towers are likely flooded. Starlink will be much more available (probably).


Or they’re just offline, because their backup batteries only last a few hours, and the gensets for the backhaul have run out of diesel. Iberian blackout last year, I didn’t even know it had happened until I went to pick the kid up from school - just another day at the home office.


This too depends on which POP/ground station you're landing at.

Maybe less so once the majority of Starlink satellites are capable of laser communications to route your traffic down to a less saturated ground station.


The vast majority of Starlink satellites do have laser interconnect now. They started launching them in 2021 or 2022 I believe.


And yet, try getting a full backup of your Google phone onto your own computer. (Without rooting/wiping the whole thing.) Heck, try getting just your text messages off (without a separate app)!

You can't. (Last time I checked.) The backup is encrypted in the cloud, and the only way to download it is to restore it to a phone.

Whereas I can just plug in my iPhone and get a full backup, complete with sqlite manifest, completely accessible. Text messages, photo library, everything.


Google Takeout. Done. Nice try, though, making a totally irrelevant comparison to excuse Apple's behavior.


Does that include all the local storage from my apps now?


Can you restore it to your phone?


tbf that's Apple's fault, not the choice of the free, unpaid open source developer.


Apple's fault that they didn't bother to edit the text that says "No install fuss"?


Probably they don't know how, now that the LLM helping them write the code has lost that context.

From their GitHub it appears all the code is LLM-generated.


You mean the AI agent that was prompted to vibe code this?


I never loved the idea of GSB or centralized blocklists in general due to the consequences of being wrong, or the implications for censorship.

So for my master's thesis about 6-7 years ago now (sheesh) I proposed some alternative, privacy-preserving methods to help keep users safe with their web browsers: https://scholarsarchive.byu.edu/etd/7403/

I think Chrome adopted one or two of the ideas. Nowadays the methods might need to be updated especially in a world of LLMs, but regardless, my hope was/is that the industry will refine some of these approaches and ship them.


Block lists will always be used for one reason or another. In this case these are verified malicious sites; there is no subjective analysis in the equation that could be misconstrued as censorship. But even if there were, censorship implies a right to speech, and Google has the right to restrict the speech of its users if it so wishes. Matter of fact, through extensions, there are many that do censor their users using Chrome.


> censorship implies a right to speech, in this case Google has the right to restrict the speech of it's users

I don't follow. Even if Google does have the legal right [1], that does not make the censorship less problematic, or morally right. And even if it's hard to make a legislative fix ("You want to ban companies from trying to protect their users from phishing?") [2], that doesn't undo the problems of the current state, or mean we should be silent about it.

[1] This is far from certain, as it could be argued to be tortious interference, abuse of market power, or defamation if they call something phishing when it's not... Then there's the question of jurisdiction...

[2] It's a very common debating tactic to assert that a solution is difficult, to avoid admitting a problem exists.


Certainly they have the legal right as you pointed out. Freedom of speech is a legal right not a moral prerogative or entitlement.

HN bans users that violate its rules for example. If I were to insult you severely, HN mods have every right to protect you from my speech and censor me by deleting my message and banning me. The threats posed by these malicious sites are far worse than insults on a forum.

Companies like Google are expected by the public and governments alike to protect their users. They would even be entitled to require an EV cert and age verification for every site a user visits if they wanted. It isn't just their legal right; everyone, not just corporations, has the right to pursue what they feel is the correct way of doing things. Their responsibility is to their investors first, users second, governmental regimes third and everyone else after that. Your presumed entitlement here is the same as everyone else's.

For #2, I don't recall claiming a solution was difficult (unless you thought banning companies from protecting their users was somehow a thing I was saying should be done). Matter of fact, I am nearly incensed that HN users are utterly and shamefully ignorant of the harm users suffer. You should be ashamed of your ignorance. Not only this, but I've had long debates on HN along similar lines when it came to topics like the Play Store requiring developer authentication. It almost makes me wish your freedom of speech were entirely taken away from you so you could have some understanding of the suffering people undergo, and of what such measures are trying to prevent. Freedom of speech has never been a right obtained at the expense of harm to others. The moment someone is harmed, you lose your freedom of speech; that is the case in a public arena where such laws exist, and even more so on private platforms. But I did say almost! I think you're just used to problems of a technical nature, whereas in this case it is a human threat (crime) problem.

Furthermore, I am constantly disappointed at the sheer dereliction of duty exhibited by HNers when it comes to security. Your product must protect your users by default; there is absolutely no acceptable amount of harm users should experience for the sake of non-users. Site owners have no entitlements to browsers, only privileges. Browsers can and do absolutely play gatekeeper to websites.

As far as #1, I have argued tortious interference about Google's practices myself before. I am not a lawyer, so I don't know if this qualifies or not, but can I also claim tortious interference if HN bans me and I miss out on HN job posts or exposure to the startup scene? Can I claim defamation for being banned on HN wrongfully? Is HN abusing its market power because of the sheer number of Silicon Valley types that aggregate on this site? And I suspect you're not a lawyer either, because jurisdiction is a concept that applies to a judicial body (hence: juris); Google is not a judicial body, and they're not handing out a judicial sentence.

I wonder, are you aware of the CA/B forum? hmm..

The fact is, a browser is software used to access network resources. Part of that feature set, as advertised explicitly to users, is that it will make attempts to keep their access to the network safe and secure. In other words, all of your claims of entitlement are nullified by the simple fact that the "censorship" is an advertised, optional, opt-out-able feature, one that most browser users use. Not only that, there is always the option to click through the Safe Browsing warning and visit the site anyway.

Both from a moral and legal perspective, I challenge you to make yourself liable for all damages people suffer as a result of not having Safe Browsing enabled. Insure them free of charge. Next thing I know you'll be claiming enterprise networks shouldn't "censor" either, or better yet, that they can but people who can't afford multi-million-dollar firewalls shouldn't be protected, for the sake of access you feel entitled to.

As far as libel goes, simply being incorrect doesn't make it libel; it needs to be intentional. So long as they can back up the reasonable cause of your site making it onto their list, it isn't libel. Just as your IP can land on their lists and Gmail will refuse to accept email from you (the same as every public email provider).

Freedom of speech is not freedom of access, both morally and legally. You dilute actual freedoms when you try to abuse them to gain advantages like this. It is important to understand when being able to do something is a right versus a privilege. It is also important to solve the root cause of problems. Even though I disagree with you on this topic, Google's monopoly is a big problem, as is Microsoft's and other companies'; but your solution being "there shouldn't be a solution" is (I'd dare say) morally objectionable and abhorrent considering the types of harm people suffer as a result. Perhaps appeals to block lists could have a more legally regulated process? But there are more pressing issues, like payment processors banning merchants and users alike (worse in impact than browsers blocking sites?), and not a single government would dare claim that is out of line, let alone regulate it. The right of companies to do business how they want is highly protected in free-market economies, and something like Chrome isn't even a paid product or service over which you could have a commercial or contractual claim.

Since this is a long comment, I'll finally add this: if you seriously think Google can't block arbitrary sites in its free software and service, then by that logic users should also have entitlements over bans on sites like HN, and even over things like your open source project. You can't just not accept pull requests or ignore them; if it affects a user relying on your project, your decisions preventing them from doing things would be tortious interference, and claiming negative things about their pull requests would be libel.


> Freedom of speech is a legal right not a moral prerogative or entitlement.

No, the 1st amendment of the US constitution is a legal right. Free speech in general is a much broader concept, not limited to its legal implementation.

> For #2, I don't recall claiming a solution being difficult

You didn't, but it is how these discussions usually develop, and I thought of saving some time. And indeed that's how it went.


It did not go that way. The problem is not the difficulty of doing anything; a private corporation offering a free product has the right to do whatever it wants with that product. Their reasoning behind GSB is not for you to debate.

Free speech in general is a legal concept. Rights in general are not moral concepts; when you say you have a right to do something, it is always in the context of a rules-based framework. When you say something is right (same word, different meaning) or wrong, that is morality. Speech can be right or wrong. Prohibiting someone from speaking can also be right or wrong, but that isn't called "freedom of speech" or "censorship". If you can't articulate why something is morally wrong without referring to a right under some rules-based framework, then you're not talking about morality; you're talking about not liking some rule.

When you are in someone's house, they have the right to decide what you can talk about or not talk about, because it is their home and your presence there is a privilege. Replace home with business, and then replace business with a free product that you're not even paying for and that's this situation.

"I don't like it" is not a moral reasoning. You need to be able to articulate why something is immoral if you're going to use morality as a reason. Similarly, you need to explain what specific laws grant you an entitlement if you feel like a legal entitlement is violated.


> Replace home with business, and then replace business with a free product that you're not even paying for and that's this situation.

And then replace business with country and society that enables that business' existence, and in whose sovereign land that business is located (i.e. in whose house it is), and that's still this situation.

> Free speech in general is a legal concept.

So if someone says "free speech", you just have no idea whatsoever what they're talking about, until they also tell you which country/jurisdiction they're talking about, do you?

And I didn't make a moral argument - I said that there is a moral (not just legal) argument to be made. I don't have the time or inclination to walk you through why free expression is desirable, or why letting a handful of giant entities crush speech and smaller businesses is undesirable. If you need that explained to you, I don't think we'll see eye to eye no matter how long we debate.


> And then replace business with country and society that enables that business' existence, and in whose sovereign land that business is located (i.e. in whose house it is), and that's still this situation.

Yes, so it is a legal construct then? Countries and societies generally exist under the rule of law. In the US, both legally and socially, we've decided to accept a free-market capitalist way. Under that social agreement, both individuals and companies have certain rights and entitlements over their products and services.

Under a more universal moral regime, if you have good reason to believe someone might come into harm's way, you have an obligation to do something about it so long as it is within your means. Preventing others from coming to harm supersedes the presumed entitlements of third parties. In this case, Google is nice enough to let users disable GSB or bypass GSB warnings. When a certificate for a website expires, for example, every browser shows a warning, similar to GSB. Almost every single time, the site isn't compromised and there is no MITM attack happening, but we accept that this is the best course of action. I don't see you protesting that, because you understand it is the right thing to do. But in this case you just don't like GSB, and you're looking for some moral ground to stand on because no other ground will support you.

> So if someone says "free speech", you just have no idea whatsoever what they're talking about, until they also tell you which country/jurisdiction they're talking about, do you?

You just said it isn't a legal concept, so why does that matter? But context does matter, in this case we're on a US based website talking about a US based company.

> why free expression is desirable, or why letting a handful of giant entities crush speech and smaller businesses is undesirable.

Aha! You don't need to walk me through anything, but I think you confuse what is desirable and undesirable with what is moral and immoral. For what is desirable and undesirable, you use the law to enact your preferences. Your desires, however, have no bearing on morality.

I don't think we'll see eye to eye either, but because I suspect our understandings of morality and the rule of law are not aligned.


I know for a fact that GSB contains non-malicious sites in its dataset.


It is possible, for sure. What's your point? Spamhaus does too with IPs, abuse.ch does too, every enterprise firewall's reputation list does too. That's the whole point of reputation: if it were 100% reliable it wouldn't be "reputation".


You claimed they are all malicious sites or they wouldn’t be included, but that’s factually incorrect.


I assumed a human review was always in place; if not, then you're right and I was wrong.


But does it automatically provision the DNS records and rotate the keys?

I'm actually kind of furious at nginx's marketing materials around ECH. They compare with other servers but completely ignore Caddy, saying that they're the only practical path to deploying ECH right now. Total lies: https://x.com/mholt6/status/2029219467482603717


Thanks for sharing this, we like it a lot. Mohammed Al-Sahaf implemented this for us so that releases can be made by a quorum of maintainers rather than being blocked by me every time.

Here's the first release done with it: https://github.com/caddyserver/caddy/releases/tag/v2.11.0-be...

And you can see the PR flow where the action happens: https://github.com/caddyserver/caddy/pull/7383


I did some research for a large financial library we were helping maintain, to improve CI, and did a writeup on the best way to redo the CI for:

* pushing a container image to Docker Hub

* pushing an SDK to npm

* pushing a Rust crate to crates.io

* publishing a cli executable and some docs to GitHub as a release

We settled on an eerily similar approach to Caddy's, sans the release proposal. We are also heavily focusing on trusted publishing and attestation (via cosign) for any platform that supports it.

I went through this today and it is just a work of art. Mohammed Al-Sahaf is an artisan of CI, truly.


That is one beautiful instrument. What does the front look like?

And I know we can't hear it in its "original glory" anymore, but is the sample only like 10 seconds long because it's proprietary, or is the cello too delicate to play a full number on, or...?

Edit: Found the museum piece with full pictures: https://emuseum.nmmusd.org/objects/6684/violoncello?ctx=7735...


Old string instruments generally remain playable[1] so it most probably wouldn’t be too delicate to play. However most old Amati and Stradivarius instruments will have had a refit during the Romantic period to play on metal strings. This massively increases the string tension compared to the gut strings that would have been used in the original design. This refit often involves a new bridge, soundpost and nut[2] and (if it happened) would have moved the position of the soundpost relative to the bridge. So you’d want to undo all that to hear it in its original state. You’d also want to remove the end pin and use a historically accurate bow as cello players used to play by sort of cradling the cello in their legs rather than having it on a pin and the bow changed shape.

Gut strings have more resonance and a much better sound (in my opinion) at the expense of being less loud and much harder to keep consistently in tune. The romantic movement led to larger forces in the orchestra and more brass, which meant strings had to be louder and in greater numbers to get a balanced sound, and obviously being able to stay in tune over the course of the (longer and longer) pieces of music was convenient!

Here’s an example of a historically-informed performance of the Bach cello suite no 1 in G so you get an idea what the gut strings and bow sound like. https://youtu.be/cGnZHIY_hoQ?si=J1GMF4Yg2h4dQ6-A

Source: Wife is an “early musician” (albeit not a string player) and teaches at a couple of big conservatoires in London. I was a professional bass player (not early music; jazz and similar) so I know about string setup from that. Have lots of early-musician friends.

[1] Unlike old wind instruments (recorders etc) where the players’ breath causes the instrument to degrade so they literally become unplayable over time. That is why even though we have renaissance recorders for example, they are in museums and modern reproductions made by copying their measurements etc play better than the originals. That’s not true of old string instruments. There are 16th century string instruments out there being played all the time.

[2] That’s not as radical as it sounds. The soundpost plays a crucial role in the sound production of the instrument, as it transmits the resonance of the strings into the body of the instrument, but it’s basically just a piece of dowel rod. The nut and bridge would conventionally be replaced whenever you put a new fingerboard on, which happens a lot as you wear them out.


Replying to myself to add two further things which I should have mentioned before. Firstly, I looked at the photos, and that instrument is on gut. You can see it clearly here[1]. So you may have found it interesting, idk, but you can ignore everything I said about setup.

Secondly one thing that makes this instrument so special is that as rare and precious as original Amati and Stradivarius violins are, original cellos and basses are rarer. There are two reasons for this:

1) Because there are fewer cellos and basses in the orchestra than there are violins and violas, and fewer cello concertos etc than violin concertos for high-end virtuosos to perform, the elite makers made far fewer of these instruments originally. That goes double for an instrument like this that was literally made for a king. All of these instruments have a distinguished history but that's on another level.

2) It's much easier for a large instrument to be damaged. Beyond the day-to-day knocks etc. that happen when you have a massive instrument cluttering up your house, given the history of wars etc. in Europe since the 16th century it's practically a miracle that any of these instruments survived intact.

If you're interested in historical instruments, the Horniman museum in London has a great collection: https://www.horniman.ac.uk/ Also there are pretty cool collections in Brussels https://www.mim.be/en and Amsterdam https://flutealmanac.directory/listing/rijksmuseum-musical-i...

[1] https://emuseum.nmmusd.org/internal/media/dispatcher/86655/f...


The new Horniman remains super, but I miss the old dusty, "Dolmetsch recorder collection in a case untouched in decades" Horniman.

Museums have to renew. It's a massive improvement overall for community engagement, but the old one was a place where you could feel like you were discovering things, not being told things. The Science Museum in London is the same: cleaned out the trash, made it less romantic and interesting.


> Firstly I looked at the photos and that instrument is on gut. You can see it clearly here[1].

Aren't the two strings on the left of that picture made of metal?


I would think they're gut core with metal winding around them - much as the bass strings on a classical guitar are nylon with a metal winding.

On a modern instrument the core would be metal as well.


Ah right! Thanks for the clarification.


If you look at the R1 pages, you'll see those pages, though scroll-heavy, at least contain more useful info. I'm hoping that after the R2 is actually available to order, they'll update the page with more information. It's still early.


Of course they're down while I'm trying to address a "High severity" security bug in Caddy but all I'm getting is a unicorn when loading the report.

(Actually there are 3 I'm currently working on, but 2 are patched already; still closing the feedback loop.)

I have a 2-hour window right now that is toddler free. I'm worried that the outage will delay the feedback loop with the reporter(s) into tomorrow and ultimately delay the patches.

I can't complain though -- GitHub sustains most of my livelihood so I can provide for my family through its Sponsors program, and I'm not a paying customer. (And yet, paying would not prevent the outage.) Overall I'm very grateful for GitHub.


Have you considered moving, or at least having an alternative? Asking as someone using Caddy for personal hosting who likes to have their website secure. :)


We can of course host our code elsewhere, the problem is the community is kind of locked-in. It would be very "expensive" to move, and would have to be very worthwhile. So far the math doesn't support that kind of change.

Usually an outage is not a big deal, I can still work locally. Today I just happen to be in a very GH-centric workflow with the security reports and such.

I'm curious how other maintainers maintain productivity during GH outages.


For us the main shift was accepting that “being able to work locally” and “knowing whether users are affected” are two different problems.

Local dev usually survives outages just fine. What hurts is losing external signals and assuming things are okay when they’re not.

After a few incidents like this, we stopped relying on a single monitoring setup. One self-hosted probe plus at least one fully independent external check reduced blind spots a lot. It doesn’t prevent outages, but it avoids flying blind during them.


Yep, I get you about the community.

As an alternative, I was thinking mainly of a secondary repo and CI in case GitHub stops being reliable, not only given the current instability but as a provider overall. I'm from the EU, and recently I catch myself evaluating every US company I interact with; I'm starting to realize that mine might not be the only risk vector to consider. Wondering how other people think about it.


> have you considered moving or having at least an alternative

Not who you're responding to, but my 2 cents: for a popular open-source project reliant on community contributions, there is really no alternative. It's similar to social media: we all know it's trash and noxious, but if you're any kind of public figure you have to be there.


Several quite big projects have moved to Codeberg. I have no idea how it has worked out for them.


Zig has been doing fine since switching to Codeberg


I would have said Codeberg’s reliability was a problem for them but… gestures vaguely at the submission


LOL Codeberg's 'Explore' link is 503 for me!


N.I.N.A. (Nighttime Imaging 'N' Astronomy) is on bitbucket and it seems to be doing really well.

Edit: Never mind, looks like they migrated to GitHub since the last time I contributed.


I get that, but if we all rely on the defaults, there can't be any alternatives.


You are talking to the maintainer of caddy :)

Edit- oh you probably meant an alternative to GitHub perhaps..


no worries, misunderstandings happen.


Which security bug(s) are you referring to?


Presumably bugs that may still be under embargo

