You have to throw the context away at that point. I've experienced the same thing and I found that even when I apparently talk Claude into the better version it will silently include as many aspects of the quick fix as it thinks it can get away with.
I've been using pi.dev since December. The only significant change to the harness in that time which affects my usage is the availability of parallel tool calls. Yet Claude models have become unusable in the past month for many of the reasons observed here. Conclusion: it's not the harness.
I tend to agree about the legacy workarounds being actively harmful though. I tried out Zed agent for a while and I was SHOCKED at how bad its edit tool is compared to the search-and-replace tool in pi. I didn't find a single frontier model capable of using it reliably. By forking edits out to a subagent, it completely decouples the models' thinking from their edits and then erases the evidence from their context. Agents ended up believing that a less capable subagent was making editing mistakes.
Meaning Msft Principal is below L5? I got the same feedback from one of my friends who works at Google. She said quality of former MSFT engineers now working at Google was noticeably lower.
I mean imputed prestige within the organization. Being an L5 is nothing; it's the promote-or-fire cutoff at Google AFAIK. But being a Principal is slightly more than nothing; it's two levels above the promote-or-fire cutoff.
I mean, _now_, sure, I'd assume Microsoft Principals should be hired around L4 at Google. But that's just due to a temporary imbalance in the decline of legacy organizations. Give it a few years and it will even out, and msft 64 will sit in the middle of the L5 range like levels.fyi claims.
L5 hasn't been the promote-or-fire cutoff at Google for perhaps a decade. L4 is the new L5, mostly because Google would have to pay L5s more, and it has been terrified of personnel costs for years.
But even so, an L5 at Google is basically a nobody as far as prestige or convincing other people to adopt your plan goes. Even L6 is basically just an expert across several mostly local teams. L7 is where the prestige gets going.
I first became aware of the phenomenon of an enlightened anti-UNIX bundle in ZFS; in particular how it unifies lvm, RAID, and the filesystem. While zfs isn't universally loved, it seems that each hot new filesystem that comes out now adopts this strategy as well.
While this doesn't lead to immediate enlightenment about where the balance is, it does highlight an important aspect to consider: whether the whole is more than the sum of its parts. One way openzfs is more than the sum of its parts is that it closes the RAID write hole. The next step, whether it be stabilized in openzfs or otherwise, is to merge encryption into the stack: The current state of the art is to compose block encryption with zfs on top. But a better solution would be for zfs's object layer to encrypt its blocks itself. Because the blocks are not required to have a particular disk alignment or size, the filesystem can offer authenticated encryption without losing the random-access property, as well as granular keys, thus offering some clear advantages over the UNIXy composition method.
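To make the random-access point concrete, here is a toy sketch of per-block authenticated encryption at an object layer. This is NOT production crypto and not OpenZFS's actual design (ZFS native encryption uses AES-GCM/CCM); it is a stdlib-only illustration using a SHA-256 counter-mode keystream with encrypt-then-MAC, and all names are hypothetical. The point it demonstrates: because the stored record need not match the plaintext block's size or alignment, each block can carry its own nonce and authentication tag, so any block can be decrypted and verified independently.

```python
# Toy per-block authenticated encryption. Illustrative only -- real
# filesystems use AES-GCM/CCM, not a hash-based keystream like this.
import hashlib, hmac, os

def _keystream(key, nonce, length):
    # SHA-256 in counter mode as a stand-in stream cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal_block(enc_key, mac_key, block_id, plaintext):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # The tag binds the block's logical ID, so sealed blocks can't be swapped.
    tag = hmac.new(mac_key, block_id.to_bytes(8, "big") + nonce + ct, hashlib.sha256).digest()
    # The stored record is larger than the plaintext -- fine, because the
    # object layer imposes no fixed on-disk block size or alignment.
    return nonce + ct + tag

def open_block(enc_key, mac_key, block_id, record):
    nonce, ct, tag = record[:16], record[16:-32], record[-32:]
    expect = hmac.new(mac_key, block_id.to_bytes(8, "big") + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

A block-device layer (the UNIXy composition) can't easily do this: it must emit ciphertext the same size as the plaintext sector, which leaves no room for a per-sector tag without a separate metadata store.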
Actually I'm not sure how strong an example ripgrep is by comparison. Could a `find` replacement do the ignore patterns just as well? OTOH, does ripgrep offer better I/O and compute parallelism than a naive xargs/parallel?
> Could a `find` replacement do the ignore patterns just as well? OTOH, does ripgrep offer better I/O and compute parallelism than a naive xargs/parallel?
I'm not sure if you're asking rhetorically or not, but I genuinely don't know the answers to those questions, and I'd argue that's kind of the point. Pretty much any time I've ever had to do anything non-trivial with either find or xargs, I've had to look up how to do it. The most common way I've used xargs over the years by far is piping to it as a quick and dirty way to condense the whitespace in a string I get out at the end of some one-liner.
The larger point I was trying to make is that "good experience out of the box" is in practice a legitimate reason that people will prefer one thing to another, even if it's equivalent to some other thing you might be able to throw together manually that's just as good from a technical perspective. There's certainly power in knowing how to use composable tools, but there's also power in being able to save time to put towards other things if you care about them more, and people will have different preferences about where to strike that balance for a given tool. The more precise point I was trying to make is that "this doesn't fit the UNIX philosophy" seems like fairly weak criticism; if that's the strongest argument that can be made against ripgrep, it makes a lot of sense why it was so successful.
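For a sense of what "bundled" buys here, this is a toy sketch of a recursive searcher with built-in directory ignores and a thread pool, roughly the two things the find/xargs composition doesn't give you by default. Everything here is illustrative (the ignore set, the function names); it is not ripgrep's actual design, which also parses .gitignore files, detects binary files, and parallelizes with a work-stealing directory walk.

```python
# Toy "rg-lite": recursive search with built-in ignores and parallelism.
# Illustrative only; not how ripgrep is actually implemented.
import os, re
from concurrent.futures import ThreadPoolExecutor

IGNORED_DIRS = {".git", "node_modules"}  # stand-in for .gitignore handling

def candidate_files(root):
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune ignored directories in place so os.walk never descends.
        dirnames[:] = [d for d in dirnames if d not in IGNORED_DIRS]
        for name in filenames:
            yield os.path.join(dirpath, name)

def search_file(pattern, path):
    hits = []
    try:
        with open(path, "r", errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if pattern.search(line):
                    hits.append((path, lineno, line.rstrip("\n")))
    except OSError:
        pass  # unreadable file; skip, as a search tool should
    return hits

def search(root, regex):
    pattern = re.compile(regex)
    with ThreadPoolExecutor() as pool:
        per_file = pool.map(lambda p: search_file(pattern, p), candidate_files(root))
    return [hit for hits in per_file for hit in hits]
```

Each of these pieces is achievable with find, xargs -P, and grep, but the point stands: you'd have to remember (or look up) the incantation every time, whereas the bundled tool does it by default.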
Pi coding agent does this by default with all outputs but Claude (all versions tested, including opus 4.6) just completely ignores this capability. Even when the tool output explicitly tells the agent that the full output is saved in a particular file, Claude reruns the command.
Indeed, I wonder whether the vaccine content matters at all in current vaccines. We could probably just inject people with the adjuvants and get the same result.
> I wonder whether the vaccine content matters at all in current vaccines.
The target does matter, that is the basis for the whole technology, and the thing most predictive of efficacy.
That's why the flu shots often don't work and the shots for smallpox and measles do, the flu is a more rapidly mutating target.
Going crazy with the adjuvants was popular during the pandemic when it became clear that the virus had mutated (the target protein), but no one wanted to do R&D for a new target.
Counting white blood cells became a proxy for efficacy, and you can manipulate that stat with adjuvants.
The content clearly matters, and efficacy is tracked (this year it was poor because the eventual pandemic flu strain was an H3N2 virus, which mutates rapidly)[0]. This was despite the WHO updating its recommendations at the last minute in April/May 2025.
But critically this isn't as important as people think. The primary goal of the flu vaccination is of course to temper the spread of the main viruses that season. But it's also to build up people's immune library of exposure to flu viruses.
Recall that the 1918 "Spanish" flu was so terrible not because it was intrinsically a worse virus, but because it was one to which many younger generations had not previously been exposed.
COVID has meant that many younger generations again have a much smaller library of past exposure.
That's actually just how the Internet is. Nothing to do with the great firewall.