ragnese's comments | Hacker News

This is also a problem, IMO, in having this optimization in PHP. Anonymous functions are instances of a Closure class, which means that the `===` operator should return false for `foo() === foo()` just like it would for `new MyClass() === new MyClass()`.

But, since when has PHP ever prioritized correctness or consistency over trivial convenience? (I know it's anti-cool these days to hate on PHP, but I work with PHP all the time and it's still a terrible language even in 2026)


I never understood why people think somehow PHP is fine now, and I've had that opinion expressed several times on HN. The best I can make out is that people's expectations are so dismal now that they're like "Well new versions fixed 2 of the 5 worst problems I noticed, so that's good right?"

Because PHP is an amazing backend language for making CRUD apps. Always has been.

It has great web frameworks, a good gradual typing story and is the easiest language to deploy.

You can start with simple shared hosting, copy your files onto the server, and you're done. No Docker, nothing.

Sure, it has warts, but so do all mainstream programming languages. I find it more pleasant than TypeScript, which suffers from long compile times and a crazy complex type system.

The only downside is that PHP as a job means lots of legacy code. It's a solid career, but you will rarely if ever have interesting programming projects.


It's bad indeed. It's unfixable at this point. We just get bolt-on features.

We could do something like `#function() {}` or `#() => {}` which makes a function static.

It’s a “terrible” language? That’s news to me. What’s “terrible” about it?

> `new MyClass() === new MyClass()`

Does that look like the code you're writing for some reason? Because I've seen 100k LOC enterprise PHP apps that never once ran into that as an issue. The closest would be entities in an ORM, for which there were other features anyway.


I'm especially angry that if you go to reddit.com in a mobile browser, it will sometimes fully block you from certain subreddits (not just NSFW ones) and tell you that you can only access it from the app. Meanwhile, you can easily visit the exact same subreddit by typing old.reddit.com/r/whatever. The outright lying bothers me so much. I refuse to be desensitized to lying just because everyone is lying all the time; it's still really wrong, and they really should be ashamed of themselves.

Reddit's browser behavior got me into using frontends for various sites, such as redlib dot privacyredirect dot com.

There are surprisingly many of them, for pretty much every social media website.


When you say "meme", it sounds like it might not be true. But, a few years ago I handed my stepson a USB flash drive with some files on it, he plugged it into his laptop and the very first thing he did was launch Google Chrome and then not have any clue what to do to access the files (it was a Windows laptop).

One of the most enraging things about life since 2005-ish is that no matter how private and careful I am, it doesn't even matter because every other inconsiderate fool I know and interact with will HAPPILY let some random company have access to THEIR contacts--which includes me--in order to play Farmville for a month until they get bored of that and offer up my private information to the next bullshit ad company that asks for their contacts.

It used to frustrate me that people didn't care about their own privacy, because I genuinely didn't want evil people to hurt them. But, it's even more angering that people don't have the common decency to consider whether their friends and family would want them sharing their phone numbers, email addresses, photos of them, etc.


Famously, that's how shadow profiles got created for Facebook and LinkedIn and many others.

Or add your real name to photos of you stored in Google Photos.

Yep. If someone is trying to make you do something, or stop doing something, or buy something, your first question should always be "Why?".

Why would someone try to force me off of my browser (that has ad-blocking and tracker-blocking mitigations) and on to a locked-down app that may want permission to run in the background, display notifications, access my files or camera, etc?

Maybe it really is to "improve my experience"... yeah, right.


Talk about trivializing complexity...

The idea that making things immutable somehow fixes concurrency issues always made me chuckle.

I remember reading and watching Rich Hickey talking about Clojure's persistent objects and thinking: okay, that's great, another thread can't change the data my thread has, because I'll just be using the old copy and they'll have a new, different copy. But now my two threads are working with different versions of reality... and that's STILL a logic bug in many cases.

That's not to say it doesn't help at all, but it's EXTREMELY far from "share xor mutate" solving all concurrency issues/complexity. Sometimes data needs to be synchronized between different actors. There's no avoiding that. Sometimes devs don't notice it because they use a SQL database as the centralized synchronizer, but the complexity is still there once you start seeing the effect of your DB's transaction level (e.g., repeatable_read vs read_committed, etc).


It's not that shared-xor-mutate magically solves everything, it's that shared-and-mutate magically breaks everything.

Same thing with goto and pointers. Goto kills structured programming and pointers kill memory safety. We're doing fine without both.

Use transactions when you want to synchronise between threads. If your language doesn't have transactions, it probably can't, because it already handed out shared mutation, and now it's too late to put the genie back in the bottle.

> This, we realized, is just part and parcel of an optimistic TM system that does in-place writes.

[1] https://joeduffyblog.com/2010/01/03/a-brief-retrospective-on...


+5 insightful. Programming language design is all about having the right nexus of features. Having all the features or the wrong mix of features is actually an anti-feature.

In our present context, most mainstream languages have already handed out shared mutation. To my eye, this is the main reason so many languages have issues with writing async/parallel/distributed programs. It's also why Rust has an easier time of it: they didn't just hand out shared mutation. And why Erlang has the best time of it: they built the language around no shared mutation.


> It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.

I disagree. Not every single article or essay needs to start from kindergarten and walk us up through quantum theory. It's okay to set a minimum required background and write to that.

As a seasoned dev, every time I have to dive into a new language or framework, I'll often want to read about styles and best practices that the community is coalescing around. I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.

I'm not saying that level of article/essay shouldn't exist. I'm just saying there's more than enough. I almost NEVER find articles that are targeting the "I'm a newbie to this language/framework, but not to programming" audience.


> I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.

You’d be surprised. Modern Swift concurrency is relatively new and the market for Swift devs is small. Finding good explainers on basic Swift concepts isn’t always easy.

I’m extremely grateful to the handful of Swift bloggers who regularly share quality content.


Got a list of those bloggers you like?


Paul Hudson is the main guy right now, although his stuff is still a little advanced for me. Sean Allen on YouTube does great video updates and tutorials.


I haven't written any Go in many years (way before generics), but I'm shocked that something so implicit and magical is now valid Go syntax.

I didn't look up this syntax or its rules, so I'm just reading the code totally naively. Am I to understand that the `user` variable in the final return statement is not really being treated as a value, but as a reference? Because the second part of the return (json.NewDecoder(resp.Body).Decode(&user)) sure looks like it's going to change the value of `user`. My brain wants to think it's "too late" to set `user` to anything by then, because the value was already read out (because I'm assuming the tuple is being constructed by evaluating its arguments left-to-right, like I thought Go's spec enforced for function arg evaluation). I would think that the returned value would be: `(nil, return-value-of-Decode-call)`.

I'm obviously wrong, of course, but whereas I always found Go code to at least be fairly simple--albeit tedious--to read, I find this to be very unintuitive and fairly "magical" for Go's typical design sensibilities.

No real point, here. Just felt so surprised that I couldn't resist saying so...


> I would think that the returned value would be: `(nil, return-value-of-Decode-call)`.

`user` is typed as a struct, so it's always going to be a struct in the output, it can't be nil (it would have to be `*User`). And Decoder.Decode mutates the parameter in place. Named return values essentially create locals for you. And since the function does not use naked returns, it's essentially saving space (and adding some documentation in some cases though here the value is nil) for this:

    func fetchUser(id int) (User, error) {
        var user User
        var err error

        resp, err := http.Get(fmt.Sprintf("https://api.example.com/users/%d", id))
        if err != nil {
            return user, err
        }
        defer resp.Body.Close()
        return user, json.NewDecoder(resp.Body).Decode(&user)
    }
https://godbolt.org/z/8Yv49Yvr5

However Go's named return values are definitely weird and spooky:

    func foo() (i int) {
     defer func() {
      i = 2
     }()
     return 1
    }
returns 2, not 1.


Yeah, not really an expert, but my understanding is that naming the return value automatically declares the variable and brings it into scope.

I think the user example works because the decoder is writing into the same `user` variable that gets returned.

I like the idea of having named returns, since it's common to return many items as a tuple in Go functions, and I think it's clearer to have those named than leaving it to the user, especially if a function returns many values of the same primitive type like ints/floats:

    type IItem interface {
        Inventory(id int) (price float64, quantity int, err error)
    }

compared to

    type IItem interface {
        Inventory(id int) (float64, int, error)
    }

but I feel like the memory-allocation and control-flow implications make it hard to reason about at a glance for non-trivial functions.


> My brain wants to think it's "too late" to set `user` to anything by then, because the value was already read out

It doesn’t set `user`, it returns the User passed to the function.

Computing the second return value modifies that value.

Looks weird indeed, but conceptually, both values get computed before they are returned.


> Are there any good resources on optimizing python performance while keeping idiomatic?

At the risk of sounding snarky and/or unhelpful, in my experience, the answer is that you don't try to optimize Python code beyond fixing your algorithm to have better big-O properties, followed by calling out to external code that isn't written in Python (e.g., NumPy, etc).

But, I'm a hater. I spent several years working with Python and hated almost every minute of it for various reasons. Very few languages repulse me the way Python does: I hate the syntax, the semantics, the difficulty of distribution, and the performance (memory and CPU, and is GIL disabled by default yet?!)...


Ints probably get a big boost in languages where the only built-in for-loop syntax involves incrementing an index variable, like C. And, speaking of C, specifically, even the non-int types are actually ints or isomorphic to ints: enums, bools, char, pointers, arrays (which are just pointers if you squint), etc...

But, otherwise, I'd agree that strings probably win, globally.


Actually, because of provenance, C's pointers are the only one of those types that isn't just basically machine integers again.

A char is just a machine integer with implementation-defined signedness (crazy), bools are just machine integers that aren't supposed to have values other than 0 or 1, and the floating-point types are just integers reinterpreted as binary fractions in a strange way.

Addresses are just machine integers of course, but pointers have provenance which means that it matters why you have the pointer, whereas for the machine integers their value is entirely determined by the bits making them up.


It's been a long time since I've done C/C++, but I'm not sure what you're saying with regard to provenance. I was pretty sure that you were able to cast an arbitrary integer value into a pointer, and it really didn't have to "come from" anywhere. So, all I'm saying is that, under-the-hood, a C pointer really is just an integer. Saying that a pointer means something beyond the bits that make up the value is no more relevant than saying a bool means something other than its integer value, which is also true.


Start your journey here: https://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_260.htm

That's defect report #260 against the C language. One option for WG14 was to say "Oops, yeah, that should work in our language" and then modern C compilers have to be modified and many C programs are significantly slower. This gets the world you (and you're far from alone among C programmers) thought you lived in, though probably your C programs are slower than you expected now 'cos your "pointers" are now just address integers and you've never thought about the full consequences.

But they didn't, instead they wrote "They may also treat pointers based on different origins as distinct even though they are bitwise identical" because by then that is in fact how C compilers work. That's them saying pointers have provenance, though they do not describe (and neither does their ISO document) how that works.

There is currently a TR (I think, maybe wrong letters) which explains PNVI-ae-udi, Provenance Not Via Integers, Addresses Exposed, User Disambiguates which is the current preferred model for how this could possibly work. Compilers don't implement that properly either, but they could in principle so that's why it is seen as a reasonable goal for the C language. That TR is not part of the ISO standard but in principle one day it could be. Until then, provenance is just a vague shrug in C. What you said is wrong, and er... yeah that is awkward for everybody.

Rust does specify how this works. But the bad news (I guess?) for you is that it too says that provenance is a thing, so you cannot just go around claiming the address you dredged up from who knows where is a valid pointer, it ain't. Or rather, in Rust you can write out explicitly that you do want to do this, but the specification is clear that you get a pointer but not necessarily a valid pointer even if you expected otherwise.


It is kind of fun to me that most C programmers believe ISO C exists/is implemented anywhere, when in reality we have a bunch of compilers that claim to compile ISO C but just have a lot of random, basically unfixable behavior.


I do believe the C standards committee got it completely backwards with regards to undefined behaviour optimisations. By default the language should act in a way that a human can reason about, in particular it should not be more complicated than assembly. Then, they can add some mechanism for decorating select hot program blocks as amenable to certain optimisations[1]. In the majority of the program the optimisation of not writing a single machine word to memory before calling memcmp is not measurable. The saddest part is that other languages like Rust and Zig have picked all this up like cargo cult language design. Writing code is already complicated enough without having to watch out for pitfalls added so the compiler can achieve one nanosecond faster time on SPECint.

[1] As an aside, the last time I tried to talk to a committee representative about undefined behaviour optimisation pitfalls, I was told that the standard does not prescribe optimisations. Which was quite puzzling, because it obviously prescribes compiler behaviour with the express goal of allowing certain optimisations. If I took that statement at face value, it would follow that undefined behaviour is not there for optimisation's sake, but rather as a fun feature to make programming more interesting...


Rust has no UB in safe Rust, so it’s closer to your ideal than not.

It also doesn’t have UB for cargo cult reasons.


A pointer derived from one allocation and a pointer derived from another allocation need not compare equal, even if the address is the same, and comparing them will likely invoke UB. This is because C tries to be portable and also support segmented storage.

A pointer derived from only an integer was supposed to alias any allocation, but good luck discussing this with your optimizing C compiler.

