Hacker News | rcoveson's comments

A revelation of a mysterious element of the game which is not revealed in any of its marketing material is a spoiler. The fact that you believe it's a "decent compromise" doesn't enter into it. The proper disclaimer for your comment would be: "Spoilers, but I think these things should be spoiled."

I played the game years ago and did not have this element spoiled, and I thought it was presented at exactly the right time and in the right way. I'd go so far as to say that if somebody is so frustrated by that early mystery (which you're all but guaranteed to understand better and better as you play) that they quit there, then the rest of the game will just be an exercise in misery. It's a puzzle game. The developers put settings in place to cut the flight mechanics out of it so people could experience it purely as a puzzle box instead of a flight simulator. What they did NOT put in the game is a hint about the thing you're spoiling.


"presented at exactly the right time and in the right way" is highly dependent on individual gameplay experiences. For me it was revealed in a very obtuse way. I love the game very much but I think this is perhaps its biggest flaw.

Jury is still out as to how well it works, but the traditional prompt is: "Be fruitful, and multiply"

Don't forget the extra instructions about not eating from the tree of the knowledge of good and evil.

I'm happy to say that I would be fired if I did this, thought this, or wrote this comment.

EDIT: Parent used to say "it's common for salespeople to log in to customer environments to show potential customers what the product looks like with actual data in it."


I removed the part where I said 'it's typical for sales people to access customer environments', because I don't know how accurate that is, but it probably happens more than anyone knows. Obviously it shouldn't happen without the customer's consent.

Also, reviewing the article again, the access patterns don't seem to match with this behavior, so there seems to be something else going on.


I've worked in enterprise IT departments for nearly 20 years, and not once during a demo for any product has a sales engineer logged into a live customer environment or shown actual customer data.

In the 100s of demos I have sat through, over a few decades, not a single one used live customer data.

You would be fired if you had a thought?

If I had the thought, in general, that this was a fine thing to do, then yes. Presumably I would do it or permit somebody else to do it and be fired.

Root-level comment has been edited as noted by respondent lynndotpy <https://news.ycombinator.com/item?id=47785027> and original author Geee <https://news.ycombinator.com/item?id=47785316>.

It's even more depressing than that framing would suggest, because we skipped over the decades where cars were just fast, powerful transportation tools and went straight from "mind bicycles" to "mind Teslas" full of cameras, tracking, proprietary software, and subscription fees.

> You see a crow fly into the tongue of a headcrab and die. You now know everything you need to know about this enemy.

But not everything you don't need to know, like its name. That's a barnacle. But I still love the point you're making here. :)


... I don't know how I mixed up headcrab and barnacle LOL. Ty for telling me.


I think that's a little harsh. A lot of the most powerful bits are applicable to any intelligence that we could digitally (ergo casually) instantiate or extinguish.

While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets instead of a simulation of a human brain, the way we might treat them isn't any less thought-provoking.


when you read this and its follow-up "driver" as a commentary on how capitalism removes persons from their humanity, it's as relevant as it was on day one.

good sci fi is rarely about just the sci part.


But there's a reason that caches are always sized in powers of two as well, and that same reason applies to high-performance ring buffers: division by powers of two is easy, and easy is fast. It's reliably a single cycle, compared to division by arbitrary 32-bit integers, which can take 8-30 cycles depending on the CPU.

Also, there's another benefit downstream of that one: powers of two work as a Schelling point for allocations. Picking powers of two for resizable vectors maximizes "good luck" when you malloc/realloc in most allocators, in part because e.g. a buddy allocator is probably also implemented using power-of-two allocations for the above reason, but also for the plain reason that other users of the same allocator are more likely to have requested power-of-two allocations. Spontaneous coordination is a benefit all its own. Almost supernatural! :)
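The single-cycle claim above is just a bitwise AND: when the capacity is a power of two, `i % CAP` collapses to `i & (CAP - 1)`. A minimal sketch (names hypothetical):

```python
# Sketch: wrapping a ring-buffer index with a power-of-two capacity
# needs only a bitwise AND, never a real division.
CAP = 1024          # must be a power of two
MASK = CAP - 1

def wrap(i):
    # i & MASK is equivalent to i % CAP whenever CAP is a power of two
    return i & MASK

# The two forms agree for any non-negative index:
assert all(wrap(i) == i % CAP for i in range(10 * CAP))
```

A compiler will do the same strength reduction for `i % CAP` when `CAP` is a compile-time power-of-two constant; the mask only has to be written by hand when the capacity is a runtime value.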


CPU caches are sized in powers of two because retrieval involves a logarithmic number of gates that have to fire within a clock cycle. There is a saddle point where adding more cache starts to make instructions per second go back down again, and that number will be a power of two.

That has next to nothing to do with how much of your 128 GB of RAM should be dedicated to any one data structure, because working memory for a task is the sum of a bunch of different data structures that have to fit into both the caches and main memory, which used to be powers of two but now main memory is often 2^n x 3.

And as someone else pointed out, the optimal growth factor for resizable data structures is not 2, but the golden ratio, 1.61. But most implementations use 1.5 aka 3/2.


Fwiw in this application you would never need to divide by an arbitrary integer each time; you'd pick it once and then plumb it into libdivide and get something significantly cheaper than 8-30 cycles.
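The trick libdivide uses can be sketched directly: since the divisor is fixed when the ring buffer is created, you precompute a "magic" multiplier once and replace every later division with a multiply and a shift. This is a hedged illustration of the idea (not libdivide's actual implementation, which also handles branchfree variants and other widths); the choice of magic below is exact for any 32-bit numerator and divisor:

```python
# Sketch of division by a runtime-invariant integer: precompute
# magic = ceil(2**64 / d) once, then x // d becomes (x * magic) >> 64.
# Exact for all 0 <= x < 2**32 and 1 <= d < 2**32.
def make_divider(d):
    magic = (2**64 + d - 1) // d    # ceiling division, computed once
    return lambda x: (x * magic) >> 64

div_by_7 = make_divider(7)
assert all(div_by_7(x) == x // 7 for x in range(100_000))
assert div_by_7(2**32 - 1) == (2**32 - 1) // 7
```

In C you would do the 128-bit intermediate multiply with `unsigned __int128` (or a `mulhi` intrinsic), which is why the result ends up markedly cheaper than a general 32-bit divide.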


Powers of two are problematic with growable arrays on small heaps. You risk ending up with fragmented space you can't allocate unless you keep growth below 1.61x, which would necessitate data structures that can deal with arbitrary sizes.
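The arithmetic behind that threshold can be checked directly. This hypothetical illustration assumes the freed blocks were contiguous and can be coalesced: with growth factor 2, the next buffer is always bigger than everything freed so far, so old holes are never reused; with a factor below the golden ratio, reuse eventually kicks in.

```python
# Simulate repeated reallocation of a growing array and report the
# first step at which the new buffer fits into the space freed by
# all earlier buffers (the previous buffer is still live during the
# copy, so it doesn't count toward the hole).
def first_reusable_step(factor, start=16, steps=64):
    sizes = [start]
    for n in range(1, steps):
        sizes.append(sizes[-1] * factor)
        freed = sum(sizes[:-2])   # everything allocated before the live buffer
        if sizes[-1] <= freed:
            return n              # the new buffer fits in the old hole
    return None

assert first_reusable_step(2.0) is None     # hole never big enough
assert first_reusable_step(1.5) is not None # hole eventually fits
```

The crossover sits exactly at the golden ratio, which is why 1.5 is such a popular compromise: it stays safely below the threshold while keeping the growth arithmetic cheap.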


Speaking from a place of long-term frustration with Java, some compiler authors just absolutely hate exposing the ability to hint/force optimizations. Never mind that it might improve performance for N-5 and N+5 major releases, it might be meaningless or unhelpful or difficult to maintain in a release ten years from now, so it must not be exposed today.


I once exposed a "disableXYZOptimization" flag to customers so they could debug more easily without stuff getting scrambled. Paid for that gesture for the next year: signing off on release updates, writing user guide entries, bleh.


So it's better to hardcode your specific library name and deal with the same issue after people have reverse engineered it and started depending on it anyway?


That seems valid for customers expecting a warranty or support. But they should allow it if customers waive all such in writing.


Warranty and support specifically for that flag? Because I don't see how general warranty and support requires keeping any hint flags forever.


If you remove the hint flag, people's builds will break.


Doesn't need to, it can acknowledge and ignore the hints.


True, but there might be more problems. If you drop support, their runtime will be slow because they rely on this flag, and they'll be unhappy.


The premise of removing the flag is that it's useless or a problem. If it's still causing a big speed boost somewhere then you need to figure something out, but the core scenario here is that it's obsolete.


Just 20 watts, the same amount of electricity that powers 2 LED lightbulbs for 24 hours, one nanosecond, or twelve-thousand years.


Huge oversight by Google. Now they're going to have to invent some other way to indicate that you want to show hidden search results and inodes.

