I'd never checked out Blind before. I just went there and checked out a few of the front page posts & comments. It has some of the most toxic and destructive "advice" I've seen for people asking for help or insight. I'm a bit astounded. Is this typical?
It's pathetic; you see some of the most depraved, narcissistic members of the tech industry there. The quality of the discourse is, as you can guess, shockingly bad, and most users are from the Bay Area. Is this an accurate representation of people in the Bay? Or is it just a platform where toxic folks hang out?
Having been on Blind for nearly a year, I find it interesting how weak its rumor mill is. The real rumor mill at most large companies knows basically everything that's going on; Blind is mostly junior programmers flaunting often made-up compensation numbers and scrambling to climb one more rung on the leveling ladder.
Blind could have been an opportunity for them to get direct and honest insight into how the sausage is made, but instead it's trolling and leetcode-your-way-to-FAANG obsessing.
I spent more time reading the threads than I should have; it's strangely addicting. I now feel naive for thinking that people working at the big tech companies have a certain base level of "all-round" skills. In no way did I expect so much cynicism, narcissism, and lack of empathy! Isn't this showing up in interviews? Or should it just be seen as online trolling and venting?
The conversation in Blind is not representative of "most people" in big tech companies. Blind is largely a dumpster fire with occasional pockets of unexpected value. Most people don't spend their time there, because it's such low value discourse.
Sociopaths are extremely good at faking empathy or a friendly personality when they need to (e.g. in interviews). To actually recognize them you have to watch closely for at least a few weeks, once they feel their initial position is no longer at risk. Sadly, by that point most companies have no way of managing these people or recognizing that they have to be fired, often because they are still extremely careful to keep faking their personality with the higher-ups while being toxic to those below and at their own level.
Most assholes aren't sociopaths. The pseudo-anonymity and "exclusivity" of Blind encourages trolls and assholes to congregate. People say things just to piss others off, and they also say "publicly" the douchebag things they'd normally only say to close friends/family (overt racism, homophobia, xenophobia, etc). None of this means they are actually sociopaths, though. Just run of the mill assholes and trolls.
I'm surprised as well; it's easy to spot those traits in an interview: apply some pressure or ask some pointed questions. I've weeded out those folks as an interviewer. My feeling is that the Bay Area's huge shortage of engineers, combined with companies throwing absurd amounts of money and benefits around, has bred a culture of entitlement.
That depends on the interview, doesn't it? In my on-site with one of these companies, I was asked hard algorithm questions in four out of five rounds, plus one system design round. I doubt anything related to personality would show up.
You can read plenty of behavioral signal from how candidates interact with interviewers while solving hard technical challenges. But many companies don't really invest much into training interviewers how to effectively interview and gather useful signal or calibrating their evaluation to the goals and standards of the organization.
> I now feel naive for thinking that people working at the big tech companies have a certain base level of "all-round" skills. In no way did I expect so much cynicism, narcissism, and lack of empathy!
Why? These companies are like Wall Street: they pay the most, so they attract people who are predominantly interested in money. These kinds of people tend not to be the most upstanding.
I think that because it's basically unmoderated you either get:
1. Posts that are usually removed elsewhere stay
2. People who get pushed out elsewhere (because their posts are removed) congregate there.
But it is incredibly problematic.
I hope it's just where the toxic find their voice, an echo chamber for a hopefully small minority of the total population. It's really depressing to think that so many well-educated people are so narrow minded and lacking in empathy. It would however explain the systemic irresponsible behavior present in some companies such as Facebook and Uber. It's easy to believe that even a minority of such people could be the bad apples that spoil the rest.
First time for me too. The relationships section was shocking to me. I'm a physicist, and thought I knew nerds well, but the lack of spousal empathy directly trending towards divorce on the top end of that page made me... sad.
The general tone of the discourse is so depraved that you can either cry or laugh. I choose the latter. The relationship advice is a particularly dirty source of entertainment.
I agree that there's a lot of toxic stuff, especially in the general communities. I do think it's a net benefit for company-wide chatter though. There is good and useful perspective in there, it's just harder to find than all the garbage.
My first look at Blind was a thread rating cities to work in by how hot/available the women were (e.g. comments like "SF sucks, you have to settle on dating uglies" or "NYC women are so much hotter than SV women, no contest where to live"). The question asked of the Blind community was just "where would it be better to live long term" or something with nothing to do with dating or women, but the majority of responses were about the hotness of prospective dates.
Later on, I found unsavory relationship advice going on, and beyond that, outright racism against Indians in particular. None of these comments were criticized; many were encouraged and agreed with. (EDIT: I meant criticized, not downvoted, sorry.)
Racism is not illegal and should not be suppressed. It should not be encouraged either, however.
At the end of the day, people do not have the right to feel good.
> My first look at Blind rated cities to work in by how hot/available women were there (e.g. comments like "SF sucks, you have to settle on dating uglies" or "NYC women are so much hotter than SV women, no contest where to live").
It may be off topic for the original post, but it is most definitely relevant to the users. Nothing wrong with rating women. People do it all the time. It's about time it was made less taboo.
> Nothing wrong with rating women. People do it all the time. It's about time it is made less taboo.
In the current context of rampant sexual discrimination, there is everything wrong with "rating women". To be very clear, when "rating women" I presume you are talking about the snap judgements made based on stereotypical beauty attributes relating to physical appearance.
The problem with "rating women" in this way is that ranking people according to a very small set of "desirable" attributes neglects or diminishes other very important aspects of what makes a person worth knowing and loving. Appearance becomes a top priority over and against other perhaps more important attributes including loyalty, kindness, and intelligence.
Additionally, "rating women" along such lines reinforces the sense that people are fungible sources of (aesthetic) value rather than worthy individuals in their own right. None of this even touches upon the social and psychological problems that result when people are judged on such a small set of shallow traits.
While people do judge each other based on appearance all the time, such tendencies are to be resisted and questioned because, for one, the standards of physical beauty are well-known to be shaped by the imperatives of advertising which have established that making people feel insecure drives sales.
In fact, it's not taboo to judge people on appearance at all. "Rating women" is a social norm. Not judging women based upon on their appearance is actually the rare exception.
> The problem with "rating women" in this way is that ranking people according to a very small set of "desirable" attributes neglects or diminishes other very important aspects of what makes a person worth knowing and loving.
That's not just a problem with "rating women", or even with rating men (which happens a lot more than most people would be comfortable admitting!) for that matter. It's a basic failure mode in human psychology https://en.wikipedia.org/wiki/Halo_effect
And yes, marketing and advertising exploit this too. Associating a product with conventionally-attractive people is one way of manipulating us to make us feel better about it.
Please note this is in response to a request for examples of toxic/destructive behavior and advice, which I provided. Your response indicates you do not understand the context of the conversation (e.g. replying to portions of my post in a manner that has nothing to do with the conversation), and thus I cannot reply in a way that is on-topic.
Vouched. If we're going to talk about Blind of all things (and the kinds of sentiments it reveals, incites, or amplifies) we can't bury those perspectives.
Sure I'd rather keep most of HN clean of vitriol, but a thread about it? We need to challenge, understand, and dissect these perspectives, not disregard them.
First time taking a look at that site, and I'm pretty sure a lot of it is trolling. Lots of threads start with something semi-ridiculous or controversial to solicit lots of responses, which the OP then spends an unreasonable amount of time replying to. Classic flame baiting.
Really typical. It’s worse than Reddit and hardly has anything useful. I was an early user when it first started but got turned off by the number of spam posts and how toxic they were.
Anyone who believes a random startup that advertises it provides anonymity and security on its homepage is smoking something strong. No legitimate company that cares about security makes such absolute statements.
Lol. I like the concept of Blind, but it is one of the shittiest apps I have used. The app consistently doesn't respond correctly to buttons, and their redesigns just make the user interface worse.
Oh, and on the topic of security, one guy found a SQL injection exploit and demonstrated it by giving any users who commented on their post 100 likes...
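For readers unfamiliar with the bug class: SQL injection happens when user input is spliced into a query string instead of being passed as a parameter. A minimal sketch of the idea (the table and the 100-likes payload here are invented for illustration, not Blind's actual code or schema):

```python
import sqlite3

# Toy schema standing in for a posts table with like counts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, likes INTEGER)")
conn.execute("INSERT INTO posts (id, likes) VALUES (1, 0)")

def add_like_unsafe(post_id: str) -> None:
    # Vulnerable: string interpolation lets crafted input append a second
    # statement, the kind of trick behind the 100-likes demo.
    conn.executescript(f"UPDATE posts SET likes = likes + 1 WHERE id = {post_id}")

def add_like_safe(post_id: int) -> None:
    # Parameterized: the driver treats post_id strictly as data.
    conn.execute("UPDATE posts SET likes = likes + 1 WHERE id = ?", (post_id,))

add_like_unsafe("1; UPDATE posts SET likes = 100")
print(conn.execute("SELECT likes FROM posts WHERE id = 1").fetchone()[0])  # 100
add_like_safe(1)
print(conn.execute("SELECT likes FROM posts WHERE id = 1").fetchone()[0])  # 101
```

The fix is always the same: bind values with placeholders and never build SQL by concatenating user input.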
I have used Blind. Some posts were pretty blatant. Just like Ashley Madison, someone is going to dump all those posts in the future. And somebody will create an indexable search, maybe with paid access for employers instead of open access for the public. Not a good thing.
Sounds like they don’t give a shit and only reacted when TC was going to write a story about it. Surprising given that user trust is at the core of their business, and without it they have nothing.
>At its core, the app and anonymous social network allows users to sign up using their corporate email address, which is said to be linked only to Blind’s member ID.
People just trusted it?
I get that to seem legitimate the users have to be confirmed in some way, but as a user... no way am I exposing myself like that.
The emails were "sitting plaintext in an exposed database."
Major things wrong: unencrypted email addresses and private messages; leaving the database without a password; and the database wasn't fixed until a week after they knew of the error.
I find it concerning that this kind of news seems normal now.
I would go further and say using it is gross incompetence, along the lines of a broker taking stock investment advice from spam email. Literally the only reason someone would send out random stock advice like that is to run a pump-and-dump scheme. Similarly, if they weren't going to use your email, they'd have no reason to ask for it, and any 'anonymous' app that asks for identifying information isn't anonymous.
I get the sentiment you are sharing, and I'm not defending Blind's execution in any way, but say you want to make anonymous "rooms", where people from Google belong to one room and people from Facebook to another. How would you build that without first establishing that someone is from Google?
I understand that taking their work email is asking for more info than desired. Perhaps they could use something zero-knowledge (I'm not well versed in the concept), where someone proves their employer's identity without revealing their own.
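Short of full zero-knowledge proofs, one common approximation is to keep only the email's domain (for the company room) plus a keyed one-way token for dedup, and discard the address itself. A rough sketch under that assumption; the `SERVER_KEY` and the scheme are hypothetical, not Blind's actual design:

```python
import hashlib
import hmac

# Hypothetical server-side secret, kept out of the user database.
SERVER_KEY = b"server-side secret stored separately from the database"

def enroll(email: str) -> dict:
    domain = email.split("@", 1)[1].lower()
    # HMAC rather than a bare hash, so an attacker who steals the database
    # alone can't confirm guesses of specific emails without the key.
    token = hmac.new(SERVER_KEY, email.lower().encode(), hashlib.sha256).hexdigest()
    return {"company": domain, "dedup_token": token}  # the email itself is discarded

a = enroll("alice@google.com")
b = enroll("alice@google.com")
c = enroll("bob@google.com")
print(a["company"])                           # google.com
print(a["dedup_token"] == b["dedup_token"])   # True  (same person, one account)
print(a["dedup_token"] == c["dedup_token"])   # False
```

This still requires trusting the operator to actually discard the email and guard the key, which is exactly where schemes like this tend to fail in practice.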
Using Blind with your company email on the company WiFi seems really dumb? Maybe I'm paranoid, but I act as though everything that goes through my company WiFi and on my company computer is being tracked and stored in some database forever under my name.
And I assume the company email is used to send you a verification email, which means your employer is now tipped off to the fact that you're using a site to anonymously criticize them.
The article says "remove", but before you remove, you need to list all employees with that email domain. If you just make a test account or search for the domain, you can pipe the results to a text file, and that's your list of company insiders who are on the platform.
What leadership does with that info, is well, never good.
You can't use Blind without giving them your company email. But yeah, you can reasonably assume that your company can track who received Blind invites.
The WiFi bit doesn't matter much assuming Blind uses SSL (though I've never checked). Your company could see that you've connected but not what you've posted or read.
Many companies terminate their internal/corporate TLS traffic on a reverse proxy they control. This typically lets them see employee internet activity in the clear.
This works only if you're on a company-controlled device. If you're on a device they do not control, you won't have their root cert installed and this MITM attack is infeasible.
If you're using a company-controlled device, you should of course assume they can see all of your network activity. They could easily be capturing all of your activity on the device itself without any MITM attack.
What someone at my org did was send an invite to everyone, so pretty much everyone got an email. We use Outlook web apps, so it's easy enough to log in on a non-company device to click the link.
My company knows everyone that got a link (everyone) but not who clicked the link.
> Kim denied this. “We don’t use MD5 for our passwords to store them,” he said. “The MD5 keys were a log and it does not represent how we are managing data. We use more advanced methods like salted hash and SHA2 on securing users’ data in our database.”
> Latacora, 2018: In order of preference, use scrypt, argon2, bcrypt, and then if nothing else is available PBKDF2.
> Avoid: SHA-3, naked SHA-2, SHA-1, MD5.
SHA2 is a decent cryptographic hashing algorithm, that is true. But a cryptographic hashing algorithm isn't what you want when storing passwords, at least, not on its own.
Some of the problems with just using a hash algorithm: identical passwords produce identical hashes; hash algorithms are typically fast, so brute forcing can make many attempts quickly and results can be pre-computed into rainbow tables. There are ways to fix this (salts, work factors), but generally you shouldn't "do it yourself." Functions like those named in the quote from Cryptographic Right Answers put all of that together in a nice package. scrypt, I believe, even uses SHA-2 in its construction, but also solves the aforementioned problems.
The "Purpose and operation" part of the Wikipedia article for PBKDF2 (also mentioned above) goes into some of this, as well (even if it is only recommended as a last resort, I think this paragraph is educational w.r.t. the problems around passwords): https://en.wikipedia.org/wiki/PBKDF2#Purpose_and_operation
> The database also contained passwords, which were stored as an MD5 hash, a long-outdated algorithm that is nowadays easy to crack. Many of the passwords were easily unscrambled using readily available tools when we tried.
That's not how hash functions work...
> Kim denied this. “We don’t use MD5 for our passwords to store them,” he said. “The MD5 keys were a log and it does not represent how we are managing data. We use more advanced methods like salted hash and SHA2 on securing users’ data in our database.”
This sounds much more likely.
> (Logging in with an email address and unscrambled password would be unlawful, therefore we cannot verify this claim.)
So, they directly claim that weakly hashed passwords were available (and unscramble-able, apparently??), but they're unable to prove this and they're ignoring the company's reasonable explanation. Great reporting.
“We don’t use MD5 for our passwords to store them,” he said. “The MD5 keys were a log and it does not represent how we are managing data. We use more advanced methods like salted hash and SHA2 on securing users’ data in our database.”
So, they store your data securely, but that doesn’t matter, since they also stored it insecurely, and leaked the latter.
I'm perfectly comfortable with saying "unscramble" when talking about using a rainbow table to figure out the corresponding password for a MD5 hash.
If I'm writing docs for MD5, that's one thing. General tech reporting? "Unscramble" is understandable to someone who's never had to implement a password hash, which is most of the populace.
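For anyone curious what "unscrambling" looks like in practice: against unsalted MD5, a toy dictionary attack is just a loop comparing digests. Real tools (hashcat, rainbow tables) do the same thing at billions of guesses per second:

```python
import hashlib

# A leaked, unsalted MD5 digest (of a weak password, for illustration).
leaked = hashlib.md5(b"letmein").hexdigest()

# "Unscrambling" = hashing common passwords until one matches.
wordlist = ["123456", "password", "letmein", "qwerty"]
recovered = next(
    (w for w in wordlist if hashlib.md5(w.encode()).hexdigest() == leaked),
    None,
)
print(recovered)  # letmein
```

This is why fast, unsalted hashes are considered broken for password storage regardless of how strong the hash function itself is.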
SHA2 isn’t any more reasonable. You _can_ use sha with a proper key derivation function. But just saying that sha2 somehow improves on md5 without additional context indicates to me that you haven’t understood the problem.
Kind of. A hash function just produces a near-random string of fixed length for a given input, in a way where the output is reproducible for that input. Passwords are not stored; it is the computed hash value that is stored. When a user attempts to log in with a username and password, the password is hashed and compared to the stored hash.
That said, you don't need the actual password to log in. Any input that hashes to the same string is acceptable, which is called a hash collision. When they say "cracking the hash" this is likely what they mean, and it's trivial to compute given a rainbow table.
Salting provides an additional round of computation. For example, let's say a user is trying to log in. Their password is hashed, but before the hashes are compared, some additional information is added to the end of the computed hash and that new value is hashed. It is this new hash that is compared with the stored hash, which requires knowledge of the hash algorithm, the salt, and the hash value. You can generally guess the final hash algorithm by observing the character length of the stored hashes, but since there are two hash computations, a different algorithm could be used for the first round.
To be secure, the salt must be stored in a different location from the stored hashes, and the salt value should not be statically visible in the source code in the event of a source code compromise. Statically expressed secrets get uploaded to code repositories all the time. Don't believe you are protected from the associated vulnerabilities merely because the code base isn't open source.
In the article's defense, neither claim was verified, but both claims were reported. When a journalist cannot validate a claim themselves, or with experts, it is completely acceptable to report the claim and report its validation status.
It's cool that you jumped on the opportunity to explain how password hashing works. However, the reporter actually cracked hashes, so we can bypass all of this discussion and plainly see the hashes were insecure.
And for what it's worth:
> To be secure the salt must be stored in a different location from the stored hashes and the salt value should not be statically visible in the source code provided a source code compromise.
This isn't true. Your salt can be totally public if you're using a robust key derivation function. Likewise you can make e.g. the work factor (rounds) public for bcrypt and N, r and p public for scrypt (cost factor, block size and parallelization parameters).
The rest of what you said about secrets management in code is sound though.
Uh, what? You can obviously crack a hash digest, and this is standard nomenclature used in both industry and academia. It simply means you've broken preimage resistance or collision resistance in practice. What exactly do you find controversial about this?
And your second paragraph doesn't follow. What I said isn't a strawman attack, it's a basic observation. If you're not using a secure key derivation function, a private salt will not save you. If you are, the salt can be public and there is no meaningful degradation in security whatsoever - you could even prepend or append it to the digest if you'd like.
As a broader point, what you're saying about fixed-length strings is incorrect. Hash functions need not output strings of fixed length. The formal definition of a hash function also admits functions of the form:
H: {0, 1}^* -> {0, 1}^*
not just functions of the form:
H: {0, 1}^* -> {0, 1}^n.
Or in other words, the codomain need not be finite, and the range can be variable. Keccak (SHA-3) is an example of a hash function which provides variable-length output instead of fixed-length output (i.e. via the sponge construction).
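You can see the variable-length behavior directly with the SHAKE functions in Python's hashlib, which expose the Keccak sponge as an extendable-output function: the caller chooses the digest length, and shorter digests are prefixes of longer ones from the same input:

```python
import hashlib

msg = b"hello"

# SHAKE-128 lets the caller pick the output length at squeeze time.
short = hashlib.shake_128(msg).hexdigest(8)    # 8-byte digest
long = hashlib.shake_128(msg).hexdigest(32)    # 32-byte digest, same input

print(short)
print(long)
print(long.startswith(short))  # True: shorter output is a prefix of the longer
```

So the fixed-length codomain is a property of specific constructions like SHA-256, not of hash functions in general.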
> In the article's defense, neither claim was verified, but both claims were reported. When a journalist cannot validate a claim themselves, or with experts, it is completely acceptable to report the claim and report its validation status.
The reporter says they successfully produced passwords. Sounds pretty cut and dried to me.
Don't you think this is a pretty important claim to verify, instead of asserting without giving proof? Granted, if a dictionary attack gives you a bunch of 'password' variants, you probably don't need much more evidence.
If the reporter is so invested in producing this story, then being able to say "no, these are definitely passwords" in the face of the executive would be the golden ticket, right?
>To be secure the salt must be stored in a different location from the stored hashes and the salt value should not be statically visible in the source code provided a source code compromise.
This is outdated though. You don't need to take extra steps to secure the salt if you move to a good hash.
"Hashing" is unfortunately a very overloaded term. Even searching for "password hashing" gave me pretty bad results for the top hits. "Key derivation function" (KDF) gave much better results.
They made a big mistake, but generally I love reading Blind. There are a lot of interesting things on it about which I couldn't talk even to my own colleagues.
Like many others, I had heard about it but never actually took a look until now. Seems like a waste of time to me. Make a throwaway account on Reddit like everyone else, and you'll probably get more well-thought-out and measured replies.
Obviously your mind is made up, but I don't think you've considered that you don't really know that someone is a VP at X company on Blind, either. They could be a janitor.
People are welcome to enjoy whatever fiction they want to read. I'm just saying you'll probably find more useful and reliable information elsewhere.
> Blind claims on its website that its email verification “is safe, as our patented infrastructure is set up so that all user account and activity information is completely disconnected from the email verification process.”
Wow a patented infrastructure! Dope!
I wonder if it's open source so that can be validated objectively?
"Blind claims on its website that its email verification “is safe, as our patented infrastructure is set up so that all user account and activity information is completely disconnected from the email verification process.”
"Patented infrastructure"? It smells like BS to me.
No, it’s true. This was right after the Susan Fowler event when Uber’s Blind channel blew up in membership. They blocked it, people just laughed and turned off wifi, and then they unblocked it a few weeks later.
I enjoyed Blind during its early days; now all the threads are the same:
"HEY, starting at company XYZ at level N, with total comp of YYYY. Is that good"
"No, that's not possible you are lying!"
"Hell yeah, well done mate"
...
Their core functionality is to keep the confidentiality of its users.
There is widespread available technology and know-how on how to do this successfully and consistently.
Blind failed miserably at this fundamental task.
Yet...
>>Blind last month secured another $10 million in new funding after a $6 million raise in 2017
So the VCs are perfectly happy to dump millions into a company that is dishonest and incompetent (see also Uber, Theranos), while thousands of competent honest startups go begging.
Provides a bit of insight into why most VC funds struggle to outperform the market.
For all the toxicity blind is one of the best resources to discover comp info online, since sharing salary is still a bit taboo. Some of it is e-statting but I’ve found it to be useful for when you negotiate comp.
H1B databases tend to underestimate. My entry in the database lists my starting salary from a few years ago, with no bonuses, options/rsus, or promotion/raises.
levels.fyi only really works for companies that have large numbers of reports. The raw data has so many reporting inconsistencies that it's really hard to interpret for most companies.
I agree, though the "stated purpose" of the site being to anonymously complain about coworkers probably doesn't help it stay as a positive environment for intelligent discussion either.
While you can choose to be anon on HN, I think the culture of HN tends away from that. People will use recurring internet handles, write informative bios of themselves, etc. I suppose nothing stops people from doing this on Blind, but like I said I think it's a cultural difference between platforms.
There is less moderation on Blind, so there are more "provocative posts" that people definitely do not want associated with an identifier.
They say "anonymous" but right after signing up with my work email they followed me on Twitter. The two are only connected by my name. Could be coincidence, but probably not.
Also, the community there is pretty toxic. I understand it's mostly the design of the app to be a place where people can say what they want, but I think it attracts a certain type that I'm not terribly interested in getting close with.
> Email verification is safe, as our patented infrastructure is set up so that all user account and activity information is completely disconnected from the email verification process.
> Certain embodiments herein also provides a system and method for authentication, which can prevent service users' identities from being exposed even by hacking of a terminal or server side of a service provider, negligence in information management or a manager's misconduct.
impossible.
> Certain embodiments herein also provides a system and method for authentication, which can store information provided by a service user during subscription and authentication procedures in such a manner that the information cannot be decoded from a side of a service provider's server.
Impossible without a trusted 3rd party to perform the authentication and return a token to the service provider. Which is a well known and well deployed, not novel technique.
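For what it's worth, that well-known pattern looks roughly like this: an auth server verifies the identity and hands the relying service only an opaque token, so the service can dedup accounts without ever seeing an email. Everything below is an illustrative sketch, not Blind's patented scheme:

```python
import hashlib
import hmac
import secrets

class AuthServer:
    """Trusted third party: sees emails, hands out opaque tokens."""

    def __init__(self):
        self._key = secrets.token_bytes(32)

    def authenticate(self, email: str) -> str:
        # (Pretend email ownership was verified here, e.g. via a magic link.)
        # Deterministic per-user token so the service can dedup signups,
        # but the email itself never leaves the auth server.
        return hmac.new(self._key, email.lower().encode(), hashlib.sha256).hexdigest()

class Service:
    """Relying service: sees only tokens, never identities."""

    def __init__(self):
        self.accounts = set()

    def register(self, token: str) -> bool:
        if token in self.accounts:
            return False  # duplicate signup for the same underlying user
        self.accounts.add(token)
        return True

auth, svc = AuthServer(), Service()
t = auth.authenticate("alice@google.com")
print(svc.register(t))                                       # True
print(svc.register(auth.authenticate("alice@google.com")))   # False (dedup works)
```

As the parent notes, the whole scheme only protects users if the auth server and the service are genuinely separated; run both under one roof and the indirection is theater.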
Patents are hard to read, and this one is no exception. Unfortunately I don't have enough interest to invest the time required, but at a glance it seems that the technique is to have an authentication server that can hide the user identity from the relying service, basically by replacing it with a token. Too obvious.
But there's some kind of dedup exchange mentioned; it might be that the auth server doesn't itself store a list of the identities that it has authenticated previously so it has to interact with the service (in a blind way) to dedup. Perhaps the novelty here is that the service itself cannot uniquely identify users; ie each post could be coming from any user in a group. All the service knows is that the post is from a user in that group. On its face, that seems false -- for the first time ever I actually looked at blind and each post has a user pseudonym as metadata. There would be no point to that pseudonym if it didn't represent a uniquely identified individual user.
Anyway, all this is meaningless protection unless there is adequate SoD between the auth server and the service provider, which for teamblind it is obvious there is not.
I stopped reading after seeing a thread of grown men throwing hissy fits because they felt screwed over by their $300k salary when they "deserved" $400k+.
the whole concept of blind is faulty! there is no way you could trust them to maintain your privacy. and of course they can read all the dirt on everything. it’s a guarantee they have no meaningful controls in place.
wanted to add, with all the recent (and, we know, continuously ongoing) talk about chinese espionage ... who needs high-cost espionage when you can get employees to air dirty laundry at zero cost?