dec0dedab0de's comments | Hacker News

We played Leisure Suit Larry at my friend's house when his parents were at work. Guessing the right answers for the parental lock was most of the fun.

Enough that they're not facilitating abuse.

I noticed this directly a few weeks ago. I was camping with a friend pretty deep in the woods, but at a campground. About half a mile away there was an RV running a generator, which was annoying as hell, but not the end of the world. Then in the middle of the night, while we were stargazing, the generator turned off, and we could noticeably hear the wildlife adapt to the change. Some got quieter, but mostly it was wildlife returning to the area. As if the sound from the generator were a force field keeping everything away, or at least hidden.

That last part is what really opened my eyes about the noise pollution from datacenters.


Nice philosophising, but it's vehicles. Primarily cars, but not only.

Around here cars are more common, but quiet enough that I rarely notice. Trucks, motorcycles, quads, trains, and boats are all significantly noisier.


*Spam is not email from legitimate companies with valid contact details that have an opt out that you forgot to click when you signed up with them. That's legitimate marketing emails. You might argue they also shouldn't exist, but they are a different category.*

No, they’re all spam. It’s just that some spam is significantly worse than others.

Edit:

this just reminded me of an interaction with a customer when I worked at a dialup ISP over 20 years ago. We would routinely get abuse reports about spam coming from our network that would turn out to be a family computer with a virus. We would disable their account until we got ahold of them, and then help them run antivirus or redirect them to a local shop to fix it.

But this one time my boss is like, "Hey, you wanna pretend you're the email manager? We have an actual spammer sending ads for a local business through our SMTP servers." We were all laughing at the audacity of it; they were sending thousands of copies of the same message, I think for a tackle shop.

When I called the guy to let him know why we had disabled his account, he immediately got angry at me. I vividly remember him saying, “It’s not spam, it’s for a business!!” I explained to him that it doesn’t matter; it’s just as bad, and it could get the whole company blacklisted from sending email. Turns out his friend owned the business and had convinced him to install something that sent emails through Outlook Express.

The reason I got that duty is that I had no problem being confrontational back then. I remember telling him that I thought he should be fined and permanently banned from the internet, but that we’d only let him back on if he uninstalled the thing.

He called back, indignantly asking why we were allowing some other spam. I had to explain that it came from another network, that we were trying to stop it, and that if every ISP were like us it would barely be a problem.

I wonder if that business spams through Google now.


This seems dishonest, like someone is forcing the decision for other reasons, and they're using security and AI as a distraction.

Alright, I'll bite. What are these objective moral principles?

Do unto others as you would have done to you.

Or its simpler corollary: Don't do to others what you would _not_ want done to you.

Everything else derives from this.


Ah, the golden rule. A classic, but it's so simplistic that it can encourage bad behavior. You can never assume that something you want or don't want applies to anyone else.

I think a better formulation is the so-called "platinum rule", i.e. to treat people as they want to be treated (with the important qualification that you ∈ people). But even then it's not without issue (what if someone's wants are harmful to them, e.g. a child refusing to eat anything but candy?), and it's still a far cry from illuminating "objective moral principles" and fairly useless as a calculus for balancing different people's competing interests.

How about passing a job interview better than someone else?

the whole "this wouldn't be a problem if america wasn't so puritanical maaaaaan" argument is total bullshit that completely ignores the young girls who get hurt by this

If society wasn't so puritanical there would be no harm from it.


> If society wasn't so puritanical there would be no harm from it.

What is "puritanical" then, in your opinion? Is it being uptight? Do we abandon our efforts to uphold modesty and dignity of the human person? Do we stop teaching our children about boundaries and propriety, and their responsibilities as they mature? How should boys and young men conduct themselves around women? Should they learn about the concept of consent, and respecting others' wishes?


Those who immediately jump to accusations of "puritanical!" at the mere criticism of any sexual indiscretion are usually sexual deviants of some kind. They may not necessarily partake in any actual sexual activity (they probably usually don't, hence the desire to lower the baseline of acceptable behavior), and they aren't fit for relationships as those require respect. Porn brain comes to mind.

It's time we got back to sexual ethics, as it is absolutely not the case that anything goes or even that "consent" suffices to make something okay.


In this particular type of abuse, the issue arises because humans value hiding reproductive organs. A similar social constraint applies to women's virginity. So if society were not so focused on hiding reproductive organs, this particular issue would not have arisen. Nobody laughs at naked hands; hands are simply expected to be bare, and if you put on gloves it is because of the weather, not usually to hide your hands.

I was just pointing out the poor logic of the comment I was replying to.

Draw the boundary at physical abuse.

I mean, duh, but also this seems like a fairly weak gotcha. Cookies != Tracking: they can track you just fine without cookies, and they can use cookies without tracking you.
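A minimal sketch of that point, with hypothetical function names, showing both directions of the claim: a cookie used purely for login state (no tracking), and a cookie-free pseudonymous identifier derived from request attributes (simplified browser fingerprinting):

```python
import hashlib

def login_cookie_header(session_id: str) -> str:
    # A cookie that only carries login state: no tracking involved.
    return f"Set-Cookie: session={session_id}; HttpOnly; Secure; SameSite=Strict"

def fingerprint_id(user_agent: str, accept_lang: str, ip: str) -> str:
    # Tracking without any cookie: hash stable request attributes
    # into a pseudonymous identifier (a crude fingerprint).
    raw = f"{user_agent}|{accept_lang}|{ip}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

print(login_cookie_header("abc123"))
print(fingerprint_id("Mozilla/5.0", "en-US", "203.0.113.7"))
```

The same visitor produces the same fingerprint on every request, so a server can correlate visits with no cookie ever set, while the session cookie above identifies a login without correlating anything across sites.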

The report is specifically about ad cookies and includes links to primary-source disclosures on the websites of the companies mentioned. We did not count things like DDoS cookies, login tokens, and the like. We operate with unparalleled precision in our domain.

I'm curious why this was downvoted. I'm not complaining or trying to go against HN guidelines; I'm genuinely unclear as to why the first-party source clarifying the question in the GP was marked dead. Bad actors? Misinterpretation? Something else?

No idea, I thought it was a valid question and we go to great lengths in our methodology for this reason. The audits we supply for enterprise are highly specific as to cookie purpose for this reason: https://webxray.ai

> Cookies != Tracking, they can track you just fine without cookies

That's probably true, but it's not what the article is reporting:

> 55 percent of the sites it checked set ad cookies in a user’s browser even if they opted out of tracking

So essentially, it's ignoring user preference directly, not just in spirit.


"Legitimate interest."

That concept is applicable in the European Union; it doesn't apply in California.

Cookies != Tracking?

That's historically been a very prominent purpose of cookies.

Sure it's not exclusively tracking, but it's nonsense to make the assertion that "Cookies != Tracking"


Cookies serve a lot of valuable purposes, it's important to disambiguate.

Sure. But given the lack of specificity from the person I was responding to, it felt important to correct.

Sure it's not exclusively tracking, but it's nonsense to make the assertion that "Cookies != Tracking"

They're not primarily for tracking either.


This article is nothing, but the title is probably right. At least if you consider it unethical to source training data without informed consent, because generating code is inherently unsafe. Of course, you have to have a very narrow definition of AI for even that to be true.

I think that's where most people thought that this article was going.

Shall we just have that debate anyway? :D

The big question that I hoped the article might address: Can AI ever be ethical (within the norms of what the average Jo(e) considers ethical), or have we forever poisoned the well?

If the technology and mathematical underpinnings have been created on fundamentally immoral grounds (IP theft, energy / water excesses, etc) what would we have to do to produce an entirely - or even mostly - ethical AI stack?

Is it even possible, given the dependencies on (lithium / Israel / fossil fuels / conflict mining / capitalistic exploitation / any other morally questionable underpinning you might think of), to redo the work to such a point that we could "black box" our way to decently functioning LLMs?

Assuming that comes with a caveat of rolling back the technological progress, how far back do we have to go? It feels like the bronze age is a step too far, at least on the basis of my "average Jo(e)" test above - but what is considered reasonable?

Then - and only then - would it make sense to ask how to make the content generation itself ethical.

It feels like the Nazi medical science issue all over again, except nobody really cares as much about this one. But socially, it feels like an anti-capitalistic uprising is on the horizon, so maybe if that happens, a moral aversion to the state of AI might piggyback onto it?

Not that I want it to. Quite like AI really. Feels like the background immorality radiation of the earth is quite high anyway; maybe AI isn't the thing to ruffle our feathers about. But it's certainly an interesting thing to mull as we weep over our non-GM oat milk babyccinos, pitying the state of the world.

(I'm really an upbeat person, honest...)


Technically, any given technology by itself is neutral. However, it tends to amplify some human tendencies or others. Great power, great responsibility, all that.

(Depending on various definitions, to some people specifics of this amplification could warrant taking a mental shortcut and just considering that tech as harmful in itself. After all, if it is neutral and helpful under unattainable circumstances, and harmful under real-world conditions, then it is pointless to draw that distinction.)

Personally, I believe that technological and mathematical underpinnings of LLMs by themselves do not at all imply IP theft or detriment to the environment and society, but the way this technology is being adopted should raise serious questions in anyone with such capability.

