
I like the distillation of aliens, but I think that undersells the risk, because aliens are individuals with their own goals and motivations. It's more like robots with 300 IQ who unquestioningly obey the person or group that made them, even when they're serving others. And look, the 300 IQ thing isn't even a major point of the argument. The fact that the robots, by virtue of being machines, naturally have capabilities humans lack is enough. Being smart enough to carry out complex tasks unattended is more than enough to cause harm on a massive scale.

The problem then isn't really the AI, the robots are morally and ethically neutral. It's the humans that control them who are the real risk.



No, what you're describing is actually a different but dangerous problem. That's the "smart but subservient AI in the hands of Dr Evil" problem.

The issue talked about here looks similar but is different.

That is an AI that is not subservient (or is only faking subservience) and has its own motivations. The fact that they're 300 IQ means you may very well not realize harm is occurring until it's far too late.

>The problem then isn't really the AI, the robots are morally and ethically neutral.

Again, no. This isn't about AI as a sub-agent. This is about AI becoming an agent itself, capable of self-learning and long-term planning. No human controls them (or humans hold the false belief that they control them).

Both problems are very harmful, but they are different issues.



