
I highly recommend the book referenced in the article: Nick Bostrom's Superintelligence.

https://www.amazon.com/Superintelligence-Dangers-Strategies-...

It has helped me make informed, realistic judgments about the path AI research needs to take. It and related works should be in the vocabulary of anybody working towards AI.



Every time I encounter Bostrom's writing, I think of this Von Neumann quote:

"There’s no sense in being precise when you don’t even know what you’re talking about."

Bostrom is one of those medieval cartographers drawing fantastical beasts in the blank spots of continents which he has never visited.


Actually, there are at least two decades-old branches of computer science/mathematics that have formulated precise definitions of AI and proved many theoretical results that gave rise to lots of practical applications. These branches of CS are called "Reinforcement Learning" and "Universal AI".

While Gwern has already mentioned Reinforcement Learning, UAI is a less known (but even more rigorous and well-received) mathematical theory of general AI that arose from Marcus Hutter's work [1].

My point here is: how can one say that there is no definition of AI when there are several precise mathematical definitions available, with many theorems proven about them? (A rough sketch of the AIXI definition follows the reference below.)

1. http://www.hutter1.net/ai/uaibook.htm
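
For those unfamiliar, here is a rough sketch of the AIXI definition from Hutter's book. This is my paraphrase, so treat the notation as illustrative rather than authoritative: the agent chooses actions that maximize expected total reward under a Solomonoff-style mixture over all computable environments,

    a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
          (r_k + \dots + r_m) \sum_{q \,:\, U(q, a_1 \dots a_m) = o_1 r_1 \dots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over environment programs, \ell(q) is the length of q, the a_i / o_i / r_i are actions, observations, and rewards, and m is the horizon. The 2^{-\ell(q)} weighting over all programs is exactly what makes the definition precise, and also what makes it incomputable.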


You are confusing narrow AI with AGI. None of those results prove anything practical about what an actually achievable AGI would look like, as opposed to a theoretical construct that is provably incomputable.


No, he is not. Hutter's work on universal AI, his AIXI formulation, is specifically a model of domain-general AGI.

That said, it is also not computable with finite time or resources, so it is unclear what relevance it has to practical applications.


Because AIXI_tl has failure modes (it doesn't model itself as being embedded in its environment, so it can't ensure its own survival), which demonstrates that any approach that is just a weaker version of it will have those same problems.

> That said it is also not computable with finite time or resources, so it is unclear what relevance it has to practical applications.

You can define a space- or time-bounded version, and then it's computable, but still intractable.


I agree with the first sentence, but I'd like to note that there are practical (though weak) approximations of AIXI that preserve some of its properties and, while not Turing-complete, prove more performant than other RL approaches on the Vetta benchmark. See [1].

Also, there is a Turing-complete implementation of OOPS, a search procedure related to AIXI that can solve toy problems, programmed by none other than Jürgen Schmidhuber ten years ago [2].

Even more important: there is a breadth of RL theory built around MDPs and POMDPs. There are asymptotic, convergence, bounded-regret, and on-policy/off-policy results, etc. Modern practical deep RL agents (the ones DeepMind is researching) are developed on the same RL theory and inherit many of these results (a minimal example is sketched after the references).

From my POV, it is unfair to the researchers who produced these results over decades of work when the grandparent (and great-grandparent) comments claim that there is no definition of or theory about AI, and that AI is like alchemy.

1. https://www.jair.org/media/3125/live-3125-5397-jair.pdf

2. http://people.idsia.ch/~juergen/oops.html
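
To give a concrete flavor of the MDP theory mentioned above: tabular Q-learning is one of the classic algorithms with a proven convergence result (Watkins & Dayan, 1992). Below is a minimal sketch in Python on a made-up two-state toy MDP; the environment and constants here are illustrative, not taken from the cited papers.

    import random
    from collections import defaultdict

    # Toy 2-state MDP (illustrative): action 1 in state 0 moves to state 1,
    # action 1 in state 1 pays reward 1 and moves back; other moves do nothing.
    def step(state, action):
        if state == 0:
            return (1, 0.0) if action == 1 else (0, 0.0)
        return (0, 1.0) if action == 1 else (1, 0.0)

    alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration
    Q = defaultdict(float)                 # Q[(state, action)]

    state = 0
    for t in range(100000):
        # Epsilon-greedy exploration: the convergence theorem requires every
        # state-action pair to be visited infinitely often.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max([0, 1], key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # The Q-learning update. Convergence to Q* is proven for suitably
        # decaying learning rates; a fixed alpha is used here for simplicity.
        best_next = max(Q[(next_state, a)] for a in (0, 1))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

    print({sa: round(v, 2) for sa, v in sorted(Q.items())})

Deep RL agents replace the table with a neural network, at which point many (though not all) of these guarantees weaken, but the underlying MDP framework is the same.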


Thanks for the quote and the metaphor. It's a good description of what's wrong with the "AI risk" community. Drives me nuts how much traction they've been able to get, and how many ardent defenders, when they're not doing any intellectual work, just facile speculation. Their dogma seems to infect every conversation on AGI and it's a shame.


Bless you for saying it.

The analogy I like to use for our understanding of AI is alchemy. We threw Sir Isaac Goddamned Newton at chemistry and he couldn't make forward progress, because the tools were not precise enough. Similarly, we just don't understand minds enough yet to formulate sensible questions about AI.

This doesn't bother Bostrom. He builds castles of thought in the air, and then climbs up into them.


Quentin Hardy of the NYT on Bostrom: His career amounts to: "Assume hummingbirds will speak French. Let's discuss their novels." https://twitter.com/qhardy/status/806003812431319041


> Similarly, we just don't understand minds enough yet to formulate sensible questions about AI.

We most certainly do understand enough to formulate questions, and even answer some of them. The problem is that the people making the most noise (Bostrom et al) are not trained in neuroscience or computer science, nor do they have practical experience in deploying working systems. They have about as much training and expertise as science fiction writers, and the end result is similar.


> The problem is that the people making the most noise (Bostrom et al) are not trained in neuroscience or computer science…

This is incorrect. Among his degrees, Bostrom has a master's in computational neuroscience. His arguments have also convinced PhD neuroscientists (such as Sam Harris) and computer scientists (such as Stuart Russell) about the potential dangers of AI.


Degrees don't matter, publications do.


Particularly when that degree is from a single-year program and is now 20 years old, in a field that has been revolutionized multiple times in the interim. It's a bit like someone saying they are a web developer because they went through an App Academy-like boot camp in 1996, if such a thing had existed. King's College is a bit more prestigious than that, sure, but content-wise it is a fair comparison.

It would be a different story if he had published in the meantime, but he did not. Nor did he work on practical projects in industry or anything. He shifted gears to philosophical speculation, which he has done since.


Not only are you moving the goalposts, but you are again incorrect. Since 1999, Bostrom has authored four books and published over 30 articles in peer-reviewed journals.

There are good arguments to be made against Bostrom's Superintelligence, but smears and surface-level analogies aren't appropriate. Please engage the ideas, not the man.


Published in the field of neuroscience or computer science?

Read what I wrote again please. I think you misinterpreted.


Silly me, I thought it was results!


Not about AGI. We understand enough to formulate questions about narrow "AI," certainly, since that already exists in a narrow sense.


In case you were not aware, we have about a decade of conferences on the specific topic of Artificial General Intelligence. Many of the papers from those conferences provide valuable insights into the capabilities and limits of various approaches to solving specific general-intelligence problems. You might find articles from past conferences interesting:

http://agi-conference.org/

There is also the Advances in Cognitive Systems journal and associated conference, which is AGI even if they prefer to avoid that specific acronym:

http://www.cogsys.org/

And there is always a small but growing number of papers related to AGI in each AAAI conference.


1. Just because something has been published as a paper does not mean it is applicable, says something interesting (even if only theoretically), or even that it's actually correct (it just can't be obviously wrong).

2. Setting aside cogsys (CogSci is a whole different beast than AI/ML in computer science), the only impactful journal/conference you've listed is AAAI.

3. Papers are also typically incremental and all of the AGI papers I've seen in AAAI (and there have been very few) are no different, tackling some small theoretical subproblem.

4. I'm not saying the research is useless. It's very valuable. But it is pure theory right now, and to claim it has insights for us about what AGI would actually look like is very premature.


Cutting criticism is useless without examples. Name such a beast, and explain why it is unlikely to exist.


Maybe I'm ignorant, but reading the abstract/introduction, I immediately got the sense that this guy (Gwern) was a crank. At the very least, I figured it was some tangentially related philosophical quote, not part of the main body.


Nick Bostrom also did an interview on EconTalk that covers quite a lot of the topics in the book from a high-level, if you want a shorter introduction to AI safety and the control problem: http://www.econtalk.org/archives/2014/12/nick_bostrom_on.htm...


Thank you! Now I have a way to introduce my book-averse friends to the control problem.


It's a book worth reading, as it seems to have captured quite a bit of interest from the movers and shakers in this field. However, one should be aware that it presents a one-sided viewpoint, and reasonable minds disagree. It's not clear yet what the future will bring, or whether this focus on AI safety will reduce real existential risk or delay life-saving technologies.



