Also, have you seen this AI Impacts post and the interview it links to? I would expect so, but it seems worth asking. Tom Griffiths makes similar points to the ones you've made here.
I think these are all points that many people have considered, privately or publicly, in isolation, but that thus far no one has explicitly written down and connected. In particular, lots of people have independently observed that ontological crises in AIs are apparently similar to existential angst in humans, that ontology identification seems philosophically difficult, and that studying ontology identification in humans is therefore plausibly a promising route to understanding ontology identification for arbitrary minds. So, thank you...
I agree with this. The right way to take this further is to get rid of leaky generalizations like "Evidence is good; no evidence is bad," and also to point out what you pointed out: is the evidence still virtuous if it's from the past and you're reasoning from it? Confused questions like that are a sign that things have been oversimplified. I've thought about the more general issues behind this since writing it; I actually posted this on LW over two weeks ago. (I've been waiting for karma.) In the interim, I found an essay on Facebook by...
I really like this bit.
Thank you.
I found a lot of this post disconcerting because of how often you linked to LessWrong posts, even when doing so didn't add anything. I think it would be better if you relied less on LW concepts and just said what you want to say without making outside references.
I mulled over this article for quite a while before posting it, and that included pruning many hyperlinks I deemed unnecessary. Of course, the links that remain are meant to produce a more concise article, not a more opaque one, so what you say is ...
But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?
Are there alternatives to a person like this? It doesn't seem to me that there are.
"Is broadly positive towards AI risk as a cause area" could mean "believes that there should exist effective organizations working on mitigating AI risk", or could mean "automatically gives more credence to the effectiveness of organizations that ...
Why should the person overseeing the survey think AI risk is an important cause?
Because the purpose of the survey is to determine MIRI's effectiveness as a charitable organization. If one believes there is a negligible probability that an artificial intelligence will cause the extinction of the human species within the next several centuries, then it immediately follows that MIRI is an extremely ineffective organization, since it is designed to mitigate a risk that, on this view, does not need mitigating. If one believes this, the survey is moot.
I think it's probably quite important to define in advance what sorts of results would convince us that the quality of MIRI's performance is either sufficient or insufficient. Otherwise I expect those already committed to some belief about MIRI's performance to count the survey as evidence for their existing belief, even as someone with the opposite belief counts it as evidence for theirs.
Relatedly, I also worry about the uniqueness of the problem and how it might change what we consider a cause worth donating to. Although you don't ...
Sorry about the confusion; I meant to say that even though the Against Malaria Foundation observes evidence of the effectiveness of its interventions all the time, and this is good, the founders of the Against Malaria Foundation had to choose an initial action before they had made any observations about the effectiveness of their interventions. Presumably, there was some first village or region of trial subjects that empirically demonstrated the effectiveness of durable, insecticidal bednets. But before this first experiment, the AMF also presumabl...
I'm new to the EA Forum. It was suggested that I crosspost to the EA Forum this LessWrong post criticizing Jeff Kaufman's EA Global 2015 talk "Why Global Poverty?", but I need 5 karma to make my first post.
EDIT: Here it is.
Your comment reads strangely to me because your thoughts seem to fall into a completely different groove from mine. The problem statement is perhaps: write a program that does what-I-want, indefinitely. Of course, this could involve a great deal of extrapolation.
The fact that I am even aspiring to write such a program means I am assuming that what-I-want can be computed. Presumably, at least some portion of the relevant computation, the one I am currently denoting "what-I-want", takes place in my brain. If I want to perform this computation in an...