All of Gram_Stone's Comments + Replies

Your comment reads strangely to me because your thoughts seem to fall into a completely different groove from mine. The problem statement is perhaps: write a program that does what-I-want, indefinitely. Of course, this could involve a great deal of extrapolation.

The fact that I am even aspiring to write such a program means that I am assuming that what-I-want can be computed. Presumably, at least some portion of the relevant computation, the one that I am currently denoting 'what-I-want', takes place in my brain. If I want to perform this computation in an...

0 · kbog · 7y
No, the problem statement is: write a program that does what is right.

Then you missed the point of what I said, since I wasn't talking about what to call it, I was talking about the tools and methods it uses. The question is what people ought to be studying and learning. If you want to solve a philosophical problem then you're going to have to do philosophy. Psychology is for solving psychological problems. It's pretty straightforward.

I mean the kind of work that is done in philosophy departments, and which would be studied by someone who was told "go learn about moral philosophy".

Yes, that's true by his own admission (he affirms in his reply to Berker that the specific cognitive model he uses is peripheral to the main normative argument), and it is apparent if you look at his work. He's eliding into normative arguments about morality, rather than merely describing psychological or cognitive processes.

I don't know what you are talking about, since I said nothing about obsolescence.

Great! Then they'll acknowledge that studying testimony and social consensus is not studying what is good.

Rather than bad actors needing to be restrained by good actors, which is neither a psychological nor a philosophical problem, the problem is that the very best actors are flawed and will produce flawed machines if they don't do things correctly.

Would you like me to explicitly explain why the new wave of pop-philosophers and internet bloggers who think that moral philosophy can be completely solved by psychology and neuroscience don't know what they're talking about? It's not taken seriously; I didn't go into detail because I was unsure if anyone around here took it seriously.

Also, have you seen this AI Impacts post and the interview it links to? I would expect so, but it seems worth asking. Tom Griffiths makes similar points to the ones you've made here.

0 · Kaj_Sotala · 7y
I'd seen that, but re-reading it was useful. :)

I think these are all points that many people have considered privately or publicly in isolation, but that thus far no one has explicitly written them down and drawn the connections between them. In particular, lots of people have independently made the observation that ontological crises in AIs are apparently similar to existential angst in humans, that ontology identification seems philosophically difficult, and that studying ontology identification in humans is therefore plausibly a promising route to understanding ontology identification for arbitrary minds. So, thank you...

I agree with this. It's the right way to take this further, by getting rid of leaky generalizations like "Evidence is good, no evidence is bad," and by pointing out what you pointed out: is the evidence still virtuous if it's from the past and you're reasoning from it? Confused questions like that are a sign that things have been oversimplified. I've thought about the more general issues behind this since I wrote it; I actually posted this on LW over two weeks ago. (I've been waiting for karma.) In the interim, I found an essay on Facebook by...

I really like this bit.

Thank you.

I found a lot of this post disconcerting because of how often you linked to LessWrong posts, even when doing so didn't add anything. I think it would be better if you didn't rely on LW concepts so much and just said what you want to say without making outside references.

I mulled over this article for quite a while before posting it, and that included pruning many hyperlinks I deemed unnecessary. Of course, the links that remain are meant to produce a more concise article, not a more opaque one, so what you say is ...

3 · Peter Wildeford · 8y
You can rephrase LW jargon with what the jargon represents (in LW jargon, "replace the symbol with the substance"): for one example, instead of saying ..., say ...
6 · MichaelDickens · 8y
Specific examples:

* Linking to the Wikipedia pages for effective altruism, existential risk, etc. is unnecessary because almost all of your audience will be familiar with these terms.
* For lots of your links, I had no problem understanding what you meant without reading the associated LW post.
* You used a lot of LW jargon where you could have phrased things differently to avoid it: "dissolve the question", "disguised queries", "taboo", "confidence levels outside of an argument".
* Lots of your links were tangential or just didn't add anything to what you already said: "a wise outsider", your three links for "save the world", "the commonly used definition", "you can arrive at true beliefs...", "but they took the risk of riding...", "useless sentiment", "and it's okay".

I believe the following links were fine and you could leave them in: "mind-killed", "eschatology", "a common interest of many causes", "you can see malaria evaporating", "Against Malaria Foundation" (although I'd link to the website rather than the Wikipedia page), "Existential Strategy Research". I'd remove all the others. Although you might want to remove some of these too: each of the links to LessWrong posts on this list is fine on its own, but you probably don't want to have more than one or two links to the same website/author in an article of this length.

Hope that helps.

But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?

Are there alternatives to a person like this? It doesn't seem to me like there are.

"Is broadly positive towards AI risk as a cause area" could mean "believes that there should exist effective organizations working on mitigating AI risk", or could mean "automatically gives more credence to the effectiveness of organizations that ... (read more)

0 · Marcus_A_Davis · 8y
I meant picking someone with no stake whatsoever in the outcome. Someone who, though exposed to arguments about AI risk, has no strong opinions one way or another. In other words, someone without a strong prior on AI risk as a cause area. Naturally, we all have biases, even if they are not explicit, so I am not proposing this as a disqualifying standard, just a goal worth shooting for. An even broader selection tool I think worth considering alongside this is simply "people who know about AI risk", but that's basically the same as Rob's original point of "have some association with the general rationality or AI community." Edit: Should say "Naturally, we all have priors..."

Why should the person overseeing the survey think AI risk is an important cause?

Because the purpose of the survey is to determine MIRI's effectiveness as a charitable organization. If one believes that there is a negligible probability that an artificial intelligence will cause the extinction of the human species within the next several centuries, then it immediately follows that MIRI is an extremely ineffective organization, as it would be designed to mitigate a risk that ostensibly does not need mitigating. The survey is moot if one believes this.

0 · Marcus_A_Davis · 8y
I don't disagree that someone who thinks there is a "negligible probability" of AI causing extinction would be unsuited to the task. That's why I said to aim for neutrality. But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?

I think that it's probably quite important to define in advance what sorts of results would convince us that the quality of MIRI's performance is either sufficient or insufficient. Otherwise I expect those already committed to some belief about MIRI's performance to treat the survey as evidence for their existing belief, even while someone holding the opposite belief treats it as evidence for theirs.

Relatedly, I also worry about the uniqueness of the problem and how it might change what we consider a cause worth donating to. Although you don't ...

Sorry about the confusion; I meant to say that even though the Against Malaria Foundation observes evidence of the effectiveness of its interventions all of the time, and this is good, its founders had to choose an initial action before they had made any observations about the effectiveness of their interventions. Presumably, there was some first village or region of trial subjects that empirically demonstrated the effectiveness of durable, insecticidal bednets. But before this first experiment, the AMF also presumabl...

I'm new to the EA Forum. It was suggested to me that I crosspost to the EA Forum this LessWrong post, which criticizes Jeff Kaufman's speech at EA Global 2015 entitled 'Why Global Poverty?', but I need 5 karma to make my first post.

EDIT: Here it is.

1 · Linch · 8y
"And I would argue that any altruist is doing the same thing when they have to choose between causes before they can make observations. There are a million other things that the founders of the Against Malaria Foundation could have done, but they took the risk of riding on distributing bed nets, even though they had yet to see it actually work." This point should be rewritten, I think. I'm not sure what the "it" here you're talking about actually is.