This seems mostly reasonable, but it has some unstated (rare!) exceptions that are perhaps too obvious to state, though I think it would be good to state them anyway.
E.g. if you already have reason to believe an organization isn't engaging in good faith, or is inclined to take retribution, then giving them more time to plan that response doesn't necessarily make sense.
There are probably other, less extreme examples along the same lines.
I wouldn't be writing this comment if the language in the post hedged a bit more / left more room for exceptions, but read...
We can't sustain current growth levels
Is this about GDP growth or something else? Sustaining 2% GDP growth for a century (or a few) seems reasonably plausible?
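(Rough arithmetic for scale, my own: 2% annual growth sustained for a century multiplies GDP by about 1.02^100 ≈ 7.2, and for three centuries by about 380. Large, but not obviously impossible.)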
Not quite the same question, but I believe ACE started as one of the CEA children and is now a separate entity.
It still doesn't fully entail Matt's claim, but the content of the interview gets a lot closer than that description. You don't need to give it a full listen, I've quoted the relevant part:
Thanks for finding and sharing that quote. I agree that it doesn't fully entail Matt's claim, and would go further to say that it provides evidence against Matt's claim.
In particular, SBF's statement...
At what point are you out of ways for the world to spend money to change? [...] [I]t’s unclear exactly what the answer is, but it’s at least billions per year probably, so at least 100 billion overall before you risk running out of good things to do with money.
... makes clear that SBF was not completely risk neutral.
At the end of the excerpt Rob says "So you...
When I listened to the interview, I briefly thought to myself that that level of risk-neutrality didn't make sense. But I didn't say anything about that to anyone, and I'm pretty sure I also didn't play through in my head anything about the actual implications if Sam were serious about it.
I wonder if we could have taken that as a red flag. If you take seriously what he said, it's pretty concerning (implies a high chance of losing everything, though not necessarily anything like what actually happened)!
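To spell out the implication with a toy example of my own (not something Sam said): a fully risk-neutral actor should take any positive-expected-value gamble, including repeatedly staking everything on a 60%-chance double-or-nothing bet. Expected wealth grows 1.2x per round, but the chance of not having gone bust within n rounds is 0.6^n, which is under 1% by round 10. Maximizing expected value at every step makes eventually losing everything close to certain.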
Seems worthwhile to quote the relevant bit of the interview:
====
Sam Bankman-Fried: If your goal is to have impact on the world — and in particular if your goal is to maximize the amount of impact that you have on the world — that has pretty strong implications for what you end up doing. Among other things, if you really are trying to maximize your impact, then at what point do you start hitting decreasing marginal returns? Well, in terms of doing good, there’s no such thing: more good is more good. It’s not like you did some good, so good doesn’t matter an...
Hey David, yep, not our finest moment, that's for sure.
The critique writes itself, so let me offer a partial explanation:
Fortunately one upside of the co...
(Same comment as gcmm posted at the same time... Won't delete mine but it's basically a duplicate.)
Seems like it's counterfactual in the same sense as the Facebook match: all of this money is going to charities one way or another, but most of it won't go to charities EAs find plausible, so you're moving money from some random charity to something you think is especially good.
I realize this is a bit hypothetical but it does seem like the numbers matter a bit, so I want to ask:
Is there some basis on which you're imagining that 50% of folks in an animal welfare EA group think that if factory-farmed animals are moral patients, it's more likely that they have net-positive lives?
That surprised me a bit (I'd have imagined it close to 0%, but I'm not very active in any EA groups right now, especially not animal-focused ones, so I don't have much data).
This subject would benefit from distinguishing among kinds of software projects, with some example projects.
There's huge variation in comp for programmers (across variables like location and the kind of work they can do), and also huge variation across projects in their complexity and what they need. I think this post understates those distinctions, and therefore somewhat overstates the risk of a cheap engineer of the needed sort leaving for higher pay.
Do you have any more details on the opinions you've gotten from legal experts? I'd be interested in hearing more about the reasoning for why it's okay.
I think Paul Christiano explained well here why it might be questionable:
If you make an agreement "I'll do X if in exchange you do Y," ... Obviously the tax code will treat that differently than doing X without any expectation of reciprocity, and the treatment depends on Y. ...
We think these matches are ... mostly attributable to this initiative
As someone whose donation was partially matched ($3k of $5k), I can attest that this is correct: I would not have participated without at least some of these efforts from this group of people.
are best thought of as target populations than cause areas ... the space not covered by these three is basically just wealthy modern humans
I guess this thought is probably implicit in a lot of EA, but I'd never quite heard it stated that way. It should be stated that way more often!
That said, I think it's not quite precise. There's a population missing: humans in the not-quite-far-future (e.g. 100 years from now, which I think is not usually included when people say "far future").
For what it's worth, I think this would be improved by some more information about the standards for accepting applications. (Apologies if that already exists somewhere and I just haven't been able to find it.)
[Edited to remove the word "transparency", which might have different connotations than I intended.]
Yeah, recidivists reverted once, so it seems reasonable to expect they're more likely to revert again. That makes the net impact of re-converting a recidivist unclear. Targeting them may be less valuable even if they're much easier to convert.
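As a toy illustration (my numbers are invented): if recidivists are twice as easy to re-convert but stay converted only a third as long on average, then targeting them yields 2 × 1/3 ≈ 0.67 of the value per unit of outreach effort, so "much easier to convert" doesn't automatically win.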
The image is still not showing up for me.
Is there any reason to share those details privately instead of being transparent in public?
Thanks for letting us know about this study!
I'll second the request for details. Especially within EA, it's pretty important to provide details (study plan, hopefully a pre-registration of the proposed analysis, the analysis itself, raw data, etc.) when mentioning study results like this.
The value in discussing the meaning of a word is pretty limited, and I recognize that this usage is standard in EA.
Still, I've done a pretty bad job explaining why I find it confusing. I'll try again:
Suppose we had an organization with a mission statement like "improve the United States through better government." And suppose they had decided that the best way to do that was to recommend that their members vote Republican and donate to the Republican Party. The mission is politically neutral, but it'd be pretty weird for the organization to call ...
That's not really inconsistent with cause-neutrality, given Michelle's definition (which I admit seems pretty common in EA).
(As long as GWWC is open to the possibility of working on something else instead, if something else seemed like a better way to help the world.)
Not really your fault. I'm starting to think the words inherently mean many things and are confusing.
Thanks for the posts.
Yep, we're just using different definitions. I find your definition a bit confusing, but I admit that it seems fairly common in EA.
For what it's worth, I think some of the confusion might be caused by my definition creeping into your writing sometimes. For example, in your next post (http://effective-altruism.com/ea/wp/why_poverty/):
"Given that Giving What We Can is cause neutral, why do we recommend exclusively poverty eradication charities, and focus on our website and materials on poverty? There are three main reasons ..."
If we're really using...
I'd suggest that we interpret "cause-neutrality" in a more straightforward, plain-language way: neutrality about what cause area you support; lack of commitment to any particular cause area.
As with your definition, cause-neutrality is a matter of degree. No one would be completely neutral across all possible causes. In an EA context, a "cause-neutral" EA person or organization might be just interested in furthering EA and not specifically interested in any of the particular causes more than others. But they might want to exclude some ca...
Hi Michelle--
I'm a bit confused. If cause-neutrality is "choos[ing] who to help by how much they can help", then there are many individuals and organizations who seem to fit that definition who I wouldn't ordinarily think of as cause-neutral. For example, many are focused exclusively on global health; many others are focused on animals; etc. Many of those with a cause-exclusive focus chose their focus using "how much they can help" as the criterion. Many of these came to different conclusions from others (either due to different values,...
And the discussion on Jeff's FB post: https://www.facebook.com/jefftk/posts/775488981742.
Especially helpful:
"Back when I was doing predictions for the Good Judgement Project this is something that the top forecasters would use all the time. I don't recall it being thought inaccurate and the superforecasters were all pretty sharp cookies who were empirically good at making predictions."
The name "Multiple Stage Fallacy" seems to encourage equivocation: Is it a fallacy to analyze the probability of an event by breaking it down into multiple stages, or a fallacy to make the mistakes Eliezer points to?
For the Nate Silver example, Eliezer does aim to point out particular mistakes. But for Jeff, the criticism falls somewhere between these two possibilities: there's a claim that Jeff makes these mistakes (which seems to be wrong - see Jeff's reply), but it's as if the mere fact of "Multiple Stages" means there's no need to actually make an argument.
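To illustrate the first reading with made-up numbers: if you decompose an event into ten "necessary" stages and assign each 80%, you get 0.8^10 ≈ 11% overall. That multiplication is only valid if each probability is genuinely conditional on all the previous stages succeeding, and if the list isn't padded with stages that aren't really independent requirements. Decomposition itself isn't the fallacy; careless decomposition is.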
Robert, your charts are great. Adding one that compares "give in year 0" with "give in year 1" would illustrate Carl's point.
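For concreteness, here's the kind of toy calculation I have in mind (my own illustrative model and numbers, not Carl's):

    # Toy give-now vs. give-in-year-1 comparison (illustrative assumptions only)
    r = 0.05  # assumed annual return if the donation is invested instead
    d = 0.03  # assumed annual decline in cost-effectiveness of the best opportunities

    give_in_year_0 = 1.0                # value of donating $1 now (baseline)
    give_in_year_1 = (1 + r) / (1 + d)  # invest for a year, then donate
    print(give_in_year_0, round(give_in_year_1, 3))  # 1.0 vs ~1.019: waiting wins iff r > d

Charting that ratio over a range of r and d would show exactly when each option wins.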
It seems like Rob is arguing against people using Y (the Pascal's Mugging analogy) as a general argument against working on AI safety, rather than as a narrow response to X.
Presumably we can all agree with him on that. But I'm just not sure I've seen people do this. Rob, I guess you have?
Yeah, Matthews really should have replaced "CS" with "math" or "math and philosophy".
That would be more accurate, more consistent with AI safety researchers' self-conception, and less susceptible to some of these counterarguments (especially Julia's point that in CS, the money and other external validation are much more in for-profits than in AI safety).
This post seems to be about a mix of giving that you think isn't the most effective (public radio) and giving that you think is (plausibly) the most effective but isn't widely acknowledged to be effective within EA.
This is what we need: intelligent criticism of EA orthodoxy from the outside
Does Superintelligence really rise to the level of "EA orthodoxy"?
This might just be a nitpick, but it really does seem like we'd want to avoid implying something too strong about EA orthodoxy.
I wanted to make a top-level post for it a few days ago, but I need 5 more upvotes before I can create one. So I took the chance to share it here when I saw this "Open Thread".
I'd accept there's some tradeoff here, but I'd hope it's possible to defend your reasoning while being sufficiently supportive.
This makes me want to distinguish among different kinds of privilege, as in this post:
Dominance is privilege that is harmful to other people and that no one should have; Support is privilege that everyone should have, and is not on its own harmful to anyone else.
For instance: A habit of attempting to dominate conversations is internalized dominance, and actually being allowed to do so is external dominance; speaking up for oneself is a sign of internalized support, while actually being listened to is external support.
I think Jeff is (mostly?) talking abou...
Makes sense.