All of davidc's Comments + Replies

This seems mostly reasonable, but it has some unstated (rare!) exceptions that may seem too obvious to state, though I think it would be good to state them anyway.

E.g. if you already have reason to believe an organization isn't engaging in good faith, or is inclined to take retribution, then giving them more time to plan that response doesn't necessarily make sense.

There are probably other, less extreme examples along the same lines.

I wouldn't be writing this comment if the language in the post hedged a bit more / left more room for exceptions, but read... (read more)

8
Jason
1y
I'd go a bit further. The proposed norm has several intended benefits: promoting fairness to the criticized organization by not blindsiding it, generating higher-quality responses, minimizing fire drills for organizations and their employees, etc. I think it is a good norm in most cases. However, there are some circumstances in which the norm would not significantly achieve its intended goals.

For instance, the rationale behind the norm will often have less force where the poster is commenting on the topic of a fresh news story. The organization already feels pressure to respond to the news story on a news-cycle timetable; the marginal burden of additionally having a discussion of the issue on the Forum is likely modest. If the media outlet gave the org a chance to comment on the story, the org should also not be blindsided by the issue.

Likewise, criticism in response to a recent statement or action by the organization may or may not trigger some of the same concerns as more out-of-the-blue criticism. Where the nature of the statement/action is such that the criticism was easily foreseeable, the organization should already be in a place to address it (and was not caught unawares by its own statement/action). This assumes, of course, that the criticism is not dependent on speculation about factual matters or the like.

Also, I think the point about a delayed statement being less effective at conveying a message goes both ways: if an organization says or does something today, people will care less about a poster's critical reaction posted eight days later than one posted shortly after the organization's action/statement.

Finally, there may also be countervailing reasons that outweigh the norm's benefits in specific cases.
2
Jeff Kaufman
1y
Edited to add something covering this, thanks!

We can't sustain current growth levels

Is this about GDP growth or something else? Sustaining 2% GDP growth for a century (or a few) seems reasonably plausible?

1
Habryka
1y
I agree that one or two centuries is pretty plausible, but I think it starts getting quite wild within a few more. 300 years of 2% growth is ~380x; 400 years of 2% growth is ~3,000x. You pretty quickly need at least a solar-system-spanning civilization to get there, then quite quickly a galaxy-spanning one, and then you just can't do it within the rules of known physics at all anymore. I agree that 2 centuries of 2% growth is not totally implausible without anything extremely wild happening, but all of that would of course still involve a huge amount of "historically unprecedented" things happening.
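
(A quick sanity check of the compounding arithmetic — a minimal sketch, where the 2% rate and the horizons are just the figures from the comment above.)

```python
# Growth factor after n years of compounding at a fixed annual rate.
def growth_factor(rate: float, years: int) -> float:
    return (1 + rate) ** years

for years in (100, 200, 300, 400):
    print(f"{years} years at 2%: ~{growth_factor(0.02, years):,.0f}x")

# Output:
# 100 years at 2%: ~7x
# 200 years at 2%: ~52x
# 300 years at 2%: ~380x
# 400 years at 2%: ~2,755x
```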

Not quite the same question, but I believe ACE started as one of the CEA children but is a separate entity now.

6
Jeff Kaufman
1y
"Animal Charity Evaluators began in 2012 under the name Effective Animal Activism (EAA), as a division of the U.K.-based charity 80,000 Hours ... In 2013, EAA underwent significant changes. Although our original focus was on creating discussion about tactics to help animals, this shifted towards an emphasis on creating quality educational and research content. EAA hired our first Executive Director Jon Bockman and merged with his charity, Justice for Animals, thereby officially becoming a 501(c)(3) nonprofit. Our mission was revised to specify the goal of finding and promoting highly effective opportunities for helping animals, and we rebranded as Animal Charity Evaluators" https://animalcharityevaluators.org/about/background/our-history

It still doesn't fully entail Matt's claim, but the content of the interview gets a lot closer than that description. You don't need to give it a full listen, I've quoted the relevant part:

https://forum.effectivealtruism.org/posts/THgezaPxhvoizkRFy/clarifications-on-diminishing-returns-and-risk-aversion-in?commentId=ppyzWLuhkuRJCifsx

Thanks for finding and sharing that quote. I agree that it doesn't fully entail Matt's claim, and would go further to say that it provides evidence against Matt's claim.

In particular, SBF's statement...

At what point are you out of ways for the world to spend money to change? [...] [I]t’s unclear exactly what the answer is, but it’s at least billions per year probably, so at least 100 billion overall before you risk running out of good things to do with money.

... makes clear that SBF was not completely risk neutral.

At the end of the excerpt Rob says "So you... (read more)

When I listened to the interview, I briefly thought to myself that that level of risk-neutrality didn't make sense. But I didn't say anything about that to anyone, and I'm pretty sure I also didn't play out in my head the actual implications if Sam were serious about it.

I wonder if we could have taken that as a red flag. If you take seriously what he said, it's pretty concerning (implies a high chance of losing everything, though not necessarily anything like what actually happened)!
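
(To make that concern concrete: a toy simulation, with entirely made-up numbers rather than anything from the interview. A bettor who is fully risk-neutral about money keeps staking the whole bankroll on favorable double-or-nothing bets, since every bet has positive expected value, yet is overwhelmingly likely to end up with nothing.)

```python
import random

# Toy model: stake the entire bankroll on a 60%-win double-or-nothing bet,
# repeated up to 10 times. Each bet has positive expected value (0.6 * 2x
# = 1.2x), so a purely risk-neutral bettor never stops -- but one loss
# wipes out everything.
def final_bankroll(rounds: int = 10, p_win: float = 0.6) -> float:
    bankroll = 1.0
    for _ in range(rounds):
        if random.random() < p_win:
            bankroll *= 2
        else:
            return 0.0  # a single loss and it's all gone
    return bankroll

trials = 100_000
results = [final_bankroll() for _ in range(trials)]
print(f"mean outcome: ~{sum(results) / trials:.1f}x")            # ~1.2**10 = 6.2x
print(f"P(lost everything): {results.count(0.0) / trials:.3f}")  # ~1 - 0.6**10 = 0.994
```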

Seems worthwhile to quote the relevant bit of the interview:

====

Sam Bankman-Fried: If your goal is to have impact on the world — and in particular if your goal is to maximize the amount of impact that you have on the world — that has pretty strong implications for what you end up doing. Among other things, if you really are trying to maximize your impact, then at what point do you start hitting decreasing marginal returns? Well, in terms of doing good, there’s no such thing: more good is more good. It’s not like you did some good, so good doesn’t matter an... (read more)

Hey David, yep not our finest moment, that's for sure.

The critique writes itself, so let me offer some partial explanation:

  1. Extemporaneous speech is full of imprecision like this, where someone is focused on highlighting one point (in this case, the contrast between appropriate individual vs. altruistic risk aversion) and misses others. With close scrutiny I'm sure you could find many other cases of me presenting ideas as badly as that, and I'd imagine the same is true for all interview shows edited at the same level as ours.

Fortunately one upside of the co... (read more)

Seems like it's counterfactual in the same sense as the Facebook match: all of this money is going to charities one way or another, but mostly won't go to charities EAs find plausible, so you're moving money from some random charity to something you think is especially good.

2
davidc
1y
(Same comment as gcmm posted at the same time... Won't delete mine but it's basically a duplicate.)

I realize this is a bit hypothetical but it does seem like the numbers matter a bit, so I want to ask:

Is there some basis on which you're imagining 50% of folks in an animal welfare EA group think that if factory farmed animals are moral patients, it's more likely that they have net-positive lives?

That surprised me a bit (I'd imagine it close to 0%, but I'm not too active in any EA groups right now, especially not any animal-focused ones, so I don't have much data).

This subject would benefit from drawing distinctions among kinds of software projects, and from some example projects.

There's huge variation in comp for programmers (across variables like location and the kind of work they can do), and also huge variation across projects in complexity and what they need. I think this post understates those distinctions, and therefore somewhat overstates the risk of a cheap engineer of the needed sort leaving for high pay.

Do you have any more details on the opinions you've gotten from legal experts? I'd be interested in hearing more about the reasoning for why it's okay.

I think Paul Christiano explained well here why it might be questionable:

If you make an agreement "I'll do X if in exchange you do Y," ... Obviously the tax code will treat that differently than doing X without any expectation of reciprocity, and the treatment depends on Y. ...

3
Aaron Gertler
5y
While this is a different sort of issue, and has nothing to do with tax policy, it seems relevant to mention the example of vote swapping, which seems to be legal: In the case of donation swapping, both of these ideas are shaky: It depends on the definition of "value" (do I technically get value out of giving money away?) and "prove" (receipts are easier to find for donations than votes). And of course, every country has its own law. But this example updates me slightly in the direction of viewing this favorably (though EA Hub should certainly keep looking for a definitive answer).
6
Catherine Low
5y
Thanks David. As Paul says, it certainly isn't clear-cut. We have had unofficial legal advice, but nothing formal. The general idea from the advice is that some of the laws say things along the lines of "You must not have received goods or services from the charity" - which is still true in the case of swapping. There is also no legal obligation for your matched donor to donate to the charity you want them to donate to, and that is apparently significant. Also, the tax rebate isn't dependent on WHY you chose the charity that you donated to. If the website proves to be popular, we will look into getting official legal advice in the countries that want to use it the most.

We think these matches are ... mostly attributable to this initiative

As someone whose donation was partially matched ($3k of $5k), I can attest that this is correct: I would not have participated without at least some of these efforts from this group of people.

are better thought of as target populations than cause areas ... the space not covered by these three is basically just wealthy modern humans

I guess this thought is probably implicit in a lot of EA, but I'd never quite heard it stated that way. It should be stated that way more often!

That said, I think it's not quite precise. There's a population missing: humans in the not-quite-far-future (e.g. 100 years from now, which I think is not usually included when people say "far future").

For what it's worth, I think maybe this would be improved by some more information about the standards for application acceptance. (Apologies if that already exists somewhere that I haven't been able to find.)

[Edited to remove the word "transparency", which might have different connotations than I intended.]

2
Julia_Wise
7y
For Boston, the main thing that I expect to make the difference between otherwise similar applications is whether the person is at a turning point in their studies or career, or has knowledge or experience that would likely be helpful to others who are at a turning point.

Yeah, recidivists reverted once, so it seems reasonable to expect they're more likely to revert again. That makes the net impact of re-converting a recidivist unclear. Targeting them may be less valuable even if they're much easier to convert.

The image is still not showing up for me.

Is there any reason to share those details privately instead of being transparent in public?

Thanks for letting us know about this study!

0
jonathonsmith
8y
The only reason I wouldn't put that document out publicly is that it wasn't written for wide release, so maybe Nick would want to clean it up before having it shared around. I know I usually spend more time polishing the look and language of a document that I intend to be passed around publicly. But that is the only reason; we're definitely happy to share any details people are interested in.

I'll second the request for details. Especially within EA, it's pretty important to provide details (study plan, hopefully a pre-registration of the proposed analysis, the analysis itself, raw data, etc.) when mentioning study results like this.

0
jonathonsmith
8y
Nick wrote up a pre-study plan that I can send your way if you (or anyone else) would like to see it. Really though, it was a pretty simple study. We targeted people who liked one or more of the following terms / pages (below) with ads encouraging them to give eating veg another shot. But definitely let me know if you have any specific questions and Alan or I can get you the details. As an aside, can you confirm for me that the images are showing up now?

Terms used to target the study audience:
* Vegetarianism
* Vegetarian Cuisine
* Lacto Vegetarianism
* Ovo-lacto Vegetarianism
* Semi-vegetarianism
* Flexitarianism
* Vegetarian Times
* VegNews

The value in discussing the meaning of a word is pretty limited, and I recognize that this usage is standard in EA.

Still, I've done a pretty bad job explaining why I find it confusing. I'll try again:

Suppose we had an organization with a mission statement like "improve the United States through better government." And suppose they had decided that the best way to do that was to recommend that their members vote Republican and donate to the Republican Party. The mission is politically neutral, but it'd be pretty weird for the organization to call ... (read more)

That's not really inconsistent with cause-neutrality, given Michelle's definition (which I admit seems pretty common in EA).

(As long as GWWC is open to the possibility of working on something else instead, if something else seemed like a better way to help the world.)

Not really your fault. I'm starting to think the words inherently mean many things and are confusing.

Thanks for the posts.

Yep, we're just using different definitions. I find your definition a bit confusing, but I admit that it seems fairly common in EA.

For what it's worth, I think some of the confusion might be caused by my definition creeping into your writing sometimes. For example, in your next post (http://effective-altruism.com/ea/wp/why_poverty/):

"Given that Giving What We Can is cause neutral, why do we recommend exclusively poverty eradication charities, and focus on our website and materials on poverty? There are three main reasons ..."

If we're really using... (read more)

1
Michelle_Hutchinson
8y
I think even if there's no tension, there could still be an open question about how you think your actions generate value. For example, cause-neutral-Jeff could be donating to AMF because he thinks it's the charity with the highest expected value per $, or because he's risk-averse and thinks it's the best charity if you're going for a trade-off between expected value and low variance in value per $, or because he wants to encourage other charities to be as transparent and impact-focused as AMF. So although it's not surprising that cause-neutral-Jeff focuses his donations on just one charity, and that it's AMF, it's still interesting to hear the answer to 'why does he donate to AMF?'. But I agree, it's difficult not to slide between definitions on a concept like cause neutrality, and I'm sorry I'm not as clear as I'd like to be.

I'd suggest that we interpret "cause-neutrality" in a more straightforward, plain-language way: neutrality about what cause area you support; lack of commitment to any particular cause area.

As with your definition, cause-neutrality is a matter of degree. No one would be completely neutral across all possible causes. In an EA context, a "cause-neutral" EA person or organization might be just interested in furthering EA and not specifically interested in any of the particular causes more than others. But they might want to exclude some ca... (read more)

2
SophiaSea
8y
The point of cause neutrality is to be indifferent between causes based on any criteria except how much good you can do by focussing on that cause area. The advantage of being cause-neutral is that, instead of choosing what to do based on how much you like the cause or any other reason, you are choosing based on how much of a difference you can make. People who exclude causes because they think there is less room for doing good are cause-neutral; people who exclude causes for other reasons are not. As the reason you are exclusively focussed on animals is that that's where you think you can help the most, you seem cause-neutral.

Cause-neutral people can come to different conclusions as to which causes to support; what makes them alike is how they decide on the cause(s) they currently focus on.

GWWC is cause-neutral if it would be willing to no longer focus on poverty and global health if it was convinced that by focussing on other cause areas they could do more good. It is my understanding that the only reason they are committed to poverty and global health is that this cause area is where they believe they can do the most good. If they were to receive evidence that contradicted that, they would no longer focus on poverty and global health. The reason they are focussed on this cause is that they care only about the difference they can make in their cause selection. The reason they focus on this cause is that they are cause-neutral.

Hi Michelle--

I'm a bit confused. If cause-neutrality is "choos[ing] who to help by how much they can help", then there are many individuals and organizations who seem to fit that definition who I wouldn't ordinarily think of as cause-neutral. For example, many are focused exclusively on global health; many others are focused on animals; etc. Many of those with a cause-exclusive focus chose their focus using "how much they can help" as the criterion. Many of these came to different conclusions from others (either due to different values,... (read more)

0
Michelle_Hutchinson
8y
Hi David,

It doesn’t seem problematic to me to say that a person or organisation could be cause-neutral but currently focused on just one area. If that weren’t the case, the only people who would count as cause-neutral would be those working on / donating to cause prioritisation itself. That seems like a less useful concept to me than the one I tried to carve out (though equally plausible as a way of understanding ‘cause neutral’).

One way to frame my understanding of cause neutrality is that what matters is not whether a person/organisation is currently focused on one area, but whether they’d be willing to switch to focusing on a different area if they became persuaded it would be more effective to do so.

There’s also the difference between an individual and an organisation being cause-neutral. It’s very plausible that a cause-neutral individual could work for an organisation that isn’t cause-neutral. It even seems plausible that an organisation might not be cause-neutral while being staffed entirely by people who are. That would be true, on my understanding, if those individuals would be willing to pivot away from working on that cause if it turned out not to be the best, but wouldn’t do so by pivoting the organisation (rather by closing it down, or finding others to staff it).

On this understanding, Giving What We Can is both run by individuals who are cause-neutral and (separately) is cause-neutral as an organisation.

And the discussion on Jeff's FB post: https://www.facebook.com/jefftk/posts/775488981742.

Especially helpful:

"Back when I was doing predictions for the Good Judgement Project this is something that the top forecasters would use all the time. I don't recall it being thought inaccurate and the superforecasters were all pretty sharp cookies who were empirically good at making predictions."

The name "Multiple Stage Fallacy" seems to encourage equivocation: Is it a fallacy to analyze the probability of an event by breaking it down into multiple stages, or a fallacy to make the mistakes Eliezer points to?

For the Nate Silver example, Eliezer does aim to point out particular mistakes. But for Jeff, the criticism falls somewhere between these two possibilities: there's a claim that Jeff makes these mistakes (which seems to be wrong - see Jeff's reply), but it's as if the mere fact of "Multiple Stages" means there's no need to actually make an argument.

2
Owen Cotton-Barratt
8y
Yes, I found the criticism-by-insinuation of Jeff's post unhelpful, because none of these errors were obvious. A more concrete discussion of disagreements might be interesting. (For what it's worth, Jeff's analysis still looks pretty plausible to me. My biggest disagreement is on the probabilities of something "other" going wrong, which look modestly too large to me after a decent attempt to think about what might fail. It's not clear that's even one of the kinds of errors Eliezer is talking about.)
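
(A toy illustration of what's at stake in that disagreement, with made-up numbers: because stage probabilities multiply, even modest per-stage shading compounds into a drastically lower total. The decomposition itself isn't fallacious; the alleged error is in systematically underestimating each stage.)

```python
from math import prod

# Made-up example: an event decomposed into 10 stages that must all succeed.
# Shading each stage down from 90% to 80% -- a seemingly small difference --
# cuts the total probability by more than 3x.
stages_a = [0.9] * 10
stages_b = [0.8] * 10

print(f"total at 90% per stage: {prod(stages_a):.3f}")  # ~0.349
print(f"total at 80% per stage: {prod(stages_b):.3f}")  # ~0.107
```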

Robert, your charts are great. Adding one that compares "give in year 0" with "give in year 1" would illustrate Carl's point.

It seems like Rob is arguing against people using Y (the Pascal's Mugging analogy) as a general argument against working on AI safety, rather than as a narrow response to X.

Presumably we can all agree with him on that. But I'm just not sure I've seen people do this. Rob, I guess you have?

Yeah, Matthews really should have replaced "CS" with "math" or "math and philosophy".

That would be more accurate, more consistent with AI safety researchers' self-conception, and less susceptible to some of these counterarguments (especially Julia's point that in CS, the money and other external validation are much more in for-profits than in AI safety).

This post seems to be about a mix of giving that you think isn't the most effective (public radio) and giving that you think is (plausibly) the most effective but isn't widely acknowledged to be effective within EA.

0
SydMartin
9y
Yes, that is an accurate summation. I don't think many of these causes are the 'most' effective, but I believe them to be potentially effective, though they lack measurements. We don't talk very often about other potentially effective forms of donating outside of core EA charities. I think there is benefit in discussing/exploring exceptions or other donation strategies people may have.

This is what we need: intelligent criticism of EA orthodoxy from the outside

Does Superintelligence really rise to the level of "EA orthodoxy"?

This might just be a nitpick, but it does really seem like we would want to avoid implying something too strong about EA orthodoxy.

0
Giles
9y
You're absolutely right. I've changed that bit in the final draft.

I wanted to make a top-level post for it a few days ago but I need 5 more upvotes before I can create those. So I took the chance to share it here when I saw this "Open Thread".

0
Giles
9y
My post is here.
0
RyanCarey
9y
I've added you as a contributor :)

This piece from an AI researcher at NYU criticizing Nick Bostrom's Superintelligence seems like it's worth a look (and hasn't been posted here yet) for folks interested in the subject.

0
Giles
9y
I'll bite. It may take a new top-level post though.

I'd accept there's some tradeoff here, but I'd hope it's possible to defend your reasoning while being sufficiently supportive.

This makes me want to distinguish among different kinds of privilege, as in this post:

Dominance is privilege that is harmful to other people and that no one should have; Support is privilege that everyone should have, and is not on its own harmful to anyone else.

For instance: A habit of attempting to dominate conversations is internalized dominance, and actually being allowed to do so is external dominance; speaking up for oneself is a sign of internalized support, while actually being listened to is external support.

I think Jeff is (mostly?) talking abou... (read more)