Related (and perhaps of interest to EAs looking for rhetorical hooks): there are a bunch of constitutions (not the US) that recognize the rights of future generations. I believe they're primarily modeled after South Africa's constitution (see http://www.fdsd.org/ideas/the-south-african-constitution-gives-people-the-right-to-sustainable-development/ & https://en.wikipedia.org/wiki/Constitution_of_South_Africa).
I haven't read about this case, but some context: This has been an issue in environmental cases for a while. It can manifest in different ways, including "standing," i.e., who has the ability to bring lawsuits, and what types of injuries are actionable. If you google some combination of "environmental law" & standing & future generations you'll find references to this literature, e.g.: https://scholarship.law.uc.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=1272&context=fac_pubs
Last I c...
Agree on PR stunt -- as long as one party has standing in this kind of litigation, it doesn't generally matter whether the others do.
This comment is not directly related to your post: I don't think the long-run future should be viewed as a cause area. It's simply where most sentient beings live (or might live), and therefore it's a potential treasure trove of cause areas (or problems) that should be mined. Misaligned AI leading to an existential catastrophe is an example of a problem that impacts the long-run future, but there are so, so many more. Pandemic risk is a distinct problem. Indeed, there are so many more problems even if you're just thinking about the possible impacts of AI.
I'd go farther here and say all three (global poverty, animal rights, and far future) are best thought of as target populations rather than cause areas. Moreover, the space not covered by these three is basically just wealthy modern humans, which seems to be much less of a treasure trove than the other three because WMHs have the most resources, far more than the other three populations. (Potentially there's also medium-term future beings as a distinct population, depending on where we draw the lines.)
I think EA would probably be discovering more things if...
Variant on this idea: I'd encourage a high status person and a low status person, both of whom regularly post on the EA Forum, to trade accounts for a period of time and see how that impacts their likes/dislikes.
Variant on that idea: No one should actually do this, but several people should talk about it, thereby making everyone paranoid about whether they're a part of a social experiment (and of course the response of the paranoid person would be to actually vote based on the content of the article).
I strongly agree. Put another way, I suspect we, as a community, are bad at assessing talent. If true, that manifests as both a diversity problem and a suboptimal distribution of talent, but the latter might not be as visible to us.
My guess re the mechanism: Because we don't have formal credentials that reflect relevant ability, we rely heavily on reputation and intuition. Both sources of evidence allow lots of biases to creep in.
My advice would be:
When assessing someone's talent, focus on the content of what they're saying/writing, not the general fe
Thanks for doing these analyses. I find them very interesting.
Two relatively minor points, which I'm making here only because they refer to something I've seen a number of times, and I worry it reflects a more-fundamental misunderstanding within the EA community:
Re the first point, people use "cause area" differently, but I don't think AI -- in its entirety -- fits any of the usages. The alignment/control problem does: it's a problem we can make pr...
Max's point can be generalized to mean that the "talent" vs. "funding" constraint framing misses the real bottleneck, which is institutions that can effectively put more money and talent to work. We of course need good people to run those institutions, but if you gave me a room full of good people, I couldn't just put them to work.
and I wonder how the next generation of highly informed, engaged critics (alluded to above) is supposed to develop if all substantive conversations are happening offline.
This is my concern (which is not to say it's Open Phil's responsibility to solve it).
Hey Josh,
As a preliminary matter, I assume you read the fundraising document linked in this post, but for those reading this comment who haven’t, I think it’s a good indication of the level of transparency and self-evaluation we intend to have going forward. I also think it addresses some of the concerns you raise.
I agree with much of what you say, but as you note, I think we’ve already taken steps toward correcting many of these problems. Regarding metrics on the effective altruism community, you are correct that we need to do more here, and we intend t...
This document is effectively CEA's year-end review and plans for next year (which I would expect to be relevant to people who visit this forum). We could literally delete a few sentences, and it would cease to be a fundraising document at all.
Fixed. At least with respect to adding and referencing the Hurford post (more might also be needed). Please keep such suggestions forthcoming.
As you explain, the key tradeoff is organizational stability vs. donor flexibility to chase high-impact opportunities. There are a couple of different ways to strike the right balance. For example, organizations can try to secure long-term commitments sufficient to cover a set percentage of their projected budget but no more, e.g., 100% one year out; 50% two years out; 25% three years out [disclaimer: these numbers are illustrative, not carefully considered].
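To make the idea concrete, here's a minimal sketch of that kind of declining-commitment schedule. The budget figures and the 100%/50%/25% caps are purely hypothetical placeholders, not recommendations:

```python
def committed_funding(projected_budgets, caps):
    """Max funding an org would lock in via long-term commitments per future year.

    projected_budgets: projected budget for each future year (year 1, 2, ...)
    caps: fraction of each year's budget that may come from long-term
          commitments (e.g., 1.0 one year out, 0.5 two years out, ...)
    """
    return [budget * cap for budget, cap in zip(projected_budgets, caps)]

# Hypothetical: $1.0M/$1.1M/$1.2M projected budgets with 100%/50%/25% caps
schedule = committed_funding([1_000_000, 1_100_000, 1_200_000], [1.0, 0.5, 0.25])
print(schedule)  # [1000000.0, 550000.0, 300000.0]
```

The point of the cap is that anything above it must be raised closer to the year in question, preserving donors' flexibility to redirect funds to higher-impact opportunities as they arise.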
Another possibility is for donors to commit to donating a certain amount in the future but not to where. For example, imagine ...
I'm looking into this on behalf of CEA/GWWC. Anyone else working on something similar should definitely message me (michael.page@centreforeffectivealtruism.org).
If the reason we want to track impact is to guide/assess behavior, then I think counting foreseeable/intended counterfactual impact is the right approach. I'm not bothered by the fact that we can't add up everyone's impact. Is there any reason that would be important to do?
In the off-chance it's helpful, here's some legal jargon that deals with this issue: If a result would not have occurred without Person X's action, then Person X is the "but for" cause of the result. That is so even if the result also would not have occurred without Person Y's...
The downvoting throughout this thread looks funny. Absent comments, I'd view it as a weak signal.
Agreed. Someone earning to give doesn't meet the literal characterization of "full time" EA.
How about fully aligned and partially aligned (and any other modifier to "aligned" that might be descriptive)?
In thinking about terminology, it might be useful to distinguish (i) magnitude of impact and (ii) value alignment. There are a lot of wealthy individuals who've had an enormous impact (and should be applauded for it), but who correctly are not described as "EA." And there are individuals who are extremely value aligned with the imaginary prototypical EA (or range of prototypical EAs) but whose impact might be quite small, through no fault of their own. Incidentally, I think those in the latter category are better community leaders than those in the former.
Edit: I'm not suggesting that either group should be termed anything; just that the current terminology seems to elide these groups.
I'll embrace the awkwardness of doing this (and this is more than the past month):
1) I printed and distributed about 1050 EA Handbooks to about a dozen different countries.
2) I believe I am the but-for cause of about five new EAs, one of whom is a professional poker player with a significant social media following who has been donating a percentage of her major tournament wins.
3) I donated $195k this calendar year.
I'm curious as a descriptive matter whether people have been downvoting due to disagreement or something else. Why, for example, do so many fundraising announcements get downvotes? I'm not certain we need a must-comment policy, but the mere fact that I don't know what a downvote means certainly impacts its signalling value.
I see the downvoting trend as a symptom of some potentially problematic community dynamics. I think this warrants a top-level post so we can hash out what the purpose, value, and risks are of downvotes.
Thanks, Julia. You make an important point here that I think is often lost in discussion of the "how much is enough" issue. The issue often is framed in terms of a conflict between one's own interests and the world's interests (e.g., ice cream for me or a bednet for someone else). But when viewed in terms of burnout/sustainability, the conflict disappears: allowing oneself to eat ice cream every so often might actually be in the world's best interest. Even a means machine requires oil.
The people who ask me about my shirt generally have never heard of effective altruism, but they are sufficiently interested in what "effective altruism" literally suggests to want more information.
I wear the t-shirt from EA Global (San Francisco) all the time. I love the design and actually find it to be a pretty effective way to start a conversation about EA, presumably because only those with interest in the idea ask me about it. I think a more-involved logo might be viewed as more confrontational and therefore less likely to elicit inquiries.
I don't get that criticism. I can always donate to help you do direct work. I don't see any way to criticize donating per se other than through non-consequentialist reasoning.
Edit: Unless they're criticizing the ratio of direct work to donations.
I appreciate the feedback. I also shoot down most of my ideas, but I thought this one was worth sharing. I don't want to be in the position of "defending" the viability of the idea, but I will at least attempt to clarify it:
I did not imagine this ultimately catering primarily to the EA community, which is why I didn't think of .impact or impact certificates as alternatives. I imagined a widely used site like Craigslist on which people advertised random skills and needs. I didn't imagine an explicit "EA angle" other than that the goal wa...
Of course. But as I understand it, the hypothesis here is that given (i) the amount of money that will invariably go to sub-optimal charities and (ii) the likely room for substantial improvements in sub-optimal charities (see DavidNash's comment), one arguably might get more bang for the buck trying to fix sub-optimal charities. I think it's a plausible hypothesis.
I'm doubtful that one can make GiveWell charities substantially more effective. Those charities are already using the EA lens. It's the ones that aren't using the EA lens for which big improvements might be made at low cost.
EDIT: I suppose I'm assuming that's the OP's hypothesis. I could be wrong.
This is true with respect to where a rational, EA-inclined person chooses to donate, but I think you're taking it too far here. Even in the best case scenario, there will be MANY people who donate for non-EA reasons. Many of those people will donate to existing, well-known charities such as the Red Cross. If we can make the Red Cross more effective, I can't see how that would not be a net good.
I am very intrigued by the potential upside of this idea. As I see it, one can change charity culture by changing consumer demand (generally what GiveWell does), which will eventually lead to a change in product. Alternatively, one can change charity culture by changing the product directly, on the assumption that many consumers care more about the brand than the product.
Would the service be free to the nonprofits? Would it help nonprofits conduct studies to assess their impact?
Anecdata: I have a friend at a big-name nonprofit who has been trying to find exactly this service.
I've been thinking about how to weigh the direct impact of one's career (e.g., donations) against the impact of being a model for others. For example, imagine option A is a conventional, high-paying salaried job, and option B is something less conventional (e.g., a startup) with a higher expected (direct) impact value. It's not obvious to me that option B has a higher expected impact value when one takes into account the potential to be a model for others. In other words, I think there might be a unique kind of value in doing good in a way that others can emulate. I'm curious whether you agree with this, and if so, how one might factor it into the analysis.
Haha, don't be silly, I stopped eating solid food a long time ago.
[Was just joking about vegetables.]
I didn't derive sufficient immediate pleasure from reading the news. But like eating one's vegetables, I thought it was justified by long-term returns.
(Hoping someone now provides a reason I don't have to eat my vegetables.)
Indeed, that is what I meant.
I was assuming that MIRI's position is that it presently is the most-effective recipient of funds, but that assumption might not be correct (which would itself be quite interesting).
A modified version of this question: Assuming MIRI's goal is saving the world (and not MIRI), at what funding level would MIRI recommend giving elsewhere, and where would it recommend giving?
Thanks, Ryan, but years of reading the news have left me unable to process such a long, thoughtful piece about how years of reading the news will leave me unable to process long, thoughtful pieces.
I love it when reason points in a direction I already wanted to go but mistakenly thought it unreasonable. Thanks.
What's the argument for not consuming news? I don't necessarily disagree, but it's not self-evident to me.
Here's an EA forum post on the second (Harvard Law) article: http://effective-altruism.com/ea/8f/lawyering_to_give/
Although well-intentioned, I think the Harvard Law article is dangerous. The legal community is potentially pretty low-hanging fruit for EA recruitment: it contains a lot of people who make a lot of money and who generally make misguided but well-intentioned charitable decisions, both regarding how to donate their money and how to use their talents.
Changing the culture of this community will be complicated, however. Early missteps could be ext...
Once again, I am quite late to the party, but for posterity's sake, just want to add a few points: First, this is exactly what I do, and it's just not that hard. Second, I was formerly a public interest lawyer (doing impact litigation) and believe the skill set required for that job is very similar to the skill set required for my current job (commercial litigation). Lastly, I am doing what I am doing on the belief that it does the most good -- I've considered the alternatives! If anyone seriously believes I'm mistaken, I'd very much like to hear from them.
I've noticed that what "EA is" seems to vary depending on the audience and, specifically, why it is that the audience is not already on board. For example, if one's objection to EA is that one values local lives over non-local lives, or that effects don't matter (or are trumped by other considerations), then EA is an ethical framework. But many people are on board with the basic ethical precepts but simply don't act in accordance with them. For those people, EA seems to be a support group for rejecting cognitive dissonance.
I'm thinking more along the lines of mentors for the mentors, and I think one solution would be a platform on which to crowdsource ideas for individuals' ten-year strategic plans. In a perfect world, one would be able to donate one's talents (in addition to one's money) to the EA cause, which could then be strategically deployed by an all-seeing EA director. Maybe MIRI could work on that.
Absolutely re personal factors. "Outsource" is an overstatement.
And no, I don't mean decisions like whether to be a vegetarian (which, as I've noted elsewhere, presents a false dichotomy) or whether to floss, which can be generically answered.
I mean a personalized version of what 80,000 hours does for people mid-career. Imagine several people in their mid-30s to -40s--a USAID political appointee; a law firm partner; a data scientist working in the healthcare field--who have decided they are willing to make significant lifestyle changes to bette...
I love the idea of outsourcing my donation decisions to someone who is much more knowledgeable than I am about how to be most effective. An individual might be preferable to an organization for reasons of flexibility. Is anyone actually doing this -- e.g., accepting others' EtG money?
In fact, I'd outsource all kinds of decisions to the smartest, most well-informed, most value-aligned person I could find. Why on earth would I trust myself to make major life decisions if I'm primarily motivated by altruistic considerations?
The trade-off argument is right as far as it goes, but that might not be as far as we think: the metaphor of the "willpower points" seems problematic. As MichaelDickens and Jess note, many lifestyle changes have initial start-up costs but no ongoing costs. And many things we think will have ongoing costs do not (see, e.g., studies showing more money and more things don't on average make us happier; conversely, less money and fewer things might not make us less happy). An earning-to-give investment banker might use the trade-off logic to explain ...
I use the recycling analogy when talking to people about this issue. I consider myself to be one-who-recycles, but if I have a bottle in my hand and there's nowhere convenient to recycle it, I'll throw it away. Holding onto that bottle all day because I've decided I'm a categorical recycler seems kind of silly. I treat food the same way.
Regarding your broader point re consistency, my guess is that we way over-emphasize the effect of diet over other relatively cost-less things we can do to make the world a better place -- in large part because there are organ...
Wonderful essay. Thanks, Jess. A few responses:
(i) It's not clear to me that the vegan-vegetarian distinction makes sense, as I believe, for example, that consuming eggs or milk can be more harmful (in terms of animal suffering) than certain forms of meat consumption.
(ii) Related to (i) (and to Paul_Christiano's point re "other ways to make your life worse to make the world better"), other than for signalling/heuristic reasons, I don't think being categorically vegan/vegetarian is all that important. I believe that reducing animal products in my ...
Tara left CEA to co-found Alameda with Sam. As is discussed elsewhere, she and many others split ways with Sam in early 2018. I'll leave it to them to share more if/when they want to, but I think it's fair to say they left at least in part due to concerns about Sam's business ethics. She's had nothing to do with Sam since early 2018. It would be deeply ironic if, given what actually happened, Sam's actions are used to tarnish Tara.
[Disclosure: Tara is my wife]