All of mhpage's Comments + Replies

Tara left CEA to co-found Alameda with Sam. As is discussed elsewhere, she and many others parted ways with Sam in early 2018. I'll leave it to them to share more if/when they want to, but I think it's fair to say they left at least in part due to concerns about Sam's business ethics. She's had nothing to do with Sam since early 2018. It would be deeply ironic if, given what actually happened, Sam's actions were used to tarnish Tara.

[Disclosure: Tara is my wife]

4
Stuart Buck
1y
Strong agree, but in that case, it seems very unlikely that Will was unaware of these serious "concerns about Sam's business ethics" back in 2018, and it seems all the more incumbent on him to offer an explanation as to why he kept such a close affiliation with SBF thereafter. 

Related (and perhaps of interest to EAs looking for rhetorical hooks): there are a bunch of constitutions (not the US) that recognize the rights of future generations. I believe they're primarily modeled after South Africa's constitution (see http://www.fdsd.org/ideas/the-south-african-constitution-gives-people-the-right-to-sustainable-development/ & https://en.wikipedia.org/wiki/Constitution_of_South_Africa).

0
kokotajlod
6y
OK, thanks! This is very helpful, I'm reading through the article you cite now.

I haven't read about this case, but some context: This has been an issue in environmental cases for a while. It can manifest in different ways, including "standing," i.e., who has the ability to bring lawsuits, and what types of injuries are actionable. If you google some combination of "environmental law" & standing & future generations you'll find references to this literature, e.g.: https://scholarship.law.uc.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=1272&context=fac_pubs

Last I c... (read more)

Agree on PR stunt -- as long as one party has standing in this kind of litigation, it doesn't generally matter whether the others do.


This comment is not directly related to your post: I don't think the long-run future should be viewed as a cause area. It's simply where most sentient beings live (or might live), and therefore it's a potential treasure trove of cause areas (or problems) that should be mined. Misaligned AI leading to an existential catastrophe is an example of a problem that impacts the long-run future, but there are so, so many more. Pandemic risk is a distinct problem. Indeed, there are so many more problems even if you're just thinking about the possible impacts of AI.

2
Evan_Gaensbauer
6y
I agree with Jacy. Another point I'd add is that effective altruism is a young movement, focused on making updates and changing its goals as new and better info can be integrated into our thinking. This leads various causes, interventions and research projects in the movement to undergo changes which make them harder to describe.

For example, for a long time in EA, "existential risk reduction" was associated primarily with AI safety. In the last few years ideas from Brian Tomasik have materialized in the Foundational Research Institute and their focus on "s-risks" (risks of astronomical suffering). At the same time, organizations like Allfed are focused on mitigating existential risks which could realistically happen in the medium-term future, i.e., the next few decades, but the interventions themselves aren't as focused on the far future, e.g., at least the next few centuries. However, x-risk and s-risk reduction dominate in EA through AI safety research as the favoured intervention, with a focus motivated by astronomical stakes. Lumping that all together could be called a "far future" focus. Meanwhile, 80,000 Hours advocates the term "long-run future" for a focus on risks extending from the present to the far future which depend on policy regarding all existential risks, including s-risks.

I think finding accurate terminology for the whole movement to use is a constantly moving target in effective altruism. Obviously using common language optimally would be helpful, but debating and then coordinating usage of common terminology also seems like it'd be a lot of effort. As long as everyone is roughly aware of what each other is talking about, I'm unsure how much of a problem this is. It seems professional publications out of EA organizations, as longer reports which can afford the space to define terms, should do so. The EA Forum is still a blog, so it's regarded as lower-stakes; I think it makes sense to be tol
1
CalebW
6y
In the same vein as this comment and its replies: I'm disposed to framing the three as expansions of the "moral circle". See, for example: https://www.effectivealtruism.org/articles/three-heuristics-for-finding-cause-x/
Jacy
6y
17

I'd go farther here and say all three (global poverty, animal rights, and far future) are best thought of as target populations rather than cause areas. Moreover, the space not covered by these three is basically just wealthy modern humans, which seems to be much less of a treasure trove than the other three because WMHs have the most resources, far more than the other three populations. (Potentially there's also medium-term future beings as a distinct population, depending on where we draw the lines.)

I think EA would probably be discovering more things if... (read more)

Variant on this idea: I'd encourage a high status person and a low status person, both of whom regularly post on the EA Forum, to trade accounts for a period of time and see how that impacts their likes/dislikes.

Variant on that idea: No one should actually do this, but several people should talk about it, thereby making everyone paranoid about whether they're a part of a social experiment (and of course the response of the paranoid person would be to actually vote based on the content of the article).

2
kbog
6y
The problem is that the participants would not be blinded, so they would post differently. People act to play the role that society gives them.

I strongly agree. Put another way, I suspect we, as a community, are bad at assessing talent. If true, that manifests as both a diversity problem and a suboptimal distribution of talent, but the latter might not be as visible to us.

My guess re the mechanism: Because we don't have formal credentials that reflect relevant ability, we rely heavily on reputation and intuition. Both sources of evidence allow lots of biases to creep in.

My advice would be:

  1. When assessing someone's talent, focus on the content of what they're saying/writing, not the general fe

... (read more)
6
David_Moss
6y
How do we know that we are not bad at assessing talent in the opposite direction?

Thanks for doing these analyses. I find them very interesting.

Two relatively minor points, which I'm making here only because they refer to something I've seen a number of times, and I worry it reflects a more-fundamental misunderstanding within the EA community:

  1. I don't think AI is a "cause area."
  2. I don't think there will be a non-AI far future.

Re the first point, people use "cause area" differently, but I don't think AI -- in its entirety -- fits any of the usages. The alignment/control problem does: it's a problem we can make pr... (read more)

Max's point can be generalized to mean that the "talent" vs. "funding" constraint framing misses the real bottleneck, which is institutions that can effectively put more money and talent to work. We of course need good people to run those institutions, but if you gave me a room full of good people, I couldn't just put them to work.

and I wonder how the next generation of highly informed, engaged critics (alluded to above) is supposed to develop if all substantive conversations are happening offline.

This is my concern (which is not to say it's Open Phil's responsibility to solve it).

2
Milan_Griffes
3y
Where do you feel that the responsibility for solving it lies?
6
Peter Wildeford
7y
Agreed that this is a grave concern that worries me a lot.

Hey Josh,

As a preliminary matter, I assume you read the fundraising document linked in this post, but for those reading this comment who haven’t, I think it’s a good indication of the level of transparency and self-evaluation we intend to have going forward. I also think it addresses some of the concerns you raise.

I agree with much of what you say, but as you note, I think we’ve already taken steps toward correcting many of these problems. Regarding metrics on the effective altruism community, you are correct that we need to do more here, and we intend t... (read more)

1
Evan_Gaensbauer
7y
Would CEA be open to taking extra funding to specifically cover the cost of hiring someone new whose role would be to collect the data and generate the content in question?

This document is effectively CEA's year-end review and plans for next year (which I would expect to be relevant to people who visit this forum). We could literally delete a few sentences, and it would cease to be a fundraising document at all.

Fixed. At least with respect to adding and referencing the Hurford post (more might also be needed). Please keep such suggestions forthcoming.

As you explain, the key tradeoff is organizational stability vs. donor flexibility to chase high-impact opportunities. There are a couple of different ways to strike the right balance. For example, organizations can try to secure long-term commitments sufficient to cover a set percentage of their projected budget but no more, e.g., 100% one year out; 50% two years out; 25% three years out [disclaimer: these numbers are purely illustrative].

Another possibility is for donors to commit to donating a certain amount in the future but not to where. For example, imagine ... (read more)

1
Owen Cotton-Barratt
8y
This is an empirical question, but at the moment my intuition goes the other way: that the fraction of the benefits of committing coming from commitments to specific organisations is larger than the corresponding fraction of costs. I have two main reasons for thinking this:

The first reason is that commitments from existing donors won't shed that much light on the total amount of "EA money" that will be available: in the first place because personal financial uncertainty means they shouldn't make firm commitments about all of their giving; in the second place because the amount of EA money that is available will depend in significant part on the rate of influx of new donors, and a lot of the uncertainty will relate to that.

The second reason is that I think organisational decision-making is particularly helped by pushing uncertainty towards zero. The effective difference between a 59% and a 79% chance of getting enough funding may be rather smaller than the difference between a 79% and 99% chance. Organisations may be reluctant to take on new staff if that gives a realistic chance of having to let existing staff go. So the benefits of having organisational certainty rather than just sector-certainty are large. (If we believe this is false, we should perhaps think that orgs should stop holding significant reserves.)

I'm no expert in this; I would be interested to hear thoughts from people more directly involved in financial planning for EA orgs.

I'm looking into this on behalf of CEA/GWWC. Anyone else working on something similar should definitely message me (michael.page@centreforeffectivealtruism.org).

7
Evan_Gaensbauer
8y
.impact is already looking into this. Tom Ash and I will talk about it later this week. I'll fill you in on developments going forward.

If the reason we want to track impact is to guide/assess behavior, then I think counting foreseeable/intended counterfactual impact is the right approach. I'm not bothered by the fact that we can't add up everyone's impact. Is there any reason that would be important to do?

In the off-chance it's helpful, here's some legal jargon that deals with this issue: If a result would not have occurred without Person X's action, then Person X is the "but for" cause of the result. That is so even if the result also would not have occurred without Person Y's... (read more)

1
Linch
8y
I can't find the exact term, but my casual understanding of game theory/mechanism design points to @mhpage's points being the right approach in economics too, not just law.

The downvoting throughout this thread looks funny. Absent comments, I'd view it as a weak signal.

Agreed. Someone earning to give doesn't meet the literal characterization of "full time" EA.

How about fully aligned and partially aligned (and any other modifier to "aligned" that might be descriptive)?

1
Gleb_T
8y
I believe the category of membership in the EA movement should apply only to people who are fully value-aligned with the movement, meaning that they hold “doing the most good for the world” as a significant and notable value. The question of where they are on the spectrum of involvement in the EA movement would correlate to how high they perceive this value in comparison to other values they hold, as demonstrated by their behavior.

In thinking about terminology, it might be useful to distinguish (i) magnitude of impact and (ii) value alignment. There are a lot of wealthy individuals who've had an enormous impact (and should be applauded for it), but who correctly are not described as "EA." And there are individuals who are extremely value aligned with the imaginary prototypical EA (or range of prototypical EAs) but whose impact might be quite small, through no fault of their own. Incidentally, I think those in the latter category are better community leaders than those in the former.

Edit: I'm not suggesting that either group should be termed anything; just that the current terminology seems to elide these groups.

I'll embrace the awkwardness of doing this (and this is more than the past month):

1) I printed and distributed about 1050 EA Handbooks to about a dozen different countries.

2) I believe I am the but-for cause of about five new EAs, one of whom is a professional poker player with a significant social media following who has been donating a percentage of her major tournament wins.

3) I donated $195k this calendar year.

1
michaeljohnston0
8y
How can I live up to that?! I give up! Jk :D Fully embracing the awkward: how can I be more like you?
1
Linch
8y
Wow, that's incredibly fantastic!
2
WilliamKiely
8y
Thank you for embracing the awkwardness; this is inspiring!
8
ClaireZabel
8y
Wow. That's pretty damn impressive.

I'm curious as a descriptive matter whether people have been downvoting due to disagreement or something else. Why, for example, do so many fundraising announcements get downvotes? I'm not certain we need a must-comment policy, but the mere fact that I don't know what a downvote means certainly impacts its signalling value.

2
impala
8y
Speaking solely for myself, I've downvoted fundraising announcements when I felt people were asking for money inappropriately, without a good, straightforward case for why I shouldn't give to AMF instead (to take the example I currently give to). I try not to downvote solely because I disagree with someone.

I see the downvoting trend as a symptom of some potentially problematic community dynamics. I think this warrants a top-level post so we can hash out what the purpose, value, and risks are of downvotes.

3
Gleb_T
8y
Ironic that your comment was downvoted by someone. I think this exemplifies the need for a top-level post. It seems that the point of upvoting and downvoting is not for people to use it anonymously as a popularity contest, but to evaluate the quality of ideas independent of who they came from, and also to signal to others whether they should engage with the post or not. For example, a good policy change would be for people to explain their justifications for downvoting as part of downvoting something.

Thanks, Julia. You make an important point here that I think is often lost in discussion of the "how much is enough" issue. The issue often is framed in terms of a conflict between one's own interests and the world's interests (e.g., ice cream for me or a bednet for someone else). But when viewed in terms of burnout/sustainability, the conflict disappears: allowing oneself to eat ice cream every so often might actually be in the world's best interest. Even a means machine requires oil.

The people who ask me about my shirt generally have never heard of effective altruism, but they are sufficiently interested in what "effective altruism" literally suggests to want more information.

0
Gleb_T
9y
Gotcha, I think I'm specifically talking about those who are not going to be curious about the term as such, but would want to get into the movement if it were described in a clearer way.

I wear the t-shirt from EA Global (San Francisco) all the time. I love the design and actually find it to be a pretty effective way to start a conversation about EA, presumably because only those with interest in the idea ask me about it. I think a more-involved logo might be viewed as more confrontational and therefore less likely to elicit inquiries.

0
Gleb_T
9y
My goal is to have a shirt option that's effective for those who are not yet interested in effective altruism and would be uncomfortable asking about the word itself. Any ideas on appropriate slogans?

I don't get that criticism. I can always donate to help you do direct work. I don't see any way to criticize donating per se other than through non-consequentialist reasoning.

Edit: Unless they're criticizing the ratio of direct work to donations.

1
zdgroff
9y
I think that's usually what it is. But there's usually no explicit argument for it.

I appreciate the feedback. I also shoot down most of my ideas, but I thought this one was worth sharing. I don't want to be in the position of "defending" the viability of the idea, but I will at least attempt to clarify it:

I did not imagine this ultimately catering primarily to the EA community, which is why I didn't think of .impact or impact certificates as alternatives. I imagined a widely used site like Craigslist on which people advertised random skills and needs. I didn't imagine an explicit "EA angle" other than that the goal wa... (read more)

Yes, I meant to include a shout-out to .impact. Consider this a belated one.

This IS quite similar! Thanks. Will look further into it.

Of course. But as I understand it, the hypothesis here is that given (i) the amount of money that will invariably go to sub-optimal charities, and (ii) the likely room for substantial improvements in sub-optimal charities (see DavidNash's comment), one (arguably) might get more bang for one's buck trying to fix sub-optimal charities. I think it's a plausible hypothesis.

I'm doubtful that one can make GiveWell charities substantially more effective. Those charities are already using the EA lens. It's the ones that aren't using the EA lens for which big improvements might be made at low cost.

EDIT: I suppose I'm assuming that's the OP's hypothesis. I could be wrong.

1
SydMartin
9y
Yes, this is indeed my hypothesis; thank you for stating it so plainly. I think you've summed up my initial idea quite well. My assumption is that trying to improve a very effective charity is potentially a lot of work and research, while trying to improve an ineffective but well-funded charity, even a little, could require less intense research and have a very large pay-off. Particularly given that there are very few highly effective charities but LOTS of semi-effective or ineffective ones, there is a larger opportunity. Even if only 10% of non-EA charities agree to improve their programs by 1%, I believe the potential for an overall decrease in suffering is greater.

There is also the added benefit of signalling. Having an organization that is working to improve effectiveness (despite funding problems [see Telofy's comment]) shows organizations that donors and community members really care about measuring and improving outcomes. It plants the idea that effectiveness and an EA framework are valuable and worth considering, even if they don't use the service initially. My thought here is this is another way (possibly a very fast one) to spread EA values through the charity world. Creating a shift in nonprofit culture to value similar things seems very beneficial.

This is true with respect to where a rational, EA-inclined person chooses to donate, but I think you're taking it too far here. Even in the best case scenario, there will be MANY people who donate for non-EA reasons. Many of those people will donate to existing, well-known charities such as the Red Cross. If we can make the Red Cross more effective, I can't see how that would not be a net good.

0
Diego_Caleiro
9y
At the end of the day, the metric will always be the same. If you can make the entire Red Cross more effective, it may be that each unit of your effort was worth it. But if you anticipate more and more donations going to EA-recommended charities, then making them even more effective may be more powerful. See also DavidNash's comment.

I am very intrigued by the potential upside of this idea. As I see it, one can change charity culture by changing consumer demand (generally what GiveWell does), which will eventually lead to a change in product. Alternatively, one can change charity culture by changing the product directly, on the assumption that many consumers care more about the brand than the product.

Would the service be free to the nonprofits? Would it help nonprofits conduct studies to assess their impact?

Anecdata: I have a friend who works at a big-name nonprofit who has been trying to find exactly this service.

1
Evan_Gaensbauer
9y
Ben Todd made this comment here detailing organizations he knows about (sort-of) working in that vein. Try forwarding that list to your friend!

I've been thinking about how to weigh the direct impact of one's career (e.g., donations) against the impact of being a model for others. For example, imagine option A is a conventional, high-paying salaried job, and option B is something less conventional (e.g., a startup) with a higher expected (direct) impact value. It's not obvious to me that option B has a higher expected impact value when one takes into account the potential to be a model for others. In other words, I think there might be a unique kind of value in doing good in a way that others can emulate. I'm curious whether you agree with this, and if so, how one might factor it into the analysis.

1
Benjamin_Todd
9y
I agree that thinking about your advocacy efforts is important, but I'm not sure with this example there's an obvious way to call it one way or the other. It would depend on the extent to which you're actually influencing other people. Also, there are effects in the opposite direction: doing more remarkable, drastic actions makes you stand out more, which gives you greater reach (e.g. giving 5% of your income is good, but giving 50% means you get international press coverage). Doing what you sincerely believe is best can also be powerful. Some more relevant reading:

* http://www.jefftk.com/p/optimizing-looks-weird
* https://meteuphoric.wordpress.com/2015/03/08/the-economy-of-weirdness/

Haha, don't be silly, I stopped eating solid food a long time ago.

[Was just joking about vegetables.]

I didn't derive sufficient immediate pleasure from reading the news. But like eating one's vegetables, I thought it was justified by long-term returns.

(Hoping someone now provides a reason I don't have to eat my vegetables.)

0
jayd
9y
The paleo diet? http://jayquantified.blogspot.com/2012/07/the-paleo-diet-follow-up-v.html (But see http://lesswrong.com/lw/e9o/what_is_the_evidence_in_favor_of_paleo/ )

Indeed, that is what I meant.

I was assuming that MIRI's position is that it presently is the most-effective recipient of funds, but that assumption might not be correct (which would itself be quite interesting).

A modified version of this question: Assuming MIRI's goal is saving the world (and not MIRI), at what funding level would MIRI recommend giving elsewhere, and where would it recommend giving?

1
So8res
9y
I’m not sure how to interpret this question: are you asking how much money I'd like to see dumped on other people? I’d like to see lots of money dumped on lots of other people, and for now I’m going to delegate to the GiveWell, Open Philanthropy Project, and GoodVentures folks to figure out who and how much :-)

Thanks, Ryan, but years of reading the news have left me unable to process such a long, thoughtful piece about how years of reading the news will leave me unable to process long, thoughtful pieces.

1
Denkenberger
9y
My solution is listening to all the TED talks (only about a six-month delay, and much more durable information).

I love it when reason points in a direction I already wanted to go but mistakenly thought it unreasonable. Thanks.

0
jayd
9y
Why did you want to go that direction?

What's the argument for not consuming news? I don't necessarily disagree, but it's not self-evident to me.

2
RyanCarey
9y
I found Avoid News, Towards a Healthy News Diet by Rolf Dobelli quite convincing.
2
John_Maxwell
9y
This is the blog post that convinced me a few years ago.

Here's an EA forum post on the second (Harvard Law) article: http://effective-altruism.com/ea/8f/lawyering_to_give/

Although well-intentioned, I think the Harvard Law article is dangerous. The legal community is potentially pretty low-hanging fruit for EA recruitment: it contains a lot of people who make a lot of money and who generally make misguided but well-intentioned charitable decisions, both regarding how to donate their money and how to use their talents.

Changing the culture of this community will be complicated, however. Early missteps could be ext... (read more)

Once again, I am quite late to the party, but for posterity's sake, I just want to add a few points: First, this is exactly what I do, and it's just not that hard. Second, I was formerly a public interest lawyer (doing impact litigation) and believe the skill set required for that job is very similar to the skill set required for my current job (commercial litigation). Lastly, I am doing what I am doing on the belief that it does the most good -- I've considered the alternatives! If anyone seriously believes I'm mistaken, I'd very much like to hear from them.

I've noticed that what "EA is" seems to vary depending on the audience and, specifically, why it is that the audience is not already on board. For example, if one's objection to EA is that one values local lives over non-local lives, or that effects don't matter (or are trumped by other considerations), then EA is an ethical framework. But many people are on board with the basic ethical precepts but simply don't act in accordance with them. For those people, EA seems to be a support group for rejecting cognitive dissonance.

Thanks, Ryan. That's all very helpful.

(And the MIRI reference was a superintelligent AI joke.)

0
RyanCarey
9y
Haha ohhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh!

I'm thinking more along the line of mentors for the mentors, and I think one solution would be a platform on which to crowdsource ideas for individuals' ten-year strategic plans. In a perfect world, one would be able to donate one's talents (in addition to one's money) to the EA cause, which could then be strategically deployed by an all-seeing EA director. Maybe MIRI could work on that.

0
RyanCarey
9y
MIRI is focussing on mathematical AI safety research at the moment, so they wouldn't currently want to act as a director of EA resources in general!!

I think for people who really have substantial personal non-monetary resources to give away, there are people who are prepared to step into a temporary advice-giving role, which might not even be so materially different from what you're describing. Even with my limited non-monetary resources, I've got quite helpful advice from people like Carl Shulman, Nick Beckstead and Paul Christiano, who I think are somewhat of a collective miscellaneous-problem-EA-question-answerer!!

Mentoring the mentors: the problem with giving advice to senior people is that if you know less about their domain than they do, then your advice might well make them worse off. So in such cases, it's often preferable to bring you together with similar people, so that you can bounce ideas off one another. Or maybe I'm still missing some considerations, but these reservations seem worth taking into account.

Absolutely re personal factors. "Outsource" is an overstatement.

And no, I don't mean decisions like whether to be a vegetarian (which, as I've noted elsewhere, presents a false dichotomy) or whether to floss, which can be generically answered.

I mean a personalized version of what 80,000 hours does for people mid-career. Imagine several people in their mid-30s to -40s--a USAID political appointee; a law firm partner; a data scientist working in the healthcare field--who have decided they are willing to make significant lifestyle changes to bette... (read more)

1
RyanCarey
9y
Ah, mid-career work-related decisions. Yes, it seems important. As mid-career decisions are more tailored, they're harder for 80,000 Hours, who are nonetheless better equipped than most for this task.

Although career direction is important, you can see why it might be done less than directing donations - everyone's money works the same, and so one set of charity-evaluations generalises reasonably well to everyone, assuming they have fairly similar values. Career decisions are harder. Mentors who sympathise with the idea of effective altruism are helpful here, because they know you. Also special interest groups could be useful. So for people in policy, it makes sense for them to be acquainted with other effective altruists in a similar space, even if they're living in a different country.

If someone who had an unusually high-stakes career (say Jaan Tallinn, a cofounder of Skype) wanted to make an altruistic decision about his career, I'm sure he could pull together some of 80,000 Hours and others to do some relevant research for him. Beyond that, how we can get these questions better answered is an open question :)

I love the idea of outsourcing my donation decisions to someone who is much more knowledgeable than I am about how to be most effective. An individual might be preferable to an organization for reasons of flexibility. Is anyone actually doing this -- e.g., accepting others' EtG money?

In fact, I'd outsource all kinds of decisions to the smartest, most well-informed, most value-aligned person I could find. Why on earth would I trust myself to make major life decisions if I'm primarily motivated by altruistic considerations?

0
Jeff Kaufman
7y
This is now a thing: http://effective-altruism.com/ea/174/introducing_the_ea_funds/
1
RyanCarey
9y
Well, even if you're primarily motivated by altruistic considerations, there are likely to be some significant personal factors that you can introspect more easily. But what's related, and clearly beneficial, is getting advice from mentors who you talk to when you have a bigger than usual decision.

My other thought is: what kinds of decisions do you want to outsource? Clever altruistic people have occasionally described why they made various kinds of decisions in their personal lives, and these can be copied, e.g.:

* http://www.gwern.net/DNB%20FAQ
* https://meteuphoric.wordpress.com/2014/11/21/when-should-an-effective-altruist-be-vegetarian/
* http://robertwiblin.com/2012/04/19/should-you-floss-a-cost-benefit-analysis/

The trade-off argument is right as far as it goes, but that might not be as far as we think: the metaphor of the "willpower points" seems problematic. As MichaelDickens and Jess note, many lifestyle changes have initial start-up costs but no ongoing costs. And many things we think will have ongoing costs do not (see, e.g., studies showing more money and more things don't on average make us happier; conversely, less money and fewer things might not make us less happy). An earning-to-give investment banker might use the trade-off logic to explain ... (read more)

5
CarlShulman
9y
"(see, e.g., studies showing more money and more things don't on average make us happier)"

The studies do show that having more money makes people feel happier. See RCTs of cash transfers like GiveDirectly's, and household data within and between countries. You get less happiness per dollar as you have more, but an n% fall or rise in income still has happiness effects in the same ballpark.

I use the recycling analogy when talking to people about this issue. I consider myself to be one-who-recycles, but if I have a bottle in my hand and there's nowhere convenient to recycle it, I'll throw it away. Holding onto that bottle all day because I've decided I'm a categorical recycler seems kind of silly. I treat food the same way.

Regarding your broader point re consistency, my guess is that we way over-emphasize the effect of diet over other relatively cost-less things we can do to make the world a better place -- in large part because there are organ... (read more)

Wonderful essay. Thanks, Jess. A few responses:

(i) It's not clear to me that the vegan-vegetarian distinction makes sense, as I believe, for example, that consuming eggs or milk can be more harmful (in terms of animal suffering) than certain forms of meat consumption.

(ii) Related to (i) (and to Paul_Christiano's point re "other ways to make your life worse to make the world better"), other than for signalling/heuristic reasons, I don't think being categorically vegan/vegetarian is all that important. I believe that reducing animal products in my ... (read more)

-3
Robert_Wiblin
9y
sparkle fingers