All of RandomEA's Comments + Replies

I was actually assuming a welfarist approach too. 

But even under a welfarist approach, it's not obvious how to compare campaigning for criminal justice reform in the US to bednet distribution in developing countries.

Perhaps it's the case that this is not an issue if one accepts longtermism. But that would just mean that the hidden premise is actually longtermism.

2
Benjamin_Todd
3y
Hmm, in that case, I'd probably see it as a denial of identifiability. I do think something along these lines is one of the best counterarguments to EA. I see it as the first step in the cluelessness debate.

The reason many volunteering schemes persist is that volunteers are more likely to donate in the future. For instance, when FORGE cut their volunteering scheme to be more effective, they inadvertently triggered a big drop in donations.

This seems somewhat misleading to me. If you click through to the FORGE blog post, it states that "volunteers were each required to raise a minimum of $5,000." 

I don't think it's reasonable to extrapolate from 'an organization that required each volunteer to raise a substantial sum saw a large decrease in revenue after d... (read more)

The criticisms of volunteering in this article seem directed at traditional volunteering: structured opportunities that produce direct impact. Under this definition of volunteering, the criticisms seem reasonable. 

But a person might be interested in a broader sense of volunteering: unpaid, non-job related ways of using their free time to have an impact. Under this definition, there are many worthwhile volunteering opportunities. For example, a person could do one on one video calls with college EAs interested in their field, provide feedback on draft ... (read more)

Great work! I think it might be a good idea for you to state on the page that the numbers are per kcal of energy. I clicked the link before reading your post and initially assumed it was the impact of eliminating the category from a standard diet. For what it's worth, I think it could be useful to have "impact of category in a standard diet" as an option on the page.

1
VilleSokk
3y
Thank you for the feedback! I've taken note of the per kcal issue and will try to fix it soon. It would definitely be useful to also account for current levels of consumption. The impact of avoiding broilers would probably increase.

I agree that one word is better but I think this factor is less important than other factors like clarity. Because of this, I think "Helping others" would be better than "Helpfulness."

I also think the placement of "Cause prioritization" and "Collaboration" should be switched in the primary proposal so that "Cause prioritization" is next to "Effectiveness."

And in the alternative proposal, I think "Cause prioritization" should be replaced with "Commitment to others."

I strongly prefer "reasoning carefully" to "rationality" to avoid EA being too closely associ... (read more)

I really like the idea of an acronym! Thank you for taking the time to create one and write a post about it. If I may, I'd like to add another option to the table:

Collaboration

Altruism

Reasoning carefully

Impartiality

Norms (integrity, inclusion etc.)

Greatest impact

I like the word "caring" because it pushes back against the idea that a highly deliberative approach to altruism is uncaring. 

1
BrianTan
3y
I like the word caring too, so this is an interesting suggestion! A couple of comments: 1. Would "rationality" be better than "reasoning carefully"? 2. Terms that are two words tend to be harder to remember in acronyms, which is why I'd go with "rationality" rather than "reasoning carefully". "Greatest impact" also isn't ideal because it's two words and isn't a phrase commonly used in the community.

Michael Bitton has used this argument as a reductio against longtermism (search "Here's an argument").

It seems it could work for the medium term but would not work for the very long term because (i) if the fertility rate is above replacement, the initial additional people stop having a population effect after humanity reaches carrying capacity, and (ii) if the fertility rate is below replacement, the number of additional people in each generation attributable to the initial additional people would eventually reach zero.

Two suggestions for the list of "broad categories of longer-term roles that can offer a lot of leverage" under "Aim at top problems":

  • Under "Direct work", add foundations as one type of organization and grantmaking as one type of skill (or make this a separate category)
  • Under "Government and policy", add international organization to the list of employers to consider

Similar changes could be made to the "Five key categories" in the article "List of high-impact careers". 

2
Benjamin_Todd
3y
Makes sense! We've neglected those categories in the last few years - would be great to make the advice there a bunch more specific at some point.

Thanks Luke. Do you know why EA Funds excludes ACE Movement Grants? There is substantial overlap between the recipients of ACE Movement Grants and the recipients of EA Animal Welfare Fund grants, which is why I wanted clarification that exclusion is not meant to imply anything negative about ACE Movement Grants.

Feature request: Create an option for content in the "Recent Discussion" section to be sorted based on the "Magic (New & Upvoted)" formula used for "Frontpage Posts" instead of based solely on recency. This would allow people without time to go through every single piece of new content to still be able to find and engage with important new comments. 

For animal suffering:

  1. we can't say that farm animals live lives that are not worth living;
  2. advocating higher welfare standards legitimizes factory farming;
  3. corporations are unlikely to adhere to their higher welfare pledges;
  4. commercial fishing is okay since fish usually die painfully anyways;
  5. bad to transition from animal farming since jobs would be lost;
  6. the world will eventually transition to cultivated meat anyways;
  7. should end human suffering before addressing animal suffering;
  8. advocates ignore how food system affects communities of color;*
  9. I can't donate to far
... (read more)
1
Luke Freeman
3y
Hi RandomEA, That section of the website discusses why a fund can be a good option and then lists the funds that are available on EA Funds (the four EA Funds plus the Regranting organisations listed on EA Funds, minus CEA's Community Building Grants as we felt that was less targeted towards a general public audience that would typically visit that page). Hope that helps to clarify. Best, Luke

Global Poverty and Animal Suffering Donation Opportunities (2020)

Comprehensively Evaluated Charities

GiveWell Maximum Impact Fund (allocated between GiveWell Top and Standout Charities at GiveWell's discretion; list of GiveWell Top Charities for 2020 below)

... (read more)

How about just Good Careers?

The two most widely known EA organizations, GiveWell and 80,000 Hours, both have short and simple names.

It seems to me there's a fourth key premise:

0. Comparability: It is possible to make meaningful comparisons between very different kinds of contributions to the common good.

2
Benjamin_Todd
3y
Hey, I agree something like that might be worth adding. The way I was trying to handle it is to define 'common good' in such a way that different contributions are comparable (e.g. if common good = welfare). However, it's possible I should add something like "there don't exist other values that typically outweigh differences in the common good thus defined". For instance, you might think that justice is incredibly intrinsically important, such that what you should do is mainly determined by which action is most just, even if there are also large differences in terms of the common good.

It looks like I'm too late. But here's something I've been wanting to ask.

In your paper "The Definition of Effective Altruism," you distinguish effective altruism from utilitarianism on various grounds, including that:

  • EA does not claim that a person must sacrifice their personal interests (e.g. having children) when doing so would bring about greater good; and
  • EA does not claim that a person must violate non-consequentialist constraints in the rare situations when doing so might bring about greater good.

For me, this points to a broader principle... (read more)

I've completed my draft (now at 47,620 words)! 

I've shared it via the EA Forum share feature with a number of GPI, FHI, and CLR people who have EA Forum accounts.

I'm sharing it in stages to limit the number of people who have to point out the same issue to me.

Thanks Howie.

Something else I hope you'll update is the claim in that section that GiveWell estimates that it costs the Against Malaria Foundation $7,500 to save a life.

The archived version of the GiveWell page you cite does not support that claim; it states the cost per life saved of AMF is $5,500. (It looks like earlier archives of that same page do state $7,500 (e.g. here), so that number may have been current while the piece was being drafted.)

Additionally, the $5,500 number, which is based on GiveWell's Aug. 2017 estimates (click here and ... (read more)

Hi Arden and the 80,000 Hours team,

Thank you for the excellent content that you produce for the EA community, especially the podcasts.

There is one issue that I want to raise. I gave serious thought to raising this via your survey, but I think it is better raised publicly.

In your article "The case for reducing extinction risk" (which is linked to in your "Key ideas" article), you write:

Here are some very rough and simplified figures to show how this could be possible. It seems plausible to us that $100 billion spent on reducing extinctio
... (read more)
1
Howie_Lempel
4y
Hi RandomEA, Thanks for pointing this out (and for the support). We only update the 'Last updated' field for major updates not small ones. I think we'll rename it 'Last major update' to make it clearer. The edit you noticed wasn't intended to indicate that we've changed our view on the effectiveness of existential risk reduction work. That paragraph was only meant to demonstrate how it’s possible that x-risk reduction could be competitive with top charities from a present-lives-saved perspective. The author decided we could make this point better by using illustrative figures that are more conservative than 80k’s actual rough guess and made the edit. We’ve tinkered with the wording to make it clearer that they are not actual cost-effectiveness estimates. Also, note that in both cases the paragraph was about hypothetical effectiveness if you only cared about present lives, which is very different from our actual estimate of cost effectiveness. Hope this helps clear things up.

While I have made substantial progress on the draft, it is still not ready to be circulated for feedback.

I have shared the draft with Aaron Gertler to show that it is a genuine work in progress.

1
RandomEA
4y
I've completed my draft (now at 47,620 words)!  I've shared it via the EA Forum share feature with a number of GPI, FHI, and CLR people who have EA Forum accounts. I'm sharing it in stages to limit the number of people who have to point out the same issue to me.

Thanks Ben. There is actually at least one argument in the draft for each alternative you named. To be honest, I don't think you can get a good sense of my 26,000 word draft from my 570 word comment from two years ago. I'll send you my draft when I'm done, but until then, I don't think it's productive for us to go back and forth like this.

Thanks Pablo and Ben. I already have tags below each argument for what I think it is arguing against. I do not plan on doing two separate posts as there are some arguments that are against longtermism and against the longtermist case for working to reduce existential risk. Each argument and its response are presented comprehensively, so the amount of space dedicated to each is based mostly on the amount of existing literature. And as noted in my comment above, I am excerpting responses to the arguments presented.

FWIW I'd still favour two posts (or if you were only going to do one, focusing on longtermism). I took a quick look at the original list, and I think they divide up pretty well, so you wouldn't end up with many reasons that should appear on both lists. I also think it would be fine to have some arguments appear on both lists.

In general, I think conflating the case for existential risk with the case for longtermism has caused a lot of confusion, and it's really worth pushing against.

For instance, many arguments that undermine existential risk actually imply we

... (read more)

As an update, I am working on a full post that will excerpt 20 arguments against working to improve the long-term future and/or working to reduce existential risk as well as responses to those arguments. The post itself is currently at 26,000 words and there are six planned comments (one of which will add 10 additional arguments) that together are currently at 11,000 words. There have been various delays in my writing process but I now think that is good because there have been several new and important arguments that have been developed in the past year. My goal is to begin circulating the draft for feedback within three months.

4
Aaron Gertler
4y
Any updates on how this post is going? I'm really curious to see a draft!
1
AlasdairGives
4y
That sounds fantastic. I'd love to read the draft once it is circulated for feedback.

Judging from the comment, I expect the post to be a very valuable summary of existing arguments against longtermism, and am looking forward to reading it. One request: as Jesse Clifton notes, some of the arguments you list apply only to x-risk (a narrower focus than longtermism), and some apply only to AI risk (a narrower focus than x-risk). It would be great if your post could highlight the scope of each argument.

For those who are curious,

  • in April 2015, GiveWell had 18 full-time staff, while
  • 80,000 Hours currently has a CEO, a president, 11 core team members, and two freelancers, and works with four CEA staff.
3
Benjamin_Todd
4y
We have 12.7 FTE of full-time staff, and 1.4 FTE of freelancers. FTE = full-time-equivalent.

Hi Ben,

Thank you to you and the 80,000 Hours team for the excellent content. One issue that I've noticed is that a relatively large number of pages state that they are out of date (including several important ones). This makes me wonder why it is that 80,000 Hours does not have substantially more employees. I'm aware that there are issues with hiring too quickly, but GiveWell was able to expand from 18 full-time staff (8 in research roles) in April 2017 to 37 staff today (13 in research roles and 5 in content roles). Is the reason that 80,000 Hou... (read more)

2
BrownHairedEevee
4y
I second this. I imagine that updating the AI problem profile must be a top priority for 80K because AI safety is a popular topic in the EA community, and it's important to have a central source for the community's current understanding of the problem.

Hi there, I think how quickly to hire is a really complex question. It would be best to read the notes on how quickly we think we should expand each of our programmes in our annual review as well as some of the comments in the summary.

Just quickly on the comparison with GiveWell, I think we're on a fairly similar trajectory to them, except that GiveWell started 4-5 years earlier, so it might be more accurate to compare us to GiveWell in 2015. We are planning to reach ~25 staff, though it will take several more years. Another difference is that we allocate

... (read more)

It seems to me that there are two separate frameworks:

1) the informal Importance, Neglectedness, Tractability framework best suited to ruling out causes (i.e. this cause isn't among the highest priority because it's not [insert one or more of the three]); and

2) the formal 80,000 Hours Scale, Crowdedness, Solvability framework best used for quantitative comparison (by scoring causes on each of the three factors and then comparing the total).

Treating the second one as merely a formalization of the first one can be unhelpful when thinking through th... (read more)

In his blog post "Why Might the Future Be Good," Paul Christiano writes:

What natural selection selects for is patience. In a thousand years, given efficient natural selection, the most influential people will be those who today cared what happens in a thousand years. Preferences about what happens to me (at least for a narrow conception of personal identity) will eventually die off, dominated by preferences about what society looks like on the longest timescales.

(Please read all of "How Much Altruism Do We Expect?" for the full context.)

Thanks Lucy! Readers should note that Elie's answer is likely partly addressed to Lucy's question.

What are your thoughts on the argument that the track record of robustly good actions is much better than that of actions contingent on high uncertainty arguments? (See here and here at 34:38 for pushback.)

Should non-suffering focused altruists cooperate with suffering-focused altruists by giving more weight to suffering than they otherwise would given their worldview (or given their worldview adjusted for moral uncertainty)?

Has your thinking about donor coordination evolved since 2016, and if so, how? (My main motivation for asking is that this issue is the focus of a chapter in a recent book on philosophical issues in effective altruism though the chapter appears to be premised on this blog post, which has an update clarifying that it has not represented GiveWell's approach since 2016.)

How confident are you that the solution to infinite ethics is not discounting? How confident are you that the solution to the possibility of an infinitely positive/infinitely negative world automatically taking priority is not capping the amount of value we care about at a level low enough to undermine longtermism? If you're pretty confident about both of these, do you think additional research on infinities is relatively low priority?

What do you think is the strongest argument against working to improve the long-term future? What do you think is the strongest argument against working to reduce existential risk?

(This comment assumes GiveWell would broadly agree with a characterization of its worldview as consequentialist.) Do you agree with the view that, given moral uncertainty, consequentialists should give some weight to non-consequentialist values? If so, do you think GiveWell should give explicit weight to the intrinsic value of gender equality apart from its instrumental value? And if yes, do you think that, in considering the moral views of the communities that GiveWell operates in, it would make sense to give substantially more weight to the views of women t... (read more)

3
lucy.ea8
4y
Answer from Elie Hassenfeld (source)

Q) On gender inequality, reproductive health, etc., GiveWell hasn't done much work on this. Do you see gender equality as having intrinsic value? What are your thoughts on women's empowerment?

A)
  • We're broadly consequentialist in the giving that we do - focused on the direct impact on the world
  • We take that utilitarian perspective rather than the philosophical value of justice or helping the least
  • Focusing on equality per se has not been a focus for that reason
  • We could treat this differently by seeing gender inequality as an intrinsic value, rather than just an instrumental value; within the broader framework, we could treat it as an intrinsic value
  • It's been a major challenge to weigh the different good outcomes that charities achieve: some charities improve health, some improve well-being
  • We try to solve this by using moral weights to compare the good achieved by different charitable outcomes
  • These are questions we don't have the right answers to, and our approach to answering them has evolved over time: we used to take the median of what staff believe, then worked with IDInsight to hear from beneficiaries about what they value
  • We now have a part of our team assigned to these questions, to decide which outcomes should have intrinsic weight
  • On reproductive health specifically, we've looked into that, and we couldn't find charities that are competitive with our top charities; that's still in the scope of where we'll look

There are many ways that technological development and economic growth could potentially affect the long-term future, including:

  • Hastening the development of technologies that create existential risk (see here)
  • Hastening the development of technologies that mitigate existential risk (see here)
  • Broadly empowering humanity (see here)
  • Improving human values (see here and here)
  • Reducing the chance of international armed conflict (see here)
  • Improving international cooperation (see the climate change mitigation debate)
  • Shifting the growth curve forward (see here)
  • Hasten
... (read more)

Do you think that "a panel of superforecasters, after being exposed to all the arguments [about existential risk], would be closer to [MacAskill's] view [about the level of risk this century] than to the median FHI view"? If so, should we defer to such a panel out of epistemic modesty?

8
Davidmanheim
4y
I personally, writing as a superforecaster, think that this isn't particularly useful. Superforecasters tend to be really good at evaluating and updating based on concrete evidence, but I'm far less sure about whether their ability to evaluate arguments is any better than that of a similarly educated / intelligent group. I do think that FHI is a weird test case, however, because it is selecting on the outcome variable - people who think existential risks are urgent are actively trying to work there. I'd prefer to look at, say, the views of a group of undergraduates after taking a course on existential risk. (And this seems like an easy thing to check, given that there are such courses ongoing.)
3
MichaelStJules
4y
Do you have references/numbers for these views you can include here?

How much uncertainty is there in your case for existential risk? What would you put as the probability that, in 2100, the expected value of a substantial reduction in existential risk over the course of this century will be viewed by EA-minded people as highly positive? Do you think we can predict what direction future crucial considerations will point based on what direction past crucial considerations have pointed?

What do you think of applying Open Phil's outlier opportunities principle to an individual EA? Do you think that, even in the absence of instrumental considerations, an early career EA who thinks longtermism is probably correct but possibly wrong should choose a substantial chance of making a major contribution to increasing access to pain relief in the developing world over a small chance of making a major contribution to reducing GCBRs?

Is the cause area of reducing great power conflict still entirely in the research stage or is there anything that people can concretely do? (Brian Tse's EA Global talk seemed to mostly call for more research.) What do you think of greater transparency about military capabilities (click here and go to 24:13 for context) or promoting a more positive view of China (same link at 25:38 for context)? Do you think EAs should refrain from criticizing China on human rights issues (click here and search the transcript for "I noticed that over the last few ... (read more)

In an 80,000 Hours interview, Tyler Cowen states:

[44:06]
I don't think we'll ever leave the galaxy or maybe not even the solar system.
. . .
[44:27]
I see the recurrence of war in human history so frequently, and I’m not completely convinced by Steven Pinker [author of the book Better Angels of Our Nature, which argues that human violence is declining]. I agree with Steven Pinker, that the chance of a very violent war indeed has gone down and is going down, maybe every year, but the tail risk is still there. And if you let the clock tick out f
... (read more)
2
MichaelStJules
4y
This math problem is relevant, although maybe the assumptions aren't realistic. Basically, under certain assumptions, either our population has to increase without bound, or we go extinct.

EDIT: The main assumption is effectively that extinction risk is bounded below by a constant that depends only on the current population size, and not the time (when the generation happens). But you could imagine that even for a stable population size, this risk could be decreased asymptotically to 0 over time. I think that's basically the only other way out. So, either:

  1. We go extinct,
  2. Our population increases without bound, or
  3. We decrease extinction risk towards 0 in the long run.

Of course, extinction could still take a long time, and a lot of (dis)value could happen before then. This result isn't so interesting if we think extinction is almost guaranteed anyway, due to heat death, etc.
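A minimal sketch of why the bounded-risk assumption forces this trichotomy (my own gloss, not taken from the linked problem): let p_t denote the population in generation t, and suppose the probability of extinction during generation t, conditional on p_t ≤ N, is at least ε(N) > 0, where ε(N) does not depend on t. Then

Pr(survive generations 1, …, T with p_t ≤ N throughout) ≤ (1 − ε(N))^T → 0 as T → ∞,

so with probability 1 either extinction eventually occurs or the population exceeds every fixed bound N, i.e. grows without bound. The only remaining escape is to let the per-generation risk ε shrink toward 0 over time, which is option 3 above.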

What are your thoughts on these questions from page 20 of the Global Priorities Institute research agenda?

How likely is it that civilisation will converge on the correct moral theory given enough time? What implications does this have for cause prioritisation in the nearer term?
How likely is it that the correct moral theory is a ‘Theory X’, a theory radically different from any yet proposed? If likely, how likely is it that civilisation will discover it, and converge on it, given enough time? While it remains unknown, how can we properly hedg
... (read more)

Do you think there are any actions that would obviously decrease existential risk? (I took this question from here.) If not, does this significantly reduce the expected value of work to reduce existential risk or is it just something that people have to be careful about (similar to limited feedback loops, information hazards, unilateralist's curse etc.)?

In the new 80,000 Hours interview of Toby Ord, Arden Koehler asks:

Arden Koehler: So I’m curious about this second stage: the long reflection. It felt, in the book, like this was basically sitting around and doing moral philosophy. Maybe lots of science and other things and calmly figuring out, how can we most flourish in the future? I’m wondering whether it’s more likely to just look like politics? So you might think if we come to have this big general conversation about how the world should be, our most big general public conversation
... (read more)
2
MichaelA
4y
Initial response: Ooh, there's a new 80k episode?! And it's with Toby Ord?! [visibly excited, rushes to phone] Secondary response: Thanks for sharing that! Sounds like, as hoped, his book will provide and prompt a more detailed discussion of this idea than there's been so far. I look forward to gobbling that up.

The GPI Agenda mentions "Greg Lewis, The not-so-Long Reflection?, 2018" though as of six months ago that piece was in draft form and not publicly available.

With respect to the necessity of a constitutional amendment, I agree with you on presidential elections but respectfully disagree as to congressional elections.

For presidential elections, the proposal with the most traction is the National Popular Vote Interstate Compact, which requires compacting states to give their electoral votes to the presidential ticket with a plurality of votes nationwide but only takes effect after states collectively possessing a majority of all electoral votes join the compact. Proponents argue that it is constitutional (with ma... (read more)

The 80,000 Hours career review on UK commercial law finds that "while almost 10% of the Members of Parliament are lawyers, only around 0.6% have any background in high-end commercial law." I have been unable to find any similar analysis for the US. Do you know of any?

2
JacobS
4y
I dug up a few other places 80,000 Hours mentions law careers, but I couldn't find any article where they discuss US commercial law for earning-to-give. The other mentions I found include:

  • In their profile on US AI Policy, one of their recommended graduate programs is a "prestigious law JD from Yale or Harvard, or possibly another top 6 law school."
  • In this article for people with existing experience in a particular field, they write: “If you have experience as a lawyer in the U.S. that’s great because it’s among the best ways to get positions in government & policy, which is one of our top priority areas.”
  • It's also mentioned in this article that Congress has a lot of HLS graduates.

This makes me feel more strongly that there should be a separate career advice organization focused on near term causes. (See here for my original comment proposing this idea.)

A near term career advice organization could do the following:

  • Write in-depth problem profiles on causes that could be considered to be among the most pressing from a near term perspective but that are not considered to be among the most pressing from a long term perspective (e.g. U.S. criminal justice reform, developing country mental health, policy approaches to global poverty, fo

... (read more)
3
Benjamin_Todd
5y
80,000 Hours would likely be supportive of another organisation specialising in global health or factory farming career advising. I'd prefer to divide up by problem areas, rather than long vs. short term. We plan to write more about this.
2
kbog
6y
(relevant blogs, to be precise) Some of those objections would not apply to a journal like this. Namely, the journal itself would be about questions which matter and have a high impact, and cause prioritization is no longer so ignored that you can make great progress by writing casually. Also, by Brian's own admission, some of his reasons are "more reflective of my emotional whims". In any case, Brian's only trying to answer the question of whether a given author should submit to a journal. Whether or not a community should have a journal is a subtly different story.

Would it be possible to introduce a coauthoring feature? Doing so would allow both authors to be notified of new comments. The karma could be split if there are concerns that people would free ride.

[Criminal Justice Reform Donation Recommendations]

I emailed Chloe Cockburn (the Criminal Justice Reform Program Officer for the Open Philanthropy Project) asking what she would recommend to small donors. She told me she recommends Real Justice PAC. Since contributions of $200 or more to PACs are disclosed to the FEC, I asked her what she would recommend to a donor who wants to stay anonymous (and whether her recommendation would be different for someone who could donate significantly more to a 501(c)(3) than a 501(c)(4) for tax reasons). She told me that s... (read more)

Do you know if this platform allows participants to go back? (I assumed it did, which is why I thought a separate study would be necessary.)
