All Comments

Climate Change Is Neglected By EA

That's a fair point. As per the Facebook event description, I was originally asked to discuss two posts:

I ended up proposing that I could write a new post, this post. The event was created with a title of "Is climate change neglected within EA?" and I originally intended to give this post the same title. However, I realized that I really wanted to argue a particular side of this question and so I posted this article under a more appropriate title.

You are correct to call out that I haven't actually offered a balanced argument. Climate change is not ignored by EA. As Appendix A makes clear, there have been quite a few posts about climate change in recent years. The purpose of this post was to draw out some particular trends in how I see climate change being discussed by EA.

Climate Change Is Neglected By EA

Thanks for your comments and for linking to that podcast.

And while you may be right that it's a bit naive to just count all climate-related funding in the world when considering the neglectedness of this issue, I suspect that even if you just considered "useful" climate funding, e.g. advocacy for carbon taxes or funding for clean energy, the total would still dwarf the funding for some of the other major risks.

In my post I am arguing for an output metric rather than an input metric. In my opinion, climate change will stop being a neglected topic when we actually manage to start flattening the emissions curve. Until that actually happens, humanity is on course for a much darker future. Do you disagree? Are you arguing that it is better to focus on an input metric (level of funding) and use that to determine whether an area has "enough" attention?

Climate Change Is Neglected By EA

Thanks for your feedback.

Framing climate change as the default problem, and working on other cause areas as defecting from the co-ordination needed to solve it, impedes the essential work of cause-impartial prioritisation that is fundamental to doing good in a world like ours.

I think it's worth emphasizing that the title of this post is "Climate Change Is Neglected By EA", rather than "Climate Change Is Ignored By EA", or "Climate Change Is the Single Most Important Cause Above All Others". I am strongly in favor of cause-impartial prioritisation.

In "Updated Climate Change Problem Profile" I argued that Climate Change should receive an overall score of 24 rather than 20. That's a fairly modest increase.

This post itself argues that EA is losing potential members by not focusing on climate change. But this claim is in direct tension with claims that climate change is neglected. If there are droves of potential EAs who only want to talk about climate change, then there are droves of new people eager to contribute to the climate change movement. The same can hardly be said for AI safety, wild animal welfare, or (until this year, perhaps) pandemic prevention.

I don't agree with this "direct tension". I'm arguing that (A) Climate Change really is more important than EA often makes it out to be, and that (B) EA would benefit from engaging with people about climate change from an EA perspective. Perhaps as part of this engagement you can encourage them to also consider other causes. However, starting out from an EA position which downplays climate change is both factually wrong and alienating to potential EA community members.

Examples of people who didn't get into EA in the past but made it after a few years

Thank you for the detailed answer.

In general, I think that everyone’s situation is different and you shouldn’t base your actions on stuff like this too much.

In that case, I do not know what else to base my actions on. I also don't understand what you mean by "too much". Do you have an example in mind?

I am trying to look for "similar" (big quotes) people and see how they did it. Copying their actions and testing them out seems like the best option I have. It might work, or it might not work out in the end, for a variety of reasons. But at least there is one example instead of empty claims about how to get an EA job. Your example, Peter's example, and MSJ's example all tell me one thing: it requires persistence and a lot of time (2-5 years), hard work, long-term EA engagement, writing/researching, applying, criticizing, learning about EA, etc. I now understand that it would take 2-5 years (and that I need to be ready to accept this). This never hit home for me before today. So BIG THANK YOU for that.

And one more question:

Why were you so bent on getting an EA job? Why not ETG? I see from your LinkedIn that you are a software engineer.

P.S. I am asking you these because I am struggling with such questions myself.

Examples of people who didn't get into EA in the past but made it after a few years

I don't know if you need someone to say this, but:

You can often do more good outside of an EA organisation than inside one. For most people, the EA community is not the only good place to look for grantmaking or research jobs.

If I could be a grantmaker anywhere, I'd probably pick the Gates Foundation or the UK Government's Department for International Development. If I could be a researcher anywhere, I might choose Harvard's Kennedy School of Public Policy or the Institute for Government. None of these are "EA organisations" but they would all most likely allow me to do more good than working at GiveWell. (Although I do love GiveWell and encourage interested applicants to apply!)

Some people already know this and have particular reasons they want to work in an EA organisation, but some don't, so I thought it was worth saying.

How Much Leverage Should Altruists Use?

Another similar investment idea: Instead of buying a managed futures fund, buy value and momentum funds while shorting the broad market to produce net zero stock exposure, and then apply lots of leverage.

I feel like it's worth emphasizing the benefits of this more. Can't this significantly reduce the risk and volatility of your portfolio? OTOH, some of the funds you mention have only been around for a few years, and they have done really poorly, as Paul pointed out. I don't have confidence that they're well-managed.
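To make the arithmetic behind the quoted strategy concrete, here is a minimal sketch in Python. All weights, fund names, and return figures are entirely hypothetical; this just illustrates how shorting the broad market nets equity exposure to zero while leverage scales the remaining factor spread (and its risk).

```python
# Sketch of the market-neutral idea quoted above: long value and momentum
# funds, short the broad market to net out equity exposure, then apply
# leverage. Weights and returns below are hypothetical illustrations.

def portfolio_return(weights, returns, leverage=1.0):
    """Weighted sum of asset returns, scaled by leverage.

    weights: dict of asset -> portfolio weight (shorts are negative).
    returns: dict of asset -> period return (e.g. 0.02 for +2%).
    """
    gross = sum(weights[asset] * returns[asset] for asset in weights)
    return leverage * gross

# Long value and momentum, short the broad market one-for-one.
weights = {"value": 0.5, "momentum": 0.5, "broad_market": -1.0}

# Net equity exposure is zero: the short fully offsets the longs.
net_exposure = sum(weights.values())

# Hypothetical period returns: both factor funds beat the market slightly.
returns = {"value": 0.02, "momentum": 0.03, "broad_market": 0.01}

spread = portfolio_return(weights, returns)            # factor spread only
levered = portfolio_return(weights, returns, leverage=3)  # 3x the spread
```

The point of the construction is visible in the arithmetic: the unlevered portfolio earns only the factor funds' excess return over the market, which is why volatility can be much lower than holding stocks outright, and why leverage is then applied to make the small spread worthwhile.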

I was looking into some momentum (and growth) ETFs, and found that several were pretty heavy in Apple, Amazon, Google, Microsoft and Facebook (not that these stocks should be avoided, but you might want to diversify more on top of investing in these ETFs). I found a few that were more diversified and performed really well over the past 5-10 years (although past performance may not be indicative of future performance) and decently during the pandemic:

  • XMMO: Invesco S&P MidCap Momentum ETF
  • RPG: Invesco S&P 500 Pure Growth ETF (also weighs momentum)
  • SPGP: Invesco S&P 500 GARP ETF (just based on growth, not momentum, AFAIK)
  • PDP: Invesco DWA Momentum ETF

For value stocks, what about buying VOOV or Buffett's Berkshire Hathaway BRK.B? Worth keeping in mind that BRK.B has dropped ~50% during some crashes, and VOOV has only been around since 2010.

For those interested in global health and poverty, you may end up (very) correlated with Gates, Buffett and the Gates Foundation if you're investing in value ETFs and the strategies happen to lead to similar choices, and obviously if you buy BRK.B. I think most of Gates' wealth is no longer in Microsoft, but I'm not sure how much he has left in it.

Climate Change Is Neglected By EA

Yes, you are correct and thank you for forcing me to further clarify my position (in what follows I leave out WAW since I know absolutely nothing about it):

  1. EA Funds, which I will assume is representative of EA priorities, has these funds: a) “Global Health and Development”; b) “Animal Welfare”; c) “Long-Term Future”; d) “EA Meta”. Let’s leave D aside for the purposes of this discussion.

  2. There is good reason to believe the importance and tractability of specific climate change interventions can equal or even exceed those of A & B. We have not done enough research to determine if this is the case.

  3. The arguments in favor of C being the only area we should be concerned with, or the area we should be most concerned with, are:

I) reminiscent of other arguments in the history of thought that compel us (humans) because we do not account for the limits of our own rationality. I could say a lot more about this another time; suffice it to say here that in the end I cautiously accept these arguments and believe x-risk deserves a lot of our attention.

II) popular within this community for psychological as well as purely rational reasons. There is nothing wrong with that, and it might even be needed to build a dedicated community.

III) For these reasons I think we are biased towards C, and should employ measurements to correct for this bias.

  4. None of these priorities is neglected by the world, but certain interventions or research opportunities within them are. EA has spent an enormous amount of effort finding opportunities for marginal value-add in A, B & C.

  5. Climate change should be researched just as much as A & B. One way of accounting for the bias I see towards C is to divert a certain portion of resources to climate change research despite our strongly held beliefs. I simply cannot accept the conclusion that unless climate change renders our planet uninhabitable before we colonize Mars, we have better things to worry about. That sounds absurd in light of the fact that certain detrimental effects of climate change are already happening, and even the best-case future scenarios include a lot of suffering. It might still be right, but its absurdity means we need to give it more attention.

What surprises me the most from the discussion of this post (and I realize its readers are a tiny sample of the larger community) is that no one has come back with: “we did the research years ago, we could find no marginal value-add. Please read this article for all the details”.

Climate Change Is Neglected By EA

The same can hardly be said for AI safety, wild animal welfare, or (until this year, perhaps) pandemic prevention. - Will

Otherwise, looking at malaria interventions, to take just one example, makes no sense. Billions have and will continue to go in that direction even without GiveWell - Uri

I noticed Will listed AI safety and wild animal welfare (WAW), and you mentioned malaria. I'm curious if this is the crux – I would guess that Will agrees (certain types of) climate change work is plausibly as good as anti-malaria, and I wonder if you agree that the sort of person who (perhaps incorrectly) cares about WAW should consider that to be more impactful than climate change.

Climate Change Is Neglected By EA

Thanks for sharing! This does seem like an area many people are interested in, so I'm glad to have more discussion.

I would suggest considering the opposite argument regarding neglectedness. If I had to steelman this, I would say something like: a small number of people (perhaps even a single PhD student) do solid research about existential risks from climate change -> existential risks research becomes an accepted part of mainstream climate change work -> because "mainstream climate change work" has so many resources, that small initial bit of research has been leveraged into a much larger amount.

(Note: I'm not sure how reasonable this argument is – I personally don't find it that compelling. But it seems more compelling to me than arguing that climate change isn't neglected, or that we should ignore neglectedness concerns.)

Brief update on EA Grants

EDIT: The EA Meta Fund and Long-Term Future Fund deadline is now 12 June, as announced elsewhere.

Examples of people who didn't get into EA in the past but made it after a few years

I never had to ask anyone's permission to do EA-related research.

I just worked hard in my free time for over five years and finally built enough of a reputation to co-found my own research non-profit (Rethink Priorities). I totally understand this path isn't accessible to everyone (or even most people), but it is probably more within people's grasp than they might think, and I think it's worth some consideration.

I think a great place to start is just making thoughtful EA Forum posts (perhaps aiming to emulate good reasoning transparency and the style of highly upvoted posts) and trying to talk with other thoughtful EAs.

Even if you don't go on to co-found your own research non-profit, this portfolio you build will almost certainly either get you noticed by a recruiter or boost your application when you do apply.

Biggest Biosecurity Threat? Antibiotic Resistance

Hi! To be honest, I just stumbled across EA from a Google search looking for suitable charities to donate a portion of money to, concerning the locust situation in the Horn of Africa. So I'll say it's a very impressive forum you have here, and well done all for taking part; it's great to see personally.

As to the topic at hand: as a recent Open University mature student, one of my projects was on antibiotic resistance in the environment, specifically wastewater concentrations. There is a big problem with antibiotics being used in animal farming and in humans, leading to environmental accumulation. Antibiotics used by humans are often not metabolised by the body and are excreted in active form, and antibiotic resistance may then develop in wastewater treatment plants (WWTPs) and water treatment sinks. In general, our 1950s-style sewage treatment isn't very good at removing these, or other persistent pollutants.

One of the papers I found, "Review of Antimicrobial Resistance in the Environment and its Relevance to Environmental Regulators" by Singer et al. from NERC, Wallingford, UK, gave a really excellent background. I was also deeply impressed by Switzerland's plans to upgrade half their WWTPs. How much environmental contamination contributes seems largely unknown; I remember it's the above NERC paper that lists heavy metals and other locally occurring conditions as co-factors, although there are a few others.

Anyway, I'm a bit out of my depth for now, and I wish I'd found you earlier in my life, but I will be keeping an eye on this as things progress. I hope some of the above is helpful.

Examples of people who didn't get into EA in the past but made it after a few years


Oh, I should also add that I read and commented on several of CE's reports (commenting on the EA Forum posts, and I also read other effective animal advocacy research). I did this leading up to my first application, which was rejected, but I think my recent feedback was much more useful, and I was encouraged to apply following a conversation about my feedback.

Helping wild animals through vaccination: could this happen for coronaviruses like SARS-CoV-2?

Thank you for your excellent comment, Gavin! You highlight several important points, with which we agree. Concerning why viruses in general, and coronaviruses in particular, are so prevalent in bats, you're quite right, although on top of what you said there are other factors that can be considered too, which is why we argued that it could be a sum of "high genetic diversity (there are both many species and many individual bats), [and that bats are] long-lived, and they roost in large groups."

Examples of people who didn't get into EA in the past but made it after a few years
  1. Before the ACE internship I applied to MIRI for a programming internship but didn't receive an answer. After ACE, I applied for various roles at the Centre for Effective Altruism and wasn't selected. I was also rejected for researcher roles by OpenPhil (after some number of rounds) and Charity Entrepreneurship, shortly before I got the job at RP. It's possible that I was rejected by CE because I said I was uncertain whether I would be willing to move to Vancouver, which is something they wanted at that time, but I think it was more because I totally failed at a task they gave me during the hiring process. I guess I should note that OpenPhil had very many good applicants, and CE was reluctant to hire at that time; I think they didn't hire anyone during that hiring round. Oh, I was also asked to apply for a researcher role at Effective Giving, but in the end I was told that there was one candidate who was a better fit, and they hired them. I also applied as a researcher for the Veddis Foundation and was not selected, with no interview. There may have been more rejections that I don't remember. I think I applied for all or most of these researcher roles after writing those two articles.
  2. Yes, I didn't have a real job during that year and for a lot of the year I was doing random stuff like going to hippyish events. I did two weeks of contractor work for Effective Giving (I think after they decided to not hire me), but I think I didn't mention that to RP so I don't think it affected my hiring chances.
  3. Yes, it was the articles that drew the attention of hirers at RP. I did not know the hirers personally. When they reached out to me, they wrote, "I've been following your research for ACE and on the EA Forum and I think you'd be an exceptional fit for the role." Similarly, I think those articles were by far the main reason why Effective Giving was interested in me, though I did know the hirer personally. But it's not enough to draw attention; I imagine that RP also hired me because I did OK or well at the interviews and tasks.
  4. EA London asked publicly if anyone would be interested in organizing some concrete events with their help. I was the only one who volunteered, and we organized an event. I think they offered me the internship because of that and because they knew I wouldn't need much management. I was already friends with the people running EA London. Organizing events and writing articles have almost nothing in common, so I think it's unrelated.

In general, I think that everyone’s situation is different and you shouldn’t base your actions on stuff like this too much. I think that in the end what helped me become the kind of person who would be considered for these kinds of roles was long, intense engagement with EA, thinking hard about where to donate, etc. Another thing that may have helped me get the ACE internship was criticising their work via email, pointing out something that I thought was a mistake.

Applications are open for EAGxVirtual 2020

Thanks for clarifying all of this! Given that most questions are optional I no longer have this concern, and I'm glad that you've clarified this on the application.

Very much looking forward to seeing you there as well!

Comparisons of Capacity for Welfare and Moral Status Across Species

Thanks a lot for the response, Jason! It seems like we actually agree more than it seemed.

if I were to put on my hierarchical hat, I would suggest that so long as the intrinsic characteristics that determine moral status are distinct from the characteristics that determine capacity for welfare, the double-counting worry can be avoided.

Agreed. If we accept the possibility you suggest, then I can see how status-adjusted welfare doesn't run into double-counting. The question is: what makes these status-conferring characteristics morally relevant if not their contribution to welfare? Some views, I suppose, hold that the mere possession of some intrinsically valuable features—supposedly, rationality, autonomy, being created by God, being human, and whatnot—determine moral status even if they don't contribute to welfare. That's a coherent kind of view, and perhaps you're right that a view like this would not necessarily be arbitrary, but I have a hard time finding it plausible. I just don't understand why some property should determine how to treat x if it has nothing to do with what can harm or benefit x.

If appeal to differences in moral status is the only way to avoid obligations that one finds deeply counterintuitive, then the appeal isn’t necessarily arbitrary.

Yeah, I understand the motivation behind Kagan's move. His descriptions of the distributive implications of unitarianism do make it look like mice just can't have the same moral status as human beings. But it doesn't follow that the interests of mice should count less. Many other morally relevant facts might explain why we ought not to massively shift resources towards mice. But yes, I can see the appeal of the hierarchical views as a solution to these problems. However, we should be wary of which intuitions shape our response to those sorts of cases (as Sebo argues in his review), or we're just going to construct a view that rationalizes whatever allocation of resources we find acceptable. Sometimes, Kagan's reasoning sounds like: "Come on, we're not going to help rats! Therefore they must have a much lower status than persons."

I’m sympathetic to the view that ultimately moral status is context-sensitive or agent-relative or somehow multidimensional

Me too, very much so. As for practical value, I like Kagan's eventual move towards "practical realism" a lot. There's a similar move in Rachels (2004). A helpful way to think about this, for utilitarians, is in terms of R.M. Hare's two levels of moral thinking, nicely developed for animals in Varner (2012).

Examples of people who didn't get into EA in the past but made it after a few years

Thanks a ton Sauliu. May I ask you a few more questions:

  1. Are there other internships/jobs you were rejected from? (And where in your timeline were those rejections?)

  2. Can you please tell me more about what else you did in that gap year, other than writing those two articles, to "boost your chances"? Did you take a break from normal full-time work during that gap year?

  3. So the articles drew the attention of a hirer at RP? Not your connections?

  4. How did you get the EA community building internship? Why was it "unrelated"?

External evaluation of GiveWell's research

Oh man, I'm happy to have come across this. I'm a bit surprised people remember that article. I was one of the main people who set up the system; that was a while back.

I don't know specifically why it was changed. I left 80k in 2014 or so and haven't discussed this with them since. I could imagine some reasons why they stopped it though. I recommend reaching out to them if you want a better sense.

This was done when the site was a custom Ruby on Rails setup. The functionality required a fair bit of custom coding to set up. Writing quality was more variable then than it is now; there were several newish authors and it was much earlier in the research process. I also remember that the scores originally disagreed a lot between evaluators, but over time (the first few weeks of use) they converged a fair bit.

After I left, they migrated to WordPress, in which I assume setting up a similar system would have required a fair amount of effort. The blog posts seem to have become less important than they used to be, in favor of the career guide, coaching, the podcast, and other things. The quality has also become a fair bit more consistent, from what I can tell as an onlooker.

The ongoing costs of such a system are considerable. First, it takes a fair bit of time from the reviewers. Second, unfortunately, the internet can be a hostile place for transparency. There are trolls and angry people who will actively search through details and then point them out without the proper context. I think this review system was kind of radical, and I can imagine it not being very comfortable to maintain unless it really justified the effort.

I'm of course sad it's no longer in place, but I can't really blame them.

Examples of people who didn't get into EA in the past but made it after a few years

I was hoping to get a position at ACE after my research internship (which lasted maybe 7 months), and I was told it was a possibility, but they hired other people instead. However, after the internship I knew better what kind of articles would be useful and had some relevant connections. In the following year I wrote two articles (this and this), which my connections at ACE reviewed before publishing; that was very useful. I also did some unrelated stuff, like an EA community building internship. I applied for various EA jobs, mostly research, and didn't get some of them. Someone who was hiring at Rethink Priorities reached out and asked me to apply because they liked those two articles. I applied and got the job, about one year after my ACE internship ended.

Examples of people who didn't get into EA in the past but made it after a few years

Thank you, StJules. I appreciate it. This is actually great; thanks for the details. Ultimately it is about getting a job in EA, but internships also sound good.

And congratulations on the internship.

Research proposals?

Jaime Sevilla gave detailed advice on how to generate research proposals, which might also be useful.

Climate Change Is Neglected By EA

The assumption is not that people outside EA cannot do good; it is merely that we should not take it for granted that they are doing good, and doing it effectively, no matter their number. Otherwise, looking at malaria interventions, to take just one example, makes no sense: billions have gone and will continue to go in that direction even without GiveWell. So the claim that climate change work is or is not the most good has no merit without a deeper dive into the field and a search for incredible giving/working opportunities. Any shallow dive into this cause reveals that further attention and concern are warranted. I do not know what the results of a deeper dive might show, but I am fairly confident we can be at least as effective working on climate change as on some of the other present-day welfare causes.

I do believe that there is a strong bias towards the far future in many EA discussions. I am not unsympathetic to the rationale behind this, but since it seems to override everything else, and present-day welfare (as your reply implies) is merely tolerated, I am cautious about it.

CU & extreme suffering

Thanks for the answer! I'll have to think some more about this. I meant "within one current human".

CU & extreme suffering

Hm, interesting. I was pointing more to the current human, as you write. What I was trying to get at was that I would think it absurd for someone to accept, say, 50 years of burning alive (before the nerves die off) in exchange for any sort of well-being "within one current human" for 50 years. If this is true for the CU as well, it seems she has to account for such an asymmetry by 'extrinsic' means.

Your answer (and the links) definitely made me a lot more uncertain about a bunch of things though!

Examples of people who didn't get into EA in the past but made it after a few years

Do research internships count? I just started one at Charity Entrepreneurship.

I think I might have described my history in one of your other posts/questions.

I first applied to ACE and GiveWell research internships in 2016, back when I was still new to EA, but didn't get either. The extent of my EA involvement at the time was over Facebook.

Then I studied for a master's with the intention to earn to give, got involved with my local EA group and started running it last summer, started commenting and writing on the EA Forum, and earned to give, although I hadn't made any significant donations yet. Then I applied to Charity Entrepreneurship and ACE internships in August and November/December, respectively, and didn't get either. Then I donated about 45% of my 2019 income in December, wrote an EA Forum post that won an EA Forum prize, and attended my first EA Global (the virtual one in March). I talked with someone from CE at a local EA group meetup, and my current supervisor at CE at EA Global, and I think I made decent impressions on them. Then I applied to CE's research internship last month and was accepted this time.

I imagine that if I get a full-time position in EA research, this internship will be an important contributing factor. I don't expect it to guarantee me a full-time position, though, since they're very competitive and pretty rare.

Developing my inner self vs. doing external actions

This is a great question and one everyone struggles with.

TL;DR: work on self-improvement daily, but be open to opportunities for acting now. My advice would indeed be to balance the two, but balance is not a 50-50 split. To be a top performer in anything you do: practice, practice, practice. The impact of a top performer can easily be 100x that of the rest of us, so the effort put into self-improvement pays off. Professional sports are a prime example, but research, engineering, academia, management, and parenting all benefit from working on yourself.

The trap to avoid is not acting before you are perfect. Do not let opportunities to do good slip by. Your first job, relationship, and child will all suffer from your inexperience, but how else do you gain experience? In truth, the more experience you gain, the greater the challenges you will allow yourself to tackle, so being comfortable acting with some doubt about your ability is critical to great achievements.

Climate Change Is Neglected By EA

I think that more research is definitely warranted. EAs can bring a unique perspective to something like climate change, where there are so many different types of interventions which probably vary wildly in effectiveness. I don't think enough research has been done to rule out the possibility of there existing hugely effective climate change interventions that are actually neglected/underfunded, even if climate change as a whole is not. And since people who care about climate change are typically science-minded, there's a chance a significant chunk could be persuaded to fund the more effective interventions once we identify them.

I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate.

I think I've got similar concerns and thoughts on this. I'm vaguely aware of various ideas for dealing with these issues, but I haven't kept up with that, and I'm not sure how effective they are or will be in future.

The idea of making captcha requirements before things like commenting very widespread is one I haven't heard before, and seems like it could plausibly cut off part of the problem at relatively low cost.

I would also quite like it if there were much better epistemic norms widespread across society, such as people feeling embarrassed if people point out they stated something non-obvious as a fact without referencing sources. (Whereas it could still be fine to state very obvious things as facts without sharing sources all the time, or to state non-obvious things as fairly confident conjectures rather than as facts.)

But some issues also come to mind (note: these are basically speculation, rather than drawing on research I've read):

  • It seems somewhat hard to draw the line between ok and not ok behaviours (e.g., what claims are self-evident enough that it's ok to omit a source? What sort of tone and caveats are sufficient for various sorts of claims?)
    • And it's therefore conceivable that these sorts of norms could be counterproductive in various ways. E.g., lead to (more) silencing or ridicule of people raising alarm bells about low probability high stakes events, because there's not yet strong evidence about that, but no one will look for the evidence until someone starts raising the alarm bells.
    • Though I think there are some steps that seems obviously good, like requiring sources for specific statistical claims (e.g., "67% of teenagers are doing [whatever]").
  • This is a sociological/psychological rather than technological fix, which does seem quite needed, but also seems quite hard to implement. Spreading norms like that widely seems hard to do.
  • With a lot of solutions, it seems not too hard to imagine ways they could be (at least partly) circumvented by people or groups who are actively trying to spread misinformation. (At least when those people/groups are quite well-resourced.)
    • E.g., even if society adopted a strong norm that people must include sources when making relatively specific, non-obvious claims, there could then perhaps be large-scale human- or AI-generated sources being produced, and made to look respectable at first glance, which can then be shared alongside the claims being made elsewhere.

We could probably also think of things like more generally improving critical thinking or rationality as similar broad, sociological approaches to mitigating the spread/impacts of misinformation. I'd guess that those more general approaches may better avoid the issue of difficulty drawing lines in the appropriate places and being circumventable by active efforts, but may suffer more strongly from being quite intractable or crowded. (But this is just a quick guess.)

Aligning Recommender Systems as Cause Area

I'm not sure users definitely prefer the existing recommendations to random ones - I've actually been trying to turn off YouTube recommendations because they make me spend more time on YouTube than I want. Meanwhile, other recommendation systems send me news that is worse on average than the rest of the news I consume (from different channels). So in some cases at least, we could use a very minimal standard: a system is aligned if the user is better off because the recommendation system exists at all.

This is a pretty blunt metric, and probably we want something more nuanced, but at least to start off with it'd be interesting to think about how to improve whichever recommender systems are currently not aligned.

I Want To Do Good - an EA puppet mini-musical!

Really enjoyed this, thank you. I especially liked the undertone of 'uncertainty isn't a reason not to try, it's a reason to find out more'. Good life advice in general, I think.

Climate Change Is Neglected By EA

Will and Rob devote a decent chunk of time to climate change on this 80K podcast, which you might find interesting. One quote from Will stuck with me in particular:

I don’t want there to be this big battle between environmentalism and EA or other views, especially when it’s like it could go either way. It’s like elements of environmentalism which are like extremely in line with what a typical EA would think and then maybe there’s other elements that are less similar [...] For some reason it’s been the case that people are like, “Oh, well it’s not as important as AI”. It’s like an odd framing rather than, “Yes, you’ve had this amazing insight that future generations matter. We are taking these actions that are impacting negatively on future generations. This is something that could make the long run future worse for a whole bunch of different reasons. Is it the very most important thing on the margin to be funding?"

I agree that, as a community, we should make sure we're up-to-date on climate change to avoid making mistakes or embarrassing ourselves. I also think, at least in the past, the attitude towards climate work has been vaguely dismissive. That's not helpful, though it seems to be changing (cf. the quote above). As others have mentioned, I suspect climate change is a gateway to EA for a lot of altruistic and long-term-friendly people (it was for me!).

As far as direct longtermist work, I'm not convinced that climate change is neglected by EAs. As you mention, climate change has been covered by orgs like 80K and Founders Pledge (disclaimer, I work there). The climate chapter in The Precipice is very good. And while you may be right that it's a bit naive to just count all climate-related funding in the world when considering the neglectedness of this issue, I suspect that even if you just considered "useful" climate funding, e.g. advocacy for carbon taxes or funding for clean energy, the total would still dwarf the funding for some of the other major risks.

From a non-ex-risk perspective, I agree that more work could be done to compare climate work to work in global health and development. There's a chance that, especially when considering the air pollution benefits of moving away from coal power, climate work could be competitive here. Hauke's analysis, which you cite, has huge confidence intervals which at least suggest that the ranking is not obvious.

On the one hand, the great strength of EA is a willingness to prioritize among competing priorities and double down on those where we can have the biggest impact. On the other hand, we want to keep growing and welcoming more allies into the fold. It's a tricky balancing act and the only way we'll manage it is through self-reflection. So thanks for bringing that to the table in this post!

Climate Change Is Neglected By EA

Wildlife conservation and wild animal welfare are emphatically not the same thing. "Tech safety" (which isn't a term I've heard before, and which on googling seems to mostly refer to tech in the context of domestic abuse) and AI safety are just as emphatically not the same thing.

Anyway, yes, in most areas EAs care about they are a minority of the people who care about that thing. Those areas still differ hugely in terms of neglectedness, both in terms of total attention and in terms of expertise. Assuming one doesn't believe that EAs are the only people who can make progress in an area, this is important.

In climate change it counts the lawyers already engaged in changing the recycling laws of San Francisco as sufficient for the task at hand.

This is (a) uncharitable sarcasm, and (b) obviously false. There are enormous numbers of very smart scientists, journalists, lawyers, activists, etc etc. working on climate change. Every general science podcast I listen to covers climate change regularly, and they aren't doing so to talk about Bay-Area over-regulation. It's been a major issue in the domestic politics in every country I've lived in for over a decade. The consensus among left-leaning intellectual types (who are the main group EA recruits from) in favour of acting against climate change is total.

Now, none of this means there's nothing EA could contribute to the climate field. Probably there's plenty of valuable work that could be done. If more climate-change work started showing up on the EA Forum, I'd be fine with that the same way I'm fine with EAs doing work in poverty, animal welfare, mental health, and lots of other areas I don't personally prioritise. But would I believe that climate change work is the most good they could do? In most cases, probably not.

I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate.

Thanks, really!

I think "offense-defense balance" is a very accurate term here. I wonder if you have any personal opinion on how to improve our situation on that front. When it comes to AI-powered misinformation spread through media, it's particularly concerning how easily it can overrun our defenses: even if we succeed in fact-checking every inaccurate statement, doing so will require a lot of resources and probably lead to widespread uncertainty or mistrust, where people, incapable of screening reliable info, succumb to confirmation bias or peer pressure (I'm tempted to draw an analogy with DDoS attacks, or even with the lemons problem).

So, despite everything I've read about the subject (though not very systematically), I haven't seen feasible, well-written strategies to address this asymmetry - except for some papers on moderation in social networks and forums (and even so, moderation is quite time-consuming unless moderators draw up clear guidelines, as in this forum). I wonder why societies (through authorities or self-regulation) can't agree to impose even minimal reliability requirements, like demanding captcha tests before messages can be spread (making it harder to use bots) or, my favorite, holding people liable for spreading misinformation unless they explicitly reference a source - something even newspapers refuse to do (my guess is that they are afraid this norm would compromise source confidentiality and their protections against legal suits). If this were an established practice, one could easily screen for (at least grossly) unreliable messages by checking their source (or pointing out its absence), besides deterring them in the first place.

When can I eat meat again?

I was a bit surprised to read what you wrote about Cultivated Meat. I am not an expert, but I've looked into this topic and my understanding is that there are fundamental technical challenges to be solved at least in cell expansion, the rate and specificity of cell growth, and the creation of thick cuts of any tissue. I'm sure that these can be solved in the end, but they seem very difficult (considering that cell expansion is also needed for making blood cells and other non-tissue cell types in the much more heavily funded biomedical field, which is also less bottlenecked by medium cost).

I understand that today it may be possible to make some hybrid products, but that these won't really be similar to the real thing. Is this similar to your view?

Climate Change Is Neglected By EA
A year ago Louis Dixon posed the question “Does climate change deserve more attention within EA?”. On May 30th I will be discussing the related question “Is Climate Change Neglected Within EA?” with the Effective Environmentalism group. This post is my attempt to answer that question.

It's definitely possible I'm misunderstanding what you're trying to do here. However, I think it is usually not the case that if you attempt to do an impartial assessment of a yes-no question, all the possible factors point in the same direction.

I mean, I don't know this for sure, but I imagine if you were to ask me to closely investigate a cause area that I haven't thought about much before (wild animal suffering, say, or consciousness research, or Alzheimer's mitigation), and I investigated 10 sub-questions, I don't think all 10 of them will point in the same way. My intuition is that it's much more likely that I'd either find 1 or 2 overwhelming factors, or many weak arguments in favor or against, and some in the other direction.

I feel bad for picking on you here. I think it is likely the case that other EAs (myself included) have historically made this mistake, and I will endeavor to be more careful about this in the future.

Climate Change Is Neglected By EA

I wonder how much of the assessment that climate change work is far less impactful than other work relies on the logic of “low probability, high impact”, which seems to be the most compelling argument for x-risk. Personally, I generally agree with this line of reasoning, but it leads to conclusions so far away from common sense and intuition, that I am a bit worried something is wrong with it. It wouldn’t be the first time people failed to recognize the limits of human rationality and were led astray. That error is no big deal as long as it does not have a high cost, but climate change, even if temperatures only rise by 1.5 degrees, is going to create a lot of suffering in this world.

In an 80,000 hours podcast with Peter Singer the question was raised whether EA should split into 2 movements: present welfare and longtermism. If we assume that concern with climate issues can grow the movement, that might be a good way to account for our long term bias, while continuing the work on x-risk at current and even higher levels.

I Want To Do Good - an EA puppet mini-musical!

Watching it yet again, I think it would feel more right if the guy were not so easily convinced, and instead it ended with him going "hm, that sounds promising, I'm going to learn some more".

Both puppets really felt like real people with actual personality to me, up until t=1:57. But then the guy just completely changes his mind, which broke my suspension of disbelief. I think that's the point where it mostly started to sound like "yet another commercial".

I Want To Do Good - an EA puppet mini-musical!

The format of the video is basically: "Do you worry about these things? Then we have the solution." Interspersed with some back and forth, which I really like.

"Do you worry about these things? Then we have the solution." is a standard pattern in commercials, for good reason. I think it's a good pattern for selling ideas like EA too. But it also means that you can't just say you understand my concerns and that you have solutions; you have to give me some evidence, or else it is just another empty commercial.

The person singing about their doubts felt relatable, in that they brought up real concerns about charity that I could imagine having before EA. I don't remember exactly, but these seemed like standard and very reasonable concerns. And I got the impression that you (the video maker) really understand "my" (the viewer's) worries about giving to charity.

But when you were singing about the solutions, it fell a bit short. I don't think this video would win the trust of an alternative Linda that your suggested charities are actually better. I think it would help to put in some arguments for why treatable diseases, and how to lift the barriers you mention.

Every charity says they are special, so that on its own doesn't count for much. But if you give me some arguments that I can understand for why your way is better, then that is evidence that you're onto something, and I might go and check it out some more.


All that said, I re-watched the video, and I like it even more now. The energy and the mood shifts are amazing.

On re-watching, I also feel that a viewer should be able to easily figure out the connection between focusing on diseases and avoiding building dependency. But I remember that the first time I watched it, it felt like there was a major link missing there. I think that's because now that I know what they will say, I have some more time to reflect and make those connections myself.

But people seeing this on the internet might only watch once, so...

Applications are open for EAGxVirtual 2020

Thanks for the feedback, Tyler!

To clarify: most of these questions aren’t actually *application/registration* questions for the event. Rather, they are meant to help us gather information about the community, and most are optional. I notice that you applied on May 12th - we have since gotten feedback that we should clarify that point, so we tried to make a clear distinction between the small number of application questions and the larger number of information-gathering questions. Half of the attendees got an especially long application form and a shorter registration form, while the other half got a long registration form and a shorter application form.

We hope to use this information to understand the different types of users that we attract and what kinds of content and interactions provide the most value, to help us get a better sense of the value EA Global and CEA as a whole provide the community.

I hope the length of the application doesn’t serve as too much of a deterrent. We don’t have many barriers to entry for this event (ticket prices are on a sliding scale starting at $5, and we expect to admit most applicants who aren’t completely new to EA), and we are hoping it can be the biggest event yet. So far, we have more than 1000 applications, so it looks like we could be on track! I look forward to seeing you there.

Climate Change Is Neglected By EA

I disagree with the way "neglectedness" is conceptualised in this post.

Climate change is not neglected among the demographics EA tends to recruit from. There are many, many scientists, activists, lawyers, policymakers, journalists, researchers of every stripe working on this issue. It has comprehensively (and justifiably) suffused the narrative of modern life. As another commenter here puts it, it is "The Big Issue Of Our Time". The same simply cannot be said of other cause areas, despite many of those problems matching or exceeding climate change in scale.

When I was running a local group, climate change issues were far and away the #1 thing new potential members wanted to talk about / complained wasn't on the agenda. This happened so often that "how do I deal with all the people who just want to talk about recycling" was a recurring question among other organisers I knew. I'd be willing to bet that >80% of other student group organisers have had similar experiences.

This post itself argues that EA is losing potential members by not focusing on climate change. But this claim is in direct tension with claims that climate change is neglected. If there are droves of potential EAs who only want to talk about climate change, then there are droves of new people eager to contribute to the climate change movement. The same can hardly be said for AI safety, wild animal welfare, or (until this year, perhaps) pandemic prevention.

Many of the claims cited here as reasons to work on climate change could be applied equally well to other cause areas. I don't think there's any reason to think simple models capture climate change less well than they do biosecurity, great power conflict, or transformative AI: these are all complex, systemic, "wicked problems" with many moving parts, where failure would have "a broad and effectively permanent impact".

This is why I object so strongly to the "war" framing used here. In (just) war, there is typically one default problem that must be solved, and that everyone must co-ordinate on solving or face destruction. But here and now we face dozens of "wars", all of which need attention, and many of which are far more neglected than climate change. Framing climate change as the default problem, and working on other cause areas as defecting from the co-ordination needed to solve it, impedes the essential work of cause-impartial prioritisation that is fundamental to doing good in a world like ours.

Climate Change Is Neglected By EA

I am sympathetic to the PR angle (ditto for global poverty): lots of EAs, including me, got to longtermism via more conventional cause areas, and I'm nervous about pulling up that drawbridge. I'm not sure I'd be an EA today if I hadn't been able to get where I am in small steps.

The problem is that putting more emphasis on climate change requires people to spend a large fraction of their time on a cause area they believe is much less effective than something else they could be working on, and to be at least somewhat dishonest about why they're doing it. To me, that sounds both profoundly self-alienating and fairly questionable impact-wise.

My guess is that people should probably say what they believe, which for many EAs (including me) is that climate change work is both far less impactful and far less neglected than other priority cause areas, and that many people interested in having an impact can do far more good elsewhere.

Aligning Recommender Systems as Cause Area

Thanks for sharing your perspective. I find it really helpful to hear reactions from practitioners.

If you value future people, why do you consider near term effects?

What do you think absorbers might be in cases of complex cluelessness? I see that delaying someone on the street might just cause them to spend 30 seconds less procrastinating, but how might this work for distributing bednets, or increasing economic growth?

Maybe there's a line of argument around nothing being counterfactual in the long term - because every time you solve a problem, someone else was going to solve it eventually. E.g. if you didn't increase growth in some region, someone else would have 50 years later, and now that you did it, they won't. But this just sounds like a weirdly stable system, and I guess this isn't what you have in mind.

Aligning Recommender Systems as Cause Area

I work at Netflix on the recommender. It's interesting to read this abstract article about something that's very concrete for me.

For example, the article asks: "The key question any model of the problem needs to answer is - why aren’t recommender systems already aligned?"

Despite working on a recommender system, I genuinely don't know what this means. How does one go about measuring how much a recommender is aligned with user interests? Like, I guarantee 100% that people would rather have the recommendations given by Netflix and YouTube than a uniform random distribution. So in that basic sense, I think we are already aligned. It's really not obvious to me that Netflix and YouTube are doing anything wrong. I'm not really sure how to go about measuring alignment, and without a measurement, I don't know how to tell whether we're making progress toward fixing it.
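One crude way to operationalize the "better than random" standard raised above is to simulate users with known per-item utilities and compare the average utility a recommender delivers against a uniform-random baseline. This is a made-up toy model for illustration only - the utility lists, the `greedy_rec` oracle, and the scoring function are all assumptions, not how any real recommender is evaluated:

```python
import random

def avg_utility(recommend, users, n_items, trials=1000):
    """Mean utility of recommended items across simulated users."""
    total = 0.0
    for _ in range(trials):
        user = random.choice(users)       # user is a list of per-item utilities
        item = recommend(user, n_items)
        total += user[item]
    return total / trials

random.seed(0)
N_ITEMS = 50
# Toy users: each has a random utility in [0, 1] for every item.
users = [[random.random() for _ in range(N_ITEMS)] for _ in range(20)]

def random_rec(user, n_items):
    # Baseline: uniform-random recommendation.
    return random.randrange(n_items)

def greedy_rec(user, n_items):
    # An "oracle" recommender that knows each user's utilities perfectly.
    return max(range(n_items), key=lambda i: user[i])

print(avg_utility(random_rec, users, N_ITEMS))  # roughly 0.5
print(avg_utility(greedy_rec, users, N_ITEMS))  # close to 1.0
```

Under this toy metric, any deployed system will indeed beat the random baseline easily - which is the commenter's point that "better than random" is too weak a bar to capture what "aligned" should mean.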

My two cents.

EA Survey 2019 Series: How EAs Get Involved in EA

I was thinking it's perhaps best to list it like this:

"Brian Tomasik's Essays on Reducing Suffering (or FRI/CLR, EAF/GBS Switzerland, REG)"

I think Brian's work brought several people into EA and may continue to do so, whereas that seems less likely for the other categories.

I also see the point about historic changes, but I personally never thought the previous categories were particularly helpful.

Melbourne University Effective Altruism Study

Are there results from this? I would love to see :)

I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate.

One thing that you didn't raise, but which seems related and important, is how advancements in certain AI capabilities could affect the impacts of misinformation. I find this concerning, especially in connection with the point you make with this statement:

warning about it will increase mistrust and polarization, which might be the goal of the campaign

Early last year, shortly after learning about EA, I wrote a brief research proposal related to the combination of these points. I never pursued the research project, and have now learned of other problems I see as likely more important, but I still do think it'd be good for someone to pursue this sort of research. Here it is:

AI will likely allow for easier creation of fake news, videos, images, and audio (AI-generated misinformation; AIGM) [note: this is not an established term]. This may be hard to distinguish from genuine information. Researchers have begun exploring potential political security ramifications of this (e.g., Brundage et al., 2018). Such explorations could valuably draw on the literatures on the continued influence effect of misinformation (CIE; e.g., Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012), motivated reasoning (e.g., Nyhan & Reifler, 2010), and the false balance effect (e.g., Koehler, 2016).
For example, CIE refers to the finding that corrections of misinformation don’t entirely eliminate the influence of that misinformation on beliefs and behaviours, even among people who remember and believe the corrections. For misinformation that aligns with one’s attitudes, corrections are particularly ineffective, and may even “backfire”, strengthening belief in the misinformation (Nyhan & Reifler, 2010). Thus, even if credible messages debunking AIGM can be rapidly disseminated, the misinformation’s impacts may linger or even be exacerbated. Furthermore, as the public becomes aware of the possibility or prevalence of AIGM, genuine information may be regularly argued to be fake. These arguments could themselves be subject to the CIE and motivated reasoning, with further and complicated ramifications.
Thus, it’d be valuable to conduct experiments exposing participants to various combinations of fake articles, fake images, fake videos, fake audio, and/or a correction of one or more of these. This misinformation could vary in how indistinguishable from genuine information it is; whether it was human- or AI-generated; and whether it supports, challenges, or is irrelevant to participants’ attitudes. Data should be gathered on participants’ beliefs, attitudes, and recall of the correction. This would aid in determining how much the issue of CIE is exacerbated by the addition of video, images, or audio; how it varies by the quality of the fake or whether it’s AI-generated; and how these things interact with motivated reasoning.
Such studies could include multiple rounds, some of which would use genuine rather than fake information. This could explore issues akin to false balance or motivated dismissal of genuine information. Such studies could also measure the effects of various “treatments”, such as explanations of AIGM capabilities or how to distinguish such misinformation from genuine information. Ideally, these studies would be complemented by opportunistic evaluations of authentic AIGM’s impacts.
One concern regarding this idea is that I’m unsure of the current capabilities of AI relevant to generating misinformation, and thus of what sorts of simulations or stimuli could be provided to participants. Thus, the study design sketched above is preliminary, to be updated as I learn more about relevant AI capabilities. Another concern is that relevant capabilities may currently be so inferior to how they’ll later be that discoveries regarding how people react to present AIGM would not generalise to their reactions to later, stronger AIGM.


  • Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., ... & Anderson, H. (2018). The malicious use of artificial intelligence: forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228.
  • Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106-131.
  • Koehler, D. J. (2016). Can journalistic “false balance” distort public perception of consensus in expert opinion?. Journal of experimental psychology: Applied, 22(1), 24-38.
  • Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303-330.

I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate.

Good questions!

Unfortunately, I think these specific questions are mostly about stuff that people started talking about a lot more after 2017. (Or at least, I didn't pick up on much writing and discussion about these points.) So it's a bit beyond my area.

But I can offer some speculations and related thoughts, informed in a general sense by the things I did learn:

  • I suspect misinformation at least could be an "effective weapon" against countries or peoples, in the sense of causing them substantial damage.
  • I'd see (unfounded) conspiracy theories and smear campaigns as subtypes of spreading misinformation, rather than as something qualitatively different. But I think today's technology allows for spreading misinformation (of any type) much more easily and rapidly than people could previously.
    • At the same time, today's technology also makes flagging, fact-checking, and otherwise countering misinformation easier.
      • I'd wildly speculate that, overall, the general public are much better informed than they used to be, but that purposeful efforts to spread misinformation will more easily have major effects now than previously.
      • This is primarily based on the research I've seen (see my other comment on this post) that indicates that even warnings about misinfo and (correctly recalled!) corrections of misinfo won't stop that misinfo having an effect.
      • But I don't actually know of research that's looked into this. We could perhaps call this question: How does the "offense-defense" balance of (mis)information spreading scale with better technology, more interconnectedness, etc.? (I take the phrase "offense-defense balance" from this paper, though it's possible my usage here is not in line with what the phrase should mean.)
  • My understanding is that, in general, standard ways of counteracting misinfo (e.g., fact-checking, warnings) tend to be somewhat but not completely effective in countering misinfo. I expect this would be true for accidentally spread misinfo, misinfo spread deliberately by e.g. just a random troll, or misinfo spread deliberately by e.g. a major effort on the part of a rival country.
    • But I'd expect that the latter case would be one where the resources dedicated to spreading the misinfo will more likely overwhelm the resources dedicated towards counteracting it. So the misinfo may end up having more influence for that reason.
    • We could also perhaps wonder about how the "offense-defense" balance of (mis)information spreading scales with more resources. It seems plausible that, after a certain amount of resources dedicated by both sides, the public are just saturated with the misinfo to such an extent that fact-checking doesn't help much anymore. But I don't know of any actual research on that.

Climate Change Is Neglected By EA

What a fantastic post. Thank you! Your frustration resonates strongly with me. I think the dismissive attitude towards climate issues may well be an enormous waste of goodwill towards EA concepts.

How many young/wealthy people stumble upon 80k/GiveWell/etc. with heartfelt enthusiasm for solving climate, The Big Issue Of Our Time, only to be snubbed? How many of them could significantly improve their career/giving plans if they received earnest help with climate-related cause prioritization, instead of ivory-tower lecturing about weirdo x-risks?

Can't we save that stuff for later? "If you like working on climate, you might also be interested in ..."?

This isn't to say that EA's marginal-impact priorities are wrong; I myself work mainly on AI safety right now. But a career in nuclear energy is still more useful than one in recyclable plastic straw R&D (or perhaps it isn't?), and that's worth researching and talking about.

I've spent a good bit of time in the environmental movement and if anyone could use a heavy dose of rationality and numeracy, it's climate activists. I consider a massive accomplishment and step in the right direction. It's sometimes dismissed on this forum for being too narrow-minded, and that's probably fair, but then what's EA's answer, besides a few uncertain charity recommendations? Where's our GiveWell for climate?

Given that people's concern about climate change is only going up, I hope that this important conversation is here to stay. Thanks again for posting!

Growth and the case against randomista development

2 immediate thoughts -

Firstly, in terms of human welfare per unit of economic value, the graduation approach is probably more efficient. The value received by graduates goes to people who were previously in poverty (and people close to them, particularly their children). I expect that growth in general, like that experienced by China in the Deng Xiaoping period, is less efficiently distributed than the graduation approach. But I expect the efficiency factor is less than 10, so Halstead and Hillebrandt's position stands up to that critique.

Secondly, H&H strawman the randomista position. Duflo and Banerjee argue in Poor Economics that the gains from effective charities are large relative to regular charities. But more importantly, randomista development can shift the policies of developing countries on important issues below the macroeconomic level. The growth potential of changing textbook purchasing in India through RD could compete with development economics. If an RD study leads to new textbooks, tens of millions of children would read them per year. That lever is comparable to the lever described by H&H (one study leading to millions of children receiving an education-year equivalent). For more of this perspective, check out the 80k interview with Rachel Glennerster.

The second point defends IPA, J-PAL, and One Acre Fund from H&H's critique, because each produces research outputs that may change developing-country policies (the big lever). For the malaria and GiveDirectly approaches, H&H's critique stands.

Effective Animal Advocacy Resources

Super helpful, I'm about to cite this in the CE curriculum :)

Developing my inner self vs. doing external actions

More generally I think this is a question of what is sometimes called the explore/exploit trade-off: how much time do you spend building capacity compared to using that capacity, in cases where effort on those actions doesn't overlap.

In the real world there tends to be a lot of overlap, but there is always some marginal amount given up at any choice made along the explore/exploit Pareto frontier. So there's no one answer since it largely depends on what you are trying to achieve, other than to say you should look to expand the frontier wherever possible so you can get more of both.
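The explore/exploit trade-off described above is often illustrated with a multi-armed bandit. Here is a toy sketch (the arm payoffs, epsilon value, and function name are all made up for illustration, not anything from the comment):

```python
import random

def epsilon_greedy(true_means, steps=10_000, epsilon=0.1, seed=0):
    """Toy multi-armed bandit: with probability epsilon explore a random
    arm (build capacity / learn), otherwise exploit the arm with the
    best observed average (use capacity)."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))   # explore
        else:
            arm = estimates.index(max(estimates))  # exploit
        reward = true_means[arm] + rng.gauss(0, 1)  # noisy payoff
        counts[arm] += 1
        # Incremental update of the running average for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

# With arms paying 0.0, 0.5 and 1.0 on average, the average reward
# should land near the best arm's mean, minus a small exploration cost.
```

The point of the sketch is the one made above: some fraction of effort spent exploring is "given up" relative to pure exploitation, but without it you never learn which option is best.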

MichaelA's Shortform

Thanks for adding those links, Jamie!

I've now added the first few into my lists above.

Is there a Price for a Covid-19 Vaccine?

If you find such markets, please can someone post them here?

Climate Change Is Neglected By EA

Thanks for the link, that's very interesting! I've seen that you direct donations to the Clean Energy Innovation program of the Information Technology and Innovation Foundation. How confident are you that the funds are actually fully used for that purpose? I understand that their accounting will show that all of your funds go there, but how confident are you that they will not reduce their discretionary spending on this program as a consequence? (I glanced at some of their recent work, and they have some pieces that are fairly confrontational towards China. While this may make sense from a short-term US perspective, it might even be net harmful if one takes a broader view and/or takes into account the possibility of a military escalation between the US and China.) Did you consider the Clean Air Task Force when looking for giving opportunities?

Climate Change Is Neglected By EA

As a side comment, I think it can make perfect sense to work on some area and donate to another one. The questions "what can I do with my money to have maximal impact" and "how do I use my skill set to have maximal impact" are very different, and I think it's totally fine if the answers land on different "cause areas" (whatever that means).

[updated] Global development interventions are generally more effective than Climate change interventions

I'm still a bit worried about this.

It would have been reasonable for them to use the mean global income as the baseline, rather than dollars to the mean US citizen.

If I understand correctly, that would boost things by about a factor of 3 in favour of climate change (mean global income is about $20k, vs. mean US income of about $60k). Though, I suppose that's a fairly small uncertainty compared to the others listed here.
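For what it's worth, the arithmetic behind that factor of 3 (using the comment's approximate figures, not authoritative statistics) is just:

```python
# Approximate figures from the comment above, not official statistics.
mean_us_income = 60_000      # USD per year
mean_global_income = 20_000  # USD per year

# Switching the baseline from the mean US citizen to the mean global
# citizen scales the comparison by roughly this factor:
adjustment = mean_us_income / mean_global_income
print(adjustment)  # 3.0
```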

Updated estimates of the severity of a nuclear war

Thank you for updating your research. I understand that only a handful of scientists are working on the nuclear winter problem. It seems like this is an area where effective altruism and yourself can make a major difference. I do have a few questions about nuclear winter since you mentioned looking into that subject in greater detail for future publications.

  1. If cities burn without creating a firestorm to lift black carbon into the stratosphere then would a nuclear winter persist for years or would it quickly rain out?

  2. Smoke is the result of incomplete combustion. A firestorm generates enormous temperatures due to the blast furnace effect heating fuels to temperatures between 1,400F and 2,000F. Carbon, like in diamonds, burns at 1,292F to create CO2. A smokeless incinerator is able to burn plastics without releasing black smoke and only release CO2 and water vapor. Do the sources you reference already account for the combustion of pure black carbon in a high temperature firestorm?

  3. A few pyrocumulonimbus clouds have been studied. They appear to be mostly water vapor. If a firestorm releases a large amount of water vapor that condenses into ice as it rises then would the black carbon act as condensation nuclei? Would an ice coating change the color and stop the self-heating and rising necessary to reach the stratosphere? If an ice coated black carbon particle does reach the stratosphere then how does that impact the longevity of a nuclear winter?

  4. If the sources are ambiguous then is this something that smaller scale table top experiments or additional observations can factually determine?

Climate Change Is Neglected By EA

Though I'm not entirely sure the comparison is fair. The kind of global poverty interventions that EAs favour (for better or for worse) tend to be near-term, low-risk, with a quick payoff. Climate change interventions are much less certain, higher-variance, and with a long payoff.

MichaelA's Shortform

The only other very directly related resource I can think of is my own presentation on moral circle expansion, and various other short content on Sentience Institute's website, e.g. our FAQ and some of the talks or videos. But I think that the academic psychology literature you refer to is very relevant here. Good starting-point articles are the "moral expansiveness" article you link to above and "Toward a psychology of moral expansiveness."

Of course, depending on definitions, a far wider literature could be relevant, e.g. almost anything related to animal advocacy, robot rights, consideration of future beings, consideration of people on the other side of the planet etc.

There's some wider content on "moral advocacy" or "values spreading," of which work on moral circle expansion is a part:

Arguments for and against moral advocacy - Tobias Baumann, 2017

Values Spreading is Often More Important than Extinction Risk - Brian Tomasik, 2013

Against moral advocacy - Paul Christiano, 2013

Also relevant: "Should Longtermists Mostly Think About Animals?"

Climate Change Is Neglected By EA

A huge amount is already spent on global health and development, and yet the EA community is clearly happy to try and find particularly effective global health and development interventions. There are definitely areas within the hugely broad field of climate change action which are genuinely neglected.

This is true. To steelman your point - at Let's Fund we think funding advocacy for clean energy R&D funding is one such intervention, so they do exist.

Climate Change Is Neglected By EA

Thanks for your comments.

When I looked at the most recent IPCC report, one of the biggest health impacts listed was an increase in malaria. If we could reduce or eradicate malaria, we could also improve lives under climate change.

As I mention in my post, the issue with this is that you are fighting an uphill battle to tackle malaria while climate change continues to expand the territory of malaria and other tropical diseases.

I'd be interested in a direct comparison of some climate donations or careers with some global health donations or careers.

I have previously written a post titled Review of Climate Cost-Effectiveness Analyses which reviews prior attempts to do this kind of comparison. As I mention in my post above, my conclusion was that it is close to impossible to make this kind of comparison due to the very limited evidence available. However, the best that I think can be said is that action on climate change is likely to be at least as effective as action on global health, and it is plausible that action on climate change is actually more effective.

Climate Change Is Neglected By EA

Thank you for your comments - I have some responses:

hundreds of billions of dollars (and likely millions of work-years) are already spent every year on climate change mitigation (research, advocacy, or energy subsidies)

A huge amount is already spent on global health and development, and yet the EA community is clearly happy to try and find particularly effective global health and development interventions. There are definitely areas within the hugely broad field of climate change action which are genuinely neglected.

Given the relatively scarce resources we have, both in time and money, it seems like there are places where we could do more good

This seems pessimistic about the possible size of the EA movement. Maybe if EA didn't downplay climate change so much, it might attract more people to the movement and hence have a greater total amount of resources to distribute.

The Drowning Child and the Expanding Circle

It is an old article, so maybe a lot has already been said in other places, but at least not here in the comments.

I think the analogy with the child in the pond is good for showing people that it doesn't matter whether the child is here in front of us or somewhere across the globe, but it doesn't really help with the situation we are actually in. In reality there are an almost limitless number of children in huge swamps every day on our way to university, and if we tried to save even a few of them every day, we would never get to university or be able to do anything else at all, and we would quickly drown ourselves (at least that's what it feels like for many people). So what do we do? We choose a different route to university: not the one that goes through the swamps, but the one that goes through lovely green meadows. Maybe some days we get up a bit earlier, go through the swamps and save a child, maybe even two or three if we are effective...

But we feel that no matter what we do, we are not doing much to solve the problem; and because we are still involved, we can't really forget what is happening and can't help feeling a sense of desperation, which is obviously not making anyone happy.

But of course in reality it's not just you and all those drowning children; it's you and many, many others who could help as well, and it would be very motivating if we could see each other. So maybe we just need to add a "social" in front of altruism. Of course it is easier said than done, but there are examples. And of course it also needs to be effective.

Developing my inner self vs. doing external actions

"Actually I think inner development is achieved through a combination of personal self reflection as well as interacting with others and external events"

It seems like the real question is how much time you should spend on self-reflection or conversations with mentors. How much time do you think would be ideal? At what point do you think you might receive significantly diminishing returns?

Or to put it another way, if you were optimising for inner development, how would that be practically different from your life now or your life if you optimized for taking action to help others?

Climate Change Is Neglected By EA

This post is incredibly detailed and informative. Thank you for writing it.

I work in a climate-adjacent field and I agree that climate change is an important area for some of the world's altruistically-minded people to focus on. Similar to you, I prefer to focus on problems that will clearly affect people in my lifetime or shortly afterwards.

However, even though I'm concerned about and even work on climate change, I donate to the Against Malaria Foundation. When I looked at the most recent IPCC report, one of the biggest health impacts listed was an increase in malaria. If we could reduce or eradicate malaria, we could also improve lives under climate change.

If at some point you're considering writing a follow up post, I'd be interested in a direct comparison of some climate donations or careers with some global health donations or careers. Given your ethical point of view, I don't think your posts are likely to change the minds of people who are focused on the long term future, but you might change my mind about what I should focus on.

Climate Change Is Neglected By EA

Concerning x-risks, my personal point of disagreement with the community is that I feel more skeptical of the chances to optimize our influence on the long-term future "in the dark" than what seems to be the norm. By "in the dark", I mean in the absence of concrete short-term feedback loops. For instance, when I see the sort of things that MIRI is doing, my instinctive reaction is to want to roll my eyes (I'm not an AI specialist, but I work as a researcher in an academic field that is not too distant). The funny thing is that I can totally see myself from 10 years ago siding with "the optimists", but with time I came to appreciate more the difficulty of making anything really happen. Because of this I feel more sympathetic to causes in which you can measure incremental progress, such as (but not restricted to) climate change.

Oftentimes climate change is dismissed on the basis that there is already a lot of money going into it. But it's not clear to me that this settles the question. For instance, it may well be that these large resources are poorly directed, and some effort to reallocate them could have a tremendously large effect. (E.g. supporting the Clean Air Task Force, as suggested by the Founders Pledge, may be of very high impact, especially in these times of heavy state intervention and of coming elections in the US.) We should apply the "Importance-Neglectedness-Tractability" framework with caution: in the last analysis, what matters is the impact of our best possible action, which may not be small just because "there is already a lot of money going into this". (And, for the record, I would personally rate AI safety technical research as having very low tractability, but I think it's good that some people are working on it.)

Finding an egg cell donor in the EA community

Hi linn!

Which country are you in? I have been putting a lot of thought into becoming an egg donor in the UK over the past few months and am currently in the evaluation process for one egg bank and one matching service.

First I would like to note that while most matching services primarily match on phenotype, there certainly are some where you get a detailed profile from the potential donors. I would be happy to tell you the name of the matching agency in the UK that I have been working with which strongly encourages getting a good personality match.

I would expect finding a donor directly from the EA community to be much harder, but maybe someone will respond to your request (but it would be good to know where you live!). Feel free to PM to chat more.

Empirical data on value drift

That doesn't seem right - since this comment was made, Holly's gone from being EA London strategy director to not really identifying with EA, which is more like the 5% per year.

Empirical data on value drift

I'm not so convinced of this. I think the framing of "this was the founding team" was a little misleading: in 2011 all of us were volunteers and students. The bar for inclusion was low: doing ~5 hours a week of volunteering for EA for ~1 year. Obviously students are typically in a uniquely good position to have time to volunteer. But it's not clear all the people on this list had uniquely large amounts of power. Also, I think situational effects were still strong: I felt it made a huge difference to what I did that I made a few friends who were very altruistic and had good ideas of how to put that into practice. I don't think we can assume that all of us on this list would have displayed similarly strong effective altruist inclinations without having met others in the group.

When can I eat meat again?

Hi Claire, I very much enjoyed reading your post. As you know from your time with us at GFI, we’re thrilled to have created many resources that allow for independent analysis of the alternative protein space — precisely the sort of analysis you’re doing here.

Your trajectory seems reasonable to me if things stay on their current path, where we are reliant on private money and private industry (the plant-based and cellular agriculture companies you spoke with), largely operating through start-ups that grow into larger companies. That said, I have some thoughts about how these dates might be moved up, and GFI is focused on doing exactly that.

Plant-Based Meat

I agree with you that it will be difficult, near-term, to achieve the same “texture, taste, aroma, appearance, and mouthfeel wholly from plants” for whole tissue-structured products, especially doing so using a highly scalable means of production. However, for ground meat products, it seems much more achievable. For instance, Impossible Foods was able to recapitulate the key sensory components of a beef burger — to the point that it can now fool many unsuspecting consumers — with only about 6 years of R&D (a single company starting essentially from scratch, as there was virtually no foundational research in the public domain when they started). As the plant-based sector picks up steam from all of the consumer, investor, entrepreneurial, researcher, and industry interest in recent years, there has been some growth in the amount of R&D funding devoted to plant-based meat products. While most of this has been happening privately within companies, some has gone toward open access research, which raises the floor of foundational knowledge for all subsequent companies, allowing them to achieve similarly impressive results with less time and money than it took the pioneering companies in this space. This is GFI’s focus — figuring out what will best help all companies and then getting it done.

So, in order to truly accelerate the pace of development industry-wide, we need vastly more open-access science (see: GFI’s Competitive Research Grant) to address questions of texture, taste, aroma, etc., across a variety of product formats and technological approaches. It is our belief that with continued public and private investment, plant-based meat will taste better, yes, but also cost less than animal-based meat in the relatively near future. The major explanations for the current premium pricing of plant-based meat products include the need to recoup R&D investment, a lack of competitors with greater scale, and a situation where demand exceeds production capacity, thus providing no incentive for lowering prices. The first two of those factors have been upended within the last 12 months, as demonstrated by the entry of virtually every major meat company launching their own plant-based brands — often after only a few months’ worth of R&D. These products thus far have been far lower quality, from a meat mimicry perspective, than products like Impossible. But the fact that these companies are producing reasonable “generation 1.0” facsimiles in a matter of months on which they can continuously iterate means that there will likely be steeper competition, thus driving down costs in the category while spurring additional efforts by brands to differentiate themselves via improvements in taste and texture. It is also worth mentioning that the now classic, premium pricing paradigm for the likes of Beyond and Impossible may quickly be changing.

Acellular Agriculture

I was excited to learn of Perfect Day’s recent “no questions” letter from the FDA for their fermentation-derived whey and the launch of their first commercial product (beyond their first limited release). This is a big step for the acellular dairy space. As it relates to the space more broadly, you raise some nice points about the difficulties inherent in increasing target protein yields and addressing CapEx needs as the industry scales. However, there are many examples of recombinant proteins that have achieved incredibly low price points through scale-up and host strain refinement. As you note, “Food enzymes such as amylase are currently sold for as little as $3/kg, which is already comparable to dairy and eggs,” but it’s worth keeping in mind that this price point is $3 per kg of pure protein. Milk and egg whites, on the other hand, are both about 90% water! So you’re only paying for <100g of protein for every kg of milk or egg you buy.

It’s also important to note that acellular dairy and egg products don’t necessarily require one-to-one substitution of their total protein content in order to recapitulate the functional properties and sensory aspects of animal-based dairy and egg. It may well be the case that a relatively minor fraction of the final product will be recombinant animal protein, reserving this production method for only the most highly functional, essential proteins, whereas the bulk of the product can be composed of plant-derived ingredients. The seminal example of this strategy is, of course, Impossible and their heme. However, you could imagine a similar approach for egg and dairy products.

In speaking with a variety of companies in the acellular agriculture space over the past four years, GFI is encouraged that there have been significant advances in titers for both egg and dairy-related proteins. And, increasingly, large life science companies (see: Evonik and Ginkgo Bioworks spinout Motif Ingredients, among others) that employ hundreds or thousands of fermentation scientists are investing millions of dollars to further optimize protein yield. In order to more speedily address CapEx and scaling needs, startups such as Perfect Day and Clara Foods are partnering with large food and distribution companies to ensure that their products are cost-competitive and broadly available to consumers in the next few years.

Cultivated Meat

As you’ve nicely outlined, there are a variety of key challenges that the cultivated meat industry must solve before achieving cost-competitiveness at scale. Doing so will require vast sums of money, substantial greenfield infrastructure, and deep technical expertise from cell biologists to process engineers. However, a lot of the true technical challenges are on the bioprocess design side, where we actually see progress on the main cost drivers — such as the media — being much more straightforward. I don’t think it will be the case that, “Significant time and investment are needed to bring cell culture media from a few hundred dollars per litre down to <$1/litre, which is approximately the price needed for cost-competitive cultivated meat.” Our impression has been that it’s more a matter of whether the market incentive yet exists for a company to hit this target, rather than fundamental technical challenges that remain. There is already at least one company targeting food-grade growth factor costs of a few dollars per gram, and this paper from Northwestern University showed that by making their own growth factors in small batches in-house, they were able to achieve 97% cost reduction. The path to cost reduction is actually quite straightforward for the media; it’s simply a matter of the CM market being sufficiently attractive for suppliers to hit that price point by catering to this industry, which can be a hard sell when these same suppliers are currently enjoying incredibly high profit margins with their existing pharma clientele.

The industry will of course require a great many “tools” companies -- from suppliers of cell culture media to engineering firms with expertise in cultivated meat manufacturing builds. This presents massive opportunities for both startups and, increasingly, incumbent industries. The involvement of major life science companies, specialty chemicals companies, engineering and bioprocess design firms, etc., can rapidly accelerate the commercialization of cultivated meat. Even the largest CM startup companies only have about 30-35 staff, and they were probably half that size two years ago. By contrast, a multinational life science company has tens of thousands of staff with hundreds of thousands of combined years of experience in the field. As soon as these companies take an interest in becoming solution providers to the CM industry, the technical challenges that would take decades of effort for a startup to solve can suddenly be addressed in a matter of months or years. Merck KGaA’s outspoken involvement has been a huge catalyst within the field, and it has been quietly spurring substantially more interest from incumbent life science solution providers. Most of them aren’t talking about it publicly yet, but there are far more partnerships happening today between CM startups and established industry leaders than, say, a year ago.

Force Multipliers to Compress Cost-Competitiveness Timelines

Funding remains a key bottleneck in this emerging industry, and public and private-sector investment in alternative proteins remains highly neglected as compared with, say, renewable energy. If our efforts to advocate for more public funding into alternative protein R&D are successful, an infusion of funding on par with even a single year’s worth of renewable energy R&D funding would increase the total public research in the field by several orders of magnitude. This would have an enormous impact on timelines. The Breakthrough Institute recently noted in its Federal Support for Alternative Protein for Economic Recovery and Climate Mitigation memo that “Coming out of the financial crisis of 2008-2009, the [U.S.] Department of Energy guaranteed $15.7 billion in loans” which ultimately “reduced the cost of renewable electricity generation by about 20%.”

Convincing governments to put R&D money into open access alternative protein science is a key focus of GFI in the U.S. and also through our affiliates in the EU, APAC, India, Brazil, and Israel — see for example our recent stimulus request in the US, and similar exercises in the EU. We are seeing some promising initial signs of progress, but there is far more that needs to be done, and we see lobbying for more public R&D as an exciting opportunity for highly-leveraged impact. (As well as seeking to accelerate the pace of technological progress on alternative proteins, we also see other important avenues to decisively influence the trajectory of the sector — for example in terms of regulatory authorization and consumer acceptance — in the years ahead.) As you know, GFI is working hard across all of these areas of influence.

I know you agree with this, but I would be remiss if I didn't mention that creating alternatives that taste the same or better and cost the same as or less than industrial animal products is our best hope, if our goal is to transform the meat production system. So even on the timeframe you lay out, this is a worthy endeavor and path forward. That said, we should do all we can to accelerate the trajectory, which is the fundamental reason that GFI exists.

I enjoyed thinking through your analysis and offering my thoughts. Thank you again for sharing, Claire!

Blake Byrne, Business Innovation Specialist at The Good Food Institute

How Much Leverage Should Altruists Use?

The drawdowns of major ETFs on this (e.g. EMB / JNK) during the corona crash or 2008 were roughly 2/3 to 3/4 of how much stocks (the S&P 500) went down. So I agree the diversification benefit is limited. The question, bracketing the point about the extra cost of leverage, is whether the positive EV of emerging-market bonds / high-yield bonds is more or less than 2/3 to 3/4 of the positive EV of stocks. That's pretty hard to say - there's a lot of uncertainty on both sides. But if that is the case and one can borrow at very good rates (e.g. through futures or box spread financing), then the best portfolio would be a levered-up combination of bonds & stocks rather than just stocks.
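As a sketch of the levered bonds-plus-stocks idea: all the numbers below are illustrative assumptions (an expected return for stocks, the "3/4 of stocks" case for bonds, a cheap financing rate, and made-up weights), not figures from the comment.

```python
# Illustrative assumptions only, not claims from the comment.
stock_ev = 0.05               # assumed expected annual return, stocks
bond_ev = 0.75 * stock_ev     # the "3/4 of stocks" case, i.e. 3.75%
borrow_rate = 0.01            # assumed financing cost (futures / box spreads)

leverage = 1.5
w_stocks, w_bonds = 0.6, 0.4  # pre-leverage portfolio weights

# Expected return of the levered mix, net of borrowing costs:
levered_ev = (leverage * (w_stocks * stock_ev + w_bonds * bond_ev)
              - (leverage - 1) * borrow_rate)
print(round(levered_ev, 4))  # 0.0625, vs. 0.05 for unlevered 100% stocks
```

Under these assumptions the levered mix beats 100% stocks in expectation, which is the intuition behind preferring a levered diversified portfolio when borrowing is cheap; the real decision of course also hinges on the drawdown correlations discussed above.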

FWIW, I'm in a similar position regarding my personal portfolio; I've so far not invested in these asset classes but am actively considering it.

Climate Change Is Neglected By EA

Thoughtful post!

I don't agree with your analysis in (3) - neglectedness to me is asking not 'is enough being done' but 'is this the thing that can generate the most benefit on the margin'.

For climate change it seems most likely not; hundreds of billions of dollars (and likely millions of work-years) are already spent every year on climate change mitigation (research, advocacy, or energy subsidies). The whole EA movement might move, what, a few hundred million dollars per year? Given the relatively scarce resources we have, both in time and money, it seems like there are places where we could do more good (the whole of the AI safety field has only a couple hundred people IIRC).

Helping wild animals through vaccination: could this happen for coronaviruses like SARS-CoV-2?

Good post, and this also seems to be a very opportune time to be promoting wild animal vaccination. A few thoughts:

To start with, programs of this kind would only be implemented after a vaccine is developed and distributed among human beings.

In relation to the current pandemic, the media often mentions that there are 7 coronaviruses that can infect humans and we don't have an effective vaccine for any of them. However, I was recently surprised to learn that there are several commercially available veterinary vaccines against coronaviruses - this raised my expectation that a human coronavirus vaccine could be successfully developed, and seems promising for animal vaccination as well.

I think it's worth thinking more about what level of safety testing goes into developing animal vaccines. The Hendra virus vaccine for horses might be an interesting case study for this. Hendra virus was relatively recently discovered in Australia, and can be transmitted from flying foxes (a megabat species), via horses, to humans, where it has a 60%+ case fatality rate. Fruit bat culling was very widely called for after a series of outbreaks in 2011, but the government decided to fund development of a horse vaccine instead (by unfortunate coincidence, a heat-wave later killed a third of the flying fox population a few years on). A vaccine was developed within a year and widely administered soon after. However, some owners (particularly those of racing horses) reported severe side-effects (including death) and eventually started a class action against the vaccine manufacturer. I don't know if the anecdotal reports of side-effects stood up to further scrutiny (there could have been some motivated reasoning going on, similar to that used by human anti-vaxxers), but it seems plausible that veterinary vaccine development accepts, or does not even attempt to consider, much worse side-effects than would be approved in a vaccine developed for humans. Given animals' inability to self-report, some classes of minor side-effects may only be noticed by owners of companion animals who are very familiar with their behaviour. While I don't think animal side-effects would be a consideration in developing vaccines for pandemic control or economic purposes, it seems more relevant in the context of vaccinating animals to increase their own welfare.

This may be the case especially for bats, because they have one of the highest disease burdens among wild mammals. Among other conditions, they are harmed by a number of different coronaviruses-caused diseases. In fact, they harbor more than half of all known coronaviruses.

Why do bats have so many diseases (lots of which humans seem to catch)? This comment (which I found in an SSC article) frames the question in another way:

There are over 1,250 bat species in existence. This is about one fifth of all mammal species. Just to get a sense of this, let me ask a modified version of the question in the title:
"Why do human beings keep getting viruses from cows, sheep, horses, pigs, deer, bears, dogs, seals, cats, foxes, weasels, chimpanzees, monkeys, hares, and rabbits?"

This re-framing doesn't really change the problem, but it suggests that just viewing "bats" as a single animal group comparable to "cows" or "deer" is concealing the scope of species diversity involved.

I heard Jonathan Epstein talk at a panel discussion on biosecurity last year. He was in favour of disease monitoring and management in wild animal populations, and also seemed sympathetic to the idea of doing this from both a human health and animal welfare standpoints. He might be interested in discussing this further, and is in a position where he could advocate for or implement these ideas.

Interview with Aubrey de Grey, chief science officer of the SENS Research Foundation

Thanks for asking the questions I suggested. I found Aubrey's response to this question the most informative:

Has any effort been made to see if the effects of multiple treatments are additive, in terms of improved lifespan, in a pre-clinical study?
No, and indeed we would not expect them to be additive, because we would not expect any one of them to make a significant difference to lifespan. That’s because until we are fixing them all, the ones we are not yet fixing would be predicted to kill the organism more-or-less on schedule. Only more-or-less, because there is definitely cross-talk between different damage types, but still we would not expect that lifespan would be a good assay of efficacy until we’re fixing pretty much everything.

I don't have a background in anti-aging biology, and my intuition was that the treatments would have more of an additive effect. However, I agree with his view that there won't be much effect on total life-span until everything is fixed.

My feeling is that this may make the expected value of life-extension research lower (by decreasing the probability of success), given that all hallmarks need to be effectively treated in parallel to realize any benefit. If one proves much harder to treat in humans, or if the treatments don't work together, then that reduces the benefit gained from treating the other hallmarks, at least as far as LEV is concerned. This makes SRF's approach of focusing on the most difficult problems seem quite reasonable, and probably the most effective way to make a marginal contribution to life-extension research at the moment. Once all hallmarks are treatable pre-clinically in vivo, research into treatment interactions may become the most effective way to contribute (as noted, this will probably also be hard to get mainstream funding for).

External evaluation of GiveWell's research

Whoever strongly disliked this, feel free to say why.

I Want To Do Good - an EA puppet mini-musical!

I produce some rap and would enjoy collaborating if you'd ever like to.

Regardless, thanks so much for your work!

External evaluation of GiveWell's research

It should be possible to give feedback on specific points.

I am confident that in the future all articles will be able to have comments on any part of the text, like comments in a Google Doc. This means people can edit or comment on specific points, which is particularly important for Fermi models: people can comment on each part of an evaluation to criticise a specific step. One wrong leap of logic makes the whole argument void, so GiveWell's models need this level of scrutiny.

External evaluation of GiveWell's research

All the most important models should have crowdsourced answers also.

I *think* GiveWell uses models to make decisions. It would be possible to crowdsource numbers for each step, and I predict you would get better answers if you did this. The wisdom of crowds is a real effect. It breaks down when the crowd doesn't understand the model, but if you get people to guess individual parts of the model, it works again.

Linked to the Stack Overflow point I made, I think there could easily be a site for crowdsourcing answers to GiveWell's questions. I think there is a 10% chance that with 20k you could build a site that comes up with better answers, if EAs enjoyed making guesses for fun. Wikipedia is the best encyclopaedia in the world because it leverages the free time and energy of *loads* of nerds; GiveWell could do the same.
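As a toy illustration of the parameter-level crowdsourcing idea above (the model, parameter names, and numbers here are all hypothetical, not GiveWell's actual figures):

```python
import statistics

# Hypothetical two-parameter cost-effectiveness model: instead of asking the
# crowd for the final answer, each forecaster guesses the individual inputs.
def cost_per_life_saved(cost_per_net, nets_per_life):
    # Toy model: total cost to save one life = cost of a net
    # times the number of nets needed per life saved.
    return cost_per_net * nets_per_life

# Simulated crowd guesses for each parameter (hypothetical values).
cost_per_net_guesses = [4.0, 5.0, 6.0, 4.5, 5.5]
nets_per_life_guesses = [800, 1200, 1000, 900, 1100]

# Aggregate each parameter separately (here with the median), then
# combine the aggregates through the model.
crowd_estimate = cost_per_life_saved(
    statistics.median(cost_per_net_guesses),
    statistics.median(nets_per_life_guesses),
)
print(crowd_estimate)  # 5.0 * 1000 = 5000.0
```

The point is that forecasters only need to understand one input at a time, which is where crowd aggregation tends to work; the model itself does the combining.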

External evaluation of GiveWell's research

There should be more context on the important decision making tools

I could be wrong, but I think most decisions are made using Google Sheets. I've read a few of these, and I think there could be more context around which numbers are the most important.

External evaluation of GiveWell's research

I think it should be easier to give feedback on GiveWell's work. I would recommend not requiring a login, and allowing people to give suggestions on the text of pages.

External evaluation of GiveWell's research

I think StackOverflow is the gold standard for criticism. It's a question-answering website that allows answers to be ranked, and both questions and answers to be edited. Not only do the best answers get upvoted, but answers and questions get clearer and higher quality over time. I suggest this should be the aim for GiveWell's analyses.

See examples of all such features on this question:


  • The question was answered by one individual but then edited by a much more experienced user. I think GiveWell could easily allow suggestions to their articles from the community, which could be upvoted by other readers.
  • If Givewell doesn't want to test this, maybe try it on this forum first - allow people to suggest edits to posts.
External evaluation of GiveWell's research

I'm gonna be a bit of a maverick and split my comment into separate ideas so you can upvote or downvote them separately. I think this is a better way to do comments, but looks a bit spammy.

EA Survey 2019 Series: How EAs Get Involved in EA

Thanks. That makes sense. I try not to change the historic categories too much though, since it messes up comparisons across years.

EA Survey 2019 Series: How EAs Get Involved in EA

(Btw, I think you can remove REG/FRI/EAF/Swiss from future surveys because we've deemphasized outreach and have been focusing on research. I also think the numbers substantially overlap with "local groups".)

Space governance is important, tractable and neglected

(Just FYI, your comment doesn’t seem to have a link to the podcast mentioned.)

Influencing pivotal Individuals

The Center for Security and Emerging Technology (funded by Open Philanthropy) seems like it’s pretty clearly focused on influencing people in power.

How Much Leverage Should Altruists Use?

(I'll preface by saying that I'm new to finance, so I could be very wrong.)

I think it's plausible that an isoelastic utility function in wealth is a poor fit, even for those who are risk-neutral in their altruism (and even completely impartial). I wouldn't be surprised if our actual utility functions

1. have decreasing marginal returns at low wealth (and maybe even increasing marginal returns at some levels of low wealth),

2. have roughly constant marginal returns for a while (the same rightmost derivative as in 1), at the rate of the best current donation opportunities, and

3. have decreasing marginal returns again at very high levels of wealth (maybe billions or hundreds of millions, within a few orders of magnitude of Good Ventures' funds).

1 is because of personal risk aversion and/or better returns on self-investment than donations compared to 2, where the returns come mainly from donations. 3 is because of eventually marginally decreasing altruistic returns on donations.

I made a graph to illustrate. I think region 2 is probably much larger relative to the other regions, and 1 is probably much smaller than 3. I also think this is missing some temporal effects for 1: you need money every year to survive, not just in the long run, and donation opportunities may be better or worse in the future.
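A minimal sketch of the three-region marginal utility function described above (the thresholds and exponents are purely illustrative assumptions, not estimates):

```python
def marginal_utility(wealth, low=50_000, high=100_000_000):
    # Hypothetical thresholds: below `low`, region 1 (personal needs
    # dominate, marginal returns decreasing but above the donation rate);
    # between `low` and `high`, region 2 (roughly constant returns from
    # the best donation opportunities, normalised to 1); above `high`,
    # region 3 (altruistic returns diminish again).
    if wealth < low:
        return (low / wealth) ** 0.5   # decreasing, exceeds region 2's rate
    elif wealth <= high:
        return 1.0                      # constant donation-opportunity rate
    else:
        return (high / wealth) ** 0.5   # decreasing again
```

Note the function is continuous at both thresholds, matching the "same rightmost derivative" idea: region 1's marginal utility falls to exactly the region-2 rate at `low`, and region 3 starts from that rate at `high`.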

For this reason and psychological reasons, it might be better to compartmentalize your wealth and investments into:

A. Short-term expenses, including costs of living, fun stuff, and maybe some unforeseen expenses (school, medical expenses, unique donation opportunities during times where your investments are doing poorly, to avoid pulling from them). This should be pretty low risk. This is what your chequing account, high-interest savings account, CDs (certificates of deposit, GICs in Canada), and maybe bonds could be for.

B. Retirement.

C. Altruistic investments and donations. Here you can take on considerable risk and use high amounts of leverage, maybe even higher than what you've recommended. I would recommend against any risks that could leave you owing a lot of money, even in the short-term, enough to cause you to need to withdraw from A or B. Risk neutral altruists can maximize expected long-run returns here, although discounting long-run returns that go into scenario 3. Because of A and B, we're past scenario 1, so either in 2 or 3. Your mathematical arguments could approximately apply, with caveats, if most of the expected gains come from staying within 2.

If you plan to buy a house, that might deserve its own category. Your time frame is usually longer than in A but shorter than in B.

Comparisons of Capacity for Welfare and Moral Status Across Species

Hey Michael,

Thanks again. Regarding (2), I may be conflating a conversation I had with Luke about the subject back in February with the actual contents of his old LessWrong post on the topic. You're right that it's not clear that he's focusing on capacity for welfare in that post: he moves pretty quickly between moral status, capacity for welfare, and something like average realized welfare of the

"typical" conscious experience of "typical" members of different species when undergoing various "canonical" positive and negative experiences

Frankly, it's a bit confusing. (To be fair to Luke, he wrote that post before Kagan's book came out.) One hope of mine is that by collectively working on this topic more, we can establish a common conceptual framework within the community to better clarify points of agreement and disagreement.

Comparisons of Capacity for Welfare and Moral Status Across Species

1. To clarify, I don't necessarily see status-adjusted welfare as a bad term. I'd actually say it seems pretty good, as it seems to state what it's about fairly explicitly and intuitively.

I was just responding to the claim that it's better than "moral weight" in that it sounds more agnostic between unitarian and hierarchical approaches. I see it as perhaps scoring worse than "moral weight" on that particular criterion, or about the same.

(But I also still think it means a somewhat different thing to "moral weight" anyway, as best I can tell.)

2. I'm not confident about whether Muehlhauser meant moral status or capacity for welfare, and would guess your interpretation is more accurate than my half-remembered interpretation. Though looking again at his post on the matter, I see this sentence:

This depends (among other things) on how much “moral weight” we give to the well-being of different kinds of moral patients.

This sounds to me most intuitively like it's about adjusting a given unit of wellbeing/welfare by some factor that "we're giving" them, which therefore sounds like moral status. But that's just my reading of one sentence.

In any case, I think I poorly expressed what I actually meant, which was related to my third point: It seems like "status-adjusted welfare" is the product of moral status and welfare, whereas "moral weight" is either (a) some factor by which we adjust the welfare of a being, or (b) some factor that captures how intense the welfare levels of the being will tend to be (given particular experiences/events), or some mix of (a) and (b). So "moral weight" doesn't seem to include the being's actual welfare, and thus doesn't seem to be a synonym for "status-adjusted welfare".

(Incidentally, having to try to describe in the above paragraph what "moral weight" seems to mean has increased my inclination to mostly ditch that term and to stick with the "moral status vs capacity for welfare" distinction, as that does seem conceptually clearer.)

3. That makes sense to me.

Comparisons of Capacity for Welfare and Moral Status Across Species
I don't know of any serious contemporary philosopher who has denied that the conjunction of sentience and agency is sufficient for moral standing, though there are philosophers who deny that agency is sufficient and a small number who deny that sentience is sufficient.

Interesting, thanks!

But I don't know anybody who holds that view. Do you?

I don't (but I know very little about the area as a whole, so I wouldn't update on that in particular).

I can see why, if practically no one holds that view, "even most theologians will agree that all sentient agents have moral standing". I guess I asked my question because I interpreted the passage as saying that that followed logically from the prior statements alone, whereas it sounds like instead it follows given the prior statements plus a background empirical fact about theologians' views.

EA Survey 2019 Series: How EAs Get Involved in EA

Yeah, that seems fair. I do think that "LessWrong meetups" are a category that is more similar to the whole "Local Group" category, and the primary thing that is surprising to me is that there were so many people who chose LessWrong instead of Local Group and then decided to annotate that choice with a reference to their local group.
