Jason

16208 karma · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I had occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . . 

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Posts
2


Comments
1874


I think a related discussion could be had around funders making the decision to quit on projects too early, which is likely a much more prevalent issue.

The lack of incentives to write posts criticizing one's former funders for pulling the plug early may be a challenge, though. After all, one may be looking to them for the next project. And writing such a post may not generate the positive community feeling that writing an auto-shutdown postmortem does.

The eschatology question is interesting. I think it can still make sense to work on what amounts practically to x-risk prevention even when expecting humans to be around at the Second Coming of Christ (or some eschatological event in other religions).

Also, one can think that x-risk work is generally effective in mitigating near-x-risk (e.g., a pandemic that "only" kills 99% of us). Particularly given the existence of the Genesis flood narrative, I expect most Christians would accept the possibility of a mass catastrophe that killed billions but less than everyone.

With that being said, if and when having a positive impact on the world and satisfying community members do come apart, we want to keep our focus on the broader mission. 

I understand the primary concern posed in this comment to be more about balancing the views of donors, staff, and the community about having a positive impact on the world, rather than about trading off between altruism and community self-interest. To my ears, some phrases in the following discussion make it sound like the community's concerns are primarily self-interested: "trying to optimize for community satisfaction," "just plain helping the community," "make our events less of a pleasant experience (e.g. cutting back on meals and snack variety)," and "don’t optimize for making the community happy" (on EAG admissions). 

I don't doubt that y'all get a fair number of seemingly self-interested complaints from dissatisfied community members, of course! But I think modeling the community's concerns here as self-interested would be closer to a strawman than a steelman approach.

On point 4:

I'm pretty sure we could come up with various individuals and groups of people that some users of this forum would prefer not to exist. There's no clear and unbiased way to decide which of those individuals and groups could be the target of "philosophical questions" about the desirability of murdering them and which could not. Unless we're going to allow the question as applied to any individual or group (which I think is untenable for numerous reasons), the line has to be drawn somewhere. "Would it be ethical to get rid of this meddlesome priest?" should be suspendable or worse (except that the meddlesome priest in question has been dead for over eight hundred years).

And I think drawing the line at "we're not going to allow hypotheticals about murdering discernible people"[1] is better (and poses less risk of viewpoint suppression) than expecting the mods to somehow devise a rule for when that content will be allowed and consistently apply it. I think the effect of a bright-line no-murder-talk rule on the expression of ideas is modest because (1) posters can get much of the same result by posing non-violent scenarios (e.g., leaving someone to drown in a pond is neither an act of violence nor generally illegal in the United States) and (2) there are other places to have discussions if the murder content is actually important to the philosophical point.[2]

  1. ^

    By "discernable people," I mean those with some sort of salient real-world characteristic as opposed to being 99-100% generic abstractions (especially if in a clearly unrealistic scenario, like the people in the trolley problem). 

  2. ^

    I am not expressing an opinion about whether there are philosophical points for which murder content actually is important. 

I suggest editing this comment to note the partial reversal on appeal and/or retracting the comment, to avoid the risk of people seeing only it and reading it as vaguely precedential.

And how counterfactual were the (quality) referrals -- are the resulting advising sessions unlikely to have ever happened otherwise, or would it be more appropriate to categorize most referrals as causing certain people to receive advising services sooner than they otherwise would have?

Fair -- I took Arturo's take to be that there was an undersupply of praise of people high in effectiveness relative to praise of people high in altruism, such that we should do more of the former. To me, the amount of "airtime" Washington et al. get is evidence against that take.

I'd probably give somewhat more credence to this if Washington hadn't owned 124 slaves at the time of his death. People in Virginia were emancipating their slaves; Washington could have during his lifetime but did not. That suggests his actions were not merely constrained by what was possible for a politician to accomplish at the time.

Lincoln was pretty willing to enshrine slavery into the Constitution forever to save the Union (https://en.m.wikipedia.org/wiki/Corwin_Amendment), so I find his anti-slavery reputation to be too strong.

Thanks for writing this!

A few general points:

  • I think some of these factors relate to effective altruism as an idea ("EA-I"), while others relate to effective altruism as a particular community ("EA-C") that practices a form of EA-I. 
  • I would place somewhat more emphasis on members of different Christian groups being more or less comfortable with the particular cultural practices of EA-C. For example, those from evangelical backgrounds are probably less likely to feel comfortable in a subculture that is often enthusiastic about recreational use of controlled psychoactive drugs.
    • Of course, neither EA-I nor EA-C can make everyone happy. For EA-I, this is more of an epistemic issue; we don't want to water down what EA-I is. For EA-C, this is more of an unfortunate, practical issue (even if it is unavoidable). Aspects of EA-C may be historical accidents, or may be calculated to maximize the amount of aggregate good that the community can do (subject to the constraint that it is a single community). But there is no possible construction of EA-C that will maximize the good that each and every person who is open to EA-I will accomplish. Ideally, there would be multiple full communities[1] practicing EA-I, and each person open to EA-I could pick the full community that would be most conducive to them doing the most good.
  • Different Christian communities place different emphases on being (shall we say) publicly Christian. For some, it's OK for faith to be more of a private thing. Others feel an obligation to be vocal about their faith. And there are of course many gradations and variations in between. Those in the more-vocal camp may be more concerned about not being accepted, or about being discriminated against.

On cause areas:

One such point of doctrine is eschatology. Those who think the Second Coming is sure or very likely to happen within decades would reject the concept of a prolonged future for humanity, and hence longtermism. This kind of eschatological expectation is common among the more conservative Protestants. 

In the current meta, where longtermism is practically close enough to synonymous with x-risk reduction, any confident belief in the Second Coming may be sufficient to foreclose significant engagement with longtermism for many Christians. The Second Coming doesn't really work if there are no people left because the AI killed them all! I suspect similar rationales would be present in many other religions, either because they have their own eschatologies or because human extinction would seem in tension with a foundational belief in a deity who is at least mostly benevolent, at least nearly omnipotent, and interested in human welfare.

Even beyond that, other subfields in longtermism don't mesh as well with common Christian theological concepts. Transhumanism, digital minds, and similar concepts are likely to be non-starters for many Christians. In most Christian theologies, human beings are uniquely made[2] in the image of God, and their creations would not share in that nature at all. Furthermore, EA thinking about the future may be seen as techno-utopian, which is in tension with Christian theologies that identify sin (~ a religious version of evil or wrongdoing) as the fundamental cause of problems in the world. So EA thinking can come off as seeking mostly technological solutions to a spiritual problem.

Depending on their beliefs about soteriology, a Christian with longtermist tendencies might also focus on evangelism, theorizing that eternity is forever and that what happens in the life to come is far more important than what happens on earth.

Some Christians might perceive working on animal welfare as misdirected and reject EA because they see animal welfare being a prominent cause area in the movement.

My guess is that EA reasoning about cause prio, rather than beliefs about the need to reduce animal suffering per se, would be the major stumbling block here. After all, companion-animal charities have long been popular in the US, and I don't have any reason to think that US Christians were shunning them. But (e.g.) trying to quantify the moral weight of a chicken's welfare in comparison to that of a human is probably more likely to upset someone coming from a distinctly Christian point of view than (say) the median adult in a developed country. Suggesting that the resulting number is in the single digits, or that the meat-eater problem is relevant to deciding whether to donate to global health charities, is even more likely to be perceived as off-putting.[3] Cf. the discussion of humans as being made in the image of God above.

Characteristic of both of these stances is that they lead to a rejection of only a particular cause area within EA. This would leave room to engage with the other parts. 

Yes, although we don't know what EA content the hypothetical person would find first (or early). If the first content they happen to see is about (e.g.) the meat-eater problem, they may not come back for a second helping even though they would have resonated with GH&D work. With GH&D declining in the meta, this may be a bigger issue than it would have been years ago.

Also, I think many people -- Christian or not -- would be less likely to engage with a community if a significant portion of community effort and energy was devoted to something they found silly, inappropriate, or inconsistent with their deeply-held values.[4]

  1. ^

    "Full community" is not the greatest term. I mean something significantly more than an affinity group, but not necessarily something insular from other groups practicing EA-I. A full community can stand on its own two feet, as it were. To use a Christian metaphor, a church would ordinarily be a full community. One can receive the sacraments/ordinances, learn and study, obtain spiritual support and guidance, serve those who are less privileged, and get what we might consider the other key functions of a communal Christian life through a church. I'm less clear in my own mind on the key functions of a community practicing EA-I.

  2. ^

    There are, of course, many different views about what "made" means here!

  3. ^

    I do not mean to express an opinion on the merits of these topics, or suggest that discussion of them should be avoided.

  4. ^

    Again, I am not expressing endorsement of a norm that we shouldn't talk about or do certain things because some group of people would object to that.

We praise people to hold them up as examples to emulate (even though all people are imperfect, and thus all emulation should be partial). Holding people who committed large-scale crimes up for emulation has a lot of downsides. Moreover, the effectiveness of effective historical figures is often context-dependent and difficult to apply to greatly different circumstances. Finally, I'm not convinced that praise of effective leaders like Washington, Madison, and Churchill is neglected, at least in American public education and discourse (but this may have changed since my childhood).
