AppliedDivinityStudies

Comments

What are some success stories of grantmakers beating the wider EA community?

I think the framing of good grantmaking as "spotting great opportunities early" is precisely how EA gets beat.

Fast Grants seems to have been hugely impactful for a fairly small amount of money. The trick is that the grantees weren't even asking: there was no institution to give to, and no cost-effectiveness estimate to run. It's a somewhat more entrepreneurial approach to grantmaking. It's not that EA thought it wasn't very promising; it's that EA didn't even see the opportunity.

I think it's worth noting that a ton of Open Phil's portfolio would score really poorly along conventional EA metrics; they argue as much in this piece. So of course the community collectively gets credit because Open Phil identifies as EA, but their "hits-based giving" approach diverges substantially from more conventional EA-style quantitative QALY/cost-effectiveness analysis, and it's worth asking what that should mean for the movement more generally.

Liberty in North Korea, quick cost-effectiveness estimate

Saying "I'd rather die than live like that" is distinct from "this is worse than non-existence." Can you clarify?

> Even the implication that moving a NK person to SK is better than saving 10 SK lives is sort of implausible - for both NKs and SKs alike.

I don't know what they would find implausible. To me it seems plausible.

Liberty in North Korea, quick cost-effectiveness estimate

> I believe NK people would likely disagree with this conclusion, even if they were not being coerced to do so.

I don't have good intuitions on this; it doesn't seem absurd to me.

Unrelated to NK, many people suffer immensely from terminal illnesses, but we still deny them the right to assisted suicide. For very good reasons, we have extremely strong biases against actively killing people, even when their lives are clearly net negative.

So yes, I think it's plausible that many humans living in extreme poverty or under totalitarian regimes are experiencing extremely negative net utility, and under some ethical systems, that implies that it would be a net good to let them die.

That doesn't mean we should promote policies that kill North Korean people or stop giving humanitarian food and medical aid.

Why hasn’t EA found agreement on patient and urgent longtermism yet?

EA has consensus on shockingly few big questions. I would argue that not coming to widespread agreement is the norm for this community.

Think about:

  • Neartermism vs. longtermism
  • GiveWell-style CEAs vs. Open Phil-style explicitly non-transparent hits-based giving
  • Total utilitarianism vs. suffering-focused ethics
  • Priors on the hinge-of-history hypothesis
  • Moral Realism

These are all incredibly important and central to a lot of EA work, but as far as I've seen, there isn't strong consensus.

I would describe the working solution as some combination of:

  • Pursuing different avenues in parallel
  • Having different institutions act in accordance with different worldviews
  • Focusing on work that's robust to worldview diversification

Anyway, that's all to say: you're right, and this is an important question to make progress on, but it's not really surprising that there isn't consensus.

A Red-Team Against the Impact of Small Donations

I think I see the confusion.

No, I meant an intervention that could produce 10x ROI on $1M looked better than an intervention that could produce 5x ROI on $1B, and now the opposite is true (or should be).

A Red-Team Against the Impact of Small Donations

Uhh, I'm not sure if I'm misunderstanding or you are. My original point in the post was supposed to be that the current scenario is indeed better.

How should Effective Altruists think about Leftist Ethics?

I sort of expect the young college EAs to be more leftist, and expect them to be more prominent in the next few years. Though that could be wrong, maybe college EAs are heavily selected for not being already committed to leftist causes.

I don't think I'm the best person to ask haha. I basically expect EAs to be mostly Grey Tribe, pretty democratic, but with some libertarian influences, and generally just not that interested in politics. There's probably better data on this somewhere, or at least the EA-related SlateStarCodex reader survey.

How should Effective Altruists think about Leftist Ethics?

Okay, as I understand the discussion so far:

  • The RP authors said they were concerned about PR risk from a leftist critique
  • I wrote this post, explaining how I think those concerns could more productively be addressed
  • You asked why I'm focusing on Leftist Ethics in particular
  • I replied that it's because I haven't seen authors cite concerns about PR risk stemming from other kinds of critique

That's all my comment was meant to illustrate, I think I pretty much agree with your initial comment.

How should Effective Altruists think about Leftist Ethics?

As I understand your comment, you think the structure of the report is something like:

  1. Here's our main model
  2. Here are its implications
  3. By the way, here's something else to note that isn't included in the formal analysis

That's not how I interpret the report's framing. I read it more as:

  1. Here's our main model focused on direct benefits
  2. There are other direct benefits, such as Charter Cities as Laboratories of Governance
  3. Those indirect benefits might outweigh the direct ones, and might make Charter Cities attractive from a hits-based perspective
  4. One concern with the conception of Charter Cities as Laboratories of Governance is that it adds to the neocolonialist critique
  5. "the laboratories of governance model may add to the neocolonialist critique of charter cities. Charter cities are not only risky, they are also controversial... Whether or not this criticism is justified, it would probably resonate with many socially-minded individuals, thereby reducing the appeal of charter cities."

So that's a bit different. It's not "here's a random side note". It's "Although we focus on modeling X, Charter Cities advocates might say the real value comes from Y, but we're not focusing on Y, in part, because of this neocolonialist critique."

A Red-Team Against the Impact of Small Donations

Yeah, that's a good question. It's underspecified, and depends on what your baseline is.

We might say "for $1 donated, how much can we increase consumption?" Or "for $1 donated, how much utility do we create?" The point isn't really that it's 10x or 5x, just that one opportunity is roughly 2x better than the other.

https://www.openphilanthropy.org/blog/givewells-top-charities-are-increasingly-hard-beat

> So if we are giving to, e.g., encourage policies that increase incomes for average Americans, we need to increase them by $100 for every $1 we spend to get as much benefit as just giving that $1 directly to GiveDirectly recipients.

That's not exactly "Return on Investment", but it's a convenient shorthand.
