Matt_Lerner

Currently Research Director at Founders Pledge, but posts and comments represent my own opinions, not FP’s, unless otherwise noted.

Comments

EA cause areas are just areas where great interventions should be easier to find

While I'm skeptical that the particular causes you've mentioned could truly end up being cost-effective paths to reducing suffering, I'm sympathetic to the idea that improving the effectiveness of activity in putatively non-effective causes is potentially itself effective. What interventions do you have in mind to improve effectiveness within these domains?

EA cause areas are just areas where great interventions should be easier to find

Now that you've given examples, can you provide an account of how increased funding in these areas would lead to improved well-being / preserved lives / DALYs averted, etc., in expectation? Do you expect that targeted funds could be cost-competitive with GiveWell top charities or the like?

Why EAs researching mainstream topics can be useful

To clarify: I'm not sure this is likely to be the best use of any individual EA's time, but it can still be a good use of community resources, if intelligently directed.

I agree that "constitutionally" is perhaps too strong. What I mean is that EAs generally tend to have an interest in, and awareness of, these broadly meta-scientific topics.

In general, I would argue for greater attention to the possibility that mainstream causes are worthwhile, and for more meta-level arguments to this effect (like your post).

Intervention Report: Charter Cities

Thanks for this! It seems like much of the work that went into your CEA could be repurposed for explorations of other potentially growth- or governance-enhancing interventions. Since finding such an intervention would be quite high-value, and since the parameters in your CEA are quite uncertain, it seems like the value of information with respect to clarifying these parameters (and therefore the final ROI distribution) is probably very high.

Do you have a sense of what kind of research or data would help you narrow the uncertainty in the parameter inputs of your cost-effectiveness model?

Why EAs researching mainstream topics can be useful

On the face of it, researching and writing about "mainstream" topics seems net-positive for EAs for the reasons you describe, although not obviously an optimal use of time relative to competing opportunities. I've tried to work out in broad strokes how effective it might be to move money within putatively less-effective causes, and it seems to me that (for instance) the right research, done by the right person or group, really could make a meaningful difference in one of these areas.

Items 2.2 and 2.3 (in your summary) are, to me, simultaneously the riskiest and most compelling propositions. Could EAs really do a better job of finding the "right answers" than existing work has? I take "neglectedness" in the ITN framework to be a heuristic that serves mainly to forestall hubris in this regard: we should think twice before assuming we know better than the experts, as we're quite likely to be wrong.

But I think there is still reason to suspect that there is value to be captured in mainstream causes. Here are a few reasons I think this might be the case.

  • "Outcome orientation" and a cost-benefit mindset are surprisingly rare, even in fields that are nominally outcomes-focused. This horse has already been beaten to death, but the mistakes, groupthink, and general confusion in many corners of epidemiology and public health during the pandemic suggests that consequences are less salient in these fields than I would have expected beforehand. Alex Tabarrok, a non-epidemiologist, seems to have gotten most things right well before the relevant domain experts simply by thinking in consequentialist terms. Zeynep Tufekci, Nate Silver, and Emily Oster are in similar positions.
     
  • Fields have their own idiosyncratic concerns and debates that eat up a lot of time and energy, IMO to the detriment of overall effectiveness. My (limited) experience in education research and tech in the developed world led me to conclude that the goals of the field are unclear and ill-defined (Are we maximizing graduation rates? College matriculation? Test scores? Are we maximizing anything at all?). Significant amounts of energy are taken up by debates and concerns about data privacy, teacher well-being and satisfaction, and other issues that are extremely important but which, ultimately, are not directly related to the (broadly defined) goals of the field. The drivers behind philanthropic funding seem, to me, to be highly undertheorized.

    I think philanthropic money in the education sector should probably go to the developing world, but it's not obvious to me that developed-world experts are squeezing out all the potential value that they could. Whether the scale of that potential value is large enough to justify improving the sector, or whether such improvements are tractable, are different questions.
     
  • There are systematic biases within disciplines, even when those disciplines are full of smart, even outcomes-focused people. Though politics is not really a cause area, David Shor has persuasively argued that Democratic political operatives are ideological at the cost of being effective. My sense is that this is also true to some degree in education.
     
  • There are fields where the research quality is just really low. The historical punching bag here is obviously social psychology, which has been attempting to improve for a decade now. I think the experience of the replication crisis—which is ongoing—should cause us to update away from thinking that, just because lots of people are working on a topic, there is no marginal value in additional research. The marginal value can be high, especially for EAs, who are constitutionally hyper-aware of the pitfalls of bad research, have high standards of rigor, and are often quantitatively sophisticated. EAs are also relatively insistent on clarity, the lack of which seems to be a main obstacle to identifying bad research.
     
Exporting EA discussion norms

I think about this all the time. It seems like a really high-value thing to do, not just for the sake of other communities but even from a strictly EA perspective: discourse norms seem to have a real impact on the outcome of decision-relevant conversations, and I have an (as-yet unjustified) sense that EA-style norms lead to better normative outcomes. I haven't tried exporting them, but I do have a few isolated, perhaps obvious observations.

  • For me at least, it is easier to hew to EA discussion norms when they are, in fact, accepted norms: assuming the best intentions of an interlocutor, explaining instead of persuading, steelmanning, and so on. I find it easier to do these things when I know they're expected of me. This suggests that it might be hard to institute such norms unilaterally.
  • EA norms don't obviously all go together. You can imagine a culture where civility is a dominant norm but where views are still expressed and argued for in a tendentious way. This would suck in a community where the shared goal is some truth-seeking enterprise, but I imagine that the more substantive EA norms around debate and discussion would actually impose a significant cost on communities where truth-seeking isn't the main goal!
  • Per the work of Robert Frank, it seems like there are institutional design decisions that can increase the likelihood that these norms are observed. I'm not sure how much the EA Forum's designers intended this, but it seems to me that hiding low-scoring answers, allowing real names, and the existence of strong upvotes/downvotes all play a role in the culture of the Forum in particular.
Matt_Lerner's Shortform

I guess a more useful way to think about this for prospective funders is to move things about again. Given that you can exert c/x leverage over funds within Cause Y, you're justified in spending c to do so provided you can find some Cause Y such that the distribution of DALYs per dollar meets the condition

$$Q_Y(0.9) - Q_Y(0.5) \geq 0.01 \cdot \frac{c}{x}$$

(where $Q_Y$ is the quantile function of that distribution), which makes for a potentially nice rule of thumb. When assessing some Cause Y, you need only ("only") identify a plausibly best or close-to-best opportunity, as well as the median one, and work from there.

Obviously a condition of this form can be written for any distribution and any pair of quantiles, but the worked example above only indicates to me that it's a plausible condition for the log-normal.
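
To make the rule of thumb concrete, here's a minimal Python sketch (the function name and the example figures are mine, purely illustrative): given rough estimates of DALYs per dollar at the median and the ~90th-percentile opportunities within Cause Y, it returns the largest cost-per-dollar-moved, c/x, at which moving money stays competitive with a $100/DALY benchmark.

```python
def max_leverage_cost(median_dpd: float, p90_dpd: float,
                      benchmark_dpd: float = 0.01) -> float:
    """Largest justifiable c/x (dollars spent per dollar of funding moved)
    when shifting funds from the median to the ~90th-percentile opportunity.

    median_dpd, p90_dpd -- DALYs per dollar at the two opportunities
    benchmark_dpd       -- benchmark cost-effectiveness (0.01 = $100/DALY)
    """
    return (p90_dpd - median_dpd) / benchmark_dpd

# Purely illustrative numbers: a cause whose median opportunity saves a
# DALY per $3M and whose near-best opportunity saves one per $300k.
print(max_leverage_cost(median_dpd=1 / 3e6, p90_dpd=1 / 3e5))  # ≈ 0.0003
```

In this made-up case you'd be justified in spending at most ~$0.0003 per dollar moved, i.e. you'd need at least ~3,300:1 leverage.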

Matt_Lerner's Shortform

Under what circumstances is it potentially cost-effective to move money within low-impact causes?

This is preliminary and most likely somehow wrong. I'd love for someone to have a look at my math and tell me if (how?) I'm on the absolute wrong track here.

Start from the assumption that there is some amount of charitable funding that is resolutely non-cause-neutral. It is dedicated to some cause area Y and cannot be budged. I'll assume for these purposes that DALYs saved per dollar, D, is distributed log-normally within Cause Y:

$$D \sim \mathrm{Lognormal}(\mu, \sigma^2)$$

I want to know how impactful it might be, in general terms, to shift money from the median funding opportunity in Cause Y to the 90th-percentile opportunity. So I want the difference between the value of spending a dollar at those two points on the impact distribution.

The log-normal distribution has the following quantile function, where $\Phi^{-1}$ is the quantile function of the standard normal:

$$Q(p) = \exp\left(\mu + \sigma\,\Phi^{-1}(p)\right)$$

So the value to be gained by moving from p = 0.5 to p = 0.9 is given by

$$Q(0.9) - Q(0.5) = \exp\left(\mu + \sigma\,\Phi^{-1}(0.9)\right) - \exp\left(\mu + \sigma\,\Phi^{-1}(0.5)\right)$$

This simplifies down to (using $\Phi^{-1}(0.5) = 0$)

$$Q(0.9) - Q(0.5) = e^{\mu}\left(e^{\sigma\,\Phi^{-1}(0.9)} - 1\right)$$

Or, since $\Phi^{-1}(0.9) \approx 1.2816$,

$$Q(0.9) - Q(0.5) \approx e^{\mu}\left(e^{1.2816\,\sigma} - 1\right)$$

Not a pretty formula, but it's easy enough to see two things which were pretty intuitive before this exercise. First, you can squeeze out more DALYs by moving money in causes where typical DALYs per dollar across funding opportunities is higher (both the mean and the median scale with $e^{\mu}$). Second, for a given average, moving money is higher-value where there's more variation across funding opportunities (roughly, since the variance is increasing in, but not precisely given by, $\sigma$). Pretty obvious so far.

Okay, what about making this money-moving exercise cost-competitive with a direct investment in an effective cause, with a benchmark of $100/DALY (i.e. 0.01 DALYs per dollar)? For a given amount $x of funding moved, and a value c such that an expenditure of $c causes the money in cause Y to shift from the median opportunity to the 90th-percentile one, we'd need to satisfy the following condition:

$$\frac{x\left[Q(0.9) - Q(0.5)\right]}{c} \geq 0.01$$

Moving things around a bit, this becomes

$$e^{\mu}\left(e^{1.2816\,\sigma} - 1\right) \geq 0.01 \cdot \frac{c}{x}$$

Which, given reasonable assumptions about the values of c and x, holds trivially for larger means and variances across cause Y. The catch, of course, is that means and variances of DALYs per dollar in a cause area are practically never large, let alone in a low-impact cause area. Still, the implication is that (a) if you can exert inexpensive enough leverage over the funding flows within some cause Y and/or (b) if funding opportunities within cause Y are sufficiently variable, cost-effectiveness is at least theoretically possible.

So just taking an example: our benchmark is $100 per DALY, or 0.01 DALYs per dollar, so let's suppose we have a low-impact Cause Y that is between three and six orders of magnitude less effective than that, with a 95% CI of [0.00000001, 0.00001] DALYs per dollar, or one for which you can preserve a DALY for between $100,000 and $100 million, depending on the opportunity. That gives $\mu = -14.97$ and $\sigma = 1.76$. Plugging those numbers into the above, we get approximately

$$e^{-14.97}\left(e^{1.2816 \times 1.76} - 1\right) \approx 2.7 \times 10^{-6} \implies \frac{x}{c} \gtrsim 3700$$

This suggests, I think, that if you can get roughly 4000:1 leverage when it comes to spending money to move money, it can be cost-effective to influence funding patterns within this low-impact cause area.
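
As a sanity check on the arithmetic, here's a short script, assuming SciPy is available (its lognorm takes s = σ and scale = e^μ); the variable names are mine:

```python
import numpy as np
from scipy.stats import lognorm, norm

# Recover mu and sigma from the assumed 95% CI on DALYs per dollar.
lo, hi = 1e-8, 1e-5
z = norm.ppf(0.975)                          # ≈ 1.96
mu = (np.log(lo) + np.log(hi)) / 2           # ≈ -14.97
sigma = (np.log(hi) - np.log(lo)) / (2 * z)  # ≈ 1.76

cause_y = lognorm(s=sigma, scale=np.exp(mu))
gain = cause_y.ppf(0.9) - cause_y.ppf(0.5)   # DALYs gained per dollar moved

benchmark = 0.01                             # 0.01 DALYs/$ = $100 per DALY
print(f"mu = {mu:.2f}, sigma = {sigma:.2f}")
print(f"gain per dollar moved: {gain:.2e} DALYs")
print(f"required leverage x/c: {benchmark / gain:,.0f}:1")  # ≈ 3,700:1
```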

There are obviously a lot of caveats here (does a true 90th percentile opportunity exist for any Cause Y?), but this is where my thinking is at right now, which is why this is in my shortform and not anywhere else.

AMA: Tom Chivers, science writer, science editor at UnHerd

What do you see as the consequentialist value of doing journalism? What are the ways in which journalists can improve the world? And do you believe these potential improvements are measurable?

Do power laws drive politics?

One thing to note here is that lots of commonly-used power law distributions have positive support. Political choices can and sometimes do have dramatically negative effects, and many of the catastrophes that EAs are concerned with are plausibly the result of such choices (nuclear catastrophe, for instance).

So a distribution that describes the outcomes of political choices should probably have support on the whole real line, and you wouldn't want to model choices with most simple power-law distributions. But you might be on to something: you could imagine a hierarchical model in which there's some probability that decisions are either good or bad, and the degree to which they are good or bad is governed by a power-law distribution. That's the model I've been working with, but it seems incomplete to me.
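
To make that concrete, here's a minimal Python sketch of the kind of hierarchical model described above (all parameter values are made up for illustration): a coin flip decides whether each decision is good or bad, and a power law (here a classical Pareto with minimum 1) decides how good or bad.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def decision_outcomes(n: int, p_good: float = 0.7,
                      alpha_good: float = 2.0, alpha_bad: float = 2.0):
    """Sample n political-decision outcomes from a signed power-law mixture:
    sign ~ Bernoulli(p_good), magnitude ~ Pareto(alpha, x_min = 1)."""
    good = rng.random(n) < p_good
    alphas = np.where(good, alpha_good, alpha_bad)
    magnitudes = rng.pareto(alphas, n) + 1.0  # classical Pareto on [1, inf)
    return np.where(good, magnitudes, -magnitudes)

samples = decision_outcomes(100_000)
print(f"share of good outcomes: {np.mean(samples > 0):.2f}")
print(f"mean outcome:           {np.mean(samples):+.2f}")
```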
