Matt_Lerner

Currently Research Director at Founders Pledge, but posts and comments represent my own opinions, not FP’s, unless otherwise noted.

I worked previously as a data scientist and as a journalist.

Comments

Please pitch ideas to potential EA CTOs

Something I've considered making myself is a Slackbot for group decision-making: forecasting, quadratic voting, etc. This seems like it would be very useful for lots of organizations and quite a low lift. It's not the kind of thing that seems easily monetizable at first, but it seems reasonable to expect that if it proves valuable, it could be the kind of thing that people would eventually have to buy "seats" for in larger organizations.

The Altruist - Proposal for an EA Newspaper

I appreciate your taking the time to write out this idea and the careful thought that went into your post. I liked that it was kind of in the form of a pitch, in keeping with your journalistic theme. I agree that EAs should be thinking more seriously about journalism (in the broadest possible sense) and I think that this is as good a place as any to start. I want to (a) nitpick a few things in your post with an eye to facilitating this broader conversation and (b) point out what I see as an important potential failure mode for an effort like this.

You characterize The Altruist at first as:

a news agency that provides journalistic coverage of EA topics and organisations

This sounds more or less like a trade publication along the lines of Advertising Age or Publishers Weekly, or perhaps a subject-specific publication oriented more toward the general public, like Popular Science or Nautilus. Generally speaking, I think something like the former is a good idea, though trade publications are generally targeted at those working within an industry. I will describe later on why I am not sure the latter is feasible.

But you go on to say:

Other rough comparisons include The Atlantic, The Economist, the New Yorker, Current Affairs, Works in Progress, and Unherd

These publications are very different from each other. The Economist (where, full disclosure, I worked for a short time) is a general interest newspaper with a print circulation of ~1 million. The New Yorker is a highbrow weekly magazine known for its longform journalistic content. The Atlantic is an eclectic monthly that leans heavily on its regular output of short-form, nonreported digital content. Current Affairs is a bimonthly political magazine with an explicitly left-wing cultural and political agenda. Works in Progress is small, completely online, wholly dedicated to progress studies, and generally nonreported.

Unherd is evidently constructed in opposition to various trends and themes in mainstream political and cultural discourse, and its goal is to disrupt the homogeneity of that discourse. I really enjoy it, but I think it sometimes typifies the failure mode I'm worried about. Broadly, that failure mode is this: by defining itself in opposition to the dominant way of thinking, an outlet can sort potential readers out of being interested.

Consider: if a media outlet mainly publishes content that conflicts with the modal narrative, then the modal reader encountering it will find mostly content that challenges their views. I think it is a pernicious but nonetheless reliable feature of the media landscape that most readers who stumble onto such a publication will typically stumble off immediately to another, more comfortable one. I worry that a lot of EA is challenging enough that this could happen with something like The Altruist.

This may actually be fine, which is why I harp on the precision of the comparison classes: I think Works in Progress, for instance, is likely to serve the progress studies community very well in the years to come, and an EA version of that would serve the initial goal you describe (improving resources for outreach) well. But I don't think that it would do a particularly good job of mitigating reputational risk or increasing community growth, because it would be a niche publication that might find it difficult to earn the trust of readers who find EA ideas challenging (in my experience, this is most people).

So I think as far as new publications go, we may have to pick between the various goals you have helpfully laid out here. But my aspirations for EA in journalism are a bit higher. Here's my question: what is an EA topic? It is not really obvious to me that there is such a thing. To most people, it is not intuitive, even when you explain, that there is something that ties together (for instance) worrying about AI risk, donating to anti-malaria charities, supporting human challenge trials, and eating vegan.

This is because EA is a way of approaching questions about how to do good in the world, not a collection of answers to those questions.

So my aspiration for journalism in general is not only that it more enthusiastically tackle those issues which this small and idiosyncratic community of people has determined are important. I also think it would be good if journalism in general moved in a more EA-aligned or EA-aware direction on all questions. I think that, counterfactually, the past two decades of journalism in the developed world would look very different if the criterion for newsworthiness were more utilitarian, and if editorial judgments more robustly modeled truth-seeking behavior. Consequently, my (weak, working) hypothesis is that the world would be a better place. I also think such a world would be an easier place to grow the community, to combat bad-faith criticism, and to absorb and respond to good-faith critique.

One way to try to make this happen today would be to run a general-interest publication with an editorial position that is openly EA, much as The Economist's editorial slant is classically liberal. Such a publication would have to cover everything, not just deworming and the lives of people in the far future. But it would, of course, cover those things too.

To bring things back down to the actual topic of conversation: the considerations you have raised here are the right ones. My core concern is that a publication like this will try to do too many things at once, and the reason I've written so much above is to try to articulate some additional considerations that I hope will be useful in narrowing down its purpose.

EA cause areas are just areas where great interventions should be easier to find

While I’m skeptical about the idea that the particular causes you’ve mentioned could truly end up being cost-effective paths to reducing suffering, I’m sympathetic to the idea that improving the effectiveness of activity in putatively non-effective causes is potentially itself effective. What interventions do you have in mind to improve effectiveness within these domains?

EA cause areas are just areas where great interventions should be easier to find

Now that you’ve given examples, can you provide an account of how increased funding in these areas could lead to improved well-being / preserved lives or DALYs / etc. in expectation? Do you expect that targeted funds could be cost-competitive with GW top charities or the like?

Why EAs researching mainstream topics can be useful

To clarify, I'm not sure this is likely to be the best use of any individual EA's time, but I think it can still be true that it's potentially a good use of community resources, if intelligently directed.

I agree that perhaps "constitutionally" is too strong - what I mean is that EAs tend (generally) to have an interest in / awareness of these broadly meta-scientific topics.

In general, the argument I would make is for greater attention to the possibility that mainstream causes are worth working on, and for more meta-level arguments to that effect (like your post).

Intervention Report: Charter Cities

Thanks for this! It seems like much of the work that went into your CEA could be repurposed for explorations of other potentially growth- or governance-enhancing interventions. Since finding such an intervention would be quite high-value, and since the parameters in your CEA are quite uncertain, it seems like the value of information with respect to clarifying these parameters (and therefore the final ROI distribution) is probably very high.

Do you have a sense of what kind of research or data would help you narrow the uncertainty in the parameter inputs of your cost-effectiveness model?

Why EAs researching mainstream topics can be useful

On the face of it, it seems like researching and writing about "mainstream" topics is net positive value for EAs for the reasons you describe, although not obviously an optimal use of time relative to other competing opportunities for EAs. I've tried to work out in broad strokes how effective it might be to move money within putatively less-effective causes, and it seems to me like (for instance) the right research, done by the right person or group, really could make a meaningful difference in one of these areas.

Items 2.2 and 2.3 (in your summary) are, to me, simultaneously the riskiest and most compelling propositions. Could EAs really do a better job of finding the "right answers" than existing work has? I take "neglectedness" in the ITN framework to be a heuristic that serves mainly to forestall hubris in this regard: we should think twice before assuming we know better than the experts, since we're quite likely to be wrong.

But I think there is still reason to suspect that there is value to be captured in mainstream causes. Here are a few reasons I think this might be the case.

  • "Outcome orientation" and a cost-benefit mindset are surprisingly rare, even in fields that are nominally outcomes-focused. This horse has already been beaten to death, but the mistakes, groupthink, and general confusion in many corners of epidemiology and public health during the pandemic suggests that consequences are less salient in these fields than I would have expected beforehand. Alex Tabarrok, a non-epidemiologist, seems to have gotten most things right well before the relevant domain experts simply by thinking in consequentialist terms. Zeynep Tufekci, Nate Silver, and Emily Oster are in similar positions.
     
  • Fields have their own idiosyncratic concerns and debates that eat up a lot of time and energy, IMO to the detriment of overall effectiveness. My (limited) experience in education research and tech in the developed world led me to conclude that the goals of the field are unclear and ill-defined (Are we maximizing graduation rates? College matriculation? Test scores? Are we maximizing anything at all?). Significant amounts of energy are taken up by debates and concerns about data privacy, teacher well-being and satisfaction, and other issues that are extremely important but which, ultimately, are not directly related to the (broadly defined) goals of the field. The drivers behind philanthropic funding seem, to me, to be highly undertheorized.

    I think philanthropic money in the education sector should probably go to the developing world, but it's not obvious to me that developed-world experts are squeezing out all the potential value that they could. Whether the scale of that potential value is large enough to justify improving the sector, or whether such improvements are tractable, are different questions.
     
  • There are systematic biases within disciplines, even when those fields or disciplines are full of smart, even outcomes-focused people. Though not really a cause area, David Shor has persuasively argued that Democratic political operatives are ideological at the cost of being effective. My sense is that this is also true to some degree in education.
     
  • There are fields where the research quality is just really low. The historical punching bag for this is obviously social psychology, which has been in the process of attempting to improve for a decade now. I think the experience of the replication crisis—which is ongoing—should cause us to update away from thinking that just because lots of people are working on a topic, that means that there is no marginal value to additional research. I think the marginal value can be high, especially for EAs, who are constitutionally hyper-aware of the pitfalls of bad research, have high standards of rigor, and are often quantitatively sophisticated. EAs are also relatively insistent on clarity, the lack of which seems to be a main obstacle to identifying bad research.
     
Exporting EA discussion norms

I think about this all the time. It seems like a really high-value thing to do not just for the sake of other communities but even from a strictly EA perspective— discourse norms seem to have a real impact on the outcome of decision-relevant conversations, and I have an (as-yet unjustified) sense that EA-style norms lead to better normative outcomes. I haven't tried it, but I do have a few isolated, perhaps obvious observations.

  • For me at least, it is easier to hew to EA discussion norms when they are, in fact, accepted norms. That is, assuming the best intentions of an interlocutor, explaining instead of persuading, steelmanning, and so on: I find it easier to do these things when I know they're expected of me. This suggests to me that it might be hard to institute such norms unilaterally.
  • EA norms don't obviously all go together. You can imagine a culture where civility is a dominant norm but where views are still expressed and argued for in a tendentious way. This would suck in a community where the shared goal is some truth-seeking enterprise, but I imagine that the more substantive EA norms around debate and discussion would actually impose a significant cost on communities where truth-seeking isn't the main goal!
  • Per the work of Robert Frank, it seems like there are probably institutional design decisions that can increase the likelihood of observing these norms. I'm not sure how much the EA Forum's designers intended this, but it seems to me like hiding low-scoring answers, allowing real names, and the existence of strong upvotes/downvotes all play a role in the culture of the forum in particular.

Matt_Lerner's Shortform

I guess a more useful way to think about this for prospective funders is to move things about again. Given that you can exert c/x leverage over funds within Cause Y, then you're justified in spending c to do so provided you can find some Cause Y such that the distribution of DALYs per dollar meets the condition...
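(Roughly, with Q(0.9) and Q(0.5) the DALYs per dollar at the 90th-percentile and median funding opportunities within Cause Y, and the $100/DALY benchmark from before:)

$$Q(0.9) - Q(0.5) \;\geq\; 0.01 \cdot \frac{c}{x}$$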

...which makes for a potentially nice rule of thumb. When assessing some Cause Y, you need only ("only") identify a plausibly best or close-to-best opportunity, as well as the median one, and work from there.

Obviously this condition holds for any distribution and any pair of quantiles, but the worked example above only indicates to me that it's a plausible condition for the log-normal.

Matt_Lerner's Shortform

Under what circumstances is it potentially cost-effective to move money within low-impact causes?

This is preliminary and most likely somehow wrong.  I'd love for someone to have a look at my math and tell me if (how?) I'm on the absolute wrong track here.

Start from the assumption that there is some amount of charitable funding that is resolutely non-cause-neutral. It is dedicated to some cause area Y and cannot be budged. I'll assume for these purposes that DALYs saved per dollar is distributed log-normally within Cause Y:
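Writing D for the DALYs saved per dollar of a given funding opportunity in Cause Y:

$$D \sim \mathrm{Lognormal}(\mu, \sigma^2), \quad \text{i.e.} \quad \ln D \sim \mathcal{N}(\mu, \sigma^2)$$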

I want to know how impactful it might, in general terms, be to shift money from the median funding opportunity in Cause Y to the 90th percentile opportunity. So I want the difference between the value of spending a dollar at those two points on the impact distribution.

The log-normal distribution has the following quantile function:
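With $\Phi^{-1}$ the inverse CDF of the standard normal:

$$Q(p) = \exp\!\left(\mu + \sigma\,\Phi^{-1}(p)\right)$$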

So the value to be gained by moving from p = 0.5 to p = 0.9 is given by
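In terms of the quantile function above, that's:

$$Q(0.9) - Q(0.5) = \exp\!\left(\mu + \sigma\,\Phi^{-1}(0.9)\right) - \exp\!\left(\mu + \sigma\,\Phi^{-1}(0.5)\right)$$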

This simplifies down to
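(since $\Phi^{-1}(0.5) = 0$)

$$Q(0.9) - Q(0.5) = e^{\mu}\left(e^{\sigma\,\Phi^{-1}(0.9)} - 1\right)$$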

Or
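(taking $\Phi^{-1}(0.9) \approx 1.28$)

$$Q(0.9) - Q(0.5) \approx e^{\mu}\left(e^{1.28\,\sigma} - 1\right)$$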

Not a pretty formula, but it's easy enough to see two things that were pretty intuitive before this exercise. First, you can squeeze out more DALYs by moving money in causes where the mean DALYs per dollar across all funding opportunities is higher. Second, for a given average, moving money is higher-value where there's more variation across funding opportunities (roughly, since variance is proportional to but not precisely given by sigma).

Okay, what about making this money-moving exercise cost-competitive with a direct investment in an effective cause, with a benchmark of $100/DALY? For that, and for a given investment amount $x, and a value c such that an expenditure of $c causes the money in cause Y to shift from the median opportunity to the 90th-percentile one, we'd need to satisfy the following condition:
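Since shifting the $x from the median to the 90th-percentile opportunity gains x · (Q(0.9) − Q(0.5)) DALYs at a cost of $c, the condition (against the 0.01 DALYs-per-dollar benchmark) is roughly:

$$\frac{x\left(Q(0.9) - Q(0.5)\right)}{c} \;\geq\; 0.01$$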

Moving things around a bit...
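Using the approximate expression for Q(0.9) − Q(0.5) from above:

$$\frac{x}{c} \;\geq\; \frac{0.01}{e^{\mu}\left(e^{1.28\,\sigma} - 1\right)}$$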

Which, given reasonable assumptions about the values of c and x, holds true trivially for larger means and variances across cause Y.  The catch, of course, is that means and variances of DALYs per dollar in a cause area are practically never large, let alone in a low-impact cause area. Still, the implication is that (a) if you can exert inexpensive enough leverage over the funding flows within some cause Y and/or (b) if funding opportunities within cause Y are sufficiently variable, cost-effectiveness is at least theoretically possible.

So just taking an example: Our benchmark is $100 per DALY, or 0.01 DALYs per dollar, so let's just suppose we have a low-impact Cause Y that is between three and six orders of magnitude less effective than that, with a 95% CI of [0.00000001,0.00001], or one for which you can preserve a DALY for between $100,000 and $100 million, depending on the opportunity. That gives mu = -14.97 and sigma = 1.76. Plugging those numbers into the above, we get approximately...
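With mu = -14.97 and sigma = 1.76, roughly:

$$\frac{x}{c} \;\gtrsim\; \frac{0.01}{e^{-14.97}\left(e^{1.28 \times 1.76} - 1\right)} \approx 3{,}700$$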

...suggesting, I think, that if you can get roughly 4000:1 leverage when it comes to spending money to move money, it can be cost-effective to influence funding patterns within this low-impact cause area.
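For what it's worth, here's a quick sanity check of that arithmetic in Python, a rough sketch assuming the log-normal parameterization above; it just backs out mu and sigma from the 95% CI and recomputes the required leverage (1.2816 is the standard-normal 90th-percentile z-score).

```python
import math

benchmark = 0.01              # DALYs per dollar ($100/DALY)
ci_low, ci_high = 1e-8, 1e-5  # 95% CI for DALYs per dollar within Cause Y

# Back out mu and sigma of the log-normal from the 95% CI.
mu = (math.log(ci_low) + math.log(ci_high)) / 2
sigma = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

z90 = 1.2816  # inverse standard-normal CDF at p = 0.9
gain_per_dollar = math.exp(mu) * (math.exp(sigma * z90) - 1)  # Q(0.9) - Q(0.5)

required_leverage = benchmark / gain_per_dollar  # minimum x/c
print(f"mu = {mu:.2f}, sigma = {sigma:.2f}")
print(f"required leverage x/c of roughly {required_leverage:,.0f} : 1")
```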

There are obviously a lot of caveats here (does a true 90th percentile opportunity exist for any Cause Y?), but this is where my thinking is at right now, which is why this is in my shortform and not anywhere else.
