Matt_Lerner

587 karma · Joined October 2019

Bio

Currently Research Director at Founders Pledge, but posts and comments represent my own opinions, not FP’s, unless otherwise noted.

I worked previously as a data scientist and as a journalist.

Comments (91)

(I am research director at FP)

Thanks for all of your work on this analysis, Vasco. We appreciate your thoroughness and your willingness to engage with us beforehand. The work is obviously methodologically sound and, as Johannes indicated, we generally agree that climate is not among the top bets for reducing existential risk.

I think that "mitigating existential risk as cost-effectively as possible" is entailed by the goal of doing as much good as possible in the world, which is why FP exists. To be absolutely clear, FP's goal is to do the maximum possible amount of good, and to do so in a cause-neutral way.

A common misconception about our research agenda is that it is driven by the interests of our members. This is most assuredly not the case. Member-driven research was, to some degree, a component of previous iterations of the research team, and our move away from it is indeed a relatively recent change. There remain some exceptions, but as a general rule we do not devote research resources to any cause area or charity investigation unless we have good reason to suspect it might be genuinely valuable from a strictly cause-neutral standpoint.

Still, FP does operate under some constraints, one of which is that many of our 1,700 members are not cause-neutral. This is by design. We facilitate our members' charitable giving to all (legal and feasible) grantees in the hope that we can influence some portion of this money toward highly effective ends. This works. Since our members are often not EAs, such giving is strictly counterfactual: in the absence of FP's recommendations, the money simply would not have gone to effective charities.

Climate plays two roles in a portfolio that is constrained in this way. First, it introduces members who are not cause-neutral to our way of thinking about problems and solutions, which builds credibility and opens the door to further education on cause areas that might not immediately resonate with them (e.g. AI risk). This also works. Second, it reallocates non-cause-neutral funds to the most effective opportunities within a cause area in which the vast majority of philanthropic funds are, unfortunately, misspent. As I have tried to work out in my Shortform, this reallocation can be cost-effective under certain conditions even within otherwise unpromising cause areas (of which climate is not one).

Finally, I do want to emphasize that the Climate Fund does not serve a strictly instrumental role. We genuinely think that the climate grants we make and recommend are a comparatively cost-effective way to improve the value of the long-term future, though not the most cost-effective way. I don't see any particular tension in that: every EA charity evaluator (or grantmaker) recommends (or grants to) options across a wide range of cost-effectiveness. From our perspective, the Climate Fund is better than most things, but not as good as the best things.

Do you have any plans for interoperability with other PPLs or languages for statistical computing? It would be pretty useful to be able to, e.g., write a model in Squiggle and port it easily to R or to PyMC3, particularly if Bayesian updating is not currently supported in Squiggle. I can easily imagine a workflow where we use Squiggle to develop a prior, which we'd then want to update using microdata in, say, Stan (via R).
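To make the workflow concrete, here is a minimal sketch of the updating step in PyMC3, assuming the Squiggle prior has already been translated by hand; the lognormal prior and the microdata below are invented for the example, not part of any existing interoperability feature:

```python
import numpy as np
import pymc3 as pm

# Illustrative microdata (invented for the example).
observed_effects = np.array([1.1, 0.9, 1.4, 0.8, 1.2])

with pm.Model() as model:
    # Prior developed in Squiggle, e.g. `effect = lognormal(0, 0.5)`,
    # re-expressed by hand in PyMC3.
    effect = pm.Lognormal("effect", mu=0.0, sigma=0.5)
    noise = pm.HalfNormal("noise", sigma=0.5)
    # Update the prior with the observed microdata.
    pm.Normal("obs", mu=effect, sigma=noise, observed=observed_effects)
    trace = pm.sample(2000, tune=1000, return_inferencedata=True)

print(pm.summary(trace))
```

An interoperability layer could presumably generate the prior lines above directly from the Squiggle source, leaving only the likelihood and data to specify.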

I very strongly downvoted this comment because I think that personal attacks of any sort have a disproportionately negative impact on the quality of discussion overall, and because responding to a commenter's identity or background instead of the content of their comment is a bad norm.

Founders Pledge is hiring an Applied Researcher to work with our climate lead evaluating funding opportunities, finding new areas to research within climate, evaluating different theories of change, and granting from FP's Climate Fund.

We're open to multiple levels of seniority, from junior researchers all the way up to experienced climate grantmakers. Experience in climate and familiarity with energy systems are a big plus, but not 100% necessary.

Our job listing is here. Please note that the first round consists of a resume screen and a preliminary task. If the task looks doable to you, I strongly encourage you to complete and submit it along with your resume, no matter what doubts you may have about the applicability of your past experience.

Feel free to message me here with any questions.

Something I've considered making myself is a Slackbot for group decision-making: forecasting, quadratic voting, etc. This seems like it would be very useful for lots of organizations and quite a low lift. It's not the kind of thing that seems easily monetizable at first, but it seems reasonable to expect that if it proves valuable, it could be the kind of thing that people would eventually have to buy "seats" for in larger organizations.
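To gesture at how low the lift might be, here is a rough sketch of the quadratic-voting piece using Slack's Bolt for Python; the /qvote command, the in-memory ledger, and the 100-credit budget are all illustrative assumptions rather than a finished design:

```python
import os
from collections import defaultdict

from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

BUDGET = 100                      # quadratic-voting credits per user (assumed)
credits_spent = defaultdict(int)  # user_id -> credits used so far
tallies = defaultdict(int)        # option -> net votes

@app.command("/qvote")
def handle_qvote(ack, respond, command):
    """Usage: /qvote <option> <votes> — casting v votes costs v^2 credits."""
    ack()
    try:
        option, votes = command["text"].split()
        votes = int(votes)
    except ValueError:
        respond("Usage: /qvote <option> <votes>")
        return
    cost = votes ** 2
    user = command["user_id"]
    if credits_spent[user] + cost > BUDGET:
        respond(f"Not enough credits: {cost} needed, "
                f"{BUDGET - credits_spent[user]} remaining.")
        return
    credits_spent[user] += cost
    tallies[option] += votes
    respond(f"Recorded {votes} votes for '{option}' (cost {cost} credits).")

if __name__ == "__main__":
    app.start(port=3000)
```

A real version would need persistent storage and per-decision budgets, but the Slack plumbing itself is about this small.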

I appreciate your taking the time to write out this idea and the careful thought that went into your post. I liked that it was kind of in the form of a pitch, in keeping with your journalistic theme. I agree that EAs should be thinking more seriously about journalism (in the broadest possible sense) and I think that this is as good a place as any to start. I want to (a) nitpick a few things in your post with an eye to facilitating this broader conversation and (b) point out what I see as an important potential failure mode for an effort like this.

You characterize The Altruist at first as:

a news agency that provides journalistic coverage of EA topics and organisations

This sounds more or less like a trade publication along the lines of Advertising Age or Publishers Weekly, or perhaps a subject-specific publication oriented more toward the general public, like Popular Science or Nautilus. Generally speaking, I think something like the former is a good idea, though trade publications are usually targeted at people working within an industry. I will explain later why I am not sure the latter is feasible.

But you go on to say:

Other rough comparisons include The Atlantic, The Economist, the New Yorker, Current Affairs, Works in Progress, and Unherd

These publications are very different from each other. The Economist (where, full disclosure, I worked for a short time) is a general interest newspaper with a print circulation of ~1 million. The New Yorker is a highbrow weekly magazine known for its longform journalistic content. The Atlantic is an eclectic monthly that leans heavily on its regular output of short-form, nonreported digital content. Current Affairs is a bimonthly political magazine with an explicitly left-wing cultural and political agenda. Works in Progress is small, completely online, wholly dedicated to progress studies, and generally nonreported.

Unherd is evidently constructed in opposition to various trends and themes in mainstream political and cultural discourse, and its goal is to disrupt the homogeneity of that discourse. I really enjoy it, but it sometimes typifies the failure mode I have in mind. Broadly, that failure mode is this: by defining itself in opposition to the dominant way of thinking, an outlet can sort potential readers out of being interested.

Consider: if a media outlet mainly publishes content that conflicts with the modal narrative, then the modal reader encountering it will find mostly content that challenges their views. I think it is a pernicious but nonetheless reliable feature of the media landscape that most readers who stumble onto such a publication will typically stumble off immediately to another, more comfortable one. I worry that a lot of EA is challenging enough that this could happen with something like The Altruist.

This may actually be fine; that's why I harp on the precision of the comparison classes. I think Works in Progress, for instance, is likely to serve the progress studies community very well in the years to come, and an EA version of it would serve well the initial goal you describe of improving resources for outreach. But I don't think it would do a particularly good job of mitigating reputational risk or increasing community growth, because it would be a niche publication that might find it difficult to earn the trust of readers who find EA ideas challenging (in my experience, this is most people).

So I think as far as new publications go, we may have to pick between the various goals you have helpfully laid out here. But my aspirations for EA in journalism are a bit higher. Here's my question: what is an EA topic? It is not really obvious to me that there is such a thing. To most people, it is not intuitive, even when you explain, that there is something that ties together (for instance) worrying about AI risk, donating to anti-malaria charities, supporting human challenge trials, and eating vegan.

This is because EA is a way of approaching questions about how to do good in the world, not a collection of answers to those questions.

So my aspiration for journalism in general is not only that it more enthusiastically tackle the issues that this small and idiosyncratic community of people has determined are important. I also think it would be good if journalism in general moved in a more EA-aligned or EA-aware direction on all questions. I think that, counterfactually, the past two decades of journalism in the developed world would look very different if the criterion for newsworthiness were more utilitarian, and if editorial judgments more robustly modeled truth-seeking behavior. Consequently, my (weak, working) hypothesis is that the world would be a better place. I also think such a world would be an easier place to grow the community, to combat bad-faith criticism, and to absorb and respond to good-faith critique.

One way to try to make this happen today would be to run a general-interest publication with an editorial position that is openly EA, much as The Economist's editorial slant is classically liberal. Such a publication would have to cover everything, not just deworming and the lives of people in the far future. But it would, of course, cover those things too.

To bring things back down to the actual topic of conversation: the considerations you have raised here are the right ones. My core concern is that a publication like this will try to do too many things at once, and the reason I've written so much above is to try to articulate some additional considerations that I hope will be useful in narrowing down its purpose.

While I'm skeptical that the particular causes you've mentioned could truly end up being cost-effective paths to reducing suffering, I'm sympathetic to the idea that improving the effectiveness of activity in putatively non-effective causes is potentially itself effective. What interventions do you have in mind to improve effectiveness within these domains?

Now that you've given examples, can you provide an account of how increased funding in these areas would lead, in expectation, to improved well-being, lives preserved, DALYs averted, etc.? Do you expect that targeted funds could be cost-competitive with GiveWell top charities or the like?

To clarify, I'm not sure this is likely to be the best use of any individual EA's time, but I think it can still be true that it's potentially a good use of community resources, if intelligently directed.

I agree that perhaps "constitutionally" is too strong - what I mean is that EAs tend (generally) to have an interest in / awareness of these broadly meta-scientific topics.

In general, I would argue for greater attention to the possibility that mainstream causes are worth pursuing, and for more meta-level arguments to that effect (like your post).

Thanks for this! It seems like much of the work that went into your CEA could be repurposed for explorations of other potentially growth- or governance-enhancing interventions. Since finding such an intervention would be quite high-value, and since the parameters in your CEA are quite uncertain, it seems like the value of information with respect to clarifying these parameters (and therefore the final ROI distribution) is probably very high.

Do you have a sense of what kind of research or data would help you narrow the uncertainty in the parameter inputs of your cost-effectiveness model?
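As an illustration of the value-of-information framing, here is a rough Monte Carlo sketch of how one might estimate the expected value of perfect information (EVPI) on a single uncertain parameter; the distributions and the benchmark ROI are invented for the example, not taken from your model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical uncertain CEA parameters (illustrative only).
p_success = rng.beta(2, 8, size=n)                       # chance the intervention works
roi_if_success = rng.lognormal(mean=3.0, sigma=1.0, size=n)
roi = p_success * roi_if_success                         # simulated ROI of the intervention
benchmark = 10.0                                         # ROI of the best alternative

# Decision under current uncertainty: fund whichever has higher expected ROI.
value_now = max(roi.mean(), benchmark)

# With perfect information, we could pick the better option draw by draw.
value_perfect = np.maximum(roi, benchmark).mean()

evpi = value_perfect - value_now
print(f"EVPI per unit of funding: {evpi:.2f}")
```

Running something like this for each parameter in the CEA would indicate which ones are worth the most research effort to pin down.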
