
Matt_Lerner

1232 karma

Bio

Currently Research Director at Founders Pledge, but posts and comments represent my own opinions, not FP’s, unless otherwise noted.

I worked previously as a data scientist and as a journalist.

Comments (121)

Hey Darren, thanks for doing this AMA — and thanks for doing your part to steer money to such a critically important cause.

Can you describe a bit about the decision-making process at Beast Philanthropy? More to the point, what would an optimal decision-making process look like, in your view? For example, how would you use research, how would you balance giving locally vs. globally, and how would you think about doing the most good possible (or constraining that in some way)?

I listened to the whole episode — if I understood correctly, they are mostly skeptical that there are effects at very low blood lead levels (BLLs). At the end of the podcast, Stuart or Tom (I can't remember which) explicitly says that they're not skeptical that lead affects IQ, and they spend most of the episode addressing the claimed relationship at low BLLs (rather than the high ones addressed by LEEP, CGD, and other interventions).

I'd be interested in exploring funding this, and the broader question of ensuring funding stability and security robustness for critical OS infrastructure. @Peter Wildeford, is this something you guys are considering looking at?

I'm also strongly interested in this research topic — note that although the problem is worst in the U.S., the availability and affordability of fentanyl (which appears to be driving OD deaths) suggest that this could easily spread to LMICs in the medium term, meaning that preventive measures such as vaccines could be cost-effective even by traditional metrics.

Easily reconciled — most of the money we move is via advising our members. These grants are largely not public, and members also grant to many organizations of their own choosing, irrespective of our recommendations. We provide the infrastructure to enable this.

The Funds are a relatively recent development, and indeed some of the grants listed on the current Fund pages were actually advised by the fund managers, not granted directly from money contributed to the Fund (this is noted on the website if it's the case for each grant). Ideally, we'd be able to grow the Funds a lot more so that we can do much more active grantmaking, and at the same time continue to advise members on effective giving.

My team (11 people at the moment) does generalist research across worldviews — animal welfare, longtermism/GCRs, and global health and development. We also have a climate vertical, as you note, which I characterize in more detail in this previous forum comment.

EDIT:

Realized I didn't address your final question. I think we are a mix, basically — we are enabling successful entrepreneurs to give, period (in fact, we are committing them to do so via a legally binding pledge), and we are trying to influence as much of their giving as possible toward the most effective possible things. It is probably more accurate to represent FP as having a research arm, simply given staff proportions, but equally accurate to describe our recommendations as being "research-driven."

We (Founders Pledge) do have a significant presence in SF, and are actively trying to grow much faster in the U.S. in 2024.

A couple weakly held takes here, based on my experience:

  • Although it's true that issues around effective giving are much more salient in the Bay Area, it's also the case that effective giving is nearly as much of an uphill battle with SF philanthropists as with others. People do still have pet causes, and there are many particularities about the U.S. philanthropic ecosystem that sometimes push against individuals' willingness to take the main points of effective giving on board.
     
  • Relatedly, growing in SF seems hard in part simply because of competition. There's a lot of money and philanthropic intent, and a fair number of existing organizations (and philanthropic advisors, etc.) focused on capturing that money and guiding that philanthropy. So we do face the challenge of getting in front of people, getting enough of their time, and so on.
     
  • Since FP has historically offered mostly free services to members, growing our network in SF is something we actually need to fundraise for. On the margin I believe it's worthwhile, given the large number of potentially aligned UHNWs, but it's the kind of investment (in this case, in Founders Pledge by its funders) that would likely take a couple years to bear fruit in terms of increased amounts of giving to effective charities. I expect this is also a consideration for other existing groups that are thinking about raising money for a Bay Area expansion.

I think your arguments do suggest good reasons why nuclear risk might be prioritized lower; but since we operate on the most effective margin, as you note, it is also possible for there to be significant funding margins in nuclear that are highly effective in expectation.

My point is precisely that you should not assume any view. My position is that the uncertainties here are significant enough to warrant some attention to nuclear war as a potential extinction risk, rather than to simply bat away these concerns on first principles and questionable empirics.

Where extinction risk is concerned, it is potentially very costly to conclude on little evidence that something is not an extinction risk. We do need to prioritize, so I would not for instance propose treating bad zoning laws as an X-risk simply because we can't demonstrate conclusively that they won't lead to extinction. Luckily there are very few things that could kill very large numbers of people, and nuclear war is one of them.

I don't think my argument says anything about how nuclear risk should be prioritized relative to other X-risks. I think the arguments for deprioritizing it relative to others are strong, and reasonable people can disagree; YMMV.

"If you leave 1,000–10,000 humans alive, the long-term future is probably fine."

This is a very common claim that I think needs to be defended somewhat more robustly instead of simply assumed. If we have one strength as a community, it is in not simply assuming things.

My read is that the evidence here is quite limited, that the outside view suggests that losing 99.9999% of a species / having a very small population is a significant extinction risk, and that the uncertainty around the long-term viability of collapse scenarios is enough reason to want to avoid near-extinction events.

Has there been any formal probabilistic risk assessment of AI X-risk? e.g. fault tree analysis or event tree analysis — anything of that sort?
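(For readers unfamiliar with these methods, here is a minimal sketch of the kind of arithmetic a fault tree analysis involves — basic-event probabilities combined through AND/OR gates to estimate the probability of a top event. This is purely illustrative; the event structure and all numbers are hypothetical, not drawn from the comment above or from any actual assessment.)

```python
# Toy fault-tree calculation: top event occurs if (A AND B) OR C.
# All probabilities are made up for illustration; independence is assumed.

def and_gate(*probs: float) -> float:
    """Probability that all input events occur (assuming independence)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs: float) -> float:
    """Probability that at least one input event occurs (assuming independence)."""
    p_none = 1.0
    for q in probs:
        p_none *= 1.0 - q
    return 1.0 - p_none

# Hypothetical basic-event probabilities (not estimates of anything real):
p_a, p_b, p_c = 0.10, 0.05, 0.01

p_top = or_gate(and_gate(p_a, p_b), p_c)
print(f"P(top event) ~ {p_top:.4f}")  # roughly 0.015
```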
