tlevin

AI Governance Program Associate @ Open Philanthropy
2078 karma · Working (0-5 years)

Bio

(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil's resources to improve the governance of AI with a focus on avoiding catastrophic outcomes. Formerly co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond, co-president of Harvard EA, Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment, and occasional AI governance researcher. I'm also a proud GWWC pledger and vegan.

Comments (119)

Giving now vs. giving later is, in practice, a thorny tradeoff. I think the considerations below add up to roughly a wash, so my currently preferred policy is to split my donations 50-50, i.e. give 5% of my income away this year and save/invest 5% for a bigger donation later. (None of this is financial/tax advice! Please do your own thinking too.)

In favor of giving now (including giving a constant share of your income every year/quarter/etc, or giving a bunch of your savings away soon):

  • Simplicity.
  • The effects of your donation might have compounding returns (e.g., field-building gets more people doing great stuff, which can in turn build the field further) or be path-dependent (e.g., someone writes something that establishes better concepts for the field).
  • Value drift: maybe you don't trust your future self to give as much, or to be as good at picking good stuff. (Some commitment mechanisms exist for this, like DAFs, but that really only fixes the "give as much" problem, and there are lots of opportunities that DAFs can't fund, such as 501c4 advocacy organizations, individuals, political campaigns, etc.)
  • Expropriation risk: you might lose the money, including via global catastrophe.

In favor of giving later:

  • Value of information: especially in a fast-changing field like AI, we'll continue learning more about what kinds of interventions work as time goes on.
  • Philanthropic learning: basically the opposite of value drift: you specifically might become a wiser donor, especially if you're currently young and/or new to the field.
  • Returns to scale: it's probably better to make, e.g., a single $150k donation than ten donations averaging $15k, because orgs can act pretty decisively with an amount like that (hire somebody, run a program). (Eventually you hit diminishing returns, but not at the level of most individual donors.)
  • Compounding returns on investment.
  • Tax bunching (only applies to deductible donations): as I understand it, in the US you only benefit from writing off donations once your itemized deductions exceed the standard deduction, so there's effectively a fixed cost in any year that you make donations. This makes donating a fixed amount every year a pretty suboptimal strategy, other things equal; if you're donating an amount below or not far above the standard deduction to 501(c)(3) orgs every year, you might be able to save or donate significantly more by instead donating once every few years. (A rough sketch of the arithmetic follows this list.)
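To make the bunching arithmetic concrete, here's a minimal Python sketch under illustrative assumptions of my own (single filer, the 2024 standard deduction of $14,600, a hypothetical 24% marginal rate, and no other itemized deductions; again, not tax advice):

    # Compare giving $10k/year for 3 years vs. $30k once every 3 years.
    # Simplification: your only potential itemized deduction is the donation,
    # so itemizing beats the standard deduction only on the excess above it.
    STANDARD_DEDUCTION = 14_600  # 2024, single filer
    MARGINAL_RATE = 0.24         # assumed marginal tax rate

    def itemizing_benefit(donation: float) -> float:
        """Tax saved by itemizing vs. just taking the standard deduction."""
        return max(donation - STANDARD_DEDUCTION, 0) * MARGINAL_RATE

    annual_gift = 10_000
    spread_out = 3 * itemizing_benefit(annual_gift)  # $0: never clears the threshold
    bunched = itemizing_benefit(3 * annual_gift)     # $15,400 excess -> $3,696 saved

    print(f"spread out: ${spread_out:,.0f}; bunched: ${bunched:,.0f}")

Under these assumptions, the same $30k of giving saves you about $3,700 in tax when bunched and nothing when spread out.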

Are you a US resident who spends a lot of money on rideshares + food delivery/pickup? If so, consider the following:

  • Costco members can buy up to four $50 Uber gift cards every two weeks (that is, two 2-packs of $50 cards). Currently, and I think typically, these sell at 20% off face value.
  • Costco membership costs $65/year.
  • It takes ~2 minutes per gift card all-in.
  • You can use them on rides, scooters, and Uber Eats.
  • According to o3-mini-high (sanity-checked in the sketch after this list), this means it's worth it if you spend more than $1625 / (5 − v) per year on these services, where v is the dollar value of your marginal minute, assuming you get no other use out of the Costco membership. (If you do, the threshold goes down, of course.)
  • Hooray, you now have more money for donations, consumption, savings, or investment for a small time cost!
  • I was not paid by Costco or Uber to say this, I swear.
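For the curious, here's a quick Python check of that breakeven formula (my own derivation from the numbers above; prices and terms could change):

    # Annual spend S, value of a marginal minute v ($/min):
    #   savings   = 0.20 * S         (20% off face value)
    #   fixed     = $65/year         (membership)
    #   time cost = v * S / 25       (2 min per $50 card = S/25 minutes/year)
    # Worth it when 0.20*S - v*S/25 > 65, i.e. S > 65 / (0.2 - v/25) = 1625 / (5 - v).
    # (The 4-cards-per-2-weeks cap also limits covered spend to ~$5,200/year.)

    def breakeven_spend(v: float) -> float:
        """Annual Uber/Uber Eats spend above which the hack pays for itself."""
        assert 0 <= v < 5, "at v >= $5/min the time cost eats the 20% discount"
        return 1625 / (5 - v)

    for v in (0.0, 0.5, 1.0, 2.0):
        print(f"v = ${v:.2f}/min -> breakeven spend = ${breakeven_spend(v):,.0f}/year")

At v = $0/min the threshold is $325/year (just covering the membership); at v = $2/min it rises to about $542/year.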

I think the opposite might be true: when you apply neglectedness to broad areas, you're likely to mistake low neglectedness for a signal of low tractability, when you should just ask whether there are good opportunities at current margins. Once you start looking at individual solutions, it becomes quite relevant whether they have already been tried. (This point was already made here.)

  1. Would it be good to solve problem P?
  2. Can I solve P?
  3. How many resources are already going toward solving P?

What is gained by adding the third question? If the answer to #2 is "yes," why does it matter whether the answer to #3 is "a lot"? And likewise in the opposite case, where the answers are "no" and "very few."

Edit: actually yeah the "will someone else" point seems quite relevant.

Fair enough on the "scientific research is super broad" point, but I think this also applies to other fields that I hear described as "not neglected," including US politics.

I'm not talking about AI safety polling; agreed, that was highly neglected. My understanding, reinforced by some people who have looked into the actually-practiced political strategies of modern campaigns, is that campaign strategy is a stunningly under-optimized field with a lot of low-hanging fruit, possibly because it's hard to decouple political strategy from other political beliefs (plus selection effects where especially soldier-mindset people go into politics).

I sometimes say, in a provocative/hyperbolic sense, that the concept of "neglectedness" has been a disaster for EA. I do think the concept is significantly over-used (ironically, it's not neglected!), and people should just look directly at the importance and tractability of a cause at current margins.

Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it's just a heuristic for tractability: how many resources are going towards something is evidence about whether additional resources are likely to be impactful at the margin, because more resources mean it's more likely that the most cost-effective solutions have already been tried or implemented. But these resources are often deployed ineffectively, such that it's often easier to just directly assess the impact of resources at the margin than to do what the formal ITN framework suggests, which is to break this hard question into two hard ones: you have to assess something like the abstract overall solvability of a cause (namely, "percent of the problem solved for each percent increase in resources," as if this is likely to be a constant!) and the neglectedness of the cause.
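For reference, the formal decomposition I mean here (the standard 80,000 Hours-style version, as I understand it) is a product of three ratios whose middle terms cancel:

    \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}}
    \times
    \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}
    \times
    \underbrace{\frac{\%\text{ increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
    \;=\;
    \frac{\text{good done}}{\text{extra dollar}}

Written out like this, it's clearer that tractability and neglectedness are just a two-factor decomposition of "how much does a marginal dollar move the problem," which you could instead try to estimate directly.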

That brings me to another problem: assessing neglectedness might sound easier than abstract tractability, but how do you weigh up the resources in question, especially if many of them are going to inefficient solutions? I think EAs have indeed found lots of surprisingly neglected (and important, and tractable) sub-areas within extremely crowded overall fields when they've gone looking. Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion, and that program has supported Nobel Prize-winning work on computational design of proteins. US politics is a frequently cited example of a non-neglected cause area, and yet EAs have been able to start or fund work in polling and message-testing that has outcompeted incumbent orgs by looking for the highest-value work that wasn't already being done within that cause. And so on.

What I mean by "disaster for EA" (despite the wins/exceptions in the previous paragraph) is that I often encounter "but that's not neglected" as a reason not to do something, whether at a personal or organizational or movement-strategy level, and it seems again like a decent initial heuristic but easily overridden by taking a closer look. Sure, maybe other people are doing that thing, and fewer or zero people are doing your alternative. But can't you just look at the existing projects and ask whether you might be able to improve on their work, or whether there still seems to be low-hanging fruit that they're not taking, or whether you could be a force multiplier rather than just an input with diminishing returns? (Plus, the fact that a bunch of other people/orgs/etc are working on that thing is also some evidence, albeit noisy evidence, that the thing is tractable/important.) It seems like the neglectedness heuristic often leads to more confusion than clarity on decisions like these, and people should basically just use importance * tractability (call it "the IT framework") instead.

It's also just jargon-y. I call them "AI companies" because people outside the AGI memeplex don't know what an "AI lab" is, and (as you note) if they infer from someone's use of that term that the frontier developers are something besides "AI companies," they'd be wrong!

Biggest disagreement between the average worldview of people I met with at EAG and my own is something like "cluster thinking vs sequence thinking," where people at EAG are like "but even if we get this specific policy/technical win, doesn't it not matter unless you also have this other, harder thing?" and I'm more like, "Well, very possibly we won't get that other, harder thing, but still seems really useful to get that specific policy/technical win, here's a story where we totally fail on that first thing and the second thing turns out to matter a ton!"

Thanks, glad to hear it's helpful!

  • Re: more examples, I co-sign all of my teammates' AI examples here -- they're basically what I would've said. I'd probably add Tarbell as well.
  • Re: my personal donations, I'm saving for a bigger donation later; I encounter enough examples of very good stuff that Open Phil and other funders can't fund, or can't fund quickly enough, that I think there are good odds that I'll be able to make a really impactful five-figure donation over the next few years. If I were giving this year, I probably would've gone the route of political campaigns/PACs.
  • Re: sub-areas, there are some forms of policy advocacy and moral patienthood research for which small-to-medium-size donors could be very helpful. I don't have specific opportunities in mind that I feel like I can make a convincing public pitch for, but people can reach out if they're interested.

I hope to eventually/maybe soon write a longer post about this, but I feel pretty strongly that people underrate specialization at the personal level, even as there are lots of benefits to pluralization at the movement level and large-funder level. There are just really high returns to being at the frontier of a field. You can be epistemically modest about what cause or particular opportunity is the best, not burn bridges, etc, while still "making your bet" and specializing; in the limit, it seems really unlikely that e.g. having two 20 hr/wk jobs in different causes is a better path to impact than a single 40 hr/wk job.

I think this applies to individual donations as well; if you work in a field, you are a much better judge of giving opportunities in that field than if you don't, and you're more likely to come across such opportunities in the first place. I think this is a chronically underrated argument when it comes to allocating personal donations.
