"To see the world as it is, rather than as I wish it to be."
Currently I work for EA Funds. My job title is still TBD, but I'm responsible for much of the communications on behalf of EA Funds and its constituent funds. I also work on grantmaking, fundraising, hiring, and some strategy setting.
I used to be a Senior Researcher on the General Longtermism team at Rethink Priorities. Concurrently, I also volunteered* as a fund manager for EA Funds' Long-Term Future Fund.
*Volunteering was by choice. LTFF offers to pay fund managers, but I was unsure whether it made sense to be paid for a second job while I was a salaried employee at RP with a lot of in-practice independence to do what I thought was best for the world.
Thanks Hayven! I'm glad you like this direction. The remaining challenge, from my perspective, is how we can practically build a robust community, particularly one that's not directly tied to singular short-term object-level metrics[1] like lives saved, money donated, or people placed in impactful jobs, without becoming overly inward-facing and losing track of why we're here in the first place.
We want the community to be neither a factory nor a social club.
Because judging a community too closely on specific object-level metrics risks biasing it toward a specific worldview, and might be unhealthy for the community in the long term.
(own views only) Thank you Jason; I think you've correctly nailed the most important (short-term) issue with the changed scope.
I think there are two huge uncertainties with trying to make grants in global health and development (GHD) meta. The first is that I'm not sure this is what donors want. The second is that I'm not sure there are good grantmakers who are willing to work in this area.
On the first uncertainty: I don't have survey results or anything, but I think many GHDF donors would feel betrayed if they learned that a significant fraction of their money goes to funding ambiguously meta activities[1].
I do think GHDF donors with high risk tolerance are poorly served by the current ecosystem (and may have to either handpick projects to support on their own, or donate to a meta fund with a large cause split). I don't have a good sense of how large this population of donors actually is.
On the second uncertainty: as an empirical matter, I believe it's been difficult to find grantmakers excited about evaluating GHD meta. Even if donors are on board, I don't think the current EAIF is set up well to do this, nor is the current GHDF.
(In the medium to long term, I don't necessarily expect grantmakers to be a significant bottleneck in themselves. Having enough assured funding, plus focusing more of our time on hiring, might be enough to solve that problem.)
Longer term, I think it probably makes sense for some fund to do global health and development meta[2] (it might even be under EA Funds!). I just don't think it's a good choice right now for either EAIF or GHDF.
I like your exit strategy suggestion and will probably bring it up with the team (note that I don't have any direct decision-making power for EAIF).
Again, these are just my own views. Caleb and other fund managers might disagree, and provide their own input.
I think many people give to GHDF because they want something that's maybe 10-20% more risky than GiveWell's All Grants Fund, whereas I expect many meta activities, particularly projects with a longer chain of impact than, say, paying for a fundraiser, to be much riskier.
I do think having a non-OP source of funding is good here. In addition to the greater independence you've noted, I think OP GHD community building is just quite conservative, e.g. more inclined to fund things with "one step of meta" and clear metrics, like fundraisers that counterfactually raise more money than they cost, or to incubate GH charities that are on track to become future GiveWell top charities. By contrast, I think people should be excited to fund the types of programs that originally got people like AGB to donate to global health, or to fund neglected-interventions research, even when the payoffs are not immediate.
I think I agree with the rest of this analysis (or at least the parts I could understand). However, the following paragraph seems off:
"To its credit, the write-up does highlight this, but does not seem to appreciate the implications are crazy: any PT intervention, so long as it is cheap enough, should be thought better than GD, even if studies upon it show very low effect size"
Apologies if I'm being naive here, but isn't this just a known problem with first-order cost-effectiveness analysis, rather than with this particular analysis per se? I mean, since cost could be arbitrarily low (or at least as low as $0.01), "better than GD" is a bit of a red herring; the claim is merely that a single (even high-quality) study is not enough for someone to update their prior all the way down to zero, or negative.
And stated in English, this seems eminently reasonable to me. There might be good second-order reasons not to act on a naive first-order analysis (e.g. risk/ambiguity aversion, wanting to promote better studies, etc.). But ultimately the literal claim doesn't seem crazy to me, and naively it's just what falls out of a direct cost-effectiveness framework.
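To make the first-order arithmetic concrete, here's a minimal sketch with entirely made-up numbers (the effect sizes and costs below are hypothetical illustrations I'm supplying, not figures from the write-up):

```python
# Minimal sketch of the first-order cost-effectiveness point.
# All numbers are hypothetical illustrations, not estimates from the write-up.

def cost_effectiveness(effect_per_person: float, cost_per_person: float) -> float:
    """Naive first-order cost-effectiveness: benefit per dollar spent."""
    return effect_per_person / cost_per_person

# Benchmark: a GiveDirectly-style cash transfer (hypothetical numbers).
gd = cost_effectiveness(effect_per_person=1.0, cost_per_person=1000.0)

# A psychotherapy (PT) intervention with a very small posterior effect size.
# Even after updating on a discouraging study, suppose the posterior effect
# isn't literally zero:
pt_effect = 0.01  # tiny residual effect

for cost in [100.0, 10.0, 1.0, 0.01]:
    pt = cost_effectiveness(pt_effect, cost)
    print(f"cost=${cost:>7}: PT/GD ratio = {pt / gd:.1f}")

# As cost -> 0, the ratio grows without bound, so *any* nonzero posterior
# effect eventually beats the benchmark. The "crazy" implication is a
# general feature of naive first-order analysis, not of this write-up.
```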
Re 2: It's plausible, but I'm not sure that this is true. Points against:
(EDIT 2023/12/04: Changed wording to be slightly more precise and slightly less strong) So there's at least some evidence that any or all of Hoffman/Hurd/Zilis (the 3 board members who left recently) would've opposed Sam's attempt to oust Toner. Far from certain, but I'd currently say[3] >50% (EDIT: that at least one of them would have been opposed). Especially if it turns out that one or all of them were themselves pushed out by Altman and they started comparing notes. Of course, ousting Altman in retaliation is a pretty big move, and the more politically savvy among them might've found a better compromise solution.
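As a purely illustrative aside on where a ">50%" credence like this can come from (the per-person probabilities below are made up, and the independence assumption is mine, not anything from the sources): needing only one objector out of three compounds quickly.

```python
# Hypothetical illustration: probability that at least one of three board
# members would have opposed the move, assuming (simplistically) that each
# opposes independently with probability p. All values of p are made up.
for p in [0.15, 0.25, 0.35]:
    at_least_one = 1 - (1 - p) ** 3
    print(f"p={p:.2f} each -> P(at least one opposed) = {at_least_one:.2f}")

# Even p=0.25 per person already gives ~0.58, i.e. >50%.
```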
His Wikipedia page says "On September 20, 2023, Hurd unveiled a detailed plan for how he would regulate AI if elected President, comparing AI to nuclear fusion, and proposing creating a branch of the executive to deal solely and directly on the issue of AI, and proposing strict regulations on civilian AI usage." The last item in particular doesn't sound especially conducive to OpenAI/Microsoft's advanced AI ambitions.
Convoluted wording because of "executives claimed that they were born via in vitro fertilization (IVF)."
Of course, this counterfactual is hard to verify. The Twitter backlash + OAI revolt probably means people would now be hesitant to be publicly pro-Toner.
Is it normal for nonprofits to initiate this, rather than invested employees of the federal government? Most employee giving programs I'm aware of primarily have employee-initiated ways of adding charities.
We also see moderate to large non-cognitive effects from short periods of supplementation, so we might expect similar saturation for the cognitive effects.
Relatedly, I'd be interested to know whether he has updated his views on the public's support for AI pauses or other forms of strict regulation since his last comment exchange with Katja, now that we have many reasonably high-quality polls on the American public's perception of AI (much more concerned than excited), as well as many more public conversations.