xccf

You can now apply to EA Funds anytime! (LTFF & EAIF only)

I don't think risk goes up linearly with time. Many people quit their PhDs when they aren't a good fit.

Fair enough.

Maybe a pragmatic solution here is to emphasize to people who get a grant to do independent research that they can quit and give back the remainder of their grant at any time?

You can now apply to EA Funds anytime! (LTFF & EAIF only)

Sure. Well, when the LTFF funds graduate students who aren't even directly focused on improving the long-term future, just to help them advance their careers, I think that sends a strong signal that the LTFF thinks grad school should be the default path. And if grad school carries 5-10x the risk of independent research, it seems like you should be 5-10x as hesitant to fund grad students as independent researchers. (Assuming for the moment that paternalism is in fact the correct orientation for a grantmaker to have.)

You can now apply to EA Funds anytime! (LTFF & EAIF only)

My model for why there's a big discrepancy between what NIH grantmakers will fund and what Fast Grants recipients want to do is that NIH grantmakers adopt a sort of conservative, paternalistic attitude. I don't think this is unique to NIH grantmakers. For example, in your comment you wrote:

we want to avoid funding people for independent research when they might do much better in an organization

The person who applies for a grant knows a lot more about their situation than the grantmaker does: their personal psychology, the nature of their research interests, their fit for various organizations. They seem a lot better equipped to make career decisions for themselves than busy grantmakers.

It seems worth considering the possibility that there are psychological dynamics to grantmaking that are inherent in the nature of the activity. Maybe the NIH has just had more time to slide down this slope than EA Funds has.

You can now apply to EA Funds anytime! (LTFF & EAIF only)

The feedback loops in grantmaking aren't great. There's a tendency for everyone to assume that because you control so much money, you must know what you're doing. (I talked to an ex-grantmaker who said that even after noticing and articulating this tendency, he continued to see it operating in himself.) And people who want to get a grant will be extra deferential:

once you become a philanthropist, you never again tell a bad joke… everyone wants to be on your good side. And I think that can be a very toxic environment…

source

So I think it's important to be extra self-skeptical if you're working as a grantmaker.

You can now apply to EA Funds anytime! (LTFF & EAIF only)

I think you're splitting hairs here--my point is that your "hesitation" doesn't really seem to be justified by the data.

trying to pursue an independent research path will be a really big waste of human capital, and potentially cause some pretty bad experiences

I think this is even more true for graduate school:

Independent research seems superior to graduate school for multiple reasons, but one factor is that the time commitment is much lower.

In my opinion it's not enough to carefully think through independent research grants... with so much longtermist funding centralized through your organization, you also have to carefully think through a default of funneling people into graduate school--another path that can waste a lot of human capital and cause a lot of bad experiences, but lasts 5-10x longer.

You can now apply to EA Funds anytime! (LTFF & EAIF only)

It’s hard to find great grants

Pardon me if this is overly pedantic, but I think you might be missing a map/territory distinction here. "It's hard to find great grants" seems different from "It's hard to find grants we really like". For example, the LTFF managers mentioned multiple times in this post that they're skeptical of funding independent researchers, but this analyst found (based on a limited sample size) that independent researchers outperformed organizations among LTFF grant recipients. Similarly, a poll of Fast Grants recipients found that almost 80% would make major changes to their research program if funders relaxed constraints on what their grants could be used for, suggesting that the preferences of grantmakers can diverge wildly from the preferences of researchers applying for grants.

The Long-Term Future Fund has room for more funding, right now

Sure. I guess I don't have a lot of faith in your team's ability to do this, since you, and the people you fund, are already saying things that seem amateurish to me. But I'm not sure that's a big deal.

Status update: Getting money out of politics and into charity

Perhaps the biggest area of agreement was that one hurdle we would face is getting voters to trust us -- not just that it was a good idea to give money to our platform, but that we wouldn’t steal their money. This requires getting some high-profile backing (from both parties).

Is there any way to create legal infrastructure so that voters could sue if you didn't follow through on your promises? And so that your finances are transparent? Perhaps the legal concept of "escrow" could be useful?

The Long-Term Future Fund has room for more funding, right now

I'm not in favor of funding exclusively based on talent, because I think a lot of the impact of our grants is in how they affect the surrounding field, and low-quality work dilutes the quality of those fields and attracts other low-quality work.

Let's compare the situation of the Long-Term Future Fund evaluating the quality of a grant proposal to that of the academic community evaluating the quality of a published paper. The academic community has big advantages: the work is evaluated retrospectively rather than prospectively (i.e. it actually exists, rather than being a hypothetical project); the community has more time and more eyeballs; and it includes people who are very senior in their field, whereas your team is relatively junior--plus, "longtermism" is a huge area that's really hard to be an expert in all of.

Even so, the academic community doesn't seem very good at this task. "Sleeping beauty" papers, whose quality is only recognized long after publication, seem common. Breakthroughs are initially denounced by scientists, or simply underappreciated (often 'correctly', in the sense that they're less fleshed out than existing theories). This paper contains a list of 34 examples of Nobel Prize-winning work being rejected by peer review. "Science advances one funeral at a time", they say.

Problems compound when the question of first-order quality is replaced by the question of what others will consider to be high quality. You're funding researchers to do work that you consider to be work that others will consider to be good--based on relatively superficial assessments due to time limitations, it sounds like.

Seems like a recipe for herd behavior. But breakthroughs come from mavericks. This funding strategy could have a negative effect by stifling innovation (filtering out contrarian thinking and contrarian researchers from the field).

Keep longtermism weird?

(I'm also a little skeptical of your "low-quality work dilutes the quality of those fields and attracts other low-quality work" fear. Since high citation count is often treated as an ipso facto measure of quality in academia, work that attracts additional related work is probably not low quality; I think the most likely fate of low-quality work is to be forgotten. And if people really are too credulous of work which is actually low quality, it's unclear to me why the fund managers would be immune to this--having more contrarians seems like the best solution. The general approach of "fund many perspectives and let them determine what constitutes quality through discussion" also has the advantage of offloading work from the LTFF team.)

RyanCarey's Shortform

I thought this Astral Codex Ten post, explaining how the GOP could benefit from integrating some EA-aligned ideas like prediction markets into its platform, was really interesting. Karl Rove retweeted it here. I don't know how well an anti-classism message would align with EA in its current form though, if Habryka is right that EA is currently "too prestige-seeking".