The Long-Term Future Fund has room for more funding, right now

Sure. I guess I don't have a lot of faith in your team's ability to do this, since you and the people you're funding are already saying things that seem amateurish to me. But I'm not sure that is a big deal.

Status update: Getting money out of politics and into charity

Perhaps the biggest area of agreement was that one hurdle we would face is getting voters to trust us -- not just that it was a good idea to give money to our platform, but that we wouldn’t steal their money. This requires getting some high-profile backing (from both parties).

Is there any way to create legal infrastructure so that voters could sue if you didn't follow through on your promises? And so that your finances are transparent? Perhaps the legal concept of "escrow" could be useful?

The Long-Term Future Fund has room for more funding, right now

> I'm not in favor of funding exclusively based on talent, because I think a lot of the impact of our grants is in how they affect the surrounding field, and low-quality work dilutes the quality of those fields and attracts other low-quality work.

Let's compare the LTFF evaluating the quality of a grant proposal to the academic community evaluating the quality of a published paper. The academic community has big advantages: the work is being evaluated retrospectively instead of prospectively (i.e. it actually exists; it is not just a hypothetical project); the community has more time and more eyeballs; and it has people who are very senior in their field, whereas your team is relatively junior--plus, "longtermism" is a huge area that's really hard to be an expert in all of.

Even so, the academic community doesn't seem very good at this task. "Sleeping beauty" papers, whose quality is only recognized long after publication, seem common. Breakthroughs are denounced by scientists, or simply underappreciated, at first (often 'correctly', in the sense of being less fleshed out than existing theories). This paper contains a list of 34 examples of Nobel Prize-winning work being rejected by peer review. "Science advances one funeral at a time", as the saying goes.

Problems compound when the question of first-order quality is replaced by the question of what others will consider to be high quality. You're funding researchers to do work that you consider to be work that others will consider to be good--based on relatively superficial assessments due to time limitations, it sounds like.

Seems like a recipe for herd behavior. But breakthroughs come from mavericks. This funding strategy could have a negative effect by stifling innovation (filtering out contrarian thinking and contrarian researchers from the field).

Keep longtermism weird?

(I'm also a little skeptical of your "low-quality work dilutes the quality of those fields and attracts other low-quality work" fear--since high citation count is often thought of as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality. I think the most likely fate of low-quality work is to be forgotten. If people are too credulous of work which is actually low-quality, it's unclear to me why the fund managers would be immune to this, and having more contrarians seems like the best solution to me. The general approach of "fund many perspectives and let them determine what constitutes quality through discussion" has the advantage of offloading work from the LTFF team.)

RyanCarey's Shortform

I thought this Astral Codex Ten post, explaining how the GOP could benefit from integrating some EA-aligned ideas like prediction markets into its platform, was really interesting. Karl Rove retweeted it here. I don't know how well an anti-classism message would align with EA in its current form though, if Habryka is right that EA is currently "too prestige-seeking".

The Long-Term Future Fund has room for more funding, right now

> there are probably lots of people who could be doing useful direct work, but they would require resources and direction that we as a community don't have the capacity for.

I imagine this could be one of the highest-leverage places to apply additional resources and direction though. People who are applying for funding for independent projects are people who desire to operate autonomously and execute on their own vision. So I imagine they'd require much less direction than marginal employees at an EA organization, for instance.

I also think there's an epistemic humility angle here. It's very likely that the longtermist movement as it currently exists is missing important perspectives. To some degree, as a funder, you are diffing your perspective against that of applicants and rejecting applicants whose projects make sense according to their perspective and not yours. It seems easy for this to result in the longtermist movement developing more homogeneous perspectives over time, as people Goodhart on whatever metrics are related to getting funding/career advancement. I'm actually not convinced that direction is a good thing! I personally would be more inclined to fund anyone who meets a particular talent bar. That also makes your job easier because you can focus on just the person/people and worry less about their project.

> we do offer to give people one-off feedback on their applications.

Huh. My understanding was that your rejection email says the fund is unable to provide further feedback due to the high volume of applications.

The Long-Term Future Fund has room for more funding, right now

I see. That suggests you think the LTFF would have much more room for funding with some not-super-large changes to your processes, such as encouraging applicants to submit multiple project proposals, or doing calls with applicants to talk about other projects they could do, or modifications to their original proposal which would make it more appealing to you.

The Long-Term Future Fund has room for more funding, right now

> We received 129 applications this round, desk rejected 33 of them, and are evaluating the remaining 96. Looking at our preliminary evaluations, I’d guess we’ll fund 20 - 30 of these.

I keep hearing that there is "plenty of money for AI safety" and things like that. But by the reversal test, don't these numbers imply you think that most LTFF applicants could do more good earning to give? (Assuming they can make at least the hourly wage they requested on their application in the private sector.)

If they request a grant at a wage of $X/hr, and you reject their proposal, that implies you think the value of their work is less than $X per hour (since you are unwilling to purchase it at that price), so they would be better off spending a marginal hour earning $X for the fund instead of putting that marginal hour into direct work.

Your post talks about "room for more funding" relative to your previous standards for funding, but I think this might be a better way to think about it: if you would be sad to see an applicant give up on direct work and switch to earning to give after the LTFF rejects them, you still have room for more funding. (And ~80% of applicants are getting rejected here--what message should they take away from a rejection? I realize that "give up on direct work" is probably an incredibly demoralizing message for those 80%; I'm just not sure why the above argument is incorrect.)
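A toy sketch of the reversal-test argument, if it helps to see it spelled out (the function name and all numbers are made up for illustration; $X is the hourly wage requested on the application):

```python
# Hypothetical illustration of the argument above, not anyone's actual model.

def better_use_of_marginal_hour(hourly_wage_requested, direct_work_value_per_hour):
    """Which use of a marginal hour produces more value for the fund?

    Rejecting a proposal priced at $X/hr reveals that the fund values the
    work at less than $X/hr -- so, under the argument above, earning $X and
    donating it dominates doing the work directly.
    """
    if direct_work_value_per_hour >= hourly_wage_requested:
        return "direct work"
    return "earn to give"

# A rejected applicant asked for $50/hr; rejection implies the fund values
# their work at less than $50/hr -- say $30/hr:
print(better_use_of_marginal_hour(50, 30))  # -> earn to give
```

Of course this ignores real-world frictions (the applicant's actual earning potential, career capital, motivation), which is part of why I'm asking rather than asserting.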

Please stand with the Asian diaspora

> However, I'm not sure in practice there is very much we can directly do about the issue.

Maybe it's worth pointing out that the OP doesn't ask us to do anything other than "stand with the Asian diaspora", which doesn't seem very hard. (I'm reminded of that relationship cliché where one partner tells the other about a problem they have, and their partner responds by trying to solve the problem, when all that was really desired was a sympathetic ear.)

I stand with the Asian diaspora. Even if the shooting was not motivated by anti-Asian prejudice, it was still wrong. I'm not Asian, but I've had many Asian friends and colleagues over the course of my life, people I respect and care about. I hope they and everyone else in the diaspora are able to pull through this.

Politics is far too meta

Well put. Sadly, the horse race/popularity contest is more dramatic, more interesting, and easier to follow than important policy details. And the more polarized our discussion becomes, the easier it is to rationalize a focus on "making sure the good guys win" over figuring out policy details which might change our view regarding what it is good to do, or even who the good guys are.

In terms of concrete solutions, I wonder if the best approach is satirizing excessive meta-ness. (One possible example of how this might be done: "How will voters react to Clinton campaign messaging in response to Trump's tweet about the New York Times' take on how voters are reacting to the latest Clinton email bombshell? Tune in now to find out, it's important!" Another possible example.) It's a rather cynical solution, essentially giving in and saying that a well thought out argument like the one you wrote here isn't memetically fit enough to change behavior substantially amongst the pundit class. But I have a feeling that skewering meta by dialing it up to 11 could be quite effective if done by someone wittier than I am. (It's possible I've been reading Twitter too much lately...)

Articles are invitations

In my experience, when I Facebook message or email EAs I have met in person, bringing up conversation topics I think are substantially higher value than the median topic we would probably wander into during casual chitchat at a party, my message is ignored a large fraction of the time. I don't think this is specific to EAs; I think people are just really flaky when it comes to responding to messages. But it is demoralizing, and IMO it destroys a lot of the value of the EA network.

I guess what I'm saying is, maybe keep your expectations low for sending people cold emails and cold messages...

And err on the side of responding to messages that people do send you. It's quick & easy to reply and say something like "Sorry, I'm not interested", which increases the probability that the person will send you another message later that you are interested in (as opposed to deciding that "it would be awkward/humiliating to message Person X again since they ignored my last message, so I will refrain from doing so even though this new message might be really important for them to read").
