
Here are some examples of fictional grants that the Long-Term Future Fund (LTFF) very narrowly rejected. The grants are fictionalized to preserve anonymity, but each was very close to our funding bar and went unfunded only because of insufficient resources. We hope these examples help donors and other community members make an informed decision about which projects would be funded, given additional donations to the LTFF.

Three months ago, Linch and I[1] wrote a post about marginal grants at the LTFF. Many donors and community members found the post helpful, it generated substantial discussion, and it likely influenced the EA Forum's decision to run Marginal Funding Week.

But at the time we wrote that post, we were in the middle of a funding crunch, and we and many of our grantees were still reorienting to a new and confusing funding environment. This left a large range of possible "tiers" where our marginal grants might land. Since then, many members of the community have generously donated, we have a better sense of how much money we can distribute per month, and we have a better understanding of the current distribution of applicants. We can therefore now give a much narrower estimate of what a marginal dollar at the LTFF is likely to buy.

The fictional grants below represent the most promising applications we sadly had to turn down due to insufficient funding. Assuming a similar distribution of applicants and donations in the coming months, we expect additional donations will fund projects similar to these.

People interested in the Long-Term Future Fund may wish to donate here, or vote for us in the Donation Election here.

Fictional grants that we rejected but were very close to our funding bar

Each grant is based on 1-3 real applications we have received in the past ~three months. You can see our original LTFF marginal funding post here, and our post on the usefulness of funding the EAIF and LTFF here.[2] Please note that these are a few of the most promising grants we've recently turned down, not the average rejected grant.[3]

(~$25,000) Funding to continue research on a multi-modal chess language model, focusing on alignment and interpretability. The project involves optimizing a data extraction pipeline, refining the model's behaviour to be less aggressive, and exploring ways to modify the model training. Additional tasks include developing a simple Encoder-Decoder chess language model as a benchmark and writing an article about AI safety. The primary objective is to develop methods ensuring that multi-modal models act according to high-level behavioural priorities. The applicant's background includes experience as a machine learning engineer and in competitive chess, as well as in developing predictive models. The past year's work under a previous LTFF grant resulted in a training dataset and some initial analysis, laying the groundwork for this continued research.

(~$25,000) Four months' salary for a former academic to tackle some unusually tractable research problems in disaster resilience after large-scale GCRs.

Their work would focus on researching Australia's resilience to a northern hemisphere nuclear war. Their track record includes several papers in high-impact-factor journals, and their past experience and networks make them well positioned for further work in this area. The grantee would also do public outreach to inform the Australian public about nuclear risks and resilience strategies.

(~$50,000) Six months of career transition funding to help the applicant enter a technical AI safety role.

The applicant has seven years of software engineering experience at prominent tech companies and aims to pivot his career towards AI safety. During the grant, he'll focus on interpretability experiments with Leela Go Zero. The grant covers 50% of his previous salary and will facilitate upskilling in AI safety, completion of technical courses, and preparation for interviews with AI safety organizations. He has pivoted his career successfully in the past and has been actively engaged in the effective altruism community, co-running a local group and attending international conferences. This is his first funding request.

(~$40,000) Six months dedicated to exploring and contributing to AI governance initiatives, focusing on policy development and lobbying in Washington, D.C.

The applicant seeks to build expertise and networks in AI governance, aiming to talk with over 50 professionals in the field and apply to multiple roles in this domain. The grant will support efforts to increase the probability of the U.S. government enacting legislation to manage the development of frontier AI technologies. The applicant's background includes some experience in AI policy and a strong commitment to effective altruism principles. The applicant has fewer than three years of professional experience and an undergraduate degree from a top US university.

(~$100,000) 16 months of funding to complete a PhD, focusing on partially observable reward learning and developmental interpretability.

The applicant proposes demonstrating the non-identifiability issues of reward functions under partial observability and the associated misalignment risks. They plan to contribute to developing Singular Learning Theory for reinforcement learning. They have a notable background in theoretical research and have supervised relevant projects. The applicant expects to publish two papers on developmental interpretability. This funding will enable the applicant to complete their PhD independently, diverging from their original PhD topic to focus more on alignment interests.

(~$130,000) 12 months of independent research in AI alignment, focusing on integrating machine learning inductive biases with Singular Learning Theory.

The applicant aims to explore areas like collective identity, reflective stability, and value theory in AI systems. They have several well-received posts and contributions to various AI alignment discussions, and they propose to continue developing scalable mechanistic interpretability and formal corrigibility concepts. This funding would support living expenses in a high-cost area, provide resources for productivity enhancements, and cover necessary computational costs.

(~$30,000) Four months of funding to develop and promote a report to influence investors to advocate for AGI safety and governance best practices.

The project, led by an individual with experience in creating influential reports, focuses on guiding investors in tech firms and chipmakers to adopt and enforce AI safety guidelines. The funding covers salary, graphic design, equipment, and promotion costs. The report will encourage investors to leverage their positions to instigate corporate policy changes and support voluntary adoption of safety practices. The applicant has an undergraduate degree in philosophy from the University of Cambridge and three years of experience in responsible investment research; she plans to use her network and expertise to disseminate the report effectively within relevant financial circles.

(~$50,000) Nine months of independent research on LLM epistemology and building lie detectors for LLMs.

The applicant has two workshop papers; one came in the top three in a competition at a top-tier ML conference. The applicant has previously received a smaller research grant from us, which was fairly successful but not in the top 5% of outputs from last year's grantees.

(~$7,000) Four months of funding for research on enhancing AI models' ability to learn and interpret social norms, leading to an academic paper submission.

The researcher, having recently completed a Master's program in Artificial Intelligence at the University of Cambridge, aims to refine her model for interpretable norm learning. The project focuses on developing a generative model of possible norms and adapting to varying norm violation costs. This initiative will culminate in a submission to a respected multi-agent conference and will set the foundation for the researcher's upcoming PhD, focused on cooperative AI and AI safety. The applicant is active in AI safety communities and has completed prior research fellowships. The funding primarily covers living expenses, enabling full-time dedication to this research. This work is expected to contribute meaningfully to the field, particularly in understanding and implementing human values in AI systems.

(~$10,000) Funding to assist the applicant in transitioning into a U.S. government policy role, focusing on AI ethics and regulation.

The applicant is a fellow at a think tank in Washington, D.C., specializing in ethical governance frameworks for emerging AI technologies. The funding is required to hire an immigration lawyer to facilitate obtaining a green card, a prerequisite for most national security-focused jobs in the U.S. government. The applicant, a British citizen, has a background in AI ethics and policy. The estimated total cost, including lawyer fees and application expenses, is $9,200 USD. The applicant's work in AI ethics has been recognized, leading to informal job offers from the U.S. government. This funding is crucial for enabling the applicant to take up a policy role where they can influence AI governance and ethical guidelines at a national level.

Closing thoughts and information on donating 

We think that, under many worldviews, these grants are quite promising. It is a loss that our community wasn't able to fund them. That said, rational distribution of limited resources is always hard, and we don't have a great sense of which projects more established organizations narrowly exclude due to financial limitations. If you look at the list of projects above and think that they are more worthy of funding than your next best option, then I think you should consider increasing your donation to the LTFF; if you think that they aren't worthy of funding, then I think you should consider reducing your contribution to the LTFF.[4]

I welcome discussion of these grants relative to established organizations and other funding opportunities in the comments. Linch and I are also interested in responding to questions from potential applicants and donors, so if you have questions about the LTFF, now is an especially good time to ask. Please remember that these grants are fictional (though they are based on real applications).

Also, Open Phil is matching donations to the Long-Term Future Fund and EA Infrastructure Fund 2:1, up to $3.5m from them ($1.75m from donors). We have around $1.28m of the $1.75m matching filled. In theory, you should be more willing to donate to us if you think the world is better off with the marginal dollar at the Long-Term Future Fund than at Open Philanthropy.

You can donate to us either via Giving What We Can or every.org. You can also vote for us in the Donation Election here. If you'd like to talk to us before deciding whether to donate, please message me on this forum or at [c***b] [at] effectivealtruismfunds.org.
 

  1. ^

    Throughout the text, 'I' refers to Caleb Parikh, and 'we' refers to both Caleb Parikh and Linch Zhang. This reflects the perspectives of two individuals who are very familiar with the Long-Term Future Fund (LTFF). However, others associated with the LTFF might not agree that this accurately represents their impression of the LTFF's marginal (rejected) grants.

  2. ^

    Thanks to @Lizka for encouraging us to write a quick update to our marginal funding post for the LTFF. Linch also left a comment on this topic here.

  3. ^

    I am a little worried that publishing this list might discourage some promising projects from applying; on balance, I think it's better just to be transparent, but I'd also direct your attention to this post that I like on imposter syndrome.

  4. ^

    Also, if you work at a grantmaking organization and think these grants are competitive with or better than your current marginal grant, maybe we should chat. If you think these grants are worse than your marginal grant, I am also interested in chatting about directing money to projects you think are more valuable.

  5. ^

    I am seriously considering investigating how marginal funding would be spent at more established organizations over the next four months to work out whether we should continue to focus on small projects or instead give funding to larger, more established projects (which I think would also be significantly less effort to evaluate per dollar).


Comments

Does the LTFF ever counter-offer with an amount that would move the grant past the funding bar for cost-effectiveness? I would guess that some of these hypothetical applicants would accept a salary at 80% of what they applied for, and if the grants are already marginal then a 25% increase in cost-effectiveness could push them over the bar.

Yes, we do that fairly regularly. We haven't discussed people's counterfactual options above, but this often makes a 20% reduction in grant size unattractive.

I suspect there would be less potential discouragement effect if you listed some grants that were just over the bar?

Grants that are just over the bar look pretty similar to these. I don't think there's a sharp drop-off in quality around the funding bar (though there has been in the past).

That makes sense. However, I do think that showing that would be less discouraging for anyone around the bar, who are probably the people it's most important not to discourage (people significantly below would be wasting their time, people significantly above are more likely to be confident enough to apply).

people significantly above are more likely to be confident enough to apply

I think this is directionally true but overstated; my impression is that most people are just pretty uncalibrated about this sort of thing.

I think the intent was primarily to speak to potential donors, for whom "these are sample grants like those that would be filled with your marginal donation dollar this giving season" is probably more actionable/motivating.

I guess the framing of the post is pretty relevant: these projects would be over the bar if the LTFF got more donations. (Although I appreciate it being important to avoid discouraging people.) 

I might also flag that I don't think getting rejected generally has costs besides the time you put in and your motivation (someone from LTFF could correct me if I'm wrong). So applying is often worth it even if you think it's pretty likely that you'll get rejected. This isn't to say that rejection isn't hard; here's a thread with tips and others' experiences. But it seems that "Don't think, just apply (usually)!" is pretty good advice.

I mean, obviously there are non-zero costs on our end to evaluate applications. But I think applicants should basically not take that into account when applying; it's very easy for people to be overly scrupulous when deciding whether to apply. I almost always appreciate more applications for helping us make better informed decisions, and for improving the mean quality of grants that we do fund.

I wonder if LTFF has tried running a Kaggle competition for grant success vs. grant rejection? I think this would be quite interesting for people looking to apply for grants, as we could gain some idea of the likelihood of success or failure of applications before submitting, and it would allow applicants to modify the grant until it looks promising.

Hmm, we've accepted fewer than a thousand applications in our entire history, so I think a data science competition is massively overkill.

Also, there are the Goodharting worries.

Executive summary: The Long-Term Future Fund details fictional grants that provide insight into promising projects they would fund with additional donations.

Key points:

  1. The fictional grants represent promising applications the LTFF regrettably turned down recently due to insufficient funding.
  2. The grants cover areas like AI safety research, career transitions into AI safety, and influencing investors and policymakers on AI governance.
  3. Additional donations to the LTFF would likely fund projects similar to these fictional grants.
  4. The post invites discussion on these grants and encourages those who find them promising to consider donating.
  5. Open Philanthropy is matching donations to the LTFF 2:1 up to $1.75 million from donors.
  6. The LTFF welcomes questions from potential applicants and donors.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
