The FTX Foundation's Future Fund is a philanthropic fund making grants and investments to ambitious projects in order to improve humanity's long-term prospects.
We have a longlist of project ideas that we’d be excited to help launch.
We’re now announcing a prize for new project ideas to add to this longlist. If you submit an idea, and we like it enough to add to the website, we’ll pay you a prize of $5,000 (or more in exceptional cases). We’ll also attribute the idea to you on the website (unless you prefer to be anonymous).
All submissions must be received in the next week, i.e. by Monday, March 7, 2022.
We are excited about this prize for two main reasons:
- We would love to add great ideas to our list of projects.
- We are excited about experimenting with prizes to jumpstart creative ideas.
To participate, you can either
- Add your proposal as a comment to this post (one proposal per comment, please), or
- Fill in this form
Please write your project idea in the same format as the project ideas on our website. Here’s an example:
Early detection center
Biorisk and Recovery from Catastrophes
By the time we find out about novel pathogens, they’ve already spread far and wide, as we saw with Covid-19. Earlier detection would increase the amount of time we have to respond to biothreats. Moreover, existing systems are almost exclusively focused on known pathogens—we could do a lot better by creating pathogen-agnostic systems that can detect unknown pathogens. We’d like to see a system that collects samples from wastewater or travelers, for example, and then performs a full metagenomic scan for anything that could be dangerous.
You can also provide further explanation, if you think the case for including your project idea will not be obvious to us on its face.
Some rules and fine print:
- You may submit refinements of ideas already on our website, but these might receive only a portion of the full prize.
- At our discretion, we will award partial prizes for submissions that are proposed by multiple people, or require additional work for us to make viable.
- At our discretion, we will award larger prizes for submissions that we really like.
- Prizes will be awarded at the sole discretion of the Future Fund.
We’re happy to answer questions, though it might take us a few days to respond due to other programs and content we're launching right now.
We’re excited to see what you come up with!
(Thanks to Owen Cotton-Barratt for helpful discussion and feedback.)
Research to determine which human cultures minimize the risks of major catastrophes
Great Power Relations, Values and Reflective Processes, Artificial Intelligence
I posit that human cultures differ and that there’s a chance that some cultures are more likely to punish in minor ways and to adapt to new situations peacefully, while others may be more likely to wage wars. This may be completely wrong.
But if it is not, we could investigate what processes can be used to foster the sort of culture that is less likely to immanentize global catastrophes, and how to structure the cultural learning of future AI systems such that they also learn that culture. Then (seeming) cooperation failures between AIs would be frequent and minor, really part of their bargaining process, rather than infrequent and civilization-ending. It might be even more important to set a clear cultural Schelling point for AIs if all cultures play well with themselves but only some play well with all other cultures.
Some more detail on my inspiration for the idea (copied from my blog):
Herrmann et al. (2008) found that in games resembling collective prisoner’s dilemmas with punishment, cultures worldwide fall into different groups. Those with antisocial punishment fail to realize the gains from cooperation, but two other groups succeed: in the first (Boston, Copenhagen, and St. Gallen), participants cooperated at a high level from the start and used occasional punishment to keep it that way. In the second (Seoul, Melbourne, and Chengdu), the prior appeared to be low cooperation, but through punishment the participants reached, within a few rounds, the same level of cooperation as the first group.
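To make the experimental setup concrete, here is a minimal sketch of how payoffs work in one round of a public goods game with punishment. This is my own illustration, not Herrmann et al.’s exact protocol: the endowment, multiplier, group size, and punishment costs are assumed values in the spirit of such experiments rather than the paper’s parameters.

```python
# Hypothetical one-round public goods game with punishment (illustrative
# parameters, not Herrmann et al.'s actual design).

ENDOWMENT = 20     # tokens each player starts the round with
MULTIPLIER = 1.6   # pooled contributions are multiplied by this...
GROUP_SIZE = 4     # ...and split evenly among the group
PUNISH_COST = 1    # cost to the punisher per punishment point assigned
PUNISH_FINE = 3    # cost to the target per punishment point received

def round_payoffs(contributions, punishments):
    """contributions[i]: tokens player i puts into the public pot.
    punishments[i][j]: punishment points player i assigns to player j.
    Returns each player's payoff for the round."""
    pot_share = MULTIPLIER * sum(contributions) / GROUP_SIZE
    payoffs = []
    for i in range(GROUP_SIZE):
        assigned = sum(punishments[i])                                 # points i hands out
        received = sum(punishments[j][i] for j in range(GROUP_SIZE))   # points aimed at i
        payoffs.append(ENDOWMENT - contributions[i] + pot_share
                       - PUNISH_COST * assigned - PUNISH_FINE * received)
    return payoffs

# Example: three full contributors punish one free-rider (player 3).
contributions = [20, 20, 20, 0]
punishments = [
    [0, 0, 0, 3],
    [0, 0, 0, 3],
    [0, 0, 0, 3],
    [0, 0, 0, 0],  # nonzero entries here, aimed back at the cooperators,
]                  # would be the "antisocial punishment" pattern
print(round_payoffs(contributions, punishments))  # free-riding no longer pays
```

With these (assumed) numbers the punished free-rider ends the round worse off than the contributors, which is the mechanism by which punishment can sustain cooperation; in the antisocial-punishment groups, punishment is also aimed at cooperators, so that mechanism breaks down.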
The behavior of these two groups appears to me to map (somewhat imperfectly) onto the successful Tit for Tat and Pavlov strategies in iterated prisoner’s dilemmas.
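For readers who don’t have the two strategies at their fingertips: Tit for Tat cooperates first and then copies the opponent’s previous move, while Pavlov (“win-stay, lose-shift”) repeats its own last move after a good payoff and switches after a bad one. Below is a minimal, self-contained sketch of both in a noisy iterated prisoner’s dilemma; the payoff matrix is the standard one, but the round count, noise rate, and pairings are arbitrary assumptions of mine.

```python
# Illustrative iterated prisoner's dilemma: Tit for Tat vs. Pavlov
# (win-stay, lose-shift), with occasional noisy moves.
import random

# (my payoff, their payoff) indexed by (my move, their move)
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def pavlov(my_history, their_history):
    """Win-stay, lose-shift: repeat the last move if it paid well (3 or 5),
    otherwise switch."""
    if not my_history:
        return "C"
    last_payoff = PAYOFFS[(my_history[-1], their_history[-1])][0]
    if last_payoff >= 3:                            # "win": stay
        return my_history[-1]
    return "D" if my_history[-1] == "C" else "C"    # "lose": shift

def play(strategy_a, strategy_b, rounds=200, noise=0.05, seed=0):
    """Play an iterated PD where each move is flipped with probability `noise`
    (a misunderstanding); return the average payoff per round for both players."""
    rng = random.Random(seed)
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        if rng.random() < noise:
            move_a = "D" if move_a == "C" else "C"
        if rng.random() < noise:
            move_b = "D" if move_b == "C" else "C"
        hist_a.append(move_a)
        hist_b.append(move_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
    return score_a / rounds, score_b / rounds

if __name__ == "__main__":
    print("TFT    vs TFT:   ", play(tit_for_tat, tit_for_tat))
    print("Pavlov vs Pavlov:", play(pavlov, pavlov))
    print("TFT    vs Pavlov:", play(tit_for_tat, pavlov))
```

One reason the mapping is suggestive: with a little noise, two Tit for Tat players tend to get dragged into long retaliation cycles, whereas two Pavlov players tend to stumble back into cooperation after a misstep, which fits the picture of punishments that stay frequent but minor rather than escalating.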
Sarah Constantin writes:
As mentioned, I think these strategies map somewhat imperfectly to human behavior, but I feel that I can often classify the people around me as tending toward one or the other strategy.
Pavlovian behaviors:
Tit for Tat behaviors:
This way of categorizing behaviors has led me to think that there are forms of both strategies that seem perfectly nice to me. In particular, I’ve met socially astute agents who noticed that I’m a “soft culture” tit-for-tat type of person and adjusted to my interaction style. I don’t think it would make sense for an empathetic tit-for-tat agent to adjust to a Pavlovian agent in such a way, but it’s a straightforward self-modification for an empathetic Pavlovian agent.
Further, Pavlovian agents probably have a much easier time navigating areas like entrepreneurship, where you’re always moving in innovative territory that doesn’t yet have any hard and fast rules you could anticipate; rather, the rules need to be renegotiated all the time.
Pavlov also seems more time-consuming and cognitively demanding, so it may be more attractive for socially astute agents and for situations where it promises gains over a tit-for-tat approach.
The idea is that one type of culture may be safer than another for AIs to learn from via, e.g., inverse reinforcement learning. My tentative hypothesis is that the Pavlovian culture is safer because punishments are small and routine, with little risk of ideological, fanatical retributivism emerging.