The FTX Foundation's Future Fund is a philanthropic fund making grants and investments to ambitious projects in order to improve humanity's long-term prospects.
We have a longlist of project ideas that we’d be excited to help launch.
We’re now announcing a prize for new project ideas to add to this longlist. If you submit an idea, and we like it enough to add to the website, we’ll pay you a prize of $5,000 (or more in exceptional cases). We’ll also attribute the idea to you on the website (unless you prefer to be anonymous).
All submissions must be received within the next week, i.e. by Monday, March 7, 2022.
We are excited about this prize for two main reasons:
- We would love to add great ideas to our list of projects.
- We are excited about experimenting with prizes to jumpstart creative ideas.
To participate, you can either
- Add your proposal as a comment to this post (one proposal per comment, please), or
- Fill in this form
Please write your project idea in the same format as the project ideas on our website. Here’s an example:
Early detection center
Biorisk and Recovery from Catastrophes
By the time we find out about novel pathogens, they’ve already spread far and wide, as we saw with Covid-19. Earlier detection would increase the amount of time we have to respond to biothreats. Moreover, existing systems are almost exclusively focused on known pathogens—we could do a lot better by creating pathogen-agnostic systems that can detect unknown pathogens. We’d like to see a system that collects samples from wastewater or travelers, for example, and then performs a full metagenomic scan for anything that could be dangerous.
You can also provide further explanation, if you think the case for including your project idea will not be obvious to us on its face.
Some rules and fine print:
- You may submit refinements of ideas already on our website, but these might receive only a portion of the full prize.
- At our discretion, we will award partial prizes for submissions that are proposed by multiple people or that require additional work on our part to make viable.
- At our discretion, we will award larger prizes for submissions that we really like.
- Prizes will be awarded at the sole discretion of the Future Fund.
We’re happy to answer questions, though it might take us a few days to respond due to other programs and content we're launching right now.
We’re excited to see what you come up with!
(Thanks to Owen Cotton-Barratt for helpful discussion and feedback.)
A think tank to investigate the game theory of ethics
Values and Reflective Processes, Effective Altruism, Research That Can Help Us Improve, Space Governance, Artificial Intelligence
Caspar Oesterheld’s work on Evidential Cooperation in Large Worlds (ECL) shows that some fairly weak assumptions about the shape of the universe are enough to conclude that there is one optimal system of ethics: the compromise among the preferences of all agents who cooperate with each other acausally. That would solve ethics for all practical purposes, and because ethics is so foundational, it would have enormous effects on a wide variety of fields.
The main catch is that it will take a lot more thought and empirical study to narrow down what that optimal compromise ethical system looks like. The ethical systems and bargaining methods used on Earth can serve as a sample, and convergent drives can help us extrapolate to unobserved types of agents. We may never be certain that we’ve found the optimal ethical system, but we can go from a state of overwhelming Knightian uncertainty to a state of quantifiable uncertainty. Along the way we can probably rule out many ethical systems as likely nonoptimal.
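To make “compromise among preferences” slightly more concrete, here is a toy sketch of one candidate formalization: the Nash bargaining solution between two agents with made-up utilities. ECL does not pin down which bargaining rule the acausal compromise would use, so the outcomes, utility numbers, and disagreement point below are purely illustrative assumptions.

```python
# Toy illustration (hypothetical numbers): a "compromise" between two agents'
# preferences, formalized here as the Nash bargaining solution, i.e. the
# (possibly randomized) outcome maximizing the product of each agent's gain
# over their disagreement payoff. ECL itself does not specify this rule.
from itertools import product

u1 = {"A": 10.0, "B": 4.0, "C": 0.0}   # agent 1 loves A, hates C
u2 = {"A": 0.0, "B": 7.0, "C": 10.0}   # agent 2 loves C, hates A
d1, d2 = 1.0, 1.0                      # payoffs if no compromise is reached

# Brute-force search over lotteries (p, q, r) on the probability simplex.
steps = 100
best = None
for i, j in product(range(steps + 1), repeat=2):
    if i + j > steps:
        continue
    p, q, r = i / steps, j / steps, (steps - i - j) / steps
    eu1 = p * u1["A"] + q * u1["B"] + r * u1["C"]
    eu2 = p * u2["A"] + q * u2["B"] + r * u2["C"]
    if eu1 <= d1 or eu2 <= d2:
        continue  # a compromise must leave both agents better off
    nash_product = (eu1 - d1) * (eu2 - d2)
    if best is None or nash_product > best[0]:
        best = (nash_product, p, q, r, eu1, eu2)

_, p, q, r, eu1, eu2 = best
print(f"Compromise lottery: P(A)={p:.2f}, P(B)={q:.2f}, P(C)={r:.2f}")
print(f"Expected utilities: agent 1 = {eu1:.2f}, agent 2 = {eu2:.2f}")
```

On these made-up numbers the compromise puts most of its weight on the middle-ground outcome B. The hard open question ECL raises is what plays the role of the utilities, weights, and disagreement points when the bargainers are all acausal cooperators across the universe.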
First and foremost, this is a reflective process that will inform altruistic priorities, which suggests the categories Values and Reflective Processes, Effective Altruism, and Research That Can Help Us Improve. But I also see applications wherever agents have trouble communicating: cooperation between multiple mass movements, between large groups of donors, between anonymous donors, between camps of voters, on urgent issues between civilizations that are too far separated to communicate quickly enough, and between agents on different levels of the simulation hierarchy. ECL may turn out to be a convergent goal of a wide range of artificial intelligences. Thus it also has indirect effects on the categories of Space Governance and Artificial Intelligence. (But I don’t think it would be good for someone to prioritize this over more direct AI safety work at this time.)
I see a few weaknesses in the argument for ECL, so a first step may be to get experts in game theory and physics together to probe these weaknesses and work out exactly what assumptions go into ECL and how plausible they are.
Some people have thought about this more than I have – including (of course) Caspar Oesterheld, Johannes Treutlein, David Althaus, Daniel Kokotajlo, and Lukas Gloor – but I don’t think anyone is currently focused on it.
I think it’s closer to 2, and the clearer term to use is probably “superrational cooperator,” but I suppose that’s probably what’s meant by “superrationalist”? Unclear. But “superrational cooperator” is clearer about (1) knowing about superrationality and (2) wanting to reap the gains from trade from superrationality. Condition 2 can be false because people use CDT or because they have very local or easily satisfied values and don’t care about distant or additional stuff.
So just as in all the thought experiments where EDT gets richer than CDT, your own behavior i...
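For context on that last point, here is a minimal sketch of the standard Newcomb’s problem calculation in which EDT predictably ends up richer than CDT; the payoffs and the 99% predictor accuracy are the usual illustrative numbers, not anything taken from the comment above.

```python
# Newcomb's problem: a predictor puts $1,000,000 in an opaque box iff it
# predicts you will take only that box; a transparent box always holds $1,000.
ACCURACY = 0.99          # assumed predictor accuracy (illustrative)
BIG, SMALL = 1_000_000, 1_000

# EDT conditions on the chosen action as evidence about the prediction.
edt_one_box = ACCURACY * BIG                    # box is probably full
edt_two_box = (1 - ACCURACY) * BIG + SMALL      # box is probably empty

# CDT treats the box contents as causally fixed: for any prior probability
# f that the box is full, two-boxing comes out ahead by exactly SMALL.
f = 0.5  # arbitrary prior; the ranking is the same for every f
cdt_one_box = f * BIG
cdt_two_box = f * BIG + SMALL

print(f"EDT: one-box EV ${edt_one_box:,.0f} vs two-box EV ${edt_two_box:,.0f}")
print(f"CDT: one-box EV ${cdt_one_box:,.0f} vs two-box EV ${cdt_two_box:,.0f}")
# The EDT agent one-boxes and, facing an accurate predictor, predictably
# walks away with about $990,000; the CDT agent two-boxes and gets $1,000.
```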