Calculating the cost-effectiveness of research into foundational moral questions
Research That Can Help Us Improve
All actions aimed at improving the world are founded, implicitly or explicitly, on a moral theory. However, there are many conflicting moral theories and little consensus on which theory, if any, is the correct one (this problem is known as Moral Uncertainty). Further adding to the confusion are questions such as whom to include as moral patients (animals? AIs?) and the problem of Moral Cluelessness. Together, these issues make it extremely difficult to know whether our actions are actually improving the world.
Our foundation's goal is to improve humanity's long-term prospects. It is therefore potentially worthwhile to spend significant resources on research that reduces or eliminates foundational problems such as Moral Uncertainty and Moral Cluelessness. However, it is currently unclear how cost-effective funding such research would be.
We are interested in projects that aim to calculate the cost-effectiveness of research into such foundational moral questions. We are also interested in smaller projects that estimate individual terms of these cost-effectiveness equations, such as the scale of a foundational issue or the tractability of researching it. One concrete project could be calculating the expected value lost from not knowing which moral theory is correct, or equivalently, the expected value of the information gained by learning which theory is correct.
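To make the last idea concrete, the calculation above can be sketched as an expected-value-of-perfect-information (EVPI) estimate over moral theories. This is a minimal illustration, not a proposed methodology: the theories, actions, credences, and values below are invented placeholders, and it assumes values are comparable across theories (itself a contested question in the Moral Uncertainty literature).

```python
# A hypothetical EVPI sketch over moral theories.
# All names and numbers are illustrative assumptions, not real estimates.

# Credences over candidate moral theories (must sum to 1).
credences = {"theory_A": 0.5, "theory_B": 0.3, "theory_C": 0.2}

# Value of each available action according to each theory (arbitrary units,
# assumed intertheoretically comparable).
action_values = {
    "action_1": {"theory_A": 10.0, "theory_B": -5.0, "theory_C": 2.0},
    "action_2": {"theory_A": 4.0,  "theory_B": 6.0,  "theory_C": 3.0},
}

def expected_value(action: str) -> float:
    """Credence-weighted value of an action across theories."""
    return sum(p * action_values[action][t] for t, p in credences.items())

# Best we can do while uncertain: pick the action with the highest
# expected value under our current credences.
ev_under_uncertainty = max(expected_value(a) for a in action_values)

# If research revealed the true theory, we would pick the best action for
# that theory; weight each scenario by our current credence in it.
ev_with_knowledge = sum(
    p * max(vals[t] for vals in action_values.values())
    for t, p in credences.items()
)

# EVPI: how much value learning the true theory would gain us in expectation.
evpi = ev_with_knowledge - ev_under_uncertainty
print(f"EV under uncertainty:      {ev_under_uncertainty:.2f}")
print(f"EV with perfect knowledge: {ev_with_knowledge:.2f}")
print(f"EVPI:                      {evpi:.2f}")
```

A real project would, of course, need defensible credences, a richer action space, and a stance on intertheoretic value comparison; the point here is only that the quantity in question is well-defined once those inputs are fixed.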