
In 1900 the mathematician David Hilbert published a list of 23 of the most important unsolved problems in mathematics. This list heavily influenced mathematical research over the 20th century: if you worked on one of Hilbert’s problems, then you were doing respectable mathematics.

There is no such list within moral philosophy. That’s a shame. Not all problems that are discussed in ethics are equally important. And early-stage graduate students often have no idea what to write their theses on – and so just pick something they’ve already written on for coursework, or something that’s ‘hot’ at the time. I don’t know for sure, but I imagine the same is true of many other academic disciplines.

What would the equivalent list look like for moral philosophy? Of course, it’s difficult to define ‘important’, but let’s say here that they are the potentially soluble problems that, if solved and taken seriously, would make the greatest difference to the way the world is currently run. I’ve briefly discussed this idea with Nick Beckstead, and also Carl Shulman and Nick Bostrom, and here’s a select list of what we came up with. For more explanation of why, see my previous two posts on high impact philosophy, here and here.

The Practical List


1. What’s the optimal career choice? Earning to give, advocacy, research and innovation, or something more common-sensically virtuous?

2. What’s the optimal donation area? Development charities? Animal welfare charities? Extinction risk mitigation charities? Meta-charities? Or investing the money and donating later?

3. What are the highest leverage political policies? Libertarian paternalism? Prediction markets? Cruelty taxes, such as taxes on caged hens; luxury taxes?

4. What are the highest value areas of research? Tropical medicine? Artificial intelligence? Economic cost-effectiveness analysis? Moral philosophy?

5. Given our best ethical theories (or best credence distribution in ethical theories), what’s the biggest problem we currently face?


The Theoretical List

1. What’s the correct population ethics? How should we value future people compared with present people? Do people have diminishing marginal value?

2. Should we maximise expected value when it comes to small probabilities of huge amounts of value? If not, what should we do instead?

3. How should we respond to the possibility of creating infinite value (or disvalue)? Should that consideration swamp all others? If not, why not?

4. How should we respond to the possibility that the universe actually has infinite value? Does it mean that we have no reason to do any action (because we don’t increase the sum total of value in the world)? Or does this possibility refute aggregative consequentialism?

5. How should we accommodate moral uncertainty? Should we apply expected utility theory? If so, how do we make intertheoretic value comparisons? Does this mean that some high-stakes theories should dominate our moral thinking, even if we assign them low credence? (A sketch of one such formalisation follows this list.)

6. How should intuitions weigh against theoretical virtues in normative ethics? Is common-sense ethics roughly correct? Or should we prefer simpler moral theories?

7. Should we prioritise the prevention of human wrongs over the alleviation of naturally caused suffering? If so, by how much?

8. What sorts of entities have moral value? Humans, presumably. But what about non-human animals? Insects? The natural environment? Artificial intelligence?

9. What additional items should be on these lists?
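
On question 5, here is a minimal sketch of one standard formalisation – ‘maximise expected choiceworthiness’. The notation ($C$, $T_i$, $CW_i$, $A$) is illustrative rather than anything from this post, and the sketch assumes the theories’ choiceworthiness scales can already be compared, which is exactly what the intertheoretic-comparison worry calls into question.

```latex
% Sketch of "maximise expected choiceworthiness" under moral uncertainty.
% Assumptions (not from the post): credences C(T_i) over moral theories
% T_1, ..., T_n sum to 1, and CW_i(A) is the choiceworthiness that theory
% T_i assigns to action A, already placed on a common intertheoretic scale.
\[
  EC(A) \;=\; \sum_{i=1}^{n} C(T_i)\, CW_i(A)
\]
% One then picks the action A with the highest EC(A). Because the sum is
% linear in CW_i, a theory whose scale is much larger than the others' can
% dominate the verdict even at a small credence C(T_i) - which is the
% "high-stakes theories dominate" worry raised in question 5.
```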

This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to build Machine Superintelligence. Consider subscribing to stay up to date with my work. Wow. The Wall Street Journal just reported that, "a consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI." Technically, they can't actually do that, so I'm going to assume that Musk is trying to buy all of the nonprofit's assets, which include governing control over OpenAI's for-profit, as well as all the profits above the company's profit caps. OpenAI CEO Sam Altman already tweeted, "no thank you but we will buy twitter for $9.74 billion if you want." (Musk, for his part, replied with just the word: "Swindler.") Even if Altman were willing, it's not clear if this bid could even go through. It can probably best be understood as an attempt to throw a wrench in OpenAI's ongoing plan to restructure fully into a for-profit company. To complete the transition, OpenAI needs to compensate its nonprofit for the fair market value of what it is giving up. In October, The Information reported that OpenAI was planning to give the nonprofit at least 25 percent of the new company, at the time, worth $37.5 billion. But in late January, the Financial Times reported that the nonprofit might only receive around $30 billion, "but a final price is yet to be determined." That's still a lot of money, but many experts I've spoken with think it drastically undervalues what the nonprofit is giving up. Musk has sued to block OpenAI's conversion, arguing that he would be irreparably harmed if it went through. But while Musk's suit seems unlikely to succeed, his latest gambit might significantly drive up the price OpenAI has to pay. (My guess is that Altman will still ma