I work on AI alignment. Right now, I'm using ideas from decision theory to design and train safer artificial agents.
I also do work in ethics, focusing on the moral importance of future generations.
You can email me at thornley@mit.edu.
I said a little in another thread. If we get aligned AI, I think it'll likely be a corrigible assistant that doesn't have its own philosophical views that it wants to act on. And then we can use these assistants to help us solve philosophical problems. I'm imagining in particular that these AIs could be very good at mapping logical space, tracing all the implications of various views, etc. So you could ask a question and receive a response like: 'Here are the different views on this question. Here's why they're mutually exclusive and jointly exhaustive. Here are all the most serious objections to each view. Here are all the responses to those objections. Here are all the objections to those responses,' and so on. That would be a huge boost to philosophical progress. Progress has been slow so far because human philosophers take entire lifetimes just to fill in one small part of this enormous map, because humans make errors, so later philosophers can't even trust that small filled-in part, and because verification in philosophy isn't much quicker than generation.
I'm not sure, but I think I may also have a different view from yours on which problems are going to be bottlenecks to AI development. E.g. I think there's a big chance that the world would steam ahead even if we don't solve any of the current (non-philosophical) problems in alignment (interpretability, shutdownability, reward hacking, etc.).
> try to make them "more legible" to others, including AI researchers, key decision makers, and the public
Yes, I agree this is valuable, though I think it's valuable mainly because it increases the probability that people use future AIs to solve these problems, rather than because it will make people slow down AI development or try very hard to solve them pre-TAI.
I don't think philosophical difficulty is that much of an increase to the difficulty of alignment, mainly because I think that AI developers should (and likely will) aim to make AIs corrigible assistants rather than agents with their own philosophical views that they try to impose on the world. And I think it's fairly likely that we can use these assistants (if we succeed in getting them and aren't disempowered by a misaligned AI instead) to help a lot with these hard philosophical questions.
I didn't mean to imply that Wei Dai was overrating the problems' importance. I agree they're very important! I was making the case that they're also very intractable.
If I thought solving these problems pre-TAI would be a big increase to the EV of the future, I'd take their difficulty to be a(nother) reason to slow down AI development. But I think I'm more optimistic than you and Wei Dai about waiting until we have smart AIs to help us on these problems.
I'm a philosopher who's switched to working on AI safety full-time. I also know there are at least a few philosophers at Anthropic working on alignment.
With regard to your Problems in AI Alignment that philosophers could potentially contribute to:
When I decided to start working on AI, I seriously considered working on the kinds of questions you list. But due to the points above, I chose to do my current work instead.
Oops, yes, the fundamentals of my and Bruce's cases are very similar. I should have read Bruce's comment!
The claim we're discussing - about the possibility of small steps of various kinds - sounds kinda like a claim that gets called 'Finite Fine-Grainedness'/'Small Steps' in the population axiology literature. It seems hard to convincingly argue for, so in this paper I present a problem for lexical views that doesn't depend on it. I sort of gestured at it above with the point about risk without making it super precise. The one-line summary is that expected welfare levels are finitely fine-grained.
Nice post! Miscellaneous thoughts:
Harsanyi's theorem also requires that the social planner's preferences satisfy the VNM axioms.
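For anyone unfamiliar, here's a rough statement of the theorem (the standard version; I'm glossing over some details about the domain of lotteries):

```latex
% Harsanyi's aggregation theorem (rough statement):
% Suppose each individual i's preferences over lotteries satisfy the VNM
% axioms (so they're represented by a utility function u_i), the social
% planner's preferences over lotteries also satisfy the VNM axioms (so
% they're represented by W), and Pareto indifference holds. Then there
% exist weights a_1, ..., a_n and a constant b such that, for every lottery p:
W(p) = \sum_{i=1}^{n} a_i \, u_i(p) + b
% (Strengthening Pareto indifference to strong Pareto lets you take the
% weights to be positive.)
```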
I think this all depends on what you mean by 'many'. I'd guess maybe 10% of analytic philosophy papers include a proof of some kind, so that at least hundreds of proofs are published every year. And in a sense, every valid (spelled-out) argument is a proof.
I agree that the Claude proofs are pretty bad. The Arrhenius point is fairly obvious: what Arrhenius means by 'theories' in that paper is weak orders on populations, so if after taking into account moral uncertainty you still have a weak order, then the impossibility theorem still applies. (And later Arrhenius theorems relax both completeness and transitivity, so even departing from a weak order doesn't get you off the hook.)
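In case it helps anyone reading along: 'weak order' here just means a complete and transitive 'at least as good as' relation on populations, i.e.:

```latex
% Completeness: for all populations A, B:
A \succsim B \;\; \text{or} \;\; B \succsim A
% Transitivity: for all populations A, B, C:
(A \succsim B \wedge B \succsim C) \Rightarrow A \succsim C
```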
Claude makes this kind of point, but first it introduces an Agreement axiom that the proof never uses. Claude later comes close to admitting this ('Agreement plays almost no role'), tries to walk it back ('But Agreement rules out the escape route...'), and then fully admits it ('the fundamental impossibility holds regardless').
Which Claude model did you use? Did you use extended thinking? The flip-flopping above makes me think there was no extended thinking, and maybe a model with extended thinking would do better. (Though not much better I'd guess. I've found LLMs to be surprisingly bad at philosophy, even just the 'understanding the view and its implications' parts.)
I didn't bother checking the second population ethics proof, but it looks sloppy:
Doesn't any pair of populations "differ by at most some fixed bounded amount"? And what is Claude doing including 'e.g.'s in its formal statement of axioms?
Yes, you'd think so given that they've gotten so good at math! But when I've tried using LLMs to help with formal philosophy, I've found them to be really surprisingly bad, even at parts that seem very math-loaded (e.g. inventing proofs, following arguments, grasping views and their implications, coming up with counterexamples, etc.). I'm not sure why this is. I guess part of it is that it's hard to do RLVR on philosophy in the same way that you can do RLVR on math, but naively I'd expect more generalization from math to formal philosophy. Maybe the following is a factor: pretraining data doesn't contain that much bad mathematical reasoning, but it contains a huge amount of bad philosophical reasoning.