I'm a former data scientist with 5 years of industry experience, now working in Washington, DC to bridge the gap between policy and emerging technology. AI is moving very quickly, and we need to help the government keep up!
I work at IAPS, a think tank of aspiring wonks working to understand and navigate the transformative potential of advanced AI. Our mission is to identify and promote strategies that maximize the benefits of AI for society and develop thoughtful solutions to minimize its risks.
I'm also a professional forecaster with specializations in geopolitics and electoral forecasting.
You're right that I need to bite the bullet on epistemic norms too, and I do think that's a highly effective reply. But at the end of the day, yes, I think "reasonable" in epistemology is also implicitly goal-relative in the same way - it means something like "conducive to having beliefs that accurately track reality." The difference is that this goal is so universally shared, so stable across so many different value systems, and so deeply embedded in the concept of belief itself that it feels categorical.
You say I've "replaced all the important moral questions with trivial logical ones," but that's unfair. The questions remain very substantive - they just need proper framing:
Instead of "Which view is better justified?" we ask "Which view better satisfies [specific criteria like internal consistency, explanatory power, alignment with considered judgments, etc.]?"
Instead of "Would the experience machine be good for me?" we ask "Would it satisfy my actual values / promote my flourishing / give me what I reflectively endorse / give me what an idealized version of myself might want?"
These aren't trivial questions! They're complex empirical and philosophical questions. What I'm denying is that there's some further question -- "But which view is really justified?" -- floating free of any standard of justification.
Your challenge about moral uncertainty is interesting, but I'd say: yes, you can hedge across different moral theories if you have a higher-order standard for managing that uncertainty (like maximizing expected moral value across theories you find plausible). That's still goal-relative, just at a meta-level.
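To make that concrete, here's a toy sketch of what I mean by hedging at the meta-level. The theories, credences, and value scores are purely illustrative placeholders, and putting different theories' verdicts on one common scale is itself a contestable modeling choice, not something I'm claiming to have solved:

```python
# Toy sketch of "maximize expected moral value across theories you find plausible".
# Credences and value scores are illustrative placeholders only.
credences = {"utilitarianism": 0.5, "deontology": 0.3, "virtue_ethics": 0.2}

# Hypothetical value each theory assigns to two candidate actions (0-1 scale).
values = {
    "donate":    {"utilitarianism": 0.9, "deontology": 0.6, "virtue_ethics": 0.7},
    "volunteer": {"utilitarianism": 0.6, "deontology": 0.8, "virtue_ethics": 0.9},
}

def expected_moral_value(action: str) -> float:
    """Credence-weighted value of an action across the theories above."""
    return sum(credences[t] * values[action][t] for t in credences)

for action in values:
    print(action, round(expected_moral_value(action), 3))
print("pick:", max(values, key=expected_moral_value))
```

The point isn't the arithmetic; it's that the procedure only gets off the ground once you've specified a higher-order standard (here, credence-weighted expected value on a common scale), which is exactly the goal-relativity I'm pointing at.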
The key insight remains: every "should" or "justified" implicitly references some standard. Making those standards explicit clarifies rather than trivializes our discussions. We're not eliminating important questions - we're revealing what we're actually asking.
You raise a fair challenge about epistemic norms! Yes, I do think there are facts about which beliefs are most reasonable given evidence. But I'd argue this actually supports my view rather than undermining it.
The key difference: epistemic norms have a built-in goal - accurate representation of reality. When we ask "should I expect emeralds to be green or grue?" we're implicitly asking "in order to have beliefs that accurately track reality, what should I expect?" The standard is baked into the enterprise of belief formation itself.
But moral norms lack this inherent goal. When you say some goals are "intrinsically more rationally warranted," I'd ask: warranted for what purpose? The hypothetical imperative lurks even in your formulation. Yes, promoting happiness over misery feels obviously correct to us - but that's because we're humans with particular values, not because we've discovered some goal-independent truth.
I'm not embracing radical skepticism or saying moral questions are nonsense. I'm making a more modest claim: moral questions make perfect sense once we specify the evaluative standard. "Is X wrong according to utilitarianism?" has a determinate, objective, mind-independent answer. "Is X wrong simpliciter?" does not.
The fact that we share deep moral intuitions (like preferring happiness to misery) is explained by our shared humanity, not by those intuitions tracking mind-independent moral facts. After all, we could imagine beings with very different value systems who would find our intuitions as arbitrary as we might find theirs.
So yes, I think we can know things about the future and have justified beliefs. But that's because "justified" in epistemology means "likely to be true" - there's an implicit standard. In ethics, we need to make our standards explicit.
Thanks!
I think all reasons are hypothetical, but some hypotheticals (like "if you want to avoid unnecessary suffering...") are so deeply embedded in human psychology that they feel categorical. This explains our moral intuitions without mysterious metaphysical facts.
The concentration camp guard example actually supports my view - we think the guard shouldn't follow professional norms precisely because we're applying a different value system (human welfare over rule-following). There's no view from nowhere; there's just the fact that (luckily) most of us share similar core values.
You were negative toward the idea of hypothetical imperatives elsewhere, but I don't see how you get around the need for them.
You say epistemic and moral obligations work "in the same way," but they don't. Yes, we have epistemic obligations to believe true things... in order to have accurate beliefs about reality. That's a specific goal. But you can't just assert "some things are good and worth desiring" without specifying... good according to what standard? The existence of epistemic standards doesn't prove there's One True Moral Standard any more than the existence of chess rules proves there's One True Game.
For morality, there are facts about which actions would best satisfy different value systems. I consider those to be a form of objective moral facts. And if you have those value systems, I think it is thus rationally warranted to desire those outcomes and pursue those actions. But I don't know how you would get facts about which value system to have without appealing to a higher-order value system.
Far from undermining inquiry, this view improves it by forcing explicitness about our goals. When you feel "promoting happiness is obviously better than promoting misery," that strikes me not as a metaphysical truth but as expressive assertivism. Real inquiry means examining why we value what we value and how to get it.
I'm far from a professional philosopher and I know you have deeply studied this much more than I have, so I don't mean to accuse you of being naive. Looking forward to learning more.
"Nihilism" sounds bad but I think it's smuggling in connotations I don't endorse.
I'm far from a professional philosopher, but I don't see how you could possibly make substantive claims about desirability from a pure meta-ethical perspective. You definitely can make substantive claims about desirability from a social perspective and a personal perspective, though. The reason we don't debate racist normative advice is that we're not racists. I don't see any other way to determine this.
Morality is Objective
People keep forgetting that meta-ethics was solved back in 2013.
I recently made a forecast based on the METR paper with median 2030 timelines and much less probability on 2027 (<10%). I think this forecast is vulnerable to far fewer of titotal's critiques, but it's still vulnerable to some (especially not building in enough uncertainty about which type of curve to fit).
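To illustrate why that curve-family uncertainty matters so much, here's a minimal sketch (this is not my actual forecast model, and the numbers below are made-up placeholders rather than METR's data): two functional forms that both fit a short historical series reasonably well can diverge by orders of magnitude once you extrapolate to 2030.

```python
# Minimal sketch of curve-family uncertainty: the same made-up task-horizon
# series, fit with two different functional forms in log space, implies very
# different 2030 extrapolations.
import numpy as np

t = np.array([0, 1, 2, 3, 4, 5], dtype=float)          # years since 2019
horizon = np.array([0.5, 1.2, 3.2, 9.5, 33.0, 130.0])  # task horizon in minutes (illustrative)
log_h = np.log(horizon)

# Model 1: exponential growth (straight line in log space).
exp_coeffs = np.polyfit(t, log_h, deg=1)
# Model 2: superexponential growth (quadratic in log space).
superexp_coeffs = np.polyfit(t, log_h, deg=2)

t_2030 = 11.0  # 2030 is 11 years after 2019
for name, coeffs in [("exponential", exp_coeffs), ("superexponential", superexp_coeffs)]:
    forecast = np.exp(np.polyval(coeffs, t_2030))
    print(f"{name}: projected 2030 horizon ~ {forecast:,.0f} minutes")
```

Both fits track the historical points closely, but the extrapolations differ by well over an order of magnitude, which is why a forecast should carry uncertainty over the functional form itself, not just over the fitted parameters.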
What do the superforecasters say? Well, the most comprehensive effort to ascertain and influence superforecaster opinions on AI risk was the Forecasting Research Institute’s Roots of Disagreement Study.[2] In this study, they found that nearly all of the superforecasters fell into the “AI skeptic” category, with an average P(doom) of just 0.12%. If you’re tempted to say that their number is only so low because they’re ignorant or haven’t taken the time to fully understand the arguments for AI risk, then you’d be wrong; the 0.12% figure was obtained after having months of discussions with AI safety advocates, who presented their best arguments for believing in AI x-risks.
I see this a bunch but I think this study is routinely misinterpreted. I have some knowledge from having participated in it.
The question being posed to forecasters was about literal human extinction, which is pretty different from how I usually see p(doom) interpreted. A lot of the "AI skeptics" were very sympathetic to AI being the biggest deal, but just didn't see literal extinction as that likely. I also have a moderate p(doom) (20%-30%) while thinking literal extinction is much lower than that (<5%).
Also, the study ran from April 1 to May 31, 2023, right after the release of GPT-4. There has been so much more development since then. My guess is that if you polled the "AI skeptics" now, their p(doom) would have gone up.
Really warranted by what? I think I'm an illusionist about this in particular, as I don't even know what we could reasonably be disagreeing over.
For a disagreement about facts (is this blue?), we can argue about actual blueness (measurable) or we can argue about epistemics (which strategies most reliably predict the world?) and meta-epistemics (which strategies most reliably figure out strategies that reliably predict the world?), etc.
For disagreements about morals (is this good?), we can argue about goodness but what is goodness? Is it platonic? Is it grounded in God? I'm not even sure what there is to dispute. I'd argue the best we can do is argue to our shared values (perhaps even universal human values, perhaps idealized by arguing about consistency etc.) and then see what best satisfies those.
~
Right - and this matches our experience! When moral disagreements persist after full empirical and logical agreement, we're left with clashing bedrock intuitions. You want to insist there's still a fact about who's ultimately correct, but can't explain what would make it true.
~
I think we're successfully engaging in a dispute here and that does kind of prove my position. I'm trying to argue that you're appealing to something that just doesn't exist and that this is inconsistent with your epistemic values. Whether one can ground a judgement about what is "really warranted" is a factual question.
~
I want to add that your recent post on meta-metaethical realism also reinforces my point here. You worry that anti-realism about morality commits us to anti-realism about philosophy generally. But there's a crucial disanalogy: philosophical discourse (including this debate) works precisely because we share epistemic standards - logical consistency, explanatory power, and various other virtues. When we debate meta-ethics or meta-epistemology, we're not searching for stance-independent truths but rather working out what follows from our shared epistemic commitments.
The "companions in guilt" argument fails because epistemic norms are self-vindicating in a way moral norms aren't. To even engage in rational discourse about what's true (including about anti-realism), we must employ epistemic standards. But we can coherently describe worlds with radically different moral standards. There's no pragmatic incoherence in moral anti-realism the way there would be in global philosophical anti-realism.