I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear, a newish longtermist EA org.
I'm now doing research thanks to an EA Funds grant, trying to answer hard, important EA-relevant questions. My first big project (in addition to everything listed here) was helping to generate this team's Red Teaming post.
Blog: aaronbergman.net
Late to the party (and please forgive me if I overlooked a part where you address this), but I think this all misses the boring and kinda unsatisfying but (I’d argue) correct answer to the question posed:
Why should ethical anti-realists do ethics?
Because they might be wrong!
Ok, my less elegant, more pedantically precise claim (argument?) is that:
... would in fact find themselves (i) 'doing ethics' and [slightly less confident about this one] (ii) 'doing ethics' as though moral realism were true, even if they believe that moral realism is probably not true.
[ok that's it for the argument]🔚
Two more things...
In terms of result, yeah it does, but I sorta half-intentionally left that out because I don't actually think LLS (Laplace's law of succession) is true as it's often stated.
Why the strikethrough: after writing the shortform, I get that e.g., "if we know nothing more about them" and "in the absence of additional information" mean "conditional on a uniform prior," but I didn't get that before. And Wikipedia's explanation of the rule,
Since we have the prior knowledge that we are looking at an experiment for which both success and failure are possible, our estimate is as if we had observed one success and one failure for sure before we even started the experiments.
seems both unconvincing as stated and, even if assumed to be true, not to depend on that crucial assumption.
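(Spelling out the dependence, using the standard Beta-Binomial result: under a general $\mathrm{Beta}(\alpha, \beta)$ prior, the posterior mean after seeing $x$ successes in $n$ trials is

$$\hat{p} = \frac{x + \alpha}{n + \alpha + \beta},$$

i.e. the estimate behaves as if you had already observed $\alpha$ extra successes and $\beta$ extra failures. So the "one imaginary success and one imaginary failure" framing only holds under the uniform $\alpha = \beta = 1$ prior, which is exactly the assumption the quoted explanation never mentions.)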
The recent 80k podcast on the contingency of abolition got me wondering what, if anything, the fact of slavery's abolition says about the ex ante probability of abolition - or more generally, what one observation of a binary random variable says about its underlying success probability $p$, as in $X \sim \mathrm{Bernoulli}(p)$.
Turns out there is an answer (!), and it's found starting in paragraph 3 of subsection 1 of section 3 of the Binomial distribution Wikipedia page:
A closed form Bayes estimator for p also exists when using the Beta distribution as a conjugate prior distribution. When using a general $\mathrm{Beta}(\alpha, \beta)$ as a prior, the posterior mean estimator is:

$$\hat{p}_b = \frac{x + \alpha}{n + \alpha + \beta}$$
[...]
For the special case of using the standard uniform distribution as a non-informative prior, $\mathrm{Beta}(\alpha = 1, \beta = 1)$, the posterior mean estimator becomes:

$$\hat{p}_b = \frac{x + 1}{n + 2}$$
Don't worry, I had no idea what $\mathrm{Beta}(\alpha, \beta)$ was until 20 minutes ago. In the Shortform spirit, I'm gonna skip any actual explanation and just link Wikipedia and paste this image (I added the uniform distribution dotted line because why would they leave that out?)
Cool, so for the $n = 1,\ x = 1$ case (one observation, which came up "success"), we get that if you have a prior over the ex ante probability space described by one of those curves in the image, you...
In the uniform case (which actually seems kind of reasonable for abolition), you...
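Here's a minimal sketch of that arithmetic in Python, assuming the standard Beta-Binomial update (the function name is mine, and the Beta(1, 3) prior is just an arbitrary example of a non-uniform prior):

```python
from fractions import Fraction

def beta_binomial_posterior_mean(successes, trials, alpha=1, beta=1):
    """Posterior mean of p under a Beta(alpha, beta) prior, after observing
    `successes` out of `trials` Bernoulli draws. alpha = beta = 1 is the uniform prior.
    """
    return Fraction(successes + alpha, trials + alpha + beta)

# One observation, one "success" (abolition happened), uniform prior:
print(beta_binomial_posterior_mean(1, 1))        # 2/3
# Same single observation under a prior more skeptical of abolition, Beta(1, 3):
print(beta_binomial_posterior_mean(1, 1, 1, 3))  # 2/5
# Laplace's rule-of-succession flavor: 10 successes in 10 trials, uniform prior:
print(beta_binomial_posterior_mean(10, 10))      # 11/12
```

So a single observed success moves a uniform prior's mean from 1/2 to 2/3, which is the substance of the uniform case above.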
At risk of jeopardizing EA's hard-won reputation of relentless internal criticism:
Even setting aside its object-level impact-relevant criteria (truth, importance, etc.), this is just enormously impressive, both in terms of magnitude and quality. The post itself gives us readers an anchor on which to latch critiques, questions, and comments, so it's easy to forget that each step or decision in the whole methodology had to be chosen from an enormous space of possibilities. And this looks, at least on a first read, like very many consecutive well-made steps and decisions.
Note: inspired by the FTX+Bostrom fiascos and associated discourse. May (hopefully) develop into longform by explicitly connecting this taxonomy to those recent events (but my base rate of completing actual posts cautions humility)
Or fails to induce
A few Forum meta things you might find useful or interesting:
A resource that might be useful: https://tinyapps.org/
There's a ton there, but one anecdote from yesterday: it referred me to this $5 iOS desktop app which (among other more reasonable uses) made me this full-quality, fully intra-linked >3600-page PDF of (almost) every file/site linked to by every file/site linked to from Tomasik's homepage (works best with old-timey, simpler sites like that).
Nice! (I admit I've only just skimmed and looked at the eye-catching graphics and tables 🙃). A couple of small potential improvements to those things:
This post is half object level, half an experiment with audiopen.ai, a (presumably GPT-3.5/4-based) “semicoherent audio monologue ramble → prose” AI program.
In the interest of the latter objective, I’m including 3 mostly-redundant subsections:
1) Dubious asymmetry argument in WWOTF
In Chapter 9 of his book, What We Owe the Future, Will MacAskill argues that the future holds positive moral value under a total utilitarian perspective. He posits that people generally use resources to achieve what they want - either for themselves or for others - and thus good outcomes are easily explained as the natural consequence of agents deploying resources for their goals. Conversely, bad outcomes tend to be side effects of pursuing other goals. While malevolence and sociopathy do exist, they are empirically rare.
MacAskill argues that in a future with continued economic growth and no existential risk, we will likely direct more resources towards doing good things due to self-interest and increased impartial altruism. He contrasts this eutopian scenario with an anti-eutopia: the worst possible world, which he argues (compellingly, I think) is less probable because it requires convoluted explanations, as opposed to simple desires like enjoying ice cream. He concludes that the high probability of eutopia wins out against the low-probability but extremely bad anti-eutopia.
However, I believe MacAskill's analysis neglects an important aspect: considering not only these two extremes but also the middle of the distribution, where neither significant resource deployment nor agentic intervention occurs.
When physics operates without agency-driven resource allocation, we have good reason to expect evolution to create conscious beings who suffer: as MacAskill himself argues elsewhere in the book, an animal (or animal-like being) can lose all of its expected future reproduction very easily, and that asymmetry plausibly tilts evolved experience toward pain.
Importantly, though, this non-agentic suffering seems more likely to complement agentic resource deployment than to substitute for it (as one might intuit). That's because human or post-human expansion necessarily entails the expansion of concentrated physical energy, and seems likely to entail the expansion of other scarce, life-enabling resources such as DNA, water, and computation.
Although MacAskill does not explicitly claim that his binary model comparing eutopia and anti-eutopia is sufficient for understanding this complex problem, it seems to me to be implied; only upon attempting to draft a blog post revisiting his work did I notice the line “We can make some progress by focusing on just two extreme scenarios: the best or worst possible futures, eutopia and anti-eutopia,” which acknowledges the status of this model as just one piece of evidence in the larger question of the future’s value.
In sum, I think MacAskill's analysis would benefit from addressing the morally relevant middle distribution to provide a more accurate representation of the future under total utilitarianism.
2) Utilitarian Utopia, Anti-Utopia and the Neglected Middle
In Chapter 9 of his book, What We Owe the Future, Will MacAskill argues that the future holds positive moral value under a total utilitarian perspective. He posits that people generally use resources to achieve what they want - either for themselves or others - and thus good outcomes are often intentional. Conversely, bad outcomes tend to be side effects of pursuing other goals. While malevolence and sociopathy do exist, they are empirically rare.
MacAskill then extrapolates this argument to suggest that in a future with continued economic growth (assuming no existential risk), we will likely direct more resources towards doing good things due to self-interest and increased impartial altruism. He contrasts this utopian scenario with an anti-utopia: the worst possible world, which is less probable because it requires convoluted explanations as opposed to simple desires like enjoying ice cream.
He concludes that the probability of achieving a utopia outweighs the low likelihood but extreme negative consequences of an anti-utopia. However, I believe MacAskill's analysis neglects an important aspect: considering not only these two extremes but also the middle distribution where neither significant amounts of resources nor agentic intervention occur.
In such cases where physics operates without agency-driven resource allocation, evolution can create conscious beings like plants and animals who experience suffering without any intentionality behind it. This middle distribution may actually skew negatively since evolution favors reproductive fitness at any cost; as a result, sentient beings could suffer more than they experience happiness during their lives.
I argue that wild animal welfare is net negative overall; if given a choice between having them continue living or ceasing their existence altogether, I would choose nonexistence on moral grounds. Although MacAskill does not explicitly claim his heuristic comparison between utopia and anti-utopia is sufficient for understanding this complex problem, he strongly implies it throughout most of his chapter.
Upon revisiting his work while drafting my response blog post, I noticed a single line acknowledging the limitations of his approach. However, this caveat does not adequately emphasize that his argument should be considered only as a first pass and not definitive proof. In my opinion, MacAskill's analysis would benefit from addressing the morally relevant middle distribution to provide a more accurate representation of the future under total utilitarianism.
-------
3) Original Transcript
Okay, so I'm going to describe where I think I disagree with Will McCaskill in Chapter 9 of his book, What We Are the Future, where he basically makes an argument that the future is positive in expectation, positive moral value under a total utilitarian perspective. And so his argument is basically that people, it's very easy to see that people deploy the resources in order to get what they want, which is either to help themselves and sometimes to help other people, whether it's just their family or more impartial altruism. Basically you can always explain why somebody does something good just because it's good and they want it, which is kind of, I think that's correct and compelling. Whereas when something bad happens, it's generally the side effect of something else. At least, yeah. So while there is malevolence and true sociopathy, those things are in fact empirically quite rare, but if you undergo a painful procedure, like a medical procedure, it's because there's something affirmative that you want and that's a necessary side effect. It's not because you actually sought that out in particular. And all this I find true and correct and compelling. And so then he uses this to basically say that in the future, presumably conditional on continued economic growth, which basically just means no existential risk and humans being around, we'll be employing a lot of resources in the direction of doing things well or doing good. Largely just because people just want good things for themselves and hopefully to some extent because there will be more impartial altruists willing to both trade and to put their own resources in order to help others. And once again, all true, correct, compelling in my opinion. So on the other side, so basically utopia in this sense, utopia basically meaning employing a lot of, the vast majority of resources in the direction of doing good is very likely and very good. On the other side, it's how likely and how bad is what he calls anti-utopia, which is basically the worst possible world. And he basically using... I don't need to get into the particulars, but basically I think he presents a compelling argument that in fact it would be worse than the best world is good, at least to the best of our knowledge right now. But it's very unlikely because it's hard to see how that comes about. You actually can invent stories, but they get kind of convoluted. And it's not nearly as simple as, okay, people like ice cream and so they buy ice cream. It's like, you have to explain why so many resources are being deployed in the direction of doing good things and you still end up with a terrible world. Then he basically says, okay, all things considered, the probability of good utopia wins out relative to the badness, but very low probability of anti-utopia. Again, a world full of misery. And where I think he goes wrong is that he neglects the middle of the distribution where the distribution is ranging from... I don't know how to formalize this, but something like percentage or amount of... Yeah, one of those two, percentage or amount of resources being deployed in the direction of on one side of the spectrum causing misery and then the other side of the spectrum causing good things to come about. And so he basically considers the two extreme cases. But I claim that, in fact, the middle of the distribution is super important. 
And actually when you include that, things look significantly worse because the middle of the distribution is basically like, what does the world look like when you don't have agents essentially deploying resources in the direction of anything? You just have the universe doing its thing. We can set aside the metaphysics or physics technicalities of where that becomes problematic. Anyway, so basically the middle of the distribution is just universe doing its thing, physics operating. I think there's the one phenomenon that results from this that we know of to be morally important or we have good reason to believe is morally important is basically evolution creating conscious beings that are not agentic in the sense that I care about now, but basically like plants and animals. And presumably I think you have good reason to believe animals are sentient. And evolution, I claim, creates a lot of suffering. And so you look at the middle of the distribution and it's not merely asymmetrical, but it's asymmetrical in the opposite direction. So I claim that if you don't have anything, if you don't have lots of resources being deployed in any direction, this is a bad world because you can expect evolution to create a lot of suffering. The reason for that is, as he gets into, something like either suffering is intrinsically more important, which I put some weight on that. It's not exactly clear how to distinguish that from the empirical case. And the empirical case is basically it's very easy to lose all your reproductive fitness in the evolutionary world very quickly. It's relatively hard to massively gain a ton. Reproduction is like, even having sex, for example, only increases your relative reproductive success a little bit, whereas you can be killed in an instant. And so this creates an asymmetry where if you buy a functional view of qualia, then it results in there being an asymmetry where animals are just probably going to experience more pain over their lives, by and large, than happiness. And I think this is definitely true. I think wild animal welfare is just net negative. I wish if I could just... If these are the only two options, have there not be any wild animals or have them continue living as they are, I think it would be overwhelmingly morally important to not have them exist anymore. And so tying things back. Yeah, so McCaskill doesn't actually... I don't think he makes a formally incorrect statement. He just strongly implies that this case, that his heuristic of comparing the two tails is a pretty good proxy for the best we can do. And that's where I disagree. I think there's actually one line in the chapter where he basically says, we can get a grip on this very hard problem by doing the following. But I only noticed that when I went back to start writing a blog post. And the vast majority of the chapter is basically just the object level argument or evidence presentation. There's no repetition emphasizing that this is a really, I guess, sketchy, for lack of a better word, dubious case. Or first pass, I guess, is a better way of putting it. This is just a first pass, don't put too much weight on this. That's not how it comes across, at least in my opinion, to the typical reader. And yeah, I think that's everything.