AB

Aaron Bergman

1032 karma · Joined Nov 2017 · Working (0-5 years) · Maryland, USA
aaronbergman.net | admonymous.co/aaron_bergman

Bio

Participation: 4

I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped to lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear, a newish longtermist EA org.

I'm now doing research thanks to an EA Funds grant, trying to answer hard, important, EA-relevant questions. My first big project (in addition to everything listed here) was helping to generate this team Red Teaming post.

Blog: aaronbergman.net

How others can help me

  • Suggest action-relevant, tractable research ideas for me to pursue
  • Give me honest, constructive feedback on any of my work
  • Introduce me to someone I might like to know :)
  • Convince me of a better marginal use of small-dollar donations than giving to the Fish Welfare Initiative, from the perspective of a suffering-focused hedonic utilitarian.
  • Offer me a job if you think I'd be a good fit
  • Send me recommended books, podcasts, or blog posts that there's like a >25% chance a pretty-online-and-into-EA-since-2017 person like me hasn't consumed
    • Rule-of-thumb standard: maybe "at least as good/interesting/useful as a random 80k podcast episode"

How I can help others

  • Open to research/writing collaboration :)
  • Would be excited to work on impactful data science/analysis/visualization projects
  • Can help with writing and/or editing
  • Discuss topics I might have some knowledge of
    • like: math, economics, philosophy (esp. philosophy of mind and ethics), psychopharmacology (hobby interest), helping to run a university EA group, data science, interning at government agencies

Comments (90)

This post is half object-level, half an experiment with audiopen.ai, a "semicoherent audio monologue ramble → prose" AI program (presumably GPT-3.5/4-based).

In the interest of the latter objective, I’m including 3 mostly-redundant subsections: 

  1. A 'final,' mostly-AI-written text, edited and slightly expanded just enough that I endorse it in full (though I recognize it's not amazing or close to optimal)
  2. The raw AI output
  3. The raw transcript


1) Dubious asymmetry argument in WWOTF

In Chapter 9 of his book, What We Owe the Future, Will MacAskill argues that the future holds positive moral value under a total utilitarian perspective. He posits that people generally use resources to achieve what they want - either for themselves or for others - and thus good outcomes are easily explained as the natural consequence of agents deploying resources toward their goals. Conversely, bad outcomes tend to be side effects of pursuing other goals. While malevolence and sociopathy do exist, they are empirically rare.

MacAskill argues that in a future with continued economic growth and no existential risk, we will likely direct more resources towards doing good things, due both to self-interest and to increased impartial altruism. He contrasts this eutopian scenario with an anti-eutopia: the worst possible world, which he argues (compellingly, I think) is less probable because it requires convoluted explanations, as opposed to simple desires like enjoying ice cream. He concludes that the high probability and value of eutopia outweigh the low likelihood but extreme badness of anti-eutopia.

However, I believe MacAskill's analysis neglects an important aspect: considering not only these two extremes but also the middle of the distribution, where neither significant amounts of resources nor agentic intervention come into play.

When physics operates without agency-driven resource allocation, we have good reason to expect evolution to create conscious beings whose lives skew toward suffering, because - as MacAskill himself argues elsewhere in the book - an animal (or animal-like being) can lose all of its expected future reproduction in an instant, while comparable gains in fitness come far more slowly.

Importantly, though, this non-agentic suffering seems more likely to complement agentic resource deployment than to substitute for it, as one might intuit. That's because human or post-human expansion necessarily entails the expansion of concentrated physical energy, and seems likely to entail the expansion of other scarce, pro-biotic resources such as DNA, water, and computation.

Although MacAskill does not explicitly claim that his binary model comparing eutopia and anti-eutopia is sufficient for understanding this complex problem, that seems to me to be implied.

Only upon attempting to draft a blog post revisiting his work did I notice the line "We can make some progress by focusing on just two extreme scenarios: the best or worst possible futures, eutopia and anti-eutopia," which acknowledges the status of this model as just one piece of evidence on the larger question of the future's value.

In sum, I think MacAskill's analysis would benefit from addressing the morally relevant middle distribution to provide a more accurate representation of the future under total utilitarianism.
 

2) Utilitarian Utopia, Anti-Utopia and the Neglected Middle 

In Chapter 9 of his book, What We Are the Future, Will McCaskill argues that the future holds positive moral value under a total utilitarian perspective. He posits that people generally use resources to achieve what they want - either for themselves or others - and thus good outcomes are often intentional. Conversely, bad outcomes tend to be side effects of pursuing other goals. While malevolence and sociopathy do exist, they are empirically rare.

McCaskill then extrapolates this argument to suggest that in a future with continued economic growth (assuming no existential risk), we will likely direct more resources towards doing good things due to self-interest and increased impartial altruism. He contrasts this utopian scenario with an anti-utopia: the worst possible world which is less probable because it requires convoluted explanations as opposed to simple desires like enjoying ice cream.

He concludes that the probability of achieving a utopia outweighs the low likelihood but extreme negative consequences of an anti-utopia. However, I believe McCaskill's analysis neglects an important aspect: considering not only these two extremes but also the middle distribution where neither significant amounts of resources nor agentic intervention occur.

In such cases where physics operates without agency-driven resource allocation, evolution can create conscious beings like plants and animals who experience suffering without any intentionality behind it. This middle distribution may actually skew negatively since evolution favors reproductive fitness at any cost; as a result, sentient beings could suffer more than they experience happiness during their lives.

I argue that wild animal welfare is net negative overall; if given a choice between having them continue living or ceasing their existence altogether, I would choose nonexistence on moral grounds. Although McCaskill does not explicitly claim his heuristic comparison between utopia and anti-utopia is sufficient for understanding this complex problem, he strongly implies it throughout most of his chapter.

Upon revisiting his work while drafting my response blog post, I noticed a single line acknowledging the limitations of his approach. However, this caveat does not adequately emphasize that his argument should be considered only as a first pass and not definitive proof. In my opinion, McCaskill's analysis would benefit from addressing the morally relevant middle distribution to provide a more accurate representation of the future under total utilitarianism.

-------

3) Original Transcript


Okay, so I'm going to describe where I think I disagree with Will McCaskill in Chapter 9 of his book, What We Are the Future, where he basically makes an argument that the future is positive in expectation, positive moral value under a total utilitarian perspective. And so his argument is basically that people, it's very easy to see that people deploy the resources in order to get what they want, which is either to help themselves and sometimes to help other people, whether it's just their family or more impartial altruism. Basically you can always explain why somebody does something good just because it's good and they want it, which is kind of, I think that's correct and compelling. Whereas when something bad happens, it's generally the side effect of something else. At least, yeah. So while there is malevolence and true sociopathy, those things are in fact empirically quite rare, but if you undergo a painful procedure, like a medical procedure, it's because there's something affirmative that you want and that's a necessary side effect. It's not because you actually sought that out in particular. And all this I find true and correct and compelling. And so then he uses this to basically say that in the future, presumably conditional on continued economic growth, which basically just means no existential risk and humans being around, we'll be employing a lot of resources in the direction of doing things well or doing good. Largely just because people just want good things for themselves and hopefully to some extent because there will be more impartial altruists willing to both trade and to put their own resources in order to help others. And once again, all true, correct, compelling in my opinion. So on the other side, so basically utopia in this sense, utopia basically meaning employing a lot of, the vast majority of resources in the direction of doing good is very likely and very good. On the other side, it's how likely and how bad is what he calls anti-utopia, which is basically the worst possible world. And he basically using... I don't need to get into the particulars, but basically I think he presents a compelling argument that in fact it would be worse than the best world is good, at least to the best of our knowledge right now. But it's very unlikely because it's hard to see how that comes about. You actually can invent stories, but they get kind of convoluted. And it's not nearly as simple as, okay, people like ice cream and so they buy ice cream. It's like, you have to explain why so many resources are being deployed in the direction of doing good things and you still end up with a terrible world. Then he basically says, okay, all things considered, the probability of good utopia wins out relative to the badness, but very low probability of anti-utopia. Again, a world full of misery. And where I think he goes wrong is that he neglects the middle of the distribution where the distribution is ranging from... I don't know how to formalize this, but something like percentage or amount of... Yeah, one of those two, percentage or amount of resources being deployed in the direction of on one side of the spectrum causing misery and then the other side of the spectrum causing good things to come about. And so he basically considers the two extreme cases. But I claim that, in fact, the middle of the distribution is super important. 
And actually when you include that, things look significantly worse because the middle of the distribution is basically like, what does the world look like when you don't have agents essentially deploying resources in the direction of anything? You just have the universe doing its thing. We can set aside the metaphysics or physics technicalities of where that becomes problematic. Anyway, so basically the middle of the distribution is just universe doing its thing, physics operating. I think there's the one phenomenon that results from this that we know of to be morally important or we have good reason to believe is morally important is basically evolution creating conscious beings that are not agentic in the sense that I care about now, but basically like plants and animals. And presumably I think you have good reason to believe animals are sentient. And evolution, I claim, creates a lot of suffering. And so you look at the middle of the distribution and it's not merely asymmetrical, but it's asymmetrical in the opposite direction. So I claim that if you don't have anything, if you don't have lots of resources being deployed in any direction, this is a bad world because you can expect evolution to create a lot of suffering. The reason for that is, as he gets into, something like either suffering is intrinsically more important, which I put some weight on that. It's not exactly clear how to distinguish that from the empirical case. And the empirical case is basically it's very easy to lose all your reproductive fitness in the evolutionary world very quickly. It's relatively hard to massively gain a ton. Reproduction is like, even having sex, for example, only increases your relative reproductive success a little bit, whereas you can be killed in an instant. And so this creates an asymmetry where if you buy a functional view of qualia, then it results in there being an asymmetry where animals are just probably going to experience more pain over their lives, by and large, than happiness. And I think this is definitely true. I think wild animal welfare is just net negative. I wish if I could just... If these are the only two options, have there not be any wild animals or have them continue living as they are, I think it would be overwhelmingly morally important to not have them exist anymore. And so tying things back. Yeah, so McCaskill doesn't actually... I don't think he makes a formally incorrect statement. He just strongly implies that this case, that his heuristic of comparing the two tails is a pretty good proxy for the best we can do. And that's where I disagree. I think there's actually one line in the chapter where he basically says, we can get a grip on this very hard problem by doing the following. But I only noticed that when I went back to start writing a blog post. And the vast majority of the chapter is basically just the object level argument or evidence presentation. There's no repetition emphasizing that this is a really, I guess, sketchy, for lack of a better word, dubious case. Or first pass, I guess, is a better way of putting it. This is just a first pass, don't put too much weight on this. That's not how it comes across, at least in my opinion, to the typical reader. And yeah, I think that's everything.

Late to the party (and please forgive me if I overlooked a part where you address this), but I think this all misses the boring and kinda unsatisfying but (I’d argue) correct answer to the question posed:

Why should ethical anti-realists do ethics?

Because they might be wrong!  

Ok, my less elegant, more pedantically precise claim (argument?) is that: 

  1. Ethical anti-realists should do ethics iff moral realism is true in such a way that includes normativity (rather than just 'objective ordering'-flavor realism)
  2. It would be good for anti-realists to do ethics iff moral realism is true (even if normativity is fake but statements like "all else equal, a world with more suffering is worse than its alternative" have truth values)
  3. Anti-realism, if true, would (intrinsically) have no positive implications
    1. I know this is a controversial claim, and kinda what this whole post is about! 
    2. (I initially wrote 'nihilism' rather than 'anti-realism,' but the former term seems to be defined in a sentiment-laden way in several places)
  4. A person who accepts the above three premises and also:
    1. Has (for whatever reason) a desire or goal to 'do ethics' conditional on there being good reason to do ethics, and
    2. Uses (for whatever reason) any procedure consistent with any form of what I'll call "normal, prima facie non-insane, coherent decision-theoretic reasoning" to make decisions...

 ... would in fact find him/herself (i) 'doing ethics' and [slightly less confident about this one] (ii) 'doing ethics' as though moral realism were true even if they believe that moral realism is probably not true.

[ok that's it for the argument]🔚

Two more things...

  1. It looks like Will MacAskill's book Moral Uncertainty has a chapter on this which I haven't engaged with, but prima facie I can't see why his arguments concerning normative moral uncertainty wouldn't apply at the meta-ethical level as well
  2. I should note that I may be inclined towards this answer because I think anti-realists are wrong (for reasons discussed in this 80k episode)

In terms of result, yeah it does, but I sorta half-intentionally left that out because I don't actually think Laplace's law of succession (LLS) is true as it often seems to be stated.

Why the strikethrough: after writing the shortform, I get that e.g. "if we know nothing more about them" and "in the absence of additional information" mean "conditional on a uniform prior," but I didn't get that before. And Wikipedia's explanation of the rule,

Since we have the prior knowledge that we are looking at an experiment for which both success and failure are possible, our estimate is as if we had observed one success and one failure for sure before we even started the experiments.

seems both unconvincing as stated and, if assumed to be true, independent of that crucial assumption.

The recent 80k podcast on the contingency of abolition got me wondering what, if anything, the fact of slavery's abolition says about the ex ante probability of abolition - or more generally, what one observation of a binary random variable X ~ Bernoulli(p) says about p, as in:

[image: Bernoulli vs Binomial Distribution: What's the Difference?]

Turns out there is an answer (!), and it's found starting in paragraph 3 of subsection 1 of section 3 of the Binomial distribution Wikipedia page:

A closed form Bayes estimator for p also exists when using the Beta distribution as a conjugate prior distribution. When using a general Beta(α, β) as a prior, the posterior mean estimator is:

p̂ = (x + α) / (n + α + β)

[...]

For the special case of using the standard uniform distribution as a non-informative prior, Beta(α = 1, β = 1), the posterior mean estimator becomes:

p̂ = (x + 1) / (n + 2)

Don't worry, I had no idea what Beta(α, β) was until 20 minutes ago. In the Shortform spirit, I'm gonna skip any actual explanation and just link Wikipedia and paste this image of Beta distribution density curves (I added the uniform distribution dotted line because why would they leave that out?)

[image: probability density functions of several Beta(α, β) distributions, with the uniform Beta(1, 1) case shown as a dotted line]

So...

Cool, so for the n = 1 case, we get that if you have a prior over the ex ante probability space described by one of those curves in the image, you...

  • 0) Start from the 'zero empirical information guesstimate' α / (α + β)
  • 1a) observe that the thing happens (x = 1), moving you, Ideal Bayesian Agent, to updated probability (α + 1) / (α + β + 1), OR
  • 1b) observe that the thing doesn't happen (x = 0), moving you to updated probability α / (α + β + 1)

In the uniform case (which actually seems kind of reasonable for abolition), you...

  • 0) Start from prior 1/2
  • 1a) observe that the thing happens, moving you to updated probability 2/3, OR
  • 1b) observe that the thing doesn't happen, moving you to updated probability 1/3
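Here's a minimal sketch (my own, not from the original shortform) that just checks the posterior means above numerically, assuming scipy is available; the function name is a hypothetical stand-in:

```python
# Hedged sketch: check the Beta-Binomial posterior means quoted above.
# Under a Beta(alpha, beta) prior, observing x successes in n Bernoulli trials
# gives a Beta(alpha + x, beta + n - x) posterior, whose mean is
# (x + alpha) / (n + alpha + beta).
from scipy import stats


def posterior_mean(x: int, n: int, alpha: float = 1.0, beta: float = 1.0) -> float:
    """Posterior mean of p after x successes in n trials, under a Beta(alpha, beta) prior."""
    return stats.beta(alpha + x, beta + n - x).mean()


# Uniform prior Beta(1, 1), i.e. the abolition example:
print(posterior_mean(0, 0))  # 0.5   - prior mean, no observations yet
print(posterior_mean(1, 1))  # ~0.667 - the thing happened once
print(posterior_mean(0, 1))  # ~0.333 - the thing didn't happen
```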

At risk of jeopardizing EA's hard-won reputation of relentless internal criticism:

Even setting aside its object-level, impact-relevant criteria (truth, importance, etc.), this is just enormously impressive in terms of both magnitude and quality. The post itself gives us readers an anchor on which to latch critiques, questions, and comments, so it's easy to forget that each step or decision in the whole methodology had to be chosen from an enormous space of possibilities. And this looks - at least on a first read - like very many consecutive well-made steps and decisions.

Events as evidence vs. spotlights

Note: inspired by the FTX+Bostrom fiascos and associated discourse. May (hopefully) develop into longform by explicitly connecting this taxonomy to those recent events (but my base rate of completing actual posts cautions humility)

Event as evidence

  • The default: normal old Bayesian evidence
    • The realm of "updates," "priors," and "credences" 
  • Pseudo-definition: Induces [1] a change to or within a model (of whatever the model's user is trying to understand)
  • Corresponds to models that are (as is often assumed):
    1. Well-defined (i.e. specific, complete, and without latent or hidden information)
    2. Stable except in response to 'surprising' new information

Event as spotlight

  • Pseudo-definition: Alters how a person views, understands, or interacts with a model, just as a spotlight changes how an audience views what's on stage
    • In particular, spotlights change the salience of some part of a model
  • This can take place both/either:
    • At an individual level (think spotlight before an audience of one); and/or
    • To a community's shared model (think spotlight before an audience of many)
  • They can also change which information latent in a model is functionally available to a person or community, just as restricting one's field of vision increases the resolution of whichever part of the image shines through

Example

  1. You're hiking a bit of the Appalachian Trail with two friends, going north, using the following crop of a map (the "external model"): [image]
  2. An hour in, your mental/internal model probably looks like this: [image]
  3. Event: ~~the collapse of a financial institution~~ you hear traffic
    1. As evidence, this causes you to change where you think you are - namely, a bit south of the first road you were expecting to cross
    2. As spotlight, this causes the three of you to stare at the same map as before, but in such a way that your internal models are all very similar, each looking something like this: [image]
       (Really the crop should be shifted down some but I don't feel like redoing it rn)
[1] Or fails to induce

A few Forum meta things you might find useful or interesting:

  1.  Two super basic interactive data viz apps 
    1. How often (in absolute and relative terms) a given forum topic appears with another given topic (a rough sketch of this computation appears after the table below)
    2. Visualizing the popularity of various tags
  2. An updated Forum scrape including the full text and attributes of 10k-ish posts as of Christmas, '22
    1. See the data without full text in Google Sheets here
    2. Post explaining version 1.0 from a few months back
  3. From the data in no. 2, a few effortposts that never garnered an accordant amount of attention (qualitatively filtered from posts with (1) long read times, (2) modest positive karma, and (3) not a ton of comments).
    1. Column labels are (left to right):
      1. Title/link
      2. Author(s)
      3. Date posted
      4. Karma (as of a week ago)
      5. Comments (as of a week ago)
 
Title/link | Author(s) | Date posted | Karma | Comments
Open Philanthropy: Our Approach to Recruiting a Strong Team | pmk | 10/23/2021 | 11 | 0
Histories of Value Lock-in and Ideology Critique | clem | 9/2/2022 | 11 | 1
Why I think strong general AI is coming soon | porby | 9/28/2022 | 13 | 1
Anthropics and the Universal Distribution | Joe_Carlsmith | 11/28/2021 | 18 | 0
Range and Forecasting Accuracy | niplav | 5/27/2022 | 12 | 2
A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming | turchin | 9/11/2022 | 16 | 1
Strategic considerations for effective wild animal suffering work | Animal_Ethics | 1/18/2022 | 21 | 0
Red teaming a model for estimating the value of longtermist interventions - A critique of Tarsney's "The Epistemic Challenge to Longtermism" | Anjay F, Chris Lonsberry, Bryce Woodworth | 7/16/2022 | 21 | 0
Welfare stories: How history should be written, with an example (early history of Guam) | kbog | 1/2/2020 | 18 | 1
Summary of Evidence, Decision, and Causality | Dawn Drescher | 9/5/2020 | 27 | 0
Some AI research areas and their relevance to existential safety | Andrew Critch | 12/15/2020 | 27 | 0
Maximizing impact during consulting: building career capital, direct work and more. | Vaidehi Agarwalla, Jakob, Jona, Peter4444 | 8/13/2021 | 21 | 2
Independent Office of Animal Protection | Animal Ask, Ren Springlea | 11/22/2022 | 21 | 2
Investigating how technology-focused academic fields become self-sustaining | Ben Snodin, Megan Kinniment | 9/6/2021 | 25 | 2
Using artificial intelligence (machine vision) to increase the effectiveness of human-wildlife conflict mitigations could benefit WAW | Rethink Priorities, Tapinder Sidhu | 10/28/2022 | 22 | 3
Crucial questions about optimal timing of work and donations | MichaelA | 8/14/2020 | 28 | 4
Will we eventually be able to colonize other stars? Notes from a preliminary review | Nick_Beckstead | 6/22/2014 | 29 | 7
Philanthropists Probably Shouldn't Mission-Hedge AI Progress | MichaelDickens | 8/23/2022 | 27 | 9
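To make item 1 above a bit more concrete, here's a minimal sketch (my own, not the actual app's code) of how topic co-occurrence counts could be computed from a scrape like the one in item 2. The file name, column name, and example topics are hypothetical stand-ins, not the real scrape's schema:

```python
# Hedged sketch of topic co-occurrence: how often each pair of Forum topics
# appears together on a post, in absolute terms and relative to one topic's use.
# Assumes a hypothetical CSV where each row is a post and `tags` holds a
# comma-separated list of topic names.
from collections import Counter
from itertools import combinations

import pandas as pd

posts = pd.read_csv("forum_scrape.csv")  # hypothetical file name

pair_counts = Counter()
tag_counts = Counter()
for tag_string in posts["tags"].dropna():
    tags = sorted({t.strip() for t in tag_string.split(",") if t.strip()})
    tag_counts.update(tags)
    pair_counts.update(combinations(tags, 2))  # sorted tuples, so pairs are canonical

# Absolute co-occurrence for one (hypothetical) pair, and relative to one tag's total use
a, b = "Forecasting", "AI safety"
absolute = pair_counts[tuple(sorted((a, b)))]
relative = absolute / tag_counts[a] if tag_counts[a] else 0.0
print(f"{a} & {b}: {absolute} posts ({relative:.1%} of {a} posts)")
```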

A resource that might be useful: https://tinyapps.org/ 

 

There's a ton there, but one anecdote from yesterday: it referred me to this $5 iOS desktop app which (among other more reasonable uses) made me this full-quality, fully intra-linked >3,600-page PDF of (almost) every file/site linked to by every file/site linked to from Tomasik's homepage (works best with old-timey, simpler sites like that).
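For intuition, a minimal sketch (not the app itself) of the two-level link gathering that producing such a PDF implies; the requests/BeautifulSoup approach, the depth limit, and the start URL are my own assumptions rather than anything from the comment:

```python
# Hedged sketch of two-level link collection: every page linked from the start
# page, plus every page linked from those pages (rendering to PDF not shown).
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def links_on(url: str) -> set[str]:
    """Return absolute http(s) URLs found in anchor tags on `url`; empty set on fetch errors."""
    try:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
    except requests.RequestException:
        return set()
    soup = BeautifulSoup(resp.text, "html.parser")
    hrefs = (urljoin(url, a["href"]) for a in soup.find_all("a", href=True))
    return {h for h in hrefs if h.startswith("http")}


start = "https://reducing-suffering.org/"  # assumed start URL for "Tomasik's homepage"
first_level = links_on(start)
second_level = set().union(*(links_on(u) for u in first_level)) if first_level else set()
to_render = {start} | first_level | second_level
print(f"{len(to_render)} pages would go into the PDF")
```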

Nice! (I admit I've only just skimmed and looked at the eye-catching graphics and tables 🙃). A couple of small potential improvements to those things:

  1. Is a higher-quality/bigger file version of the infographic available? Shouldn't matter, of course, but may as well put it on a fair memetic playing field with all the other beautiful charts out there
  2. Would you consider adding a few "reference" columns to the Welfare Range Table, in particular values for:
    1. Human (or perhaps "human if introspection is epistemically meaningful and qualia exist")
    2. Organisms from other kingdoms: plant, bacteria, etc. (I think any single one without nervous tissue would suffice)
    3. A representative (very probable) non-moral patient physical object ("rock")
    4. Less important (intuitively to me) potential additions:
      1. chatGPT
      2. Video game character or anything else that "behaves" in some sense like an advanced organism but is ontologically very different 
      3. any other contrarian counterexample things that might push the limits of the taxonomy's applicability (maybe 'Roomba' or 'the iOS operating system' or 'a p-zombie'?)