Aaron Bergman

Working (0-5 years experience)

Bio

Participation: 4

I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped to lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear, a newish longtermist EA org.

I'm now doing research thanks to an EA Funds grant, trying to answer hard, important, EA-relevant questions. My first big project (in addition to everything listed here) was helping to generate this team's Red Teaming post.

Blog: aaronbergman.net

How others can help me

  • Suggest action-relevant, tractable research ideas for me to pursue
  • Give me honest, constructive feedback on any of my work
  • Introduce me to someone I might like to know :)
  • Convince me of a better marginal use of small-dollar donations than giving to the Fish Welfare Initiative, from the perspective of a suffering-focused hedonic utilitarian.
  • Offer me a job if you think I'd be a good fit
  • Send me recommended books, podcasts, or blog posts that there's like a >25% chance a pretty-online-and-into-EA-since-2017 person like me hasn't consumed
    • Rule-of-thumb standard: maybe "at least as good/interesting/useful as a random 80k podcast episode"

How I can help others

  • Open to research/writing collaboration :)
  • Would be excited to work on impactful data science/analysis/visualization projects
  • Can help with writing and/or editing
  • Discuss topics I might have some knowledge of
    • like: math, economics, philosophy (esp. philosophy of mind and ethics), psychopharmacology (hobby interest), helping to run a university EA group, data science, interning at government agencies

Comments (89)

Late to the party (and please forgive me if I overlooked a part where you address this), but I think this all misses the boring and kinda unsatisfying but (I’d argue) correct answer to the question posed:

Why should ethical anti-realists do ethics?

Because they might be wrong!  

Ok, my less elegant, more pedantically precise claim (argument?) is that: 

  1. Ethical anti-realists should do ethics iff moral realism is true in such a way that includes normativity (rather than just 'objective ordering'-flavor realism)
  2. It would be good for anti-realists to do ethics iff moral realism is true (even if normativity is fake but statements like "all else equal, a world with more suffering is worse than its alternative" have truth values)
  3. Anti-realism, if true, would (intrinsically) have no positive implications
    1. I know this is a controversial claim, and kinda what this whole post is about! 
    2. (I initially wrote 'anti-realism/nihilism,' but the latter term seems to be defined in a sentiment-laden way in several places)
  4. A person who accepts the above three bulleted premises and also:
    1. Has (for whatever reason) a desire or goal to 'do ethics' conditional on there being good reason to do ethics
    2. Uses (for whatever reason) any procedure consistent with any form of what I'll call "normal, prima facie non-insane, coherent decision theoretic reasoning" to make decisions...

 ... would in fact find him/herself (i) 'doing ethics' and [slightly less confident about this one] (ii) 'doing ethics' as though moral realism were true even if they believe that moral realism is probably not true.

[ok that's it for the argument]🔚

Two more things...

  1. It looks like Will MacAskill's book Moral Uncertainty has a chapter on this which I haven't engaged with, but prima facie I can't see why his arguments concerning normative moral uncertainty wouldn't apply at the meta-ethical level as well
  2. I should note that I may be inclined towards this answer because I think anti-realists are wrong (for reasons discussed in this 80k episode)

In terms of result, yeah it does, but I sorta half-intentionally left that out because I don't actually think Laplace's law of succession (LLS) is true as it often seems to be stated.

Why the strikethrough: after writing the shortform, I get that e.g., "if we know nothing more about them" and "in the absence of additional information" mean "conditional on a uniform prior," but I didn't get that before. And Wikipedia's explanation of the rule,

Since we have the prior knowledge that we are looking at an experiment for which both success and failure are possible, our estimate is as if we had observed one success and one failure for sure before we even started the experiments.

seems both unconvincing as stated and, even if assumed true, not dependent on that crucial assumption.

The recent 80k podcast on the contingency of abolition got me wondering what, if anything, the fact of slavery's abolition says about the ex ante probability of abolition - or more generally, what one observation of a binary random variable $X \sim \mathrm{Bernoulli}(p)$ says about $p$, as in

[Image: Bernoulli vs. Binomial Distribution: What's the Difference?]

Turns out there is an answer (!), and it's found starting in paragraph 3 of subsection 1 of section 3 of the Binomial distribution Wikipedia page:

A closed form Bayes estimator for p also exists when using the Beta distribution as a conjugate prior distribution. When using a general $\mathrm{Beta}(\alpha, \beta)$ as a prior, the posterior mean estimator is:

$$\hat{p}_b = \frac{x + \alpha}{n + \alpha + \beta}$$

[...]

For the special case of using the standard uniform distribution as a non-informative prior, $\mathrm{Beta}(\alpha = 1,\ \beta = 1)$, the posterior mean estimator becomes:

$$\hat{p}_b = \frac{x + 1}{n + 2}$$
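To make that concrete, here's a minimal Python sketch of the posterior mean estimator (my own illustration of the quoted formula, not anything from the Wikipedia article):

```python
def posterior_mean(x: int, n: int, alpha: float, beta: float) -> float:
    """Posterior mean of p under a Beta(alpha, beta) prior,
    after observing x successes in n Bernoulli trials."""
    return (x + alpha) / (n + alpha + beta)

# With the uniform prior Beta(1, 1), this is Laplace's rule of succession: (x + 1) / (n + 2)
print(posterior_mean(x=8, n=10, alpha=1, beta=1))  # (8 + 1) / (10 + 2) = 0.75
```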

Don't worry, I had no idea what $\mathrm{Beta}(\alpha, \beta)$ was until 20 minutes ago. In the Shortform spirit, I'm gonna skip any actual explanation and just link Wikipedia and paste this image (I added the uniform distribution dotted line because why would they leave that out?)

So...

Cool, so for the $n = 1$ case, we get that if you have a prior over the ex ante probability space described by one of those curves in the image, you...

  • 0) Start from the 'zero empirical information guesstimate' $\frac{\alpha}{\alpha + \beta}$
  • 1a) observe that the thing happens ($x = 1$), moving you, Ideal Bayesian Agent, to updated probability $\frac{\alpha + 1}{\alpha + \beta + 1}$, OR
  • 1b) observe that the thing doesn't happen ($x = 0$), moving you to updated probability $\frac{\alpha}{\alpha + \beta + 1}$

 

In the uniform case (which actually seems kind of reasonable for abolition), you...

  • 0) Start from prior $\frac{1}{2}$
  • 1a) observe that the thing happens, moving you to updated probability $\frac{2}{3}$, OR
  • 1b) observe that the thing doesn't happen, moving you to updated probability $\frac{1}{3}$
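A quick numerical check of those two updates with scipy (a sketch; after observing $x$ successes in $n$ trials, a $\mathrm{Beta}(\alpha, \beta)$ prior updates to $\mathrm{Beta}(\alpha + x,\ \beta + n - x)$, whose mean matches the estimator quoted above):

```python
from scipy.stats import beta

# Uniform prior over the ex ante probability: Beta(1, 1), mean 1/2
prior = beta(1, 1)
print(prior.mean())  # 0.5

# Observe the event happening (x = 1, n = 1): posterior is Beta(2, 1)
print(beta(2, 1).mean())  # 0.666... = 2/3

# Observe the event not happening (x = 0, n = 1): posterior is Beta(1, 2)
print(beta(1, 2).mean())  # 0.333... = 1/3
```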

At risk of jeopardizing EA's hard-won reputation of relentless internal criticism:

Even setting aside its object-level impact-relevant criteria (truth, importance, etc.), this is just enormously impressive, both in terms of magnitude and quality. The post itself gives us readers an anchor on which to latch critiques, questions, and comments, so it's easy to forget that each step or decision in the whole methodology had to be chosen from an enormous space of possibilities. And this looks, at least on a first read, like very many consecutive well-made steps and decisions.

Events as evidence vs. spotlights

Note: inspired by the FTX+Bostrom fiascos and associated discourse. May (hopefully) develop into longform by explicitly connecting this taxonomy to those recent events (but my base rate of completing actual posts cautions humility)

Event as evidence

  • The default: normal old Bayesian evidence
    • The realm of "updates," "priors," and "credences" 
  • Pseudo-definition: Induces [1] a change to or within a model (of whatever the model's user is trying to understand)
  • Corresponds to models that are (as is often assumed):
    1. Well-defined (i.e. specific, complete, and without latent or hidden information)
    2. Stable except in response to 'surprising' new information

Event as spotlight

  • Pseudo-definition: Alters how a person views, understands, or interacts with a model, just as a spotlight changes how an audience views what's on stage
    • In particular, spotlights change the salience of some part of a model
  • This can take place both/either:
    • At an individual level (think spotlight before an audience of one); and/or
    • To a community's shared model (think spotlight before an audience of many)
  • They can also change which information latent in a model is functionally available to a person or community, just as restricting one's field of vision increases the resolution of whichever part of the image shines through

Example

  1. You're hiking a bit of the Appalachian Trail with two friends, going north, using the following section of a map (the "external model"):
  2. An hour in, your mental/internal model probably looks like this:
  3. Event: ~~the collapse of a financial institution~~ you hear traffic
    1. As evidence, this causes you to change where you think you are—namely, a bit south of the first road you were expecting to cross
    2. As spotlight, this causes the three of you to stare at the same map as before, but in such a way that your internal models are all very similar, each looking something like this
Really the crop should be shifted down some but I don't feel like redoing it rn
  1. ^

    Or fails to induce

A few Forum meta things you might find useful or interesting:

  1. Two super basic interactive data viz apps:
    1. How often (in absolute and relative terms) a given forum topic appears with another given topic (a rough sketch of this computation follows the table below)
    2. Visualizing the popularity of various tags
  2. An updated Forum scrape including the full text and attributes of 10k-ish posts as of Christmas, '22
    1. See the data without full text in Google Sheets here
    2. Post explaining version 1.0 from a few months back
  3. From the data in no. 2, a few effortposts that never garnered an accordant amount of attention (qualitatively filtered from posts with (1) long read times, (2) modest positive karma, and (3) not a ton of comments):
    1. Column labels should be (left to right):
      1. Title/link
      2. Author(s)
      3. Date posted
      4. Karma (as of a week ago)
      5. Comments (as of a week ago)
 
| Title | Author(s) | Date posted | Karma | Comments |
|---|---|---|---|---|
| Open Philanthropy: Our Approach to Recruiting a Strong Team | pmk | 10/23/2021 | 11 | 0 |
| Histories of Value Lock-in and Ideology Critique | clem | 9/2/2022 | 11 | 1 |
| Why I think strong general AI is coming soon | porby | 9/28/2022 | 13 | 1 |
| Anthropics and the Universal Distribution | Joe_Carlsmith | 11/28/2021 | 18 | 0 |
| Range and Forecasting Accuracy | niplav | 5/27/2022 | 12 | 2 |
| A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming | turchin | 9/11/2022 | 16 | 1 |
| Strategic considerations for effective wild animal suffering work | Animal_Ethics | 1/18/2022 | 21 | 0 |
| Red teaming a model for estimating the value of longtermist interventions - A critique of Tarsney's "The Epistemic Challenge to Longtermism" | Anjay F, Chris Lonsberry, Bryce Woodworth | 7/16/2022 | 21 | 0 |
| Welfare stories: How history should be written, with an example (early history of Guam) | kbog | 1/2/2020 | 18 | 1 |
| Summary of Evidence, Decision, and Causality | Dawn Drescher | 9/5/2020 | 27 | 0 |
| Some AI research areas and their relevance to existential safety | Andrew Critch | 12/15/2020 | 27 | 0 |
| Maximizing impact during consulting: building career capital, direct work and more. | Vaidehi Agarwalla, Jakob, Jona, Peter4444 | 8/13/2021 | 21 | 2 |
| Independent Office of Animal Protection | Animal Ask, Ren Springlea | 11/22/2022 | 21 | 2 |
| Investigating how technology-focused academic fields become self-sustaining | Ben Snodin, Megan Kinniment | 9/6/2021 | 25 | 2 |
| Using artificial intelligence (machine vision) to increase the effectiveness of human-wildlife conflict mitigations could benefit WAW | Rethink Priorities, Tapinder Sidhu | 10/28/2022 | 22 | 3 |
| Crucial questions about optimal timing of work and donations | MichaelA | 8/14/2020 | 28 | 4 |
| Will we eventually be able to colonize other stars? Notes from a preliminary review | Nick_Beckstead | 6/22/2014 | 29 | 7 |
| Philanthropists Probably Shouldn't Mission-Hedge AI Progress | MichaelDickens | 8/23/2022 | 27 | 9 |
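Incidentally, here's a rough pandas sketch of how the topic co-occurrence counts in app no. 1 could be computed from a scrape like no. 2 (the column names and toy rows are hypothetical, not the app's actual code):

```python
from collections import Counter
from itertools import combinations

import pandas as pd

# Toy stand-in for the scrape: assume each post row carries a list of topic/tag names
posts = pd.DataFrame({
    "title": ["Post A", "Post B", "Post C"],
    "topics": [
        ["AI safety", "Forecasting"],
        ["AI safety", "Community"],
        ["AI safety", "Forecasting", "Community"],
    ],
})

# Absolute co-occurrence: how often each unordered pair of topics appears on the same post
pair_counts = Counter(
    pair
    for topics in posts["topics"]
    for pair in combinations(sorted(set(topics)), 2)
)

# Relative co-occurrence: normalize each pair's count by how often its first topic appears at all
topic_counts = Counter(t for topics in posts["topics"] for t in set(topics))
relative = {pair: count / topic_counts[pair[0]] for pair, count in pair_counts.items()}

print(pair_counts.most_common(3))
print(relative)
```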

A resource that might be useful: https://tinyapps.org/ 

 

There's a ton there, but one anecdote from yesterday: it referred me to this $5 iOS desktop app which (among other, more reasonable uses) made me this full-quality, fully intra-linked >3,600-page PDF of (almost) every file/site linked to by every file/site linked to from Tomasik's homepage (works best with old-timey, simpler sites like that).

Nice! (I admit I've only just skimmed and looked at the eye-catching graphics and tables 🙃). A couple of small potential improvements to those things:

  1. Is a higher-quality/bigger file version of the infographic available? Shouldn't matter, of course, but may as well put it on a fair memetic playing field with all the other beautiful charts out there
  2. Would you consider adding a few "reference" columns to the Welfare Range Table, in particular values for:
    1. Human (or perhaps "human if introspection is epistemically meaningful and qualia exist")
    2. Organisms from other kingdoms: plant, bacteria, etc. (I think any single one without nervous tissue would suffice)
    3. A representative (very probable) non-moral patient physical object ("rock")
    4. Less important (intuitively to me) potential additions:
      1. ChatGPT
      2. Video game character or anything else that "behaves" in some sense like an advanced organism but is ontologically very different
      3. Any other contrarian counterexample things that might push the limits of the taxonomy's applicability (maybe 'Roomba' or 'the iOS operating system' or 'a p-zombie'?)