I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear, a newish longtermist EA org.
I'm now doing research thanks to an EA Funds grant, trying to answer hard, important, EA-relevant questions. My first big project (in addition to everything listed here) was helping to generate this team's Red Teaming post.
Blog: aaronbergman.net
In terms of the result, yeah it does, but I sorta half-intentionally left that out, because I don't actually think LLS is true as it's often stated.
Why the strikethrough: after writing the shortform, I now get that e.g. "if we know nothing more about them" and "in the absence of additional information" mean "conditional on a uniform prior," but I didn't get that before. And Wikipedia's explanation of the rule,
Since we have the prior knowledge that we are looking at an experiment for which both success and failure are possible, our estimate is as if we had observed one success and one failure for sure before we even started the experiments.
seems both unconvincing as stated and, even if taken to be true, not actually dependent on that crucial uniform-prior assumption.
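To make the prior-dependence concrete, here's a minimal sketch (my own, with made-up numbers): the rule's $(s+1)/(n+2)$ is just the posterior mean of $p$ under a $\mathrm{Beta}(1, 1)$ (i.e. uniform) prior, and swapping in a different prior changes the answer:

```python
# Sketch: Laplace's rule of succession as a posterior mean.
# With a Beta(a, b) prior and s successes in n trials, the posterior
# is Beta(a + s, b + n - s), whose mean is (a + s) / (a + b + n).
from scipy.stats import beta

def posterior_mean(s, n, a=1.0, b=1.0):
    """Posterior mean of p given s successes in n trials and a Beta(a, b) prior."""
    return beta(a + s, b + n - s).mean()

print(posterior_mean(3, 10))            # uniform prior: (3 + 1) / (10 + 2) = 1/3
print(posterior_mean(3, 10, a=5, b=5))  # Beta(5, 5) prior: (3 + 5) / (10 + 10) = 0.4
```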
Fixed, thanks!
The recent 80k podcast on the contingency of abolition got me wondering what, if anything, the fact of slavery's abolition says about the ex ante probability of abolition - or more generally, what one observation of a binary random variable says about $p$, as in $X \sim \mathrm{Bernoulli}(p)$.
Turns out there is an answer (!), and it's found starting in paragraph 3 of subsection 1 of section 3 of the Binomial distribution Wikipedia page:
A closed form Bayes estimator for $p$ also exists when using the Beta distribution as a conjugate prior distribution. When using a general $\mathrm{Beta}(\alpha, \beta)$ as a prior, the posterior mean estimator is:

$$\hat{p} = \frac{x + \alpha}{n + \alpha + \beta}$$
[...]
For the special case of using the standard uniform distribution as a non-informative prior, $\mathrm{Beta}(1, 1)$, the posterior mean estimator becomes:

$$\hat{p} = \frac{x + 1}{n + 2}$$
Don't worry, I had no idea what $\mathrm{Beta}(\alpha, \beta)$ was until 20 minutes ago. In the Shortform spirit, I'm gonna skip any actual explanation and just link Wikipedia and paste this image (I added the uniform distribution dotted line because why would they leave that out?)
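For anyone who wants to fiddle with it, here's roughly how to make that kind of plot (the specific $(\alpha, \beta)$ pairs are just my picks, not necessarily the ones in the image):

```python
# Sketch: a few Beta(alpha, beta) densities, with the uniform Beta(1, 1)
# shown as a dashed line.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta

x = np.linspace(0.001, 0.999, 500)  # avoid the endpoints, where some pdfs blow up
for a, b in [(0.5, 0.5), (2, 2), (2, 5), (5, 2)]:
    plt.plot(x, beta.pdf(x, a, b), label=f"Beta({a}, {b})")
plt.plot(x, beta.pdf(x, 1, 1), "k--", label="Beta(1, 1) = uniform")
plt.xlabel("p")
plt.ylabel("density")
plt.legend()
plt.show()
```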
Cool, so for the $n = x = 1$ case, we get that if you have a prior over the ex ante probability space described by one of those curves in the image, your posterior mean estimate is $\hat{p} = \frac{1 + \alpha}{1 + \alpha + \beta}$.
In the uniform case (which actually seems kind of reasonable for abolition), that works out to $\hat{p} = \frac{1 + 1}{1 + 2} = \frac{2}{3}$.
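A quick sketch of that arithmetic, if you want to try other priors (the non-uniform $(\alpha, \beta)$ values below are just examples):

```python
# Sketch: posterior mean (x + a) / (n + a + b) for the one-observation,
# one-success (n = x = 1) case under a few Beta(a, b) priors.
def posterior_mean(x, n, a, b):
    return (x + a) / (n + a + b)

print(posterior_mean(1, 1, 1, 1))      # uniform prior -> 2/3
print(posterior_mean(1, 1, 0.5, 0.5))  # Jeffreys prior -> 0.75
print(posterior_mean(1, 1, 2, 2))      # Beta(2, 2) prior -> 0.6
```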
At risk of jeopardizing EA's hard-won reputation for relentless internal criticism:
Even setting aside its object-level, impact-relevant criteria (truth, importance, etc.), this is just enormously impressive in terms of both magnitude and quality. The post itself gives us readers an anchor on which to latch critiques, questions, and comments, so it's easy to forget that each step or decision in the whole methodology had to be chosen from an enormous space of possibilities. And this looks, at least on a first read, like very many consecutive well-made steps and decisions.
Note: inspired by the FTX+Bostrom fiascos and associated discourse. May (hopefully) develop into longform by explicitly connecting this taxonomy to those recent events (but my base rate of completing actual posts counsels humility).
Or fails to induce
A few Forum meta things you might find useful or interesting:
A resource that might be useful: https://tinyapps.org/
There's a ton there, but one anecdote from yesterday: it referred me to this $5 iOS desktop app which (among other more reasonable uses) made me a full-quality, fully intra-linked >3,600-page PDF of (almost) every file/site linked to by every file/site linked to from Tomasik's homepage (works best with old-timey, simpler sites like that).
Nice! (I admit I've only just skimmed and looked at the eye-catching graphics and tables 🙃). A couple of small potential improvements to those things:
Thank you - fixed!
Late to the party (and please forgive me if I overlooked a part where you address this), but I think this all misses the boring and kinda unsatisfying but (I’d argue) correct answer to the question posed:
Because they might be wrong!
Ok, my less elegant, more pedantically precise claim (argument?) is that:
... would in fact find themselves (i) 'doing ethics' and [slightly less confident about this one] (ii) 'doing ethics' as though moral realism were true, even if they believe that moral realism is probably not true.
[ok that's it for the argument]🔚
Two more things...