MichaelDickens

4607 karma · Joined

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences (1)

Quantitative Models for Cause Selection

Comments (708)

I think most people don't talk about it because they don't think it's a big deal. FWIW I don't think it's a huge deal but it's still concerning.

Ok so all major funders of AI safety are personally, and probably quite significantly going to profit from the large AI companies making AI powerful and pervasive.

I know of only two major funders in AI safety—Jaan Tallinn and Good Ventures—and both have investments in frontier AI companies. Do you know of any others?

This post has been sitting in my open tabs for 4 months and I am finally getting to it today.

Using η=1.87 would require us to start taking into account absolute income levels in our cost-effectiveness analyses.

I'm not entirely sure, but by my reading, the article is using this as an argument against η > 1. But I don't think it's really an argument. If η > 1 then indeed you should take into account absolute income levels, and I do in fact think you should do that. And yes that would change prioritization, and that's a good thing because the current prioritization is probably wrong—it doesn't assign enough weight to people with lower incomes.

You never actually say you're arguing against η=1.87, but the title implies it.

FWIW I am not convinced by the evidence on what value of η to use; different lines of evidence point in different directions. I lean toward η > 1, so I think it's reasonable to use η > 1 everywhere.

  • I think most people intuitively feel that income doublings matter more for poorer people, which requires η > 1.
  • η = 1 implies you can get arbitrarily high utility if you're sufficiently rich, which seems wrong. It seems more likely that income doublings provide diminishing returns—once you reach very high income levels (like $100M+), income doublings hardly matter at all. This makes sense when you look at the space of consumable goods: doubling your income increases the size of the set of things you can buy, but each doubling increases the set size by less than the previous doubling.[1]
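
To make this concrete, here's a rough sketch (the incomes and η values below are just illustrative) of how much an income doubling is worth under isoelastic (CRRA) utility. With η = 1, every doubling is worth the same; with η = 1.87, a doubling at low income is worth far more than one at high income, and total utility is bounded above.

```python
import numpy as np

def crra_utility(income, eta):
    """Isoelastic (CRRA) utility of income.
    eta = 1 is log utility; eta > 1 makes utility bounded as income grows."""
    if eta == 1:
        return np.log(income)
    return (income ** (1 - eta) - 1) / (1 - eta)

def value_of_doubling(income, eta):
    """Utility gained by doubling income from a given starting level."""
    return crra_utility(2 * income, eta) - crra_utility(income, eta)

for eta in [1.0, 1.87]:
    low = value_of_doubling(1_000, eta)     # doubling at $1k/year
    high = value_of_doubling(100_000, eta)  # doubling at $100k/year
    print(f"eta={eta}: doubling at $1k is worth {low / high:.0f}x a doubling at $100k")
```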

The best literature review I've seen comes from Gordon Irlam. The only real conclusion is that estimates of η vary enormously depending on the method.

One paper that I particularly like[2], A New Method of Estimating Risk Aversion, estimates η using labor elasticity and finds η = 0.71 (see Table 1). But if you look at the various data sources it uses, the estimates of η vary greatly depending on the source, so the η = 0.71 average hides a lot of underlying uncertainty.

I often see people cite life satisfaction data showing η = 1. The commonly-cited paper, Stevenson & Wolfers (2013), didn't perform any statistical tests for non-linearity of log-income vs. happiness. In Table 1, the paper did binary comparisons of the slope of log-income vs. happiness for rich vs. poor people and did not find clear differences, but it did find that the slope was generally steeper for rich people[3], which suggests η < 1 (I'm pretty sure on priors that η ≥ 1, so I don't know what's up with that result).
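
For what it's worth, the sort of test I have in mind would look something like this (a rough sketch with simulated placeholder data, not the actual Stevenson & Wolfers data or method): regress life satisfaction on log income plus a quadratic term and check whether the quadratic term is distinguishable from zero.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data standing in for individual-level survey responses.
rng = np.random.default_rng(0)
income = rng.lognormal(mean=10, sigma=1, size=5000)
satisfaction = 5 + 0.8 * np.log(income) + rng.normal(0, 1, size=5000)
df = pd.DataFrame({"satisfaction": satisfaction, "log_income": np.log(income)})

# A significantly negative quadratic term means satisfaction rises sub-linearly
# with log income (consistent with eta > 1); positive means super-linearly (eta < 1).
model = smf.ols("satisfaction ~ log_income + I(log_income ** 2)", data=df).fit()
print(model.summary().tables[1])
```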

I briefly looked for more recent papers that test for non-linearity of log-income vs. happiness. I didn't find exactly that, but I did find Happiness, income satiation and turning points around the world, which finds that life satisfaction levels off at a certain income level. I didn't read it carefully, but it looks like the paper used a sketchy spline curve-fitting method that I don't trust (the fitted curves show that higher income decreases happiness above a certain point, which suggests that they're using the wrong kind of curve; see Fig 1 and Fig 2[4]). But the fact that their spline curves level off suggests that happiness increases sub-linearly with log income.

I feel like there's room for a solid meta-analysis on income and life satisfaction, and I'm not satisfied with any of the existing literature.


In summary, the existing evidence is so high-variance that none of it meaningfully updates me away from my intuition that η must be greater than 1.


[1] This brings to mind a method for estimating η that I've never seen: Assume the prices of goods are Pareto-distributed and estimate the alpha parameter of the underlying Pareto distribution. Use that to estimate η (using this method).
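
The first step might look like this (a rough sketch with synthetic prices; the mapping from α to η would come from the linked method, which I'm not reproducing here):

```python
import numpy as np

def pareto_alpha_mle(prices, price_min):
    """Maximum-likelihood (Hill) estimate of the Pareto tail parameter alpha,
    assuming prices at or above `price_min` are Pareto-distributed."""
    prices = np.asarray(prices, dtype=float)
    tail = prices[prices >= price_min]
    return len(tail) / np.log(tail / price_min).sum()

# Synthetic placeholder prices; a real estimate would use a catalog of consumer goods.
rng = np.random.default_rng(1)
prices = 10 * (1 + rng.pareto(a=1.5, size=10_000))  # true alpha = 1.5, minimum price = 10
print(f"estimated alpha: {pareto_alpha_mle(prices, price_min=10):.2f}")
```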

[2] Even though I haven't actually read most of it lol. I just like the concept. Maybe it contains a bunch of math errors, I don't know.

[3] The paper did this comparison across a bunch of surveys. You'd need to do some kind of sophisticated non-standard significance test to determine if the overall difference is statistically significant, and the paper did not do that. (I think what you'd want to do is create a combined likelihood function that includes every survey and then get the p-value from the likelihood function. Or just skip the p-value and report the shape of the likelihood function because that's more informative anyway.)
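
Something like this (a rough sketch with made-up per-survey numbers; the real inputs would be each survey's estimated rich-minus-poor slope difference and its standard error):

```python
import numpy as np
from scipy import stats

# Placeholder per-survey estimates of (slope_rich - slope_poor) and their standard errors.
diffs = np.array([0.03, -0.01, 0.05, 0.02, 0.04])
ses = np.array([0.02, 0.03, 0.04, 0.02, 0.03])

def combined_log_likelihood(delta):
    """Log-likelihood of a common true slope difference across all surveys,
    treating each survey's estimate as Normal(delta, se)."""
    return stats.norm.logpdf(diffs, loc=delta, scale=ses).sum()

# Report the shape of the likelihood over a grid, not just a single number.
grid = np.linspace(-0.1, 0.1, 201)
loglik = np.array([combined_log_likelihood(d) for d in grid])
print(f"MLE of slope difference: {grid[np.argmax(loglik)]:.3f}")

# Or, if you insist on a p-value: likelihood-ratio test against delta = 0.
lr_stat = 2 * (loglik.max() - combined_log_likelihood(0.0))
print(f"p-value vs. delta=0: {stats.chi2.sf(lr_stat, df=1):.3f}")
```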

[4] Perhaps this is a property of the data, not the curve-fitting method. The paper says "in [some comparisons], the SWB [subjective well-being] level at satiation was greater due to turning-point effects (Bayes factor < 1/3)." They say they present this data in the supplementary appendix, which isn't publicly available and isn't on Sci-Hub (AFAICT), so it seems I can't check.

This principle has seemingly strange implications:

  • If and nothing has been done yet, then the first thing you do produces infinite utility (assuming you start by doing the best thing possible and then move to progressively worse things).
  • If , then a randomly-chosen opportunity has infinite expected utility.

I feel this way—I recently watched some footage of a PauseAI protest and it made me cringe, and I would hate participating in one. But also I think there are good rational arguments for doing protests, and I think AI pause protests are among the highest-EV interventions right now.

Jaan Tallinn, who funds SFF, has invested in DeepMind and Anthropic. I don't know if this is relevant because AFAIK Tallinn does not make funding decisions for SFF (although presumably he has veto power).

I'm confident in PauseAI US's ability to run protests and I think the case for doing protests is pretty strong. You're also doing lobbying, headed by Felix De Simone. I'm less confident about that so I have some questions.

  1. There are no other organized groups (AFAIK) doing AI pause protests in the US of the sort you're doing. But there are other groups talking to policy-makers, including Center for AI Policy, Center for AI Safety, and Palisade (plus some others outside the US, and some others that focus on AI risk but that I think are less value-aligned). What is the value-add of PauseAI US's direct lobbying efforts compared to these other groups? And are you coordinating with them at all?
  2. What is Felix's background / experience in this area? Basically, why should I expect him to be good at lobbying?

Given that a protein needs a very exact number of each amino acid to be synthesized, for essential amino acids like Methionine, I would expect their consumptions to be a bottleneck for muscle building which needs protein. Even if all the other proteins are in good amounts and thus the PDCAAS score is decent, you can drown in a river that's on average 20cm deep.

I could be misreading, but it sounds like you're misunderstanding how PDCAAS works. If a food contains lots of every other essential amino acid but only (say) 30% of the required amount of methionine, then its PDCAAS will be 30%. If the PDCAAS is (say) 90%, that means it contains at least 90% of the requirement for every essential amino acid, not just 90% on average.
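
As a rough sketch of that logic (the reference-pattern numbers below are placeholders, not official values): PDCAAS is set by the single most limiting essential amino acid, corrected for digestibility, not by an average.

```python
def pdcaas(content_mg_per_g, reference_mg_per_g, digestibility):
    """PDCAAS = digestibility x (ratio of the most limiting essential amino acid),
    capped at 1.0. Inputs are mg of each amino acid per g of protein."""
    ratios = {aa: content_mg_per_g[aa] / reference_mg_per_g[aa] for aa in reference_mg_per_g}
    return min(1.0, digestibility * min(ratios.values()))

# Placeholder example: plenty of every amino acid except methionine+cysteine at 30%
# of the reference requirement, so the score is ~0.3, not the ~0.9 an average would suggest.
reference = {"lysine": 45, "methionine+cysteine": 22, "threonine": 23, "tryptophan": 6}
food = {"lysine": 60, "methionine+cysteine": 6.6, "threonine": 30, "tryptophan": 8}
print(pdcaas(food, reference, digestibility=1.0))  # ~0.3
```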
