EG

Erich_Grunewald

Associate Researcher @ Institute for AI Policy and Strategy
2345 karma · Joined Dec 2020 · Working (6-15 years) · Berlin, Germany · www.erichgrunewald.com

Bio

Anything I write here is written purely on my own behalf, and does not represent my employer's views (unless otherwise noted).

Comments
263

Interesting post! Another potential downside (which I don't think you mention) is that strict liability could disincentivize information sharing. For example, it could make AI labs more reluctant to disclose new dangerous capabilities or incidents (when that's not required by law). That information could be valuable for other AI labs, for regulators, for safety researchers, and for users.

Thank you for writing this! I love rats and found this -- and especially watching the video of the rodent farm and reading your account of the breeder visit -- distressing and pitiful.

Can you specify what you mean by "2.7x is a ridiculous number"?

I ask because it does happen that economies grow like that in a fairly short amount of time. For example, since the year 2000:

  • China's GDPpc 2.7x'd about 2.6 times
  • Vietnam's did it ~2.4 times
  • Ethiopia's ~2.1 times
  • India's ~1.7 times
  • Rwanda's ~1.3 times
  • The US's GDPpc is on track to 2.7x from 2000 in about 2029, assuming a 4% annual increase

So I assume you don't mean something like "2.7x never happens". Do you mean something more like "it's hard to find policies that produce 2.7x growth in a reasonable amount of time" or "typically it takes economies decades to 2.7x"?
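For concreteness, the time it takes an economy to 2.7x at a steady growth rate is a simple compound-growth calculation. Here is a minimal sketch (the rates are hypothetical, chosen only to illustrate the range):

```python
import math

def years_to_multiply(multiple: float, rate: float) -> float:
    """Years for a quantity to reach `multiple` times its size,
    growing at a constant annual `rate` (compound growth)."""
    return math.log(multiple) / math.log(1 + rate)

# Illustrative annual GDPpc growth rates.
for rate in (0.02, 0.04, 0.07, 0.10):
    print(f"{rate:.0%}: {years_to_multiply(2.7, rate):.0f} years to 2.7x")
```

At 4% a year, 2.7x takes roughly a quarter-century; at Chinese-boom-era rates near 10%, closer to a decade.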

I think the biggest danger to that reasoning is the premise that they are caused by GDP, and only by gdp, which I quite flatly dispute.

Well, this seems like something that is actually worth finding out. Because if it is the case that GDP (/ GDP per capita) does have a significant causal influence on one (or more) of them, then you are conditioning on a mediator, (partially) hiding the causal effect of GDP on the outcome. It seems to me like your model assumes that GDP does not have any causal influence on any of these variables, which seems like a pretty strong assumption. Unless I am misunderstanding something.

(ETA: Similarly, if both GDP and life satisfaction causally influence one of the variables, you are conditioning on a collider. That could introduce a spurious negative correlation masking a real correlation between GDP and life satisfaction, via Berkson's paradox. For example, suppose both life satisfaction and GDP cause social stability. Then, when you stratify by social stability, it would not be surprising to find a spurious negative correlation between GDP and life satisfaction, because a high-social-stability country, if it happens to have relatively low GDP, must have very high life satisfaction in order to achieve high social stability, and vice versa.)
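The collider point can be shown with a toy simulation (entirely hypothetical numbers: GDP and life satisfaction are generated as independent, and both raise "social stability"; conditioning on high stability then induces a spurious negative correlation):

```python
import random

random.seed(0)
n = 100_000

# Toy model: GDP and life satisfaction are independent by construction...
gdp = [random.gauss(0, 1) for _ in range(n)]
ls = [random.gauss(0, 1) for _ in range(n)]
# ...but both causally raise social stability (the collider).
stability = [g + l + random.gauss(0, 0.5) for g, l in zip(gdp, ls)]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Unconditional correlation: close to zero, as constructed.
print(f"unconditional: {corr(gdp, ls):.2f}")

# Condition on the collider: keep only "high-stability" countries.
high = [(g, l) for g, l, s in zip(gdp, ls, stability) if s > 1]
gh, lh = zip(*high)
# A clearly negative correlation appears (Berkson's paradox).
print(f"given high stability: {corr(gh, lh):.2f}")
```

Within the high-stability subset, a country with low GDP can only have ended up there by having high life satisfaction, and vice versa, which is exactly the mechanism described above.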

Any attempt at a defense of GDP, specifically, needs to take into account the fact that it's just a deeply flawed measure of value. That's why econ nobelists have been arguing against it for over a decade (and likely much longer, given that whole international reports were being published on it in 2012). So even if it *were* more predictive than the model suggests, that still wouldn't address the fact it's known to be misleading, all on its own, and not something I would spend a lot of time defending on the merits.

My understanding of these critiques is that they say either that (1) GDP is not intrinsically valuable, (2) GDP does not measure perfectly anything that we care about, or fails to measure many things that we care about, and/or (3) GDP focuses too narrowly on quantifiable economic transactions.

But if you were to find empirically that GDP causes something we do care about, e.g., life satisfaction, then I don't understand how those critiques would be relevant? (1) would not be relevant because we don't care about increasing GDP for its own sake, only in order to increase life satisfaction. (2) would not be relevant because whatever GDP would or would not succeed in measuring, it does measure something, and it would be desirable to increase whatever it measures (since whatever that is, causes life satisfaction). (3) would not be relevant because whatever does or does not go into the measure, again, it does measure something, and it would be desirable to increase whatever it measures.

But perhaps the most definitive argument against the unique value of gdp is in simple counterexamples. Between 2005 and 2022, Costa Rica had a higher life satisfaction than the United States, with less than a third of the GDPpc. This simply wouldn’t be possible, if gdp just bought you happiness. Ergo, that simply cannot be the answer.

Your reductio shows that GDP cannot be the only thing that has a causal influence on life satisfaction (assuming measurements are good, etc.). But I don't think OP or anyone else in this comment section is saying that GDP/wealth/money is the only thing that influences life satisfaction, only at most that it is one thing that has a comparatively strong influence on it. And your counterexample does not disprove that.

I don't know if these things make it robustly good, but some considerations:

  • Raising and killing donkeys for their skin seems like it could scale up more than the use of working donkeys, since (1) there may be increasing demand for donkey skin as China develops economically, and (2) there may be diminishing demand for working donkeys as Africa develops economically. So it could be valuable to have a preemptive norm/ban against slaughtering donkeys for this use, even if the short-term effect is net-negative.
  • It is not obvious that working donkeys have net-negative lives. My impression is that their lives are substantially better than the lives of most factory farmed animals, though that is a low bar. One reason to think that is the case is that working donkeys' owners live closer to, and are more dependent on, their animals than operators of factory farms are, meaning they benefit more from their animals being healthy and happy.
  • Markets in donkey skin could have some pretty bad externalities, e.g., with people who rely on working donkeys for a living seeing their animals illegally poached. (On the other hand, this ban could also make such effects worse, by pushing the market underground.) Meanwhile, working donkeys do useful work, so they probably improve human welfare a bit. (I doubt donkey skin used for TCM improves human welfare.)
  • On non-utilitarian views, you may place relatively more value on not killing animals, and/or relatively less value on reducing suffering. So if you give some weight to those views, that may be another reason to think this ban is net positive.

That makes sense. I agree that capitalism likely advances AI faster than other economic systems. I just don't think the difference is large enough for the economic system to be a very useful frame of analysis (or point of intervention) when it comes to existential risk, let alone the primary frame.

Thank you for writing this. I thought it was an interesting article. I want to push back a bit against the claim that AI risk should primarily or even significantly be seen as a problem of capitalism. You write:

[I]n one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization.

In short, it’s capitalism versus humanity.

I do think it is true that the drive to advance AI technology faster makes AI safety harder, and that competition under the capitalist system is one thing generating this drive. But I don't think this is unique to capitalism, or that things would be much better under some other economic system.

The Soviet Union was not capitalist, and yet it developed dangerous nuclear weapons and bioweapons. It put tremendous resources into technological development, e.g., space technology, missile systems, military aircraft, etc. I couldn't find figures for the Cold War as a whole, but in 1980 the USSR outspent the US (and Japan, and Germany) on R&D, in terms of % of GDP. And it did not seem to do better at developing these technologies in a safe way than capitalist countries did (cf. Sverdlovsk, Chernobyl).

If you look at what is probably the second most capable country when it comes to AI, China, you see an AI industry driven largely by priorities set by the state, and investment partly directed or provided by the state. China also has markets, but the Chinese state is highly interested in advancing AI progress, and I see no reason why this would be different under a non-market-based system. This is pretty clear from e.g., its AI development plan and Made in China 2025, and it has much more to do with national priorities (of security, control, strategic competition, and economic growth) than free market competition.

I think the idea is that lots of money is spent on treating diseases caused by aging, but little is spent on preventing aging in the first place. So I don't see a contradiction.

I reckon my donations this year will amount to about:

  • $3.7K to animal welfare, via Effektiv Spenden.
  • $1.7K to global health and development, via Effektiv Spenden.
  • $1.1K to the Donation Election Fund.
  • And my labour to mitigating risks from AI. In a way, this amounts to way more than the above, given that I would be earning 2x+ what I am earning now if I were doing what I did before, i.e., software engineering.

I recently reconfigured my giving to be about 85% animal welfare and 15% global health, however, for reasons similar to those spelled out in this post (I think, though I only skimmed that post, and came to my decision independently).

Some non-fiction books I enjoyed this year were James Gleick's The Information (a sprawling book about information theory, communication, and much else), Wealth and Power by Orville Schell & John Delury (about the intellectual history of modern China), Fawn M. Brodie's No Man Knows My History (about Joseph Smith and the early days of the LDS Church, or Mormonism), and David Stove's The Plato Cult (polemics against Popper, Nozick, idealism, and more). Some of these are obviously rather narrow, and you probably would not enjoy them if you are not at all interested in the subject matters.
