Wei_Dai

Comments

Preventing a US-China war as a policy priority

which could mean that invading Taiwan would give China a substantial advantage in any sort of AI-driven war.

My assessment is that the opposite is true: invading (and even successfully conquering) Taiwan would cause China to fall behind in any potential AI race. Absent a war, China can hope to achieve parity with the West (by which I mean the US and its allies, including South Korea and Japan) on the hardware side by buying chips from Taiwan like everyone else. But if a war happened, the semiconductor foundries in Taiwan would likely be destroyed (to prevent them from falling into the hands of the Chinese government), and China lacks the technology to rebuild them without Western help. Even if the foundries were not destroyed, critical supplies (such as specialty chemicals) would be cut off, rendering them useless. Almost all of the machines and supplies that go into a semiconductor foundry are made outside Taiwan, in the West. And while China is trying to develop its own domestic semiconductor supply chain, it is roughly 10 years behind the state of the art in most areas and not catching up, because the enormous amount of R&D going into the industry across the entire Western supply chain is not something China can match on its own.

So my conclusion is that if China invades Taiwan, it would lose access to the most advanced semiconductor processes, while the West could rebuild the lost Taiwanese foundries without too much trouble. (My knowledge of all this comes from listening to a bunch of different podcasts, but IIRC, Jon Y (Asianometry) on Semiconductor Tech and U.S.-China Competition should cover most of it.)

How to engage with AI 4 Social Justice actors

Here are some of my previous thoughts (written before these SJ-based critiques of EA were published) on the connections between EA, social justice, and AI safety, from someone on the periphery of EA. (I have no official or unofficial role in any EA org, have met few EA people in person, etc.) I suspect many EA people are reluctant to speak candidly about SJ for fear of political/PR consequences.

Updating on Nuclear Power

It’s economically feasible to go all solar without firm generation, at least in places at the latitude of the US (further north it becomes impossible, you’d need to import power).

How much does this depend on the costs of solar+storage continuing to fall? (In one of your FB posts you wrote, "Given 10-20 years and moderate progress on solar+storage I think it probably makes sense to use solar power for everything other than space heating.") I ask because I believe that since you wrote those posts, these prices have been going up instead. See this or this.

Covering 8% of the US or 30% of Japan (eventually 8-30% of all land on Earth?) with solar panels would take a huge amount of raw materials. Mining has obvious diseconomies at this kind of scale (costs increase as the lowest-cost mineral deposits are used up), so it seems premature to conclude "economically feasible" without some investigation into this aspect of the problem.
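
To make the scale concrete, here is a minimal back-of-envelope sketch in Python. Aside from the approximate US land area, every input is my own rough assumption, from the glass and aluminum mass per square meter of panel to the global production figures, so treat the outputs as order-of-magnitude illustration only:

```python
# Back-of-envelope estimate of panel materials for covering 8% of US land.
# ALL inputs below are rough assumptions for illustration, not sourced figures.

US_LAND_AREA_KM2 = 9.1e6            # approximate US land area
COVERAGE_FRACTION = 0.08            # the 8% figure from the quoted claim

GLASS_KG_PER_M2 = 10.0              # assumed module glass mass per m^2
ALUMINUM_KG_PER_M2 = 2.5            # assumed frame/mounting aluminum per m^2

GLOBAL_FLAT_GLASS_T_PER_YR = 9e7    # assumed world flat-glass output, t/yr
GLOBAL_PRIMARY_AL_T_PER_YR = 6.5e7  # assumed world primary aluminum, t/yr

area_m2 = US_LAND_AREA_KM2 * COVERAGE_FRACTION * 1e6  # km^2 -> m^2

glass_t = area_m2 * GLASS_KG_PER_M2 / 1000            # kg -> tonnes
aluminum_t = area_m2 * ALUMINUM_KG_PER_M2 / 1000

print(f"panel area: {area_m2:.1e} m^2")
print(f"glass: {glass_t:.1e} t, ~{glass_t / GLOBAL_FLAT_GLASS_T_PER_YR:.0f} "
      f"years of assumed world flat-glass output")
print(f"aluminum: {aluminum_t:.1e} t, ~{aluminum_t / GLOBAL_PRIMARY_AL_T_PER_YR:.0f} "
      f"years of assumed world primary aluminum output")
```

Even with these crude inputs, the glass alone works out to several decades of current world flat-glass production, which is exactly the kind of result that makes the materials side worth investigating.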

Where is the Social Justice in EA?

Taking the question literally, searching the term ‘social justice’ in EA forum reveals only 12 mentions, six within blog posts, and six comments, one full blog post supports it, three items even question its value, the remainder being neutral or unclear on value.

That can't be right. I think what may have happened is that when you do a search, the results page initially shows only 6 each of posts and comments, and you have to click "next" to see the rest. If I keep clicking next until I get to the last pages of posts and comments, I count 86 blog posts and 158 comments that mention "social justice", as of now.

BTW I find it interesting that you used the phrase "even question its value", since "even" is "used to emphasize something surprising or extreme". I would consider questioning the value of things to be pretty much the core of the EA philosophy...

Modelling Great Power conflict as an existential risk factor

It seems to me that up to and including WW2, many wars were fought for economic/material reasons (e.g., gaining arable land and mineral deposits), but now, due to various changes, invading and occupying another country almost certainly causes a net loss of resources except in rare circumstances. Wars can still be fought for ideological ("spread democracy") and strategic ("control sea lanes, maintain buffer states") reasons (and probably others I'm not thinking of right now), but at least one big historical reason for war has mostly gone away, at least for the foreseeable future?

Curious if you agree with this, and what you see as the major potential causes of war in the future.

Introducing a New Course on the Economics of AI

Not directly related to the course, but since you're an economist with an interest in AI, I'm curious what you think about the argument that AGI will drastically increase economies of scale.

Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming

My own fantasy is that people will eventually be canceled for failing to display sufficient moral uncertainty. :)

Why AI alignment could be hard with modern deep learning

Sounds like their positions are not public, since you don't cite anyone by name? Is there any reason for that?

Why AI alignment could be hard with modern deep learning

There’s a very wide range of views on this question, from “misalignment risk is essentially made up and incoherent” to “humanity will almost certainly go extinct due to misaligned AI.” Most people’s arguments rely heavily on hard-to-articulate intuitions and assumptions.

My sense is that the disagreements are mostly driven "top-down" by general psychological biases/inclinations toward optimism vs. pessimism, rather than "bottom-up" as the result of independent lower-level disagreements over specific intuitions and assumptions. The reason I think this is that there seems to be a strong correlation between concern about misalignment risk and concern about other kinds of AI risk (i.e., AI-related x-risk). In other words, if the disagreement were "bottom-up", you'd expect at least some people who are optimistic about misalignment risk to be pessimistic about other kinds of AI risk, such as what I call "human safety problems" (see examples here and here). But in fact I don't see anyone whose position is something like, "AI alignment will be easy or likely solved by default, therefore we should focus our efforts on these other kinds of AI-related x-risks that are much more worrying."

(From my limited observation, optimism/pessimism on AI risk also seems correlated with optimism/pessimism on other topics. It might be interesting to verify this through some systematic method, like a survey of researchers.)
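
As a purely hypothetical illustration of what such a check could look like (the ratings below are invented, and the 1-10 scales and rank-correlation test are just one possible design):

```python
# Hypothetical sketch of the suggested survey check: given each researcher's
# 1-10 pessimism ratings on misalignment risk and on other AI x-risks, a rank
# correlation would test the "top-down" hypothesis. Data is made up.
from scipy.stats import spearmanr

misalignment_pessimism = [2, 3, 8, 9, 5, 7, 1, 6]   # invented ratings
other_ai_xrisk_pessimism = [3, 2, 9, 8, 4, 8, 2, 5]

rho, p_value = spearmanr(misalignment_pessimism, other_ai_xrisk_pessimism)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A strongly positive rho across many respondents would support the
# "top-down" (general optimism/pessimism) explanation; near-zero or mixed
# results would suggest "bottom-up" disagreements over specific assumptions.
```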

In favor of more anthropics research

See this comment by Vladimir Slepnev and my response to it, which explain why I don't think UDT offers a full solution to anthropic reasoning.
