
Linch

@ EA Funds · Working (6-15 years) · openasteroidimpact.org

Posts (72)

Comments (2761)

Linch

Another aspect here is that scientists in the 1940s were at a different life stage, and might just have been more generally "mature," than people of a similar age/nationality/social class today. (E.g., most Americans in their late twenties back then were probably married with multiple children, life expectancy at birth in the 1910s was about 50 so 30 counted as middle-aged, society overall was not organized as a gerontocracy, etc.)

I like the New Yorker for longform writing about topics in the current "zeitgeist," but they aren't a comprehensive news source, and don't aim to be. (I like their (a) hit rate for covering topics that I subjectively consider important, (b) quality of writing, and (c) generally high standards for factual accuracy.)

Linch

The Economist has an article on how China's top politicians view catastrophic risks from AI, titled "Is Xi Jinping an AI Doomer?"

Western accelerationists often argue that competition with Chinese developers, who are uninhibited by strong safeguards, is so fierce that the West cannot afford to slow down. The implication is that the debate in China is one-sided, with accelerationists having the most say over the regulatory environment. In fact, China has its own AI doomers—and they are increasingly influential.

[...]

China’s accelerationists want to keep things this way. Zhu Songchun, a party adviser and director of a state-backed programme to develop AGI, has argued that AI development is as important as the “Two Bombs, One Satellite” project, a Mao-era push to produce long-range nuclear weapons. Earlier this year Yin Hejun, the minister of science and technology, used an old party slogan to press for faster progress, writing that development, including in the field of AI, was China’s greatest source of security. Some economic policymakers warn that an over-zealous pursuit of safety will harm China’s competitiveness.

But the accelerationists are getting pushback from a clique of elite scientists with the Communist Party’s ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing award for advances in computer science. In July Mr Yao said AI poses a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chair of the state’s expert committee on AI governance, also reckon that AI may threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans as humans see ants.

The influence of such arguments is increasingly on display. In March an international panel of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deceit. [...]

The debate over how to approach the technology has led to a turf war between China’s regulators. [...] The impasse was made plain on July 11th, when the official responsible for writing the AI law cautioned against prioritising either safety or expediency.

The decision will ultimately come down to what Mr Xi thinks. In June he sent a letter to Mr Yao, praising his work on AI. In July, at a meeting of the party’s central committee called the “third plenum”, Mr Xi sent his clearest signal yet that he takes the doomers’ concerns seriously. The official report from the plenum listed AI risks alongside other big concerns, such as biohazards and natural disasters. For the first time it called for monitoring AI safety, a reference to the technology’s potential to endanger humans. The report may lead to new restrictions on AI-research activities.

More clues to Mr Xi’s thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should “abandon uninhibited growth that comes at the cost of sacrificing safety”, says the guide. Since AI will determine “the fate of all mankind”, it must always be controllable, it goes on. The document calls for regulation to be pre-emptive rather than reactive[...]

Overall this makes me more optimistic that international treaties with teeth on GCRs from AI are possible, potentially before we have warning shots from large-scale harms.

I'm thinking less of the total number of people and more of the probability of having specific collaborators who work in your exact area or are otherwise useful to have around.

You might believe that there are network effects, or that the "best" people are only willing to come along if there's a sufficiently large intellectual scene. (Not saying either is likely, just illustrating that the implied underlying model is not a tautology).

To quickly clarify what I mean by "confused," 

to the degree there are any health trade-offs, the veganism focus tends to make the EA coalition intellectually weaker and more politically polarized.

I mean that I expect veganism's health tradeoffs and political polarization to be almost entirely independent of each other. It could be the case that veganism has no health tradeoffs but EA nonetheless should not focus on it because there is extreme political polarization. It could also be the case that veganism has many health costs but its support is divided evenly along partisan lines.

I also would be surprised if there's a strong correlational case. In general the world isn't that neat.

So I basically think your claim is pretty close to formally invalid. I'm a bit surprised people haven't noticed this even after I pointed it out initially. 

Do you have an example of people disagreeing with you? When I made similar points before, I think they've been received relatively positively. 

Thanks, interesting letter/link! 

(Quickly noting for casual readers that I didn't say all the things, or hold all the views, that this comment ascribes to me, though no particular detail was especially egregious. Just a heads-up that onlookers should reread my own comments to understand the specific claims I make; people who know me well can also DM me for clarifications.)

If you want people to have more children, it's unclear why you'd support a candidate whose primary policy goal is to prevent immigrants from providing affordable services to Americans.

I mean, TFR (total fertility rate) is falling everywhere, but it's at least plausible that preventing people from higher-TFR countries from immigrating to lower-TFR countries would increase the net number of children. (Just making a narrow claim about the logical implication; not saying that I endorse this policy.)
