This is a linkpost for https://jacobin.com/2023/05/longtermism-new-cold-war-biden-administration-china-semiconductors-ai-policy/
Jacob Davis, a writer for the socialist political magazine Jacobin, raises an interesting concern: in his assessment, current longtermist initiatives in AI Safety are escalating tensions between the US and China. This highlights a conundrum for the Effective Altruism movement, which seeks both to advance AI Safety and to avoid a great power conflict between the US and China.
This is not the first time this conundrum has been raised; it has been explored on the forum previously by Stephen Clare.
The key points Davis asserts are:
- Longtermists were key players in President Biden's decision last October to place heavy controls on semiconductor exports.
- Key longtermist figures advancing export controls and hawkish policies against China include former Google CEO Eric Schmidt (through Schmidt Futures and the longtermist political fund Future Forward PAC) and former congressional candidate and FHI researcher Carrick Flynn, as well as other longtermists in key positions at Georgetown's Center for Security and Emerging Technology and the RAND Corporation.
- Export controls have failed to limit China's AI research, but have wrought havoc on global supply chains and are seen as protectionist in some circles.
I hope this linkpost opens up a debate about the merits and weaknesses of current strategies and views in longtermist circles.
I think there's something to this, but:
This seems no-true-Scotsmany. It has become almost commonplace for organisations that started from a longtermist seed to end up as competitors in the AI arms race, so if many people who are influenced by longtermist philosophy end up doing stuff that seems harmful, we should update towards 'longtermism tends to be harmful in practice' much more than towards 'those people are not longtermists'.
I agree with this, but "longtermists may do harmful stuff" doesn't mean "this person doing harmful stuff is a longtermist". My understanding is that Schmidt (1) has never espoused views along the lines of "positively influencing the long-term future is a key moral priority of our time", and (2) seems to see AI/AGI kind of like the nuclear bomb -- a strategically important and potentially dangerous technology that the US should develop before its competitors.
I think it's fair for Davis to characterise Schmidt as a longtermist.
He's recently been vocal about AI X-Risk. He funded Carrick Flynn's openly longtermist campaign via the Future Forward PAC, alongside Moskovitz and SBF. His philanthropic organisation Schmidt Futures has a future-focused outlook and funds various EA orgs.
And there are longtermists who are pro-AI, like Sam Altman, who want to use AI to capture the lightcone of future value.
https://www.cnbc.com/amp/2023/05/24/ai-poses-existential-risk-former-google-ceo-eric-schmidt-says.html
Yeah, but so have lots of people; it doesn't mean they're all longtermists. Same thing with Sam Altman -- I haven't seen any indication that he's longtermist, but would definitely be interested if you have any sources. This tweet seems to suggest that he does not consider himself a longtermist.
Do you have a source on Schmidt funding Carrick Flynn's campaign? Jacobin links this Vox article which says he contributed to Future Forward, but it seems implied that it was to defeat Donald Trump. Though I actually don't think this is a strong signal, as Carrick Flynn was mostly campaigning on pandemic prevention and that seems to make sense on neartermist views too.
I know Schmidt Futures has "future" in its name, but as far as I can tell they're not especially focused on the long-term future. They seem to just want to boost innovation through scientific research and talent growth, but so does, like, nearly every government. For example, their Our Mission page does not mention the word "future".
Can you give some examples? My impression was that the funding has been minimal at best; I would be surprised if EA orgs receive, say, >10% of their funding, and it's likely <1%.
Also, I don't want to overstate this point, but I don't think I've yet met a longtermist researcher who claims to have had an extended (or any) conversation with Schmidt. Given that there aren't many longtermist researchers to begin with (<500 worldwide, defined rather broadly?), it'd be quite surprising for someone to claim to be a longtermist (or for others to claim that they are) if they've never even talked to someone doing research in the space.
To be fair, I think a few Schmidt Futures people were looking around EA Global for things to fund in 2022. I can imagine why someone would think they're longtermist.
I agree there are probably a few longtermist and/or EA-affiliated people at Schmidt Futures, just as there are probably such people at Google, Meta, the World Bank, etc. This is a different claim from whether Schmidt Futures institutionally is longtermist, which is again a different claim from whether Eric Schmidt himself is.
I don't think that's so important a distinction. Prominent longtermists have declared the view that longtermism basically boils down to x-risk, which (again in their view) overwhelmingly boils down to AI risk. If, following their messaging, we get highly influential people doing harmful stuff in the name of AI risk, I think we should still update towards 'longtermism tends to be harmful in practice'.
Not as much as if they were explicitly waving a longtermist banner, but the more we believe the longtermist movement has had any impact on society at all, the stronger this update should be.
The posts linked in support of "prominent longtermists have declared the view that longtermism basically boils down to x-risk" do not actually advocate this view. In fact, they argue that longtermism is unnecessary in order to justify worrying about x-risk, which is evidence for the proposition you're arguing against, i.e. you cannot conclude someone is a longtermist because they're worried about x-risk.
Are you claiming that if (they think and we agree that) longtermism is 80+% concerned with AI safety work and AI safety work turns out to be bad, we shouldn't update that longtermism is bad? The first claim seems to be exactly what they think.
Scott:
You could argue that he means 'socially promote good norms on the assumption that the singularity will lock in much of society's then-standard morality', but 'shape them by trying to make AI human-compatible' seems a much more plausible reading of the last sentence to me, given the context of longtermism.
Neel:
He identifies as not a longtermist (mea culpa), but presumably considers longtermism the source of these 'core action relevant points of EA', since they certainly didn't come from the global poverty or animal welfare wings.
Also, at EAG London, Toby Ord estimated there were 'less than 10' people in the world working full time on general longtermism (as opposed to AI or biotech) - whereas the number of people who'd consider themselves longtermist is surely in the thousands.
I don't know how we got to whether we should update about longtermism being "bad." As far as I'm concerned, this is a conversation about whether Eric Schmidt counts as a longtermist by virtue of being focused on existential risk from AI.
It seems to me like you're saying: "the vast majority of longtermists are focused on existential risks from AI; therefore, people like Eric Schmidt who are focused on existential risks from AI are accurately described as longtermists."
When stated that simply, this is an obvious logical error (in the form of "most squares are rectangles, so this rectangle named Eric Schmidt must be a square"). I'm curious if I'm missing something about your argument.
This is a true claim in general, but it seems quite implausible for Schmidt specifically, who has been in tech and at Google for much longer than people in our parts have been around.
Mind if I re-frame this discussion? The relevant question here shouldn't be a matter of beliefs ('is he a longtermist?') but a matter of identity and identity strength. This isn't to say beliefs aren't important, or that knowing his wouldn't be informative, but identity (at least to some considerable degree) precedes and predicts beliefs and behavior.
But I also don't want to overemphasize particular labels; there are enough discernible positions out there that labels alone aren't very helpful, especially for individuals with some expertise, in positions of authority, who may be reluctant to carelessly endorse particular groups.
Accepting this, here's some of what we could look into:
I agree that identity and identity strength are important variables for collective guilt assignment.
That said, I think the case for JM is substantially stronger than the case for Schmidt, whom we were previously talking about upthread.