AI safety
Studying and reducing the existential risks posed by advanced artificial intelligence

Shortforms

TL;DR: Someone should probably write a grant to produce a spreadsheet/dataset of past instances where people claimed a new technology would lead to societal catastrophe, with variables such as “multiple people working on the tech believed it was dangerous.”

Slightly longer TL;DR: Some AI risk skeptics are mocking people who believe AI could threaten humanity’s existence, saying that many people in the past predicted doom from some new tech. There is seemingly no dataset which lists and evaluates such past instances of “tech doomers.” It seems somewhat ridiculous* to me that nobody has grant-funded a researcher to put together a dataset with variables such as “multiple people working on the technology thought it could be very bad for society.”

*Low confidence: could totally change my mind

———

I have asked multiple people in the AI safety space whether they were aware of any kind of "dataset of past predictions of doom (from new technology)". There have been some articles and arguments floating around recently, such as "Tech Panics, Generative AI, and the Need for Regulatory Caution [https://datainnovation.org/2023/05/tech-panics-generative-ai-and-regulatory-caution/]", in which skeptics say we shouldn't worry about AI x-risk because there are many past cases where people in society made overblown claims that some new technology (e.g., bicycles, electricity) would be disastrous for society.

While I think it's right to consider the "outside view" on these kinds of things, I think that most of these claims 1) ignore examples where there were legitimate reasons to fear the technology (e.g., nuclear weapons, maybe synthetic biology?), and 2) imply the current worries about AI are about as baseless as claims like "electricity will destroy society," whereas I would argue that the claim "AI x-risk is >1%" stands up quite well against most current scrutiny. (These claims also ignore the anthropic argument/survivorship bias: if they ever had been right about doom, we wouldn't be here to discuss it.)
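As a rough illustration of what such a dataset's schema might look like, here is a minimal sketch in Python. The field names and the example row are my own hypothetical choices for illustration, not drawn from any existing project:

```python
from dataclasses import dataclass, field

@dataclass
class DoomPrediction:
    """One historical claim that a new technology would cause societal catastrophe."""
    technology: str                       # e.g. "bicycles", "electricity", "nuclear weapons"
    year_of_claim: int                    # roughly when the prediction was made
    claim_summary: str                    # short description of the predicted catastrophe
    insiders_agreed: bool                 # did multiple people working on the tech think it was dangerous?
    prominent_claimants: list[str] = field(default_factory=list)
    outcome: str = "unknown"              # e.g. "no catastrophe", "partial harms", "near miss"
    sources: list[str] = field(default_factory=list)

# Purely illustrative example row, not a vetted historical entry:
example = DoomPrediction(
    technology="electricity",
    year_of_claim=1890,
    claim_summary="Widespread electrification will be disastrous for society",
    insiders_agreed=False,
)
```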
Protesting at leading AI labs may be significantly more effective than most protests, even ignoring the object-level arguments for the importance of AI safety as a cause area. The impact per protester is likely unusually big, since early protests involve only a handful of people and impact probably scales sublinearly with size. And very early protests are unprecedented and hence more likely (for their size) to attract attention, shape future protests, and have other effects that boost their impact.
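To make the sublinear-scaling point concrete, here is a toy calculation; the exponent 0.5 is an arbitrary assumption for illustration, not an estimate of how protest impact actually scales:

```python
# Toy model: total protest impact ~ size**alpha with alpha < 1 (sublinear),
# so impact per protester ~ size**(alpha - 1), which shrinks as protests grow.
# Very small, early protests therefore get outsized per-person impact.
alpha = 0.5  # assumed scaling exponent, purely illustrative

def impact_per_protester(size: int, alpha: float = alpha) -> float:
    return size**alpha / size

for size in (5, 50, 5000):
    print(size, round(impact_per_protester(size), 4))
# 5 -> 0.4472, 50 -> 0.1414, 5000 -> 0.0141: under this model, each person in a
# 5-person protest contributes ~30x the impact of each person in a 5000-person protest.
```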
Hey - I’d be really keen to hear people's thoughts on the following career/education decision I'm considering (esp. people who think about AI a lot):
* I’m about to start my undergrad studying PPE at Oxford.
* I’m wondering whether re-applying this year to study CS & philosophy at Oxford (while doing my PPE degree) is a good idea.
  * This doesn’t mean I have to quit PPE or anything.
  * I’d also have to start CS & philosophy from scratch the following year.
* My current thinking is that I shouldn’t do this - I think it’s unlikely that I’ll be sufficiently good to, say, get into a top 10 ML PhD or anything, so the technical knowledge that I’d need for the AI-related paths I’m considering (policy, research, journalism, maybe software engineering) is either pretty limited (the first three options) or much easier to self-teach and less reliant on credentials (software engineering).
* I should also add that I’m currently okay at programming anyway, and plan to develop this alongside my degree regardless of what I do - it seems like a broadly useful skill that’ll also give me more optionality.
* I do have a suspicion that I’m being self-limiting re the PhD thing - if everyone else is starting from a (relatively) blank slate, maybe I’d be on equal footing?
  * That said, I also have my suspicions that the PhD route is actually my highest-impact option: I’m stuck between 1) deferring to 80K here, and 2) my other feeling that enacting policy/doing policy research might be higher-impact/more tractable.
  * They’re also obviously super competitive, and seem to only be getting more so.
* One major uncertainty I have is whether, for things like policy, a PPE degree (or anything politics-y/economics-y) really matters. I’m a UK citizen, and given the record of UK politicians who did PPE at Oxford, it seems like it might?

What mistakes am I making here/am I being too self-limiting? I s
I thought the recent Hear This Idea podcast episode with Ben Garfinkel [https://hearthisidea.com/episodes/garfinkel] was excellent. If you are at all interested in AI governance (or AI safety generally), you probably want to check it out.
I'm thinking about the matching problem of "people with AI safety questions" and "people with AI safety answers". Snoop Dogg hears Geoff Hinton on CNN (or wherever), asks "what the fuck?" [https://twitter.com/pkedrosky/status/1653955254181068801], and then tries to find someone who can tell him what the fuck.

I think normally people trust their local expertise landscape--if they think the CDC is the authority on masks they adopt the CDC's position, if they think their mom group on Facebook is the authority on masks they adopt the mom group's position--but AI risk is weird because it's mostly unclaimed territory in their local expertise landscape. (Snoop also asks "is we in a movie right now?" because movies are basically the only part of the local expertise landscape that has had any opinion on AI so far, for lots of people.) So maybe there's an opportunity here to claim that territory (after all, we've thought about it a lot!).

I think we have some 'top experts' who are available for, like, mass-media things (podcasts, blog posts, etc.) and 1-1 conversations with people they're excited to talk to, but are otherwise busy / not interested in fielding ten thousand interview requests. Then I think we have tens (hundreds?) of people who are expert enough to field ten thousand interview requests, given that the standard is "better opinions than whoever they would talk to by default" instead of "speaking to the whole world" or w/e.

But just like connecting people who want to pay to learn calculus and people who know calculus and will teach it for money, there's significant gains from trade from having some sort of clearinghouse / place where people can easily meet. Does this already exist? Is anyone trying to make it? (Do you want to make it and need support of some sort?)
Why aren't we engaging in direct action (including civil disobedience) to pause AI development?

Here's the problem: Yudkowsky [https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/]: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."

Here's one solution: FLI Open Letter [https://futureoflife.org/open-letter/pause-giant-ai-experiments/]: "all AI labs...immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

Here's what direct action in pursuit of that solution could look like (most examples are from the UK climate movement):
* Picketing AI offices [https://twitter.com/Radlib4/status/1653135998501662722?s=20] (this already seems to be happening!)
* Mass non-disruptive protest [https://www.theguardian.com/world/2023/apr/21/big-one-extinction-rebellion-cliimate-protest-london-xr]
* Strikes/walk-outs [https://www.theguardian.com/science/2021/sep/24/people-in-99-countries-take-part-in-global-climate-strike] (by AI developers/researchers/academics)
* Slow marches [https://www.itv.com/news/border/2023-04-29/just-stop-oil-protestors-stage-slow-march-through-town-centre]
* Roadblocks [https://www.bbc.co.uk/news/uk-england-london-59061509]
* Occupation [https://www.theguardian.com/uk/2012/jan/16/belfast-occupy-bank-of-ireland] of AI offices
* Performative vandalism [https://www.bbc.co.uk/news/uk-england-gloucestershire-64193016] of AI offices
* Performative vandalism of art [https://www.theguardian.com/environment/2022/oct/14/just-stop-oil-activists-throw-soup-at-van-goghs-sunflowers]
* Sabotage of AI computing infrastructure (on the model of ecotage [https://www.theguardian.
I wonder if anyone has moved from longtermist cause areas to neartermist cause areas. I was prompted by reading the recent Carlsmith piece and Julia Wise's Messy personal stuff that affected my cause prioritization. 
I'd like to try my hand at summarizing / paraphrasing Matthew Barnett's interesting twitter thread on the FLI letter [https://twitter.com/MatthewJBar/status/1643775707313741824].[1] The tl;dr is that trying to ban AI progress will increase the hardware overhang, and risk the ban getting lifted all of a sudden in a way that causes a dangerous jump in capabilities.

Background reading: this summary will rely on an understanding of hardware overhangs [https://aiimpacts.org/hardware-overhang/] (second link [https://www.lesswrong.com/tag/computing-overhang]), which is a somewhat slippery concept, and one I myself wish I understood at a deeper level.

***

BARNETT AGAINST MODEL SCALING BANS

Effectiveness of regulation and the counterfactual

It is hard to prevent AI progress. There's a large monetary incentive to make progress in AI, and companies can make algorithmic progress on smaller models. "Larger experiments don't appear vastly more informative than medium sized experiments."[2] The current proposals on the table only ban the largest runs. Your only other option is draconian regulation, which will be hard to do well and will have unpredictable and bad effects.

Conversely, by default, Matthew is optimistic about companies putting lots of effort into alignment. It's economically incentivized. And we can see this happening: OpenAI has put more effort into aligning its models over time, and GPT-4 seems more aligned than GPT-2.

But maybe some delay on the margin will have good effects anyway? Not necessarily:

Overhang

Matthew's arguments above about algorithmic progress still occurring imply that AI progress will occur during a ban.[3] Given that, the amount of AI power that can be wrung out of humanity's hardware stock will be higher at the end of the ban than at the start. What are the consequences of that? Nothing good, says Matthew: First, we need to account for the sudden jump in capabilities when the ban is relaxed. Companies will suddenly train up to the economicall
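To make the overhang reasoning more concrete, here is a toy back-of-the-envelope model; the growth rates are arbitrary assumptions for illustration, not Matthew's numbers or anyone's forecast:

```python
# Toy overhang model: the largest permitted training run is capped during a ban,
# while hardware affordability and algorithmic efficiency keep improving in the background.
# When the ban lifts, the largest economical run jumps discontinuously instead of
# the same progress arriving gradually. All growth rates below are illustrative assumptions.

hardware_growth = 1.3   # assumed yearly growth in affordable compute
algo_growth = 1.5       # assumed yearly growth in effective compute from algorithmic progress
ban_years = 3

# Effective capability reachable the moment the ban is lifted, relative to the capped level:
overhang_factor = (hardware_growth * algo_growth) ** ban_years
print(f"Capability jump when the ban lifts: ~{overhang_factor:.1f}x the capped level")
# With these made-up rates, a 3-year ban ends with a sudden ~7.4x jump in what can be trained.
```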