
A few months ago, Dylan Matthews, a journalist sympathetic to effective altruism, wrote this in Vox about enthusiasm for AI safety research:

"At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research."

I have seen this claim made in various forms several times. While I understand why Dylan Matthews could form this view at a distance from the people involved, I think it is an inaccurate description.

Reason one

The early adopters of the view were mostly mathematicians, philosophers and interdisciplinary researchers. I personally first became worried about artificial intelligence when the community was much smaller, and at the time I was studying genetics and economics. Many computer scientists have since come around, but they were if anything relatively resistant compared to people in related fields.

Reason two

More importantly, the belief that artificial intelligence presents a big risk is more likely to lead you to despair than to a rosy outlook in which you get to be a hero. In the early days, most people who were worried about AI felt completely disempowered, because i) there was no clear path to a solution, and indeed one may not exist at all, and ii) even if a solution exists, most of the people who were concerned, including me, thought they were not personally qualified to do any of the relevant work to solve the problem.

The thought that superintelligent machines may destroy everything you care about, and there may well be nothing you can do about it, is hardly the most appealing belief. Rather than walking into a phone booth and putting on a superhero outfit, many people who read these arguments became anxious and despondent. That remains the case today.

But it is a belief that nevertheless spread, in my view because the underlying arguments, best laid out in the book Superintelligence, are remarkably hard to convincingly rebut.

Reason three

Almost everyone I know who is now especially worried about artificial intelligence initially thought they could use their skills well in other cause areas, such as reducing poverty or animal suffering in factory farms. They didn't need to change the cause area they worked on to feel like they could greatly improve the world.

Even if the facts were true, the underlying argument can't work

Finally, even if the claim were true, I don't think it could be a coherent argument against worrying about AI risk.

The reason is this.

Imagine the reverse were the case and it wasn't the domain experts most qualified to work on the problem who thought it was a big deal. Instead it was chefs, musicians and magazine copy editors who were most concerned. Would that be an argument in favour of worrying more, because such non-field experts could clearly have no self-serving bias that would cause them to worry?

I don't think so. The concern of non-field experts is clearly less persuasive and I expect almost everyone would agree. Indeed, many people have claimed in the past that the fact that many computer scientists didn't seem so concerned was a good reason not to be worried.

But if you want to say it's both unconvincing when field experts like computer scientists are concerned, and also unconvincing when non-field experts like farmers are concerned, congratulations: you've just made your view unresponsive to anyone else's judgement. That's a very bad place to be.

In fact, the concern of computer scientists about artificial intelligence - along with that of other relevant domain experts such as philosophers, mathematicians, brain scientists and machine learning experts - is, on balance, the strongest piece of evidence available, because they are the most likely to know what they are talking about.

Finally, how could the situation ever be otherwise? It's natural that the first people to notice a new potential problem from a technology are domain experts, or at least people in adjacent domains. I bet the first people who sounded the alarm about potential threats from nuclear weapons were people who were close to relevant physics research. No one else could have been aware of the issue, or able to evaluate the merits of the arguments. I don't think the fact that physicists were initially the main group worried about the power of nuclear weapons would be a good reason to doubt them.

What biases might really create problems?

The fact that the argument above doesn't work isn't evidence that AI really is a problem. The opposite of a wrong idea isn't a right one. Maybe there are other cognitive biases causing people to exaggerate the problem.

For example, throughout history it has been common for people to believe they are living at a particularly crucial moment, when either a huge disaster or a revolutionary gain could occur. Sometimes they are right (e.g. at the point when nuclear weapons were invented), but usually they have been wrong. People concerned with catastrophic risks call this 'millennialist cognitive bias', discussed here.

But as is usually the case, when it comes to weighing artificial intelligence against other causes there are factors that could bias you both in favour of one view, and others that could bias you in favour of the other view. I don't find throwing potential biases at people you disagree with to be very helpful, unless you can get evidence about their relative magnitude.
Comments (7)



Jeff pointed out at the time that the money in CS is in for-profits, not AI research. So CS majors would do much better, in terms of profit and glory, to believe that earning to give, not research, is the best way to save the world. Which opens up a different line of possible criticism. :-)

Yeah, Matthews really should have replaced "CS" with "math" or "math and philosophy".

That would be more accurate, more consistent with AI safety researchers' self-conception, and less susceptible to some of these counterarguments (especially Julia's point that in CS, the money and other external validation is much more in for-profits than in AI safety).

Perhaps a more charitable framing of Dylan's argument is not that computer scientists think AI is a problem, but that they think the solution is to do more CS research -- rather than, e.g., political lobbying, changing society's general approach to AI, etc.

Of course, some replies include that (1) some people have done activities outside the realm of CS research, (2) it's often good to focus on the area you know, and (3) CS research may be particularly relevant to solving this problem, compared with other issues that are primarily political.

This is a DH1 argument on the part of Matthews:

...if a senator wrote an article saying senators' salaries should be increased, one could respond:

"Of course he would say that. He's a senator."

This wouldn't refute the author's argument, but it may at least be relevant to the case. It's still a very weak form of disagreement, though. If there's something wrong with the senator's argument, you should say what it is; and if there isn't, what difference does it make that he's a senator?

Also, it seems likely to me that publicizing concern with AI risk works against the interests of many in the tech community.

If Matthews wants to engage with this further, I think it's only fair to the people he's criticizing to take the time to read Superintelligence so he can understand the position he's arguing against.

http://www.vox.com/2014/8/19/6031367/oxford-nick-bostrom-artificial-intelligence-superintelligence

Dylan Matthews has read Superintelligence and wrote an article+interview with Bostrom about it in August 2014.

Thanks for the link, I wasn't aware of that. Still disappointed to see an ad hominem argument. It's true that Matthews only included it as a throwaway sentence, but this is the kind of characterization that readers can latch on to... feels like within-EA movement disagreements should be kept above the ad hominem level. (Also in favor of engaging opponents in a classy way, but let's walk before we run.)

When experts say the fate of the world lies in the hands of their field... it's not an indication they're wrong. It's just not really strong evidence they're right either. In any field, we can reasonably expect researchers to believe their field has great significance. From the outside, they all sort of sound the same. When I read this quip, that's what I think of.

Unfortunately, with nuclear weapons, it was virtually impossible to convince officials of their feasibility right up until the months leading to Hiroshima, and even then really only with demos, not expert opinions. It's a really tough sell. Unless it's being explained and argued extremely well - and even then - I'll be unlikely to be moved by just about anything, because it sounds very similar to all the times people have tried to convince me that something they're really into has all the answers.
