Cross-posted from something I wrote on Facebook. I doubt there's really anything new or important here for an EA audience, but I figured I'd cross-post to get feedback.

Epistemic status: Not fully thought through, views still in flux. I do AI risk research, but I wouldn't consider myself especially knowledgeable compared to lots of other people in this group.

My current take on existential AI risk: I don't know whether the risks are extremely likely (Eliezer Yudkowsky, Zvi Mowshowitz, etc.), extremely unlikely (Yann LeCun, Robin Hanson, etc.), or somewhere in between (Paul Christiano, etc.). I also don't know if a temporary or permanent slowdown is the right action to take at this time given the costs of such a move. To me it looks like there are experts on all sides of this debate who are extremely smart and extremely knowledgeable (at least about important subsets of the arguments and relevant background knowledge*), and I don't feel I know enough to come to any really strong conclusions myself.

I am however in favor of the following:

1) Looking for technical and policy proposals that seem robustly good across different perspectives. For example, Robin Hanson's idea of Foom Liability (https://www.overcomingbias.com/p/foom-liability).

2) Pouring much more resources into getting better clarity on the problems and the costs / benefits of potential solutions. Some ideas in this category might include:

a) Providing financial, prestige, or other incentives to encourage experts to write up their views and arguments on these topics in comprehensive, clearly articulated ways.

b) Pairing experts with journalists, technical writers, research assistants, etc. to help them write up their arguments at minimal cost to themselves.

c) Running workshops, conferences, etc. where experts can sit down and really discuss these topics in depth.

d) Massively increasing the funding / prestige / etc. for people working to make the arguments on all sides more rigorous, mathematical, and empirical. This can take the form of direct funding, setting up new institutions or journals, funding new positions at prestigious universities, funding large research prize contests, etc.

e) Funding more research into how to make good policy decisions despite all the extreme uncertainties involved.

[Conflict of interest note: I work in this area, particularly (e) and a bit of (d), so some of the above is basically calling for people to give researchers like me lots of money.]

3) Massively increasing the funding / prestige / etc. for direct work on technical and policy solutions. However, this needs to be done very carefully, in consultation with experts on all sides of the discussion, to make sure it's done in a way that pretty much all the experts would agree seems worth it. Otherwise this runs the risk of inadvertently funding research or encouraging policies that end up making the problems worse - as has in fact happened in the past (at least according to some of the experts). Several of the ideas I mentioned above might also run similar risks, although I think to a lesser degree.

In particular, I think I'd be very interested in seeing offers of lucrative funding and prestigious positions aimed at getting AI capabilities researchers to switch into safety / alignment research. Maybe Geoffrey Hinton can afford to leave Google over safety concerns if he wants, but lots of lower-level researchers cannot afford to leave capabilities jobs and switch to safety research while still paying their bills. I'd love to see that dynamic change.

4) Increasing awareness of the potential risks among the public, academics, and policy makers, although again this needs to be done carefully in consultation with experts. (See https://www.cold-takes.com/spreading-messages-to-help.../.)

5) Doing whatever it takes to generally improve global tech policy coordination and cooperation mechanisms between governments, academia, and corporations.

-----

* Note: I doubt anybody has real expert-level knowledge on *all* important facets of the conversation. If you delve into the debates, they get very complex very fast and draw heavily on fields as diverse as computer science, mathematics, hardware and software engineering, economics, evolutionary biology, cognitive science, political theory, sociology, corporate governance, epistemology, ethics, and several others.

I also think that a lot of people tend to underestimate the importance of background knowledge when judging who has relevant "expertise" in a field. In my experience, at least, people with a lot of background domain knowledge in a field tend to have good intuitions about which theories and ideas are worth their time to look into and which are not. It can be very difficult for people who are not themselves experts in the field (and sometimes even for people who are) to judge whether a domain expert is being dismissive for valid intuition-based reasons or because they're being obtuse or biased. Often it's some mixture of both, which makes it even harder to judge.

All of this touches on the topic of "modest epistemology" - i.e., under which circumstances we should defer to "experts" rather than forming our own opinions based solely on the object-level arguments, who should count as a relevant "expert" (hint: not necessarily the people with the fanciest credentials), how much to defer, etc. More broadly, this falls under the epistemology of disagreement. This is one of my all-time favorite topics and an ongoing area of research for me.
