SiebeRozendal

2872 karma

Bio


Unable to work. Was community director of EA Netherlands; had to quit due to long COVID. Everything written since 2021 was written with considerable brain fog, and I've been bad at maintaining discussions since.

I have a background in philosophy, risk analysis, and moral psychology. I also did some x-risk research. Currently most worried about AI and US democracy. (Regarding the latter, I'm highly ranked on Manifold).

Comments

Here's an argument I made in 2018 during my philosophy studies:

A lot of animal welfare work is technically "long-termist" in the sense that it's not about helping already existing beings. Farmed chickens, shrimp, and pigs live only a couple of months, and farmed fish a few years; advocacy work typically takes longer than that to have an impact, so the animals it helps mostly don't exist yet.

For most people, this is no reason not to work on animal welfare. It may be unclear whether creating new creatures with net-positive welfare is good, but only the most hardcore presentists would argue against preventing and reducing the suffering of future beings.

But once you accept the moral goodness of that, there's little to morally distinguish the near-future suffering of chickens from the astronomical amounts of suffering that an Artificial Superintelligence could inflict on humans, other animals, and potential digital beings. It could even lead to the spread of factory farming across the universe! (Though I consider that unlikely.)

The real distinction comes in at the empirical uncertainty/speculativeness of reducing s-risk. But I'm not sure that uncertainty is treated the same way as uncertainty about shrimp or insect welfare.

I suspect many people instead work on effective animal advocacy because that's where their emotional affinity lies and it has become part of their identity, because they don't like acting on purely theoretical philosophical grounds, and because they feel discomfort imagining how their social environment would react if they were to work on AI/s-risk. I understand this, and I love people for doing so much to make the world better. But I don't think it's philosophically robust.

Have you weighed the potential reputational risks if AI development follows a more moderate trajectory than anticipated? If we see AI plateau at impressive but clearly non-transformative capabilities, this strategic all-in approach could affect 80,000 Hours' credibility for years to come. 

I feel like this argument has been implicitly holding back a lot of EA focus on AI (for better or worse), so thanks for putting it so clearly. I always wonder about the asymmetry of it: what about the reputational benefits that accrue to 80K/EA for correctly calling the biggest cause ever? (If they're correct)

One perspective one could have is that this is a positive-sum approach to influence-/power-seeking: supporting neglected policies that would benefit large parts of the US public buys goodwill, helps develop connections with other funders, and might put people who are highly sympathetic to EA ideas into positions of power. With the current state of the EA brand, this might not be such a bad idea.

There are other ways of seeking influence, but they tend to have fewer positive effects (donating to politicians, trying to run one's own candidates for office), and solely relying on the strategy of "become experts and try to convince those in power of the necessary policies" isn't really bearing fruit. It also seems increasingly untenable to ignore politics, with the US and UK (and the Netherlands) already drastically slashing international aid and the AGI trajectory depending heavily on those in power.

It is of course different from the default EA strategy of "do the actual thing you believe is directly most cost-effective and communicate very explicitly about your theory of change". But I don't think that explicitly communicating this would be well-received by third parties. Even explicitly thinking of it this way internally is risky PR-wise. 

It does seem important to clearly delineate within EA which communication (and whose) is meant to be representative of one's actual thinking, and which isn't. Muddying this could be quite detrimental to EA in the long term. I'm not sure how OpenPhil should have acted here. Perhaps it would have been better not to post this to the EA Forum, so as not to signal "we believe this is good on EA grounds".

All in all, I'm positively inclined towards this fund though. 

I don't think I follow: why can only political moderates make it to the final four? 

(It does seem like there are better ways to implement RCV than this, though, because it still has many first-past-the-post dynamics.)

Here's one idea as a reference: 

In 2024, the campaign for ranked-choice-voting-style reforms in 7 states cost nearly $100 million and failed in 6 of the 7 (RCV was narrowly protected in Alaska). The linked article is a decent description.

Ranked Choice Voting is a good way to reduce polarization in politics, elect more popular (and less extreme) candidates, and increase competition. It would also reduce the power of Trump over the Republican Party, which could lead to more Congressional pushback. 
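For reference, here's a minimal sketch of how instant-runoff counting (the tallying rule usually behind RCV) works, with made-up candidates and ballots, just to illustrate the vote-splitting point:

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff tally: each ballot lists candidates in order of
    preference (this sketch assumes every ballot ranks all candidates).
    Repeatedly eliminate the candidate with the fewest first-choice votes
    among those remaining, transferring ballots to each voter's next
    surviving choice, until someone holds a majority."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tallies = Counter(next(c for c in ballot if c in remaining)
                          for ballot in ballots)
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()) or len(remaining) == 1:
            return leader
        # No majority yet: drop the last-place candidate (ties broken
        # arbitrarily) and recount.
        remaining.remove(min(remaining, key=lambda c: tallies[c]))

# Made-up vote-splitting example: under plain plurality, C wins with 4 of 9
# first choices; under instant runoff, B is eliminated, B's ballots transfer
# to A, and the more broadly preferred A wins 5-4.
ballots = [["C", "A", "B"]] * 4 + [["A", "B", "C"]] * 3 + [["B", "A", "C"]] * 2
print(instant_runoff(ballots))  # -> A
```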

Despite the disappointing 2024 results, I believe there is a significant opportunity in 2026: the midterms have a more politically engaged turnout, and given the current situation, voters might be more open to reform.

There are alternatives to Ranked Choice Voting (like approval voting), and I'm no expert on them. It does seem like, if this campaign were to run in 2026, it would need to start soon.

There's also a decent chance that it would be perceived as very hostile by the current administration, and retaliation could do significant damage to the community or to the specific funders behind a campaign.

Ah yeah, that seems fine then! "Life" is an imprecise term and I'd prefer "sentience" or "sentient beings", but maybe I'm overdoing it.

For forecasts, here are Manifold's US Democracy questions, which I suggest sorting by total traders (unfortunately, anything with n < 30 traders becomes quite unreliable). I've also compiled a Manifold dashboard here, where questions are grouped a bit more by theme.

Main questions are:

  • "If Trump is elected, will the US still be a liberal democracy at the end of his term? (58%, n = 191)" - criticism of the V-DEM benchmark here
  • "Will the United States experience a constitutional crisis before 2030? (73%, n = 123)"
  • "Will Donald Trump arrest his political opponents [before 2026]? (41%, n = 76)"
  • "Will a sitting US President refuse to follow or ignore a Supreme Court ruling by 2032? (55%, n = 68)"
  • "Will Donald Trump remain de facto leader of the United States beyond the end of his second term? (7%, n = 44)"

I really like that you've chosen this topic and think it's an important one! I wrote my MA Philosophy thesis on this (in 2019, now outdated).

On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive

I want to flag that I disagree with this framing, as it's very anthropocentric. There are futures in which we go extinct but that are nevertheless highly valuable (e.g. happy sentient AI spreading via von Neumann probes). Perhaps more empirically relevant: I expect almost all effects to go via making the transition to superintelligence go well, and the most distinctive action is focusing on digital sentience (which has little effect on extinction risk and a large effect on the value of the future).

That sounds like [Cooperative AI](https://www.cooperativeai.com/post/new-report-multi-agent-risks-from-advanced-ai) 

