
Please offer me a quantitative estimate, and supporting reasoning, for what you think the additional value is of having an EA like Wayne as mayor of Berkeley. In dollars, QALYs, or whatever makes sense to you.

Wayne is the leader of Direct Action Everywhere. He is now running for mayor of Berkeley.

Wayne has told me he wants to use evidence-based reasoning for deciding city policy and has identified as EA for years. I am reasonably confident he would take actions in favor of cause areas like animal welfare, poverty, and climate change.

Consider both immediate impact and tail impact / n-order effects, the latter of which may predominate. For example: what are the chances this would unlock additional political wins for us that would otherwise be unavailable?

This is very important for deciding whether people in the EA movement (particularly in Berkeley) should coordinate to help him get elected or not (and whether I should spearhead that effort or not).

His campaign site: https://www.wayneformayor.com/.

6 Answers

I recall Hsiung being in favour of conducting disruptive protests against EAG 2015:

I honestly think this is an opportunity. "EAs get into fight with Elon Musk over eating animals" is a great story line that would travel well on both social and possibly mainstream media.
...

Organize a group. Come forward with an initially private demand (and threaten to escalate, maybe even with a press release). Then start a big fight if they don't comply.

Even if you lose, you still win because you'll generate massive dialogue!

It is unclear whether the motivation was more 'blackmail threats to stop them serving meat' or 'as Elon Musk will be there, we can co-opt this to raise our profile'. Whether Hsiung calls himself an EA or not, he evidently missed the memo on 'eschew narrow-minded, obnoxious defection against others in the EA community'.

For similar reasons, it seems generally wiser for a community not to help people who previously wanted to throw it under the bus.

Where does this quote come from?

This was posted to a relatively large (>100 people) but private FB group where various people who were active in EA and animal activism were talking to each other. I can confirm that it is accurate (since I am still part of the group).

Buck

I think he wouldn't have thought of this as "throwing the community under the bus". I'm also pretty skeptical that this consideration is strong enough to be the main one here (as opposed to, e.g., the consideration that Wayne seems far more interested in making the world better from a cosmopolitan perspective than the other candidates for mayor).

I find "Wayne has told me he wants to use evidence-based reasoning for deciding city policy and has identified as EA for years" to be extraordinarily weak evidence. Anyone can say either of those things.

From a few conversations with him, I think he semi-identifies as an EA. He's definitely known about EA for a while; there is evidence for that (just search his name in the EA Forum search).

I think he would admit that he doesn't fully agree with EAs on many issues. I think most EAs I know, if they were to know him, would classify him as EA-adjacent rather than as an EA.

He definitely knows far more about it than most politicians.

I would trust that he would use "evidence-based reasoning"; I'm sure he has for DXE. However, "evidence-based reasoning" by itself is a pretty basic claim at this point. It's almost meaningless at this stage; I think all politicians can claim this.

Well, I guess someone who hasn't heard of EA couldn't say that.

So I don't think that statement is quite as useless as you do. It shows that he:

A) Knows about EA

B) Has at least implied that he wants to use EA thinking in the role

EAs generally tend to think that the cause areas they focus on, and the prioritisation they do within those cause areas, allow them to be many orders of magnitude more effective than a typical non-EA. So I might expect him, in expectation, to be more effective than a typical mayor.

I do take your point that that alone isn't much and we will want to examine his track record and specific proposals in more detail.

Elizabeth
My default belief is that a politician implying something he knows the listener wants to hear is not evidence he believes, or will act on, that implication. Do you disagree with that, in general or for Hsiung in particular?
Timothy_Liptrot
No time to call up the paper, but the basic answer is that such statements are evidence. A common pattern is that politicians can propose policy A or B before entering office but have an incentive to implement A once elected. So some of the politicians who propose B will switch to A once elected, but none of the politicians who propose A will switch to B. For example, this happens with economic-security vs. economic-efficiency platforms in Latin America (politicians prefer efficiency policies more once elected): about half of the security campaigners switched in the study I read, and no efficiency campaigners switched to security after election. That makes the voter's choice simple. Even if you believe a politician might switch off B, the politician campaigning on B is always more likely to do B than the politician campaigning on A. (This applies to head-to-head elections only, of course.) So the optimal decision-theoretic choice is to support the politician who advocates for your policy in the election.
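A minimal sketch of that dominance argument, with made-up switch rates (the 0.5 only loosely echoes the "about half" figure mentioned above and is not taken from the study itself):

```python
# Illustrative only: both probabilities are assumptions, not data from the study.
p_does_b_if_campaigned_b = 0.5  # some B-campaigners defect to A after election
p_does_b_if_campaigned_a = 0.0  # no A-campaigners are observed switching to B

# If you want policy B, the B-campaigner weakly dominates for any defection rate,
# so long as A-campaigners never switch to B.
assert p_does_b_if_campaigned_b >= p_does_b_if_campaigned_a
print("Back the B-campaigner?", p_does_b_if_campaigned_b > p_does_b_if_campaigned_a)
```

The point of the sketch is just that the conclusion doesn't depend on the exact defection rate, only on the asymmetry in who switches.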
Elizabeth
But "I will use evidence based thinking" isn't a policy, and is completely unverifiable.
Timothy_Liptrot
Maybe. That's orthogonal to my comment; I was responding to the question of whether campaign statements count as evidence at all. As to the empirical content of "evidence-based policy", I'm not an expert on that question yet.
JackM
It’s certainly not strong evidence, but it is evidence. All other things equal, I would vote for someone who claims to be an effective altruist over someone who doesn’t. Politicians do have at least some incentive to deliver on promises: if they don’t, it should reduce their probability of getting elected or tarnish their reputation. I accept this is certainly not a perfect rule by any means, but it’s still got a grain of truth. Overall I don’t take much from him saying he’s an EA, but that doesn’t mean I take nothing at all.

He has in the past used evidence-based reasoning on other EA-related issues, particularly in the animal space, which is his focus. Well, only one example comes to mind specifically, surrounding the debate with Open Phil on cage-free campaigns. See here, here and here.

I'm personally skeptical of the disruption tactics DxE has used (under his lead). There was another debate on that, starting here, which suggested their disruption tactics might do more harm than good (DxE's official response was taken down, but you can find it here; Wayne didn't write it). ...

Wayne at least sort-of identified as an EA in 2015, eg hosting EA meetups at his house. And he's been claiming to be interested in evidence-based approaches to making the world better since at least then.

Here is a recent newspaper article describing Wayne as using cult-like techniques and abuse within DxE; see also here.

I don't have a quantitative estimate that isn't extremely made up, but right now, I'm in favor of Wayne winning the Berkeley election. I know there were accusations of DxE being culty and fucked up in various ways, and I believe most of them, though I'm not particularly in the know. I also agree that it would have been better if Wayne had handled CEA's reversal on serving meat at EAG more cooperatively. I don't think DxE's strategy is super compelling. I don't think Wayne is a perfect candidate, but I don't think his wrongdoings/level of uncooperativeness are out of distribution for a politician; they actually seem pretty middle-of-the-road in severity, though perhaps unusually lurid and interesting to discuss.

Those things just seem way, way less important to me than his stance on farm animal welfare. It seems like one candidate is strongly against the mass torture and killing of sentient beings, and has worked hard to stop it, while as far as I can tell the other doesn't particularly have a stance. It feels to me directionally analogous to choosing between a vaguely sketchy candidate who is actively anti-racist before the civil rights movement, or pro women's suffrage before women had the chance to vote, or in favor of letting in Jewish refugees during the Holocaust, and one who isn't (and who may or may not be sketchy). (I don't expect this argument to resonate with people who don't put a lot of moral weight on animal lives.) I don't know how he'd do good for animals as mayor (I know he wants to ban meat; I don't know how likely that is to work), and I'd be interested in arguments that it's implausible he'd do much good, but by default it doesn't seem crazy.

I don't know much about the incumbent; I'd guess we know more about Wayne's shortcomings than his, because Wayne has been more adjacent to EA. I also think Wayne has shown great energy and had some meaningful successes, e.g. in community organizing, and getting fur banned in Berkeley, that are indicative of him being an agenty person. My current silly guess based on not much at all is that electing him in expectation saves tens of thousands of farm animals from torture.
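One way to make a guess like that slightly more legible is a quick Fermi decomposition. The sketch below is purely illustrative: every input is a placeholder assumption, not a number anyone in this thread has given.

```python
# All inputs are hypothetical placeholders, not the commenter's actual reasoning.
p_policy_passes_if_elected = 0.2      # chance a meaningful animal-welfare measure passes
animals_affected_per_year  = 50_000   # farmed animals within reach of city-level policy
years_of_effect            = 3        # how long such a measure plausibly stays in force

expected_animals_helped = (p_policy_passes_if_elected
                           * animals_affected_per_year
                           * years_of_effect)
print(f"Expected animals helped by electing him: {expected_animals_helped:,.0f}")
# ~30,000 under these assumptions, i.e. "tens of thousands"
```

Swapping in your own inputs is the point; the structure just makes it clearer which factor any disagreement is about.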

My biggest worry is that Wayne's work will backfire and have a negative effect on efforts to help farmed animals, e.g. because he gets elected but handles things poorly.

[edited just to fix a typo]

DC

Thank you for this answer! I liked how reflectively balanced it was on the different considerations and how it tracked the object-level sentient beings at stake.

I am a rather strong proponent of publishing credible accusations and calling out community leadership if they engage in abuse-enabling behavior. I published a long post on Abuse in the Rationality/EA Community. I also publicly disclosed details of a smaller incident. People have a right to know what they are getting into. If community processes are not taking abuse seriously in the absence of public pressure, then the information has to be made public. Though anyone doing this should be careful.

Several people are discussing allegations of DXE being abusive and/or a cult. I joined in early 2020. I have not personally observed or heard any credible accusations of abusive or abuse-enabling behavior by the leadership of DXE during the time I have been a member. It is hard for me to know what happened in 2016 or 2017.

Given my history in the rationality community, you should trust that if I had evidence of systematic abuse within DXE that I could post about, I would post it. Even if I did not have the consent of victims to share evidence, I would still publicly state that I knew of abuse. I will note it is highly plausible DXE is acting badly behind closed doors; if this becomes clear to me, I will certainly let people know.

(This is explicitly not a claim that there is no evidence I find concerning. But I think you should be quite critical of most organizations and keep your eyes open for signs of abusive behavior.)

I am a member of DXE and have interacted with Wayne. I think that if you care about animals, the number of QALYs gained would be massive. In general, Wayne has always seemed like a careful, if overly optimistic, thinker to me. He always tries to follow good leadership practices. Even if you are not concerned with animal welfare, I think Wayne would be very effective at advancing good policies.

Wayne being mayor would result in huge improvements for climate change policy. Having a city with a genuine green policy is worth a lot of QALYs. My only real complaint about Wayne is that he is too optimistic, but that isn't the most serious issue for a mayor.

I haven't checked the claims myself, but "follow good leadership practices" seems to be a heavily disputed claim. Some people claim DxE is a cult, see e.g. here.

I think it's possible to use both good and bad leadership practices at the same time. I think the success of DxE has shown that he can do some things quite well.

I've met Wayne before. I get the impression he is quite intelligent and has definitely been familiar with EA for some time. At the same time, DXE has used much more intense / controversial practices in general than many EA orgs, many of which others would be very uncomfortable with. Very arguably this contributed to both their successes and their failures.

Sometimes I'm the most scared of the people who are the most capable. 

I really don't know much about Wayne, all things considered. I could imagine a significant amount of investigation concluding that he'd either be really great or fairly bad.

"really great or fairly bad" sounds like you're ruling out "really bad", but I think the worst outcomes are produced by combining very good with very bad leadership practices. If you're bad at everything, you're unlikely to have much of a negative impact because nobody will pay attention to you. So I would have said "really great or really bad". I agree with you otherwise.

This answer would be strengthened by one or two examples of his careful thinking, or especially by a counterpoint to the claim that DxE uses psychological manipulation techniques on its members.

Comments (9)

Hi folks, someone pointed me to this thread a while back. Coming back to it because I came on the site to apply for EA Global, which I'm looking forward to attending. Briefly: 

- Running for mayor was a VERY interesting experience that updated me against conventional political efforts to address large scale problems. The short answer to this is that there are far too many institutional obstacles within conventional politics. 

- My view on disrupting EAG was not, from what I recall, based on non-collaborative principles, but rather on the idea that a disruption might actually be good for all relevant stakeholders (including the speaker disrupted, Elon Musk). I think that's probably still the case, but I appreciate that's probably a minority view. 

- Re: DxE and cult-like behavior, hard to respond because there are no specific allegations, but I would say that the Berkeleyside article was sadly not done in a high-integrity way. Contrary sources I suggested the writer talk to were not even interviewed, including the then-current lead organizer (executive director equivalent) Almira Tanner. And clear factual errors were not addressed. I posted about some of them here: https://whhsiung.medium.com/factual-errors-in-berkeleysides-reporting-on-our-mayoral-campaign-eaf0e2d52b04

More generally, while I have not been in leadership for nearly 3 years (and did not have nearly the control folks sometimes suggest even when I was in leadership, e.g., I was opposed to most of the political disruptions performed by DxE), I try to be pretty open about my mistakes, and receptive to feedback. Always open to others' thoughts. For example, Jonas, I don't know you personally but certainly know you through mutual friends/acquaintances via your (important) animal welfare work, and I definitely would be interested in hearing what you think I could have done better. There are lots of hard choices one has to make when leading hundreds of people. Inevitably, one will make mistakes. (Some of mine include: pushing for a work culture that was probably unsustainable; not devoting enough time to personal relationships and communication when painful decisions were made.) The best way to respond to those mistakes, though, is to learn from them. Always eager to hear what I can learn from mine! 

Dony: Since we just posted our policy on political Forum content, I wanted to let you know that this post will be kept in the "Personal Blog" category (as it endorses a specific electoral candidate). However, I think it's an excellent question, and I would encourage you to promote this post on Facebook/Reddit/etc.

 

Personally, I doubt Wayne's victory would "unlock additional political wins" to any great extent; Berkeley is a small city, and I can't think of many (any?) other EA leaders who want to become elected leaders.

I do think it would be interesting to see how EA ideas could be implemented on the level of city policy, and Wayne could be the source of a lot of positive media coverage of EA ideas (journalists like Berkeley, and Wayne has solid media experience, e.g. his Ezra Klein interview).

However, there's also some risk that Berkeley's politics are such that a mayor whose ideas aren't in line with those of e.g. most City Council members might struggle a lot. Berkeley is one of the most progressive cities in the U.S.: if they haven't made strong progress in addressing poverty/climate change already, I'm not sure what Wayne's leadership would add.

On that topic, I'm curious about how Wayne's policies and approach differ from those of the current mayor. I can see what he wants to do on his site, but not what he thinks Jesse Arreguín is wrong about (or what he hasn't implemented well as mayor, even if he had the right ideas).

Some other EAs or people close to EA have run:

  • Meret Schneider, who has been interested in EA and animal welfare, and works at EAF's spin-off Sentience Politics, is a Swiss MP.

I can't think of many (any?) other EA leaders who want to become elected leaders.

While this was before contemporary EA, Peter Singer has run for office before:

In 1992, he became a founding member of the Victorian Greens.[43] He has run for political office twice for the Greens: in 1994 he received 28% of the vote in the Kooyong by-election, and in 1996 he received 3% of the vote when running for the Senate (elected by proportional representation).[43] Before the 1996 election, he co-authored a book The Greens with Bob Brown.[44]

https://en.wikipedia.org/wiki/Peter_Singer#Political_views

Of course, some of our even earlier predecessors, like the old school English utilitarians, or the Chinese Mohists, were substantially more interested in direct politics (rather than precursors to think-tank style policy analysis) than we are.

FWIW, I don't think this post actually endorses a specific candidate, and instead is asking if endorsing a specific candidate makes sense. Maybe that's too close for comfort, but I don't see this post as arguing for a particular candidate, but asking for arguments for or against a particular candidate. Thus as the policy is worded now this seems okay for frontpage or community to me.

Allowing such a post would totally neuter the rule. All one would have to do is take your draft "Trump is actually the best candidate from an EA perspective" and re-title it "Is Trump actually the best candidate from an EA perspective?" Scatter in a few question marks in the text and you are fully compliant.

I think I agree, but my point is maybe more that the policy as worded now should allow this, so the policy probably needs to be worded more clearly so that a post like this is more clearly excluded.

DC

For all of the new commenters: it would have been more valuable to comment when I asked this question, as I was considering trying to coordinate EAs, via an assurance contract, to provide enough volunteers to help his campaign win. Given how the comments turned out, I decided it was not worth pursuing, and I therefore assume the Wayne campaign will lose with 50-80% probability; more so because I didn't think EAs would buy in (for better or worse) than because I have a sense of how good Wayne's mayorship would actually be for the world on the object level.

(Since basically no one gave a good, quantitative answer to the question beyond their own social-emotional reasoning.)

So I've moved on. In general, dialogue about an election is worth much less in expectation a couple of weeks out from the election than it is further in advance.
