Eric Adams won the NYC mayoralty in 2021 after a narrow Democratic primary. In the final ranked-choice round between Adams and Kathryn Garcia, Adams won by 42.9% to 42.2% (a margin of about seven thousand ballots).

Before the election, I disagreed with other EAs about the relative merits of Adams and Garcia. I wrote an analysis of the Democratic candidates. One of the most heavily weighted factors in this analysis was fitness for office - politically neutral considerations like experience, education, and history of corruption. I noted that Adams had a pattern of political misbehavior, so I rated him poorly for fitness for office, and ultimately I gave Adams the lowest combined score of all the major candidates. Meanwhile, I rated Garcia the highest.

Shortly before election day, my webpage caught the attention of a couple of NYC Effective Altruists who happened to be working for the Adams campaign. I was surprised to learn that Adams had EA support. We had a brief Zoom call - I, the mentally ill NEET writing a political blog from my parents' basement on the other side of the country, versus the productive EA who was doing real work on the ground in her own city.

The main factor in the two EAs' support for Adams was his commitment to animal rights. On that we agreed - I too had noted that Adams supported animal rights, and gave him points accordingly. However, the two EAs who worked for Adams were dedicated animal activists, whereas I have always taken a more pluralist view of public policy: I did rate animal welfare highly (in fact I rated it the #2 most important issue, behind fitness for office), but I wasn't focused on it as single-mindedly as they were.

As I recall, the EAs who worked for Adams didn't give me anything to assuage my concerns about Adams' views on housing or other policy topics. Those were matters of public record anyway. The main point of contention was evaluating Adams' fitness for office. They said they were pretty shocked to see me dismiss his character and merits so readily; they worked closely with him and knew him, and by their account he was a pretty good guy. We only had a brief conversation without going into details, but out of epistemic modesty, plus faith that EAs within his administration would be a positive influence, I raised Adams' fitness-for-office score (I still gave him a low score, but it was higher than before).

I don't know what Adams has achieved for animal welfare in his tenure. But he won't be able to help animals when he's in prison. He has been indicted on five federal counts, including wire fraud and bribery, and Manifold gives him an 83% probability of felony conviction. Adams allegedly took bribes from the Turkish government in exchange for allowing the skyscraper that housed its consulate to open despite fire safety violations. Adams is also under investigation for allowing the Turkish government to fund his mayoral campaign. Other members of Adams' administration are under investigation for other law and ethics violations, and Adams is being sued for sexual assault.

The fire safety of the Turkish building was not the only thing Turkey sought in bribing Adams. They also pressed him to stay silent about the Armenian Genocide, and a staffer assured them that he would. Of course, it's possible that Adams would have stayed silent about the Armenian Genocide with or without the bribes. After all, he has a track record of philia for Turkey and Azerbaijan, two countries with strong racist tendencies against Armenians, one of which (Azerbaijan) actually implemented its own, more modest genocide against the Armenians of Nagorno-Karabakh during Adams' mayoral term. Shortly before the last Armenians of Nagorno-Karabakh fled in fear of Azeri persecution, Azerbaijan paid for two of Adams' staffers to visit the country.

I kinda doubt that the EA community could have made a difference and won the election for Garcia even if we'd all tried; 0.7% of NYC is still a lot of voters. And for all we know, maybe the cause of animal rights has been significantly advanced by Adams' election, considering whatever he may have achieved before he (hopefully) gets removed from office in disgrace. But I don't care about the benefits for animals as much as I care about the nonsense he's brought to America's political system, along with his shameless philia for genocidal dictatorships. I am personally Armenian, so I'm relatively miffed to see that a man supported by EA animal advocates took Turkish and Azeri bribes. But I think that no matter who you are, honorable EA behavior would be to steer clear of this type of politician, even if they are pro-animal-rights. And I feel like this level of moral debasement and crookery should have been noticeable to people within his administration, but what do I know.

I don't know anything in particular about the EAs who worked for Adams; I don't even remember their names. Does anyone know if they're still around, if they've resigned in protest, or anything like that?

Edit: I want to clarify that I wouldn't necessarily begrudge someone for working in his administration in the past, if they had good EA reasons to do so. I know it can be tough to find a job, so do what you have to do even if your boss is kind of scummy. But working to get him elected, or staying aboard the ship after it becomes clear he's this level of crook, are things that would irk me.

Comments (2)



I don't think the community tag is warranted on this post.

I sort of think this is a reason not to have EA-endorsed politicians unless someone has really done the due diligence. This is a pretty high-trust community, and people expect claims made confidently to be robustly tested, but political recommendations (and some charity ones, to be fair) seem much less well researched than general discussions of policy.
