Today, The Guardian published an article titled "‘Eugenics on steroids’: the toxic and contested legacy of Oxford’s Future of Humanity Institute". I thought I should flag this article here, since it's such a major news organization presenting a rather scathing picture of EA and longtermism.

Personally, I see much of this article as unfair, but I imagine it will be successful in steering some readers away from engaging with the ideas of EA and longtermism.

I have a lot of thoughts about this article, but I don't want to turn this into an opinion piece. I'll just say that I like this quote from the recent conversation between Sam Harris and Will MacAskill: "ideas about existential risk and actually becoming rational around the real effects of efforts to do good, rather than the imagined effects or the hoped-for effects... all of that still stands. I mean, none of that was wrong, and none of that is shown to be wrong, by the example of Sam Bankman-Fried, and so I do mourn any loss that those ideas have suffered in public perception because of this." -Sam Harris, ~1:01:52, episode #361 of the Making Sense podcast.


Seems like a rather vague collection of barely connected anecdotes haphazardly strung together.

I am not particularly concerned as I don't see this persuading anybody.

Gonna roll the dice and not click the link, but will guess that Torres and/or Gebru gets cited extensively! https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty - such a shame this excellent piece doesn't get more circulation

I found this very concerning. I posted it myself, but then a helpful admin showed me where it had already been posted; I need to be better at searching :D

When we consider the impact of this, we need to forget for a moment everything we know about EA and imagine how the article will land for someone who has never heard of EA, or who has only a vague idea about it.

I do not agree at all with the content of the article, and especially not with its tone, which frankly surprised me coming from the Guardian. But even this shows how marginal EA is, even in the UK - one columnist can write a pretty ill-informed and poorly researched article, and apparently nobody challenged it.

BUT: I also see an opportunity. If someone credible from the UK EA community were to write an even-handed, balanced rebuttal of this piece, that might turn it into a positive. Such a rebuttal could focus on the way people like Toby Ord choose to live frugally and donate most of their salary to good causes, which is far more reflective of EA than the article's constant references to SBF (who, of course, is one of the very few EAs mentioned in it).

I'm not sure the editors at the Guardian realise how closely EA's philosophy aligns with many of the values they promote, and maybe this is a chance to change that and get some positive publicity.

I think the association of EA with eugenics and far-right views about race is potentially a bigger reputational hazard than what happened with FTX. With FTX, there is no evidence (that I’m aware of) that anyone in EA knew about the fraud before it became publicly known. The racism in EA, by contrast, is happening out in the open, and the community at large is complacent and, therefore, complicit.

Example 1: https://forum.effectivealtruism.org/posts/kgBBzwdtGd4PHmRfs/an-instance-of-white-supremacist-and-nazi-ideology-creeping

Example 2: https://forum.effectivealtruism.org/posts/mZwJkhGWyZrvc2Qez/david-mathers-s-quick-takes?commentId=AnGzk7gjzpbMsHXHi

  • Example 1 is referencing a post that's sitting at a score of –6. It was not a well-received post.
  • Example 2 is a very popular post denouncing Richard Hanania.

I would not interpret that as the community being complacent.

One of the defenses offered for the apparent number and weight of upvotes on the Ives Parr posts (cf. Example 1) was that voters may reach their voting decisions by comparing the amount of karma a post/comment has with the amount they think it should have, rather than by making an independent judgment. In other words, maybe some upvoters thought the post/comment deserved zero or somewhat negative karma, just not karma that negative.

I'm updating against that theory based on the voting on this comment, which is sitting at -43 as I write this. This is not a norm-breaking comment, and it's extremely uncommon for a comment to get to this level without being norm-breaking. While one may disagree with the perspective offered (and I do find portions of it to be overstated), evidentiary support has been offered. It is far more negative in karma than the Ives Parr posts; this says something concerning about what content the user base believes is deserving of a heavy karma penalty.

it's extremely uncommon for a comment to get to this level without being norm-breaking.

That doesn't match my impression. IMO internet downvotes are generally rather capricious and the Forum is no exception. For example, this polite comment recommending a neuroscience book got downvoted to -60, apparently leading the author to delete their account.

In any case, Concerned User is concerned about a reputational risk. From the perspective of reputational risk, repeatedly harping on, e.g., a downvoted post from many months ago that makes us look bad seems like a very unclear win. I didn't downvote Concerned User's comment and I think they meant well by writing it, but it does strike me as an attempt to charge into quicksand, and I tend to interpret the downvotes as a strong feeling that we shouldn't go there.

I've been reading discussions like this one on the EA Forum for years, and they always seem to go the same way. Side A wants to be very sure we're totally free of $harmful_ideology; Side B wants EA to be a place that's focused on factual accuracy and free of intellectual repression. The discussion generally ends up unsatisfactory to both sides. Side A interprets Side B's arguments as further evidence of $harmful_ideology. And Side B just sees more evidence of a chilling intellectual climate. So I respect users who have decided to just downvote and move on. I don't know if there is any solution to this problem -- my best idea is to simultaneously condemn Nazis and affirm a commitment to truth and free thought, but I expect this would end up going wrong somehow.

The base rate of good-faith, norm-compliant comments being massively downvoted remains extremely low. I think that is pretty relevant in choosing how much to update on the karma votes here and in the Parr votes. 

Substantively, the problem is that the evidence suggests the voting userbase is at least as opposed to Concerned User reminding us of Parr's posts as it is to Parr making the posts in the first place. While an optics-focused user might not be happy that Concerned User is bringing this up, one would expect their downvotes on the posts that created the optics problem in the first place to be at least as strong. If they aren't downvoting the Parr posts due to "free speech" concerns, they shouldn't be downvoting Concerned User for exercising their free-speech rights to call out what they see as a pattern of racism in EA.

One hypothesis: Forum users differ on whether they prioritize optics vs intellectual freedom.

  • Optics voters downvote both Parr and Concerned User. They want it all to go away.

  • Intellectual freedom voters upvote Parr, but downvote Concerned User. They appreciate Parr exploring a new cause proposal, and they feel the censure from Concerned User is unwarranted.

Result: Parr gets a mix of upvotes and downvotes. Concerned User is downvoted by everyone, since they annoyed both camps, for different reasons.

This is plausible, although I'd submit that it requires enough "optics voters" to be pretty bad at optics. Specifically, they would need to be unaware of the negative optical consequences of the comment here having been at -43. 

Moreover, there are presumably voters who downvoted Parr and upvoted Concerned User because they thought Parr's posts were deeply problematic and that Concerned User was right to call them out. For this hypothesis to work, they must have been substantially outnumbered by the group you describe as "intellectual freedom voters." (I say the "group you describe" because the described voting behavior would be the same as one would expect from people who sympathize with Parr's views on the merits; I see no clear way to exclude the sympathy rationale on voting behavior alone.)

I don't necessarily agree that the community is either complacent or complicit, but I do agree that this is potentially a massive reputational hazard. It's not about anyone proving that EAs are racist; it's about people starting to subconsciously associate "racism" and "EA", even a tiny bit. It could really hurt the movement.

Again, as per my comment above, I think there is great value in a firm rebuttal from a credible voice in the UK EA community. 

It's just absurd that one email from nearly 30 years ago, taken out of context, is being used to tar an entire global community. 

We also need to remember that back in 1996, when the email was written, the world was not in the state it's in now, where any phrase, even one uttered provocatively or in jest, can be taken literally and assumed to represent a person's true beliefs, even if there are 10,000 examples of them saying the exact opposite. I remember that when I was in college it was quite normal to write or say shocking things just to get a reaction or a laugh; we didn't yet have the mentality that you shouldn't write or say anything you wouldn't be happy to see on the front page of the Times.

It's just absurd that one email from nearly 30 years ago, taken out of context, is being used to tar an entire global community. 

I think the commenter's point is about the presence of current racism, and two recent discussions on the Forum are offered as evidence. So while this statement may work as a response to criticism based predominantly on the Bostrom e-mail, I don't find it particularly responsive to criticism based on current racism.
