My mental model of the rationality community (and, thus, some of EA) is "lots of us are mentally weird people, which helps us do unusually good things like increasing our rationality, comprehending big problems, etc., but which also has predictable downsides."

Given this, I'm pessimistic that, in our current setup, we're able to attract the absolute "best and brightest and also most ethical and also most epistemically rigorous people" that exist on Earth.

Ignoring for a moment that it's just hard to find people with all of those qualities combined... what about finding people with actual-top-percentile any of those things?

The most "ethical" (like professional-ethics, personal integrity, not "actually creates the most good consequences) people are probably doing some cached thing like "non-corrupt official" or "religious leader" or "activist".

The most "bright" (like raw intelligence/cleverness/working-memory) people are probably doing some typical thing like "quantum physicist" or "galaxy-brained mathematician".

The most "epistemically rigorous" people are writing blog posts, which may or may not even make enough money for them to do that full-time. If they're not already part of the broader "community" (including forecasters and I guess some real-money traders), they might be an analyst tucked away in government or academia.

A broader problem might be something like: promote EA --> some people join it --> the other competent people think "ah, EA has all those weird problems handled, so I can keep doing my normal job" --> EA doesn't get the best and brightest.

(This was originally a comment, but I think it deserves more in-depth discussion.)


EA's meta-strategy is 'simp for tech billionaires and AI companies'. EA systematically attracts people who enjoy this strategy. So no, it does not attract the best. Maybe a version of EA with more integrity would attract the best people.

I think this is entirely legitimate criticism. It's not at all clear to me that the net impact of Effective Altruism, from end to end, has even been positive. And if it has been negative, it has been negative BECAUSE of the impact the movement has had on AI timelines. 

This should prompt FAR more reflection than I have seen within the community. People should be racking their brains for what went wrong and crying mea culpa. And working for OpenAI/Anthropic/etc/etc should not be seen as "effective". (Well, maybe now it's okay. Cat's out of the bag. But certainly being an AI capabilities researcher in 2020 did a lot of harm.)

As far as I can tell, the "Don't Build the Torment Nexus" community went ahead and built the Torment Nexus because it was both intellectually interesting and a path for individuals to acquire more power. Oops. 

And to be clear, any harms done from the FTX debacle or the sexual abuse scandals pale in comparison, in my mind at least, to this. That is not in any way a trivialization of either of those harms, both of which were also pretty severe. "Accelerate AI timelines" is just that bad.

Lorenzo Buonanno🔸
Moderator Comment

We have a higher bar for taking moderation action against criticism, but considering that sapphire was warned two days ago we have decided to ban sapphire for one month for breaking forum norms multiple times.

I strongly, strongly, strongly disagree with this decision. 

Per my own values and style of communication, I think that welcoming people like sapphire or Sabs, who a) are or can be intensely disagreeable, and b) have points worth sharing and processing, is strongly worth doing, even if c) they make other people uncomfortable, and d) they occasionally misfire, and even if they are wrong most of the time, as long as the expected value of the stuff they say remains high.

In particular, I think that doing so is good for arriving at correct beliefs and for becoming stronger, which I value a whole lot. It is the kind of communication we use in my forecasting group, where the goal is to arrive at correct beliefs.

I understand that the EA Forum moderators may have different values, and that they may want to make the forum a less spiky place. Know that this has the predictable consequence of losing a Nuño, and it is part of the reason why I've bothered to create a blog and add comments to it in a way which I expect to be fairly uncensorable[1].

Separately, I do think it is the case that EA "simps" for tech billionaires[2]. An answer I would have preferred to see would be a steelmanning of why that is good, or an argument for why it isn't the case.

  1. ^ Uncensorable by others: I am hosting the blog on top of nja.la and the comments on my own servers. It is not uncensorable by me; I can and will censor stuff that I think is low value by my own utilitarian/consequentialist lights.

  2. ^ I'm less sure about AI companies, but you could also make the case: e.g., 80kh does recommend positions at OpenAI (<https://jobs.80000hours.org/?query=OpenAI>).

I'm conflicted on this: on the one hand, I agree that it's worth listening to people who aren't skilled at politeness or aren't putting enough effort into it. On the other hand, I think someone like sapphire is capable of communicating the same information in a more polite way, and a ban incentivizes people to put more effort into politeness, which will make the community nicer.

Yeah, you also see this with criticism: for any given piece of criticism, you could put more effort into it and make it more effective. But having that as a standard (even as a personal one) means that criticism will happen less.

So I don't think we disagree on the fact that there is a demand curve? Maybe we disagree in that I want more sapphires and less politeness, on the margin?

The mods can't realistically call different strike zones based on whether or not "expected value of the stuff [a poster says] remains high." Not only does that make them look non-impartial, it actually is non-impartial.

Plus, warnings and bans are the primary methods by which the mods give substance to the floor of what forum norms require. That educative function requires a fairly consistent floor. If a comment doesn't draw a warning, it's at least a weak signal that the comment doesn't cross the line.

I do think a history of positive contributions is relevant to the sanction.

Can you say which norms the comment breaks? It was not clear to me after reading the comment and looking at the forum norms again.

Sorry for the late reply.

The comment was unnecessarily rude and antagonistic — it didn't meet the minimum bar for civility. (See the Forum norm "Stay civil, at the minimum".)

In isolation, this comment is a mild norm violation. But having a lot of mildly-bad (unnecessarily antagonistic) comments often corrodes the quality of Forum discourse more than a single terrible comment.

It's hard to know how to respond to someone who seems to have a pattern of posting such comments. There's often no "smoking gun" comment that clearly deserves a ban. That's why we have our current setup — we generally give warnings and then proceed to bans if nothing changes.

I think we've not been responding to cases like this enough recently. At the same time, I wish we could figure out a more collaborative approach than our current one, and it's possible that a 1-month ban was too long; we're discussing it in the moderation team.

(Note: some parts of this comment, as with some other comments that moderators post, were written by other moderators, but I personally believe what I'm posting. This seems worth flagging, given that I'm sharing these opinions as my own. I don't know if all the people on the moderation team agree with everything as I put it here.)

The meaning of "simp" differs from place to place, but it's not particularly civil anywhere, and it's decidedly not civil in this context. I support a suspension in light of the recent warning, but given that this violation is of a different type, maybe a week or two would have been sufficient.

https://www.cnn.com/2021/02/19/health/what-is-simp-teen-slang-wellness/index.html
