
Sometimes new EA projects are compared to startups, and the EA project ecosystem to the startup ecosystem. While I often make such comparisons myself, it is important to highlight one crucial difference.

Causing harm is relatively easy - as explained in

Also, most startups fail.

However, when startups fail, they usually can't go "much below zero". In contrast, projects aimed at influencing the long-term future can have negative impacts many orders of magnitude larger than the size of the project. It is possible for small teams, or even small funders, to cause large harm.

While investors in the usual startup ecosystem are looking for unicorns, they do not have to worry about anti-unicorns.

Also, the most harmful long-term-oriented projects could be the ones which succeed in a narrow project sense: they have competent teams, produce outcomes, and actually change the world - but in a way that is wrong without being obviously so.

Implications

It is, in my opinion, wrong to directly apply models like "we can just try many different things" or "we can just evaluate the ability of teams to execute" to EA projects aiming to influence the long-term future or decrease existential risk.

For this reason, I also believe that projects aiming to influence the long-term future, decrease existential risk, or do something ambitious in the meta or outreach space are often genuinely vetting-constrained, and that grant-making in this space is harder than in many other areas.

Note: I want to emphasise that this post should not be rounded off to "do not start projects" or "do not fund projects".

Comments

Also relevant: Speeding up social science 10-fold, how to do research that’s actually useful, & why plenty of startups cause harm (and Spencer's blog post)

Spencer and Rob think plenty of startups are actually harmful for society. Spencer explains how companies regularly cause harm by injuring their customers or third parties, or by drawing people and investment away from more useful projects.

So EAs should be more cautious than the typical venture capitalist, even about ordinary start-ups.

Do we have examples of this? I mean, there are obviously wrong examples like socialist countries, but I'm more interested in examples of the types of EA projects we would expect to see causing harm. I tend to think the risk of this type of harm is given too much weight.

I don't think risk of this type is given too much weight now. In my model, considerations like this were at some point in the past rounded off to an over-simplified meme like "do not start projects; they fail and it is dangerous". This is wrong and led to some counterfactual value being lost.

This was to some extent a reaction to the previous mood, which was more like "bring in new people; seed groups; start projects; grow everything" - which was also problematic.

In my view we are looking at something like pendulum swings: we were recently at the extreme position where not many projects were started, but the momentum is now in the direction of more projects, and the second derivative is high. So I expect many projects will actually get started. In such a situation, the important thing is to start good projects and avoid anti-unicorns.

IMO the risk was maybe given too much weight before, but is given too little weight now, by many people. Just look at many of the recent discussions, where security mindset seems rare and many want to move forward fast.

Just wanted to say I appreciate the nuance you're aiming at here. (Getting that nuance right is real hard)

Discussing specific examples seems very tricky - I could probably come up with a list of maybe 10 projects or actions which come with large downsides/risks, but I expect listing them would not be that useful and could cause controversy.

A few hypothetical examples:

  • influencing a major international regulatory organisation in a way that leads to creating some sort of "AI safety certification" in a situation where we don't have the basic research yet, creating a false sense of security or a fake sense of understanding
  • creating a highly distorted version of effective altruism in a major country, e.g. by bad public outreach
  • coordinating the effective altruism community in a way which leads to increased tension and possibly splits in the community
  • producing and releasing infohazardous research
  • influencing important players in AI or AI safety in a harmful, leveraged way, e.g. by bad strategic advice

A few examples are mentioned in the resources linked above. The most well-known and commonly accepted one is Intentional Insights, but I think there are quite a few more.

I generally prefer not to make negative public statements about well-intentioned EA projects. I think this reluctance is probably the reason why the examples might not be salient to everyone.

I wasn't asking for examples from EA, just the type of projects we'd expect from EAs.

Do you think Intentional Insights did a lot of damage? I'd say it was recognized by the community and pretty well handled, while doing almost no damage.

Do you think Intentional Insights did a lot of damage? I'd say it was recognized by the community and pretty well handled, while doing almost no damage.

As I also say in my above-linked talk, if we think that EA is constrained by vetting and by senior staff time, things like InIn have a very significant opportunity cost because they tend to take up a lot of time from senior EAs. To get a sense of this, just have a look at how long and thorough Jeff Kaufman's post is, and how many people gave input/feedback - I'd guess that's several weeks of work by senior staff that could otherwise go towards resolving important bottlenecks in EA. On top of that, I'd guess there was a lot of internal discussion in several EA orgs about how to handle this case. So I'd say this is a good example of how a single person can have a lot of negative impact that affects a lot of people.

I wasn't asking for examples from EA, just the type of projects we'd expect from EAs.

The above-linked 80k article and EAG talk mention a lot of potential examples. I'm not sure what else you were hoping for? I also gave a concise (but not complete) overview in this facebook comment.
