

Ooh. This looks interesting! Accomplishing goals like these would require over ten times as much time, so this definitely requires funding. I'm now envisioning starting a new EA org whose purpose is to prevent disruptions to EA productivity by identifying risks and planning in advance!

I would love to do this!

Thanks for the inspiration, Ben! :D

At the current time, I suspect the largest disaster risk is war in the US or UK, which is why I'm focusing on war. I haven’t seriously looked into the emerging risks related to antibiotic resistance, but it might be a comparable source of concern (with a lower probability of harming EA, of course, but with a much higher level of severity). The most probable risk I currently see is that certain cultural elements in EA appear to have resulted in various problems. A very brief summary: there is a set of misunderstandings that is hurting inclusiveness, possibly resulting in a significantly smaller movement than we’d have otherwise and potentially damaging the emotional health and productivity of an unknown number of individual EAs. The severity of that is not as bad as disease or war could get, but the probability of this set of misunderstandings destroying productivity is much higher (that this is happening is basically guaranteed, so it’s just a matter of degree). I chose to work on the risk of war because of the combination of probability and severity I currently suspect for war, relative to the severity and probability of the other issues I could have focused on.

I have done a lot of thinking about some of the questions you pose here! I wish I could dedicate my life to doing justice to questions like "What is the worst threat to productivity in the effective altruism movement?", and I have been working on interventions for some of them. I have a pretty good basis for an intervention that would help with the cultural misunderstandings I mentioned, and this would also do the world a lot of good, because it would help with the second biggest problem in the world as identified by the World Economic Forum for 2017. Additionally, continuing my work on misunderstandings could reduce the risk of war. I really, really want to continue pursuing that, but I’m taking a few weeks to get on top of this potentially more urgent problem.

I have been stuck with making estimations based on the amount of information I have time to gather, so, sadly, my views aren’t nearly as comprehensive as I really wish they were.

I tend to keep an eye on risks in everything that's important to me, like the effective altruism movement, because I prefer to prevent problems in my life wherever possible. Advance notice about big problems helps me do that.

As part of this, I have worked hard to compensate for around 5-10 biases that interfere with reasoning about risks, such as optimism bias, normalcy bias, and the affect heuristic. These three can prevent you from realising bad things will happen, cause you to fail to plan for disasters, and make you disregard information just because it is unpleasant. The one bias I saw on the list that actually supports risk identification, pessimism bias, is badly outnumbered by the 5-10 biases that interfere with reasoning about risks. That is not to say that pessimism bias is actually helpful. Given that one can get distracted by the wrong risks, I'm wary of it. I think quality reasoning about risks looks like ordering risks by priority, choosing your battles, and making progress on a manageable number of problems rather than being paralysed thinking about every single thing that could go wrong. I think it also looks like problem-solving, because that's a great way to avoid paralysis. I’ve been thinking about solutions as well.

After compensating for the biases I listed and others which interfere with reasoning about risks, I found my new perspective a bit stressful, so I worked very hard to become stronger. Now, I find it easy to face most risks, and I have a really, really high level of emotional stamina when it comes to spending time thinking about stressful things in general. In 2016, I managed to spend over 500 hours reading studies about sexual violence and doing related work while being randomly attacked by seven sex offenders throughout the year. I’ve never experienced anything that intense before. I can’t claim that I was unaffected, but I can claim that I made really serious progress despite a level of stress the vast majority of people would find too overwhelming. I managed to put together a solid skeleton of a solution which I will continue to build on. In the meantime, the solution can expand as needed.

I have discovered it’s difficult to share thoughts about risks and upsetting problems because other people have these biases, too. I've upgraded my communication skills a lot to compensate for that as much as possible. That is very, very hard. To become really excellent at it, I need to do more communication experiments, but I think what I've got at this time is sufficient to get through after a few tries with a bit of effort. Considering the level of difficulty, that’s a success!

Now that I think about it, I appear to have a few valuable comparative advantages when it comes to identifying and planning for risks. Perhaps I should seek funding to start a new org. :)

Okay, what information do you think they need? You mentioned "directions" and "approaches" but that is very vague. I need the specific questions you think readers need answered before they will notify me of similar projects or express interest in what I'm doing.

I think you're saying "There isn't enough information for most readers to decide whether they want to PM you." Is that right?

I'm open to going in whatever direction gives the EA community the most insight into the truth, with whatever presentation encourages the most constructive use of that information. In case you're interested in specifics, I am currently working on a planning document about how, specifically, to accomplish all that. I can give you access if you wish (just give me your Google Docs address via PM).

I'm open to considering directions / direction changes. What are your thoughts so far? :)

I am not sure if you are requesting to see the project, or if you are making a complaint of some sort. It's easy enough for anyone to PM me and request to see the project. Just in case, I updated my post to explicitly invite people to PM me to see the project.

In case this wasn't clear, the project isn't finished yet. Before dumping a lot more hours into it, I want to see whether I'm duplicating anyone's work.

The fact that it is not yet finished is why I did not publish anything about it so far. It's not ready to be published.

The main point of this post is simply to find out whether there are others doing a similar project, and find other people who are interested in helping make sure the project gets completed.

I agree that most people will not understand the strangest ideas until they understand the basic ones. Ensuring they understand the foundation is a good practice.

I definitely agree that the instances of weirdness that are beneficial are only a tiny fraction of the weirdness that is present.

Regarding weirdness:

There are effective and ineffective ways to be weird.

There are several apparently contradictory guidelines in art: "use design principles", "break the conventions", and "make sure everything looks intentional".

The effective ways to be weird manage all three guidelines.

Examples: Picasso, Björk, Lady Gaga

One of the major and most observable differences between these three artists and many weird people is that the behavior of the artists can be interpreted as communication about something specific, meaningful, and valuable. Art is a language. Everything strange we do speaks about us. If you haven't studied art, it might be rather hard to interpret the above three artists. The language of art is sometimes completely opaque to non-artists, and those who interpret art often find a variety of different meanings rather than a consistent one. (I guess that's one reason why they don't call it science.) Quick interpretations: in Picasso, I see an exploration of order and chaos. In Björk, I see an exploration of the strangeness of nature, the familiarity and necessity of nature, and the contradiction between the two. In Lady Gaga, I see an edgy exploration of identity.

These artists have the skill to say something of meaning as they follow principles and break conventions in a way that looks intentional. That is why art is a different experience from, say, looking at an odd-shaped mud splatter on the sidewalk, and why it can be a lot more special.

Ineffective weirdness is too similar to the odd-shaped mud splatter. There need to be signs of intentional communication. To interpret meaning, we need to see that combination of unbroken principles and broken conventions arranged in an intentional-looking pattern.

Edit: I agree that there aren't a large number of people advocating for dishonesty. My concern is that if even a small number of EAs get enough attention for doing something dishonest, this could cause us all reputation problems. Since we could be "painted with the same brush" due to the common human bias called stereotyping bias, I think it's worthwhile to make sure it's easy to find information about how to do honest promotion, and why.

I updated my post to mention some specific examples of the problems I've been seeing. Thank you, David.

It would protect the movement to have a norm that organizations must supply good evidence of their effectiveness, and that only if the group accepts this evidence should they claim to be an effective altruism organization.

I think some similar norm should also extend to individual people who want to publish articles about what effective altruism is. Obviously, this cannot be required of critics, but we can easily demand it from our allies. I'm not sure what we should expect individual people to do before they go out and write articles about effective altruism on Huffington Post or whatever, but expecting something seems necessary.

To prevent startups from being utterly ostracized by this before they've gathered enough data or done enough experiments to show effectiveness, maybe they could be encouraged to use a different term that includes EA but modifies it in a clear way, like "aspiring effective altruism organization".

Wow. More excellent arguments. More updates on my side. You're on fire. I almost never meet people who can change my mind this much. I would like to add you as a friend.

I'm not completely sure what's going on with Gleb, but I feel a great deal of concern for people with Asperger's, and I think it made me overly sympathetic in this case. Thank you for this.
