Recent Discussion

If you have an idea for an interactive learning module (on topics such as rationality, psychology, self-improvement, statistics, probability, effective altruism, philosophy, economics, math, science, behavior change, etc.) that you think would be valuable to a wide audience, we hope you'll consider applying to our second annual ClearerThinking.org micro grants program! Our program last year was a huge success, resulting in 15 new interactive modules, so we decided to do it again.

Applying to stage 1 is fast and simple. Using the step-by-step process we provide, along with feedback from our team and a study we run on your work, winners will produce full-blown interactive learning modules like those on ClearerThinking.org. The top modules will also be featured on our website and sent out to our 130,000 email subscribers!

Applications are due by May 17, 2021. All of the details of the program can be found here.

Some benefits you...

Every time I come across an old post on the EA Forum, I wonder whether its karma score is low because people got little value from it, or because people really liked it and it simply got fewer upvotes since fewer people were around at the time. Fortunately, you can send queries to the EA Forum API to get the data that could answer this question (a minimal example query is sketched below). In this post I describe how karma and post controversy (measured by the number of votes a post received and the ratio of downvotes to upvotes; a more detailed explanation follows below) developed over time on the EA Forum, and I provide a list of the best-rated posts relative to the amount of activity in the forum at the time. Before you read the post, take a moment to think about the following questions:

  • What is your best guess for the most
...
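For readers who want to try this themselves, here is a minimal sketch of the kind of query involved. The field names baseScore and voteCount are my assumptions about the forum's GraphQL schema, and the date range is just an example; a fuller script appears later in this thread.

  // Minimal sketch: fetch karma data for posts in one year from the EA Forum GraphQL API.
  // baseScore (karma) and voteCount are assumed field names; adjust to the live schema.
  import axios from "axios"

  let query = `
    {
      posts(input: { terms: { after: "2015-01-01" before: "2016-01-01" } }) {
        results { title baseScore voteCount postedAt }
      }
    }`

  axios.post('https://www.forum.effectivealtruism.org/graphql/', { query })
    .then(res => console.log(res.data.data.posts.results))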

Yep, it's an admin-only property. Sorry for the confusion!

NunoSempere: You can query by year, and then aggregate the years. From a past project, in Node.js:

  /* Imports */
  import fs from "fs"
  import axios from "axios"

  /* Utilities */
  let print = console.log
  let sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms))

  /* Support function: fetch all posts between two dates from the forum's GraphQL endpoint */
  let graphQLendpoint = 'https://www.forum.effectivealtruism.org/graphql/'
  async function fetchEAForumPosts(start, end) {
    let response = await axios(graphQLendpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      data: JSON.stringify({
        query: `
          {
            posts(input: {
              terms: { after: "${start}" before: "${end}" }
              enableTotal: true
            }) {
              totalCount
              results {
                pageUrl
                user { slug karma }
              }
            }
          }`
      }),
    }).then(res => res?.data?.data?.posts?.results ?? null)
    return response
  }

  /* Body */
  let years = []
  for (let i = 2005; i <= 2021; i++) { years.push(i) }

  // Quick sanity check on a single year
  let main0 = async () => {
    let data = await fetchEAForumPosts("2005-01-01", "2006-01-01")
    console.log(JSON.stringify(data, null, 2))
  }
  // main0()

  // Fetch each year in turn, pausing between requests to be gentle on the server
  let main = async () => {
    let results = []
    for (let year of years) {
      print(year)
      let firstDayOfYear = `${year}-01-01`
      let firstDayOfNextYear = `${year + 1}-01-01`
      let data = await fetchEAForumPosts(firstDayOfYear, firstDayOfNextYear)
      if (data) results.push(...data)
      await sleep(5000)
    }
    print(results)
    fs.writeFileSync("lwPosts.json", JSON.stringify(results, null, 2))
  }
  main()
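To run the script above, save it as an ES module (e.g. fetchPosts.mjs), install the dependency with npm install axios, and run node fetchPosts.mjs; the collected posts are written to lwPosts.json. The 5-second sleep between yearly requests is there to avoid hammering the endpoint.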

Epistemic Status: I feel pretty confident that the core viewpoint expressed in this post is correct, though I'm less confident in some specific claims. I have not shared a draft of this post with ACE, and so it’s possible I’ve missed important context from their perspective.

EDIT: ACE board member Eric Herboso has responded with his personal take on this situation. He believes some points in this post are wrong or misleading. For example, he disputes my claim that ACE (as an organization) attempted to cancel a conference speaker.

EDIT: Jakub Stencel from Anima International has posted a response. He clarifies a few points and offers some context regarding the CARE conference situation.

Background

In the past year, there has been some concern in EA surrounding the negative impact of “cancel culture”[1] and worsening discourse norms. Back in October, Larks wrote a post criticizing EA Munich's decision to de-platform Robin Hanson. The post was generally well-received,...

David_Kristoffersson: Would you really call Jakub's response "hostile"?
Max_Daniel: (I mostly agree with your comment, but note that from the wording of ACE's comment it isn't clear to me whether (a) they think that Jakub's comment is hostile, (b) that Hypatia's OP is hostile, or (c) that the whole discussion is hostile. To be clear, I think that kind of ambiguity is also a strike against that comment.)

Oh, yeah, that's fair. I had interpreted it as referring to Jakub's comment. I think there is a slightly stronger case to call Hypatia's post hostile than Jakub's comment, but in either case the statement feels pretty out of place. 

The ideological Turing test (sometimes called the political Turing test (Hannon 2020: 10)) is a test of a person's ability to state opposing views as clearly and persuasively as those views are stated by their proponents. The test was originally proposed by Bryan Caplan (Caplan 2011), in analogy with Alan Turing's "imitation game"—more widely known as the Turing test—which measures a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human.

Bibliography

Brin, David (2000) Disputation arenas: Harnessing conflict and competitiveness for society’s benefit, Ohio State Journal on Dispute Resolution, vol. 15, pp. 597–618.

Caplan, Bryan (2011) The ideological Turing test, Econlog, June 20.

Galef, Julia (2021) The Scout Mindset: Why Some People See Things Clearly and Others Don’t, New York: Portfolio.

Hannon, Michael (2020) Empathetic understanding and deliberative democracy, Philosophy and Phenomenological Research, vol. 101, pp. 591–611.

Kling, Arnold (2013) The Three Languages of Politics: Talking across the Political Divides, Washington: Cato Institute.

This update covers CEA's work in the first quarter of 2021.

Background

Our mission is to build a community of students and professionals acting on EA principles, by creating and sustaining high-quality discussion spaces.

In 2019, we focused on stabilizing the organization and improving execution. In 2020, we clarified and narrowed our scope (by setting strategy and spinning off Funds and GWWC).

In 2021, we are focused on working towards our annual goals, as well as growing our team.

Program progress

These are brief summaries; you can find more details for each program further down in this post.

Groups

  • Support
    • We had around 100 calls and 120 in-depth email / Slack exchanges with group leaders. We received positive feedback on the calls (average likelihood to recommend >9/10). We increased 1:1 support for highly-ranked university groups, and helped to seed a group at Georgetown University.
  • Fellowships
    • We worked with Emma Abele (a contractor and CBG recipient) and EA groups at Oxford and
...

Would you be able to provide a Net Promoter Score analysis of your Likelihood to Recommend metrics? I find NPS yields different and interesting information compared to an averaged LTR, and it should be very straightforward to compute.
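For reference, here is a minimal sketch of the standard NPS calculation (the function name and the sample scores are illustrative):

  // Net Promoter Score from 0-10 likelihood-to-recommend responses.
  // Promoters score 9-10, passives 7-8, detractors 0-6.
  // NPS = %promoters - %detractors, ranging from -100 to +100.
  function nps(scores) {
    let promoters = scores.filter(s => s >= 9).length
    let detractors = scores.filter(s => s <= 6).length
    return Math.round(100 * (promoters - detractors) / scores.length)
  }

  // Example: nps([10, 9, 9, 8, 6]) -> 60% promoters - 20% detractors = 40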

Khorton: "We have ... improved our cybersecurity, and streamlined a number of HR systems." Hurray! Well done.
David_Kristoffersson: Thanks for posting this. I find it quite useful to get an overview of how the EA community is being managed and developed.

Today we're launching a new podcast feed that might be useful to you or someone you know.

It's called Effective Altruism: An Introduction, and it's a carefully chosen selection of ten episodes of The 80,000 Hours Podcast, with various new intros and outros to guide folks through them.

We think that it fills a gap in the introductory resources about effective altruism that are already out there. It's a particularly good fit for people who:

  • prefer listening over reading, or conversations over essays
  • have read about the big central ideas, but want to see how we actually think and talk
  • want to get a more nuanced understanding of how the community applies EA principles in real life — as an art rather than a science.

The reason we put this together now is that, as the number of episodes of The 80,000 Hours Podcast has grown, it has become less and less practical to suggest that new...

RyanCarey: Oh come on, this is clearly unfair. I visited that group for a couple of months over seven years ago, because a trusted mentor recommended them. I didn't find their approach useful, and quickly switched to working autonomously on starting the EA Forum and EA Handbook v1. For the last 6-7 years, (many can attest that) I've discouraged people from working there! So what is the theory, exactly?

I’m glad you’ve been discouraging people from working at Leverage, and haven’t been involved with them for a long time.

In our back and forth, I noticed a pattern of behavior that I so strongly associate with Leverage (acting as if one's position is the only “rational” one, ignoring counterevidence that's been provided and valid questions that have been asked, making strong claims with little evidence, accusing the other party of bad faith) that I googled your name plus Leverage out of curiosity. That's not a theory, that's a fact (and as I said originally, perhaps a meaningless one).

But you're right: it was a mistake to mention that fact, and I’m sorry for doing so. 

BrianTan: Thanks for taking action on the feedback! I welcome this change and am looking forward to that new episode. Here are 3 people I would nominate for that episode:

Tied as my top preference:
  1. Peter Hurford - Since he has already volunteered to be interviewed anyway, and I don't think Rethink Priorities's work has been featured yet on the 80K podcast. They do research across animal welfare, global health and dev't, meta, and longtermist causes, so it seems like they do a lot of thinking about cause prioritization.
  2. Joey Savoie - Since he has experience starting or helping start new charities in the near-termist space, and Charity Entrepreneurship hasn't been prominently featured yet on the 80K podcast. Joey probably leans more towards the neartermist side of things than Peter, since Rethink does some longtermist work, while CE doesn't really yet.

2nd preference:
  1. Neil Buddy Shah - Since he is now Managing Director at GiveWell, and has talked about animal welfare [https://www.youtube.com/watch?v=sMuP9OldVHc] before too.

I could think of more names (i.e. the ones Peter listed), but I wanted to make a few strong recommendations like the ones above instead. I think one name missing from Peter's list of people to consider interviewing is Michael Plant.

Bostrom, Nick, Thomas Douglas & Anders Sandberg (2016) The unilateralist's curse and the case for a principle of conformity, Social Epistemology, vol. 30, pp. 350–371.

Lewis, Gregory (2018) Horsepox synthesis: A case of the unilateralist's curse?, Bulletin of the Atomic Scientists, February 19.
Usefully connects the curse to other factors.

Schubert, Stefan & Ben Garfinkel (2017) Hard-to-reverse decisions destroy option value, Centre for Effective Altruism, March 17.

Zhang, Linchuan (2020) Framing issues with the unilateralist's curse, Effective Altruism Forum, January 17.

i.e. surprising, interesting, engaging, mind-changing, etc.

Can be specific to cause areas or other subsections of effective altruism.

That 11,000 children died yesterday, will die today, and will die tomorrow from preventable causes. (I'm not sure if that number is correct, but it's the one that comes to mind most readily.)

As an experiment, I'm combining the "Open and Welcome" and "Progress" threads this month, with the goal of clearing a bit more space on the frontpage.

If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)


If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)


Open threads are also a place to share good news, big or small. See this post for ideas.

Hi, I just heard about this site today. It was mentioned during a podcast with Julia Galef and Sean Carroll on 'openness, bias and rationality'. To be honest she sounded a lot like me!

Ok I'm a guy but...

Well anyway, please take a look at this and see if you agree:  ScientistsOnAcid.com