
Four days ago I posted a question, Why are you reluctant to write on the EA Forum?, with a link to a Google Form. I received 20 responses.

This post is in four parts:

  1. Summary of reasons people are reluctant to write on the EA Forum
  2. Suggestions for making it easier
  3. Positive feedback for the EA Forum
  4. Responses in full

Summary of reasons people are reluctant to write on the EA Forum

The form received 20 responses over four days. 

All replies included a reason for being reluctant or unable to write on the EA Forum. Only a minority of replies included a concrete suggestion for improvement.

I have attempted to tally how many times each reason appeared across the 20 responses[2]:

Suggestions for making it easier to contribute

Below are all the concrete suggestions for helping people feel less reluctant to contribute to the Forum, in the order in which they were received:

  • More discourse on increasing participation: "more posts like these which are aimed at trying to get more people contributing"
  • Give everyone equal Karma power: "If the amount of upvotes and downvotes you got didn't influence your voting power (and was made less prominent), we would have less groupthink and (pertaining to your question) I would be reading and writing on the EA-forum often and happily, instead of seldom and begrudgingly."
  • Provide extra incentives for posting: "Perhaps small cash or other incentives given each month for best posts in certain categories, or do competitions, or some such measure? That added boost of incentive and the chance that the hours spent on a post may be reimbursed somehow."
  • "Discussions that are less tied to specific identities and less time-consuming to process - more Polis like discussions that allow participants to maintain anonymity, while also being able to understand the shape of arguments."
  • Lower the stakes for commenting: "I'm not sure if comment section can include "I've read x% of the article before this comment"?"

Positive feedback for the EA Forum

The question invited criticism of the Forum, but it nevertheless garnered some positive feedback.

 

For an internet forum it's pretty good. But it's still an internet forum. Not many good discussions happen on the internet. 

 

Forum team do a great job :)

Responses in full

All responses can be found here.

  1. ^

    You can judge for yourself here whether I correctly classified the responses.

  2. ^

    I considered lumping "too time-consuming" and "lack of time" together, but decided against this because the former seems to imply "bar is very high", while the latter is merely a statement on how busy the respondent's life is.

Comments (5)



    I'd be interested to see you weigh the pros and cons of making it easier to contribute - you don't explicitly say it in the post, but you imply that this would be a good thing by default. The forum is the way it is for a reason, and there are mechanisms put in place both by the forum team and by the community in order to try to keep the quality of the discussion high. 

    For example, I would argue that having a high bar for posting isn't a bad thing, and the sliding-scale karma system that helps regulate that is, by extension, valuable. If writing a full post of sufficient quality is time-consuming, then there is the quick takes section.

    The Alignment Forum has a significantly higher barrier to entry than this one does, but I think that is fairly universally regarded as an important factor in facilitating a certain kind of discussion there. I can see a lot of value in the EA Forum trying to maintain its current norms so that it still has the potential for productive discussion between people who are sufficiently well-researched. I think meaningfully lowering the bar for participation would mean that the forum would lose some of its ability to generate anything especially novel or useful to the community, and I think the quote you included:

    For an internet forum it's pretty good. But it's still an internet forum. Not many good discussions happen on the internet.

    Somewhat points to that too. I think there should be other forums for people less familiar with EA to participate in discussions, and I think whether or not those currently exist is an interesting discussion.

    Having said all that, I do wonder if that leaves the current forum community particularly vulnerable to groupthink. I'm not really sure what the solution to that is though.

    Pseudonymity should work in most cases to address the risk of reputational damage, albeit at the cost of the potential reputational upsides for posting.

    For an internet forum it's pretty good. But it's still an internet forum. Not many good discussions happen on the internet. 


    This makes me sad because I think EAF in 2015 or 2016 had much better discussions, and shortform still does. 

    If you or someone else wants to expand on this point, I'd be interested in understanding whether that's really the case, or in which ways it got better or worse.

    This is great! 
