
For the past ~8 months, I've been summarizing the top posts on the EA and LW forums each week (see archive here), a project supported by Rethink Priorities (RP).

I’ve recently taken on a new role as an AI Governance & Strategy Research Manager at RP. As a result, we're putting the forum summaries on hiatus while we work out what they should look like in the future and hire someone new to run the project. A big thank you to everyone who completed our recent survey - it’s great input for us as we evaluate this project going forward!

The hiatus will likely last ~4-6 months. Once the project is back up and running, we’ll continue to use the existing email list and podcast channel (EA Forum Podcast (Summaries)), so subscribe if you’re interested, and feel free to continue sharing it with others.

If you’re looking for other ways to stay up to date in the meantime, some resources to consider:

Newsletters

The EA Forum Digest - a weekly newsletter recommending new EA forum posts that have high karma, active discussion, or could use more input.

Monthly Overload of Effective Altruism - a monthly newsletter with top research, organizational updates and events in the EA community.

The EA Newsletter - a monthly newsletter with news and updates about the effective altruism community, a selection of top posts, and highlights of progress and discussion in different cause areas.

Podcasts

EA Forum Podcast (Curated Posts) - human narrations of some of the best posts from the EA Forum.

Nonlinear Library - AI narrations of all posts from the EA Forum, Alignment Forum, and LessWrong that meet a karma threshold.

There are heaps of cause-area-specific newsletters out there too - if you have suggestions, please share them in the comments.
 

I’ve really enjoyed my time running this project! Thanks for reading and engaging, to Coleman Snell for narrating, and to all the writers who’ve shared their ideas and helped people find new opportunities to do good.

Comments (4)



That's a real shame. It was a really good idea, the summaries were really well put together, I found them really helpful, and they definitely looked like something that would be extremely high EV. But I almost never encountered them, because they frequently didn't seem to get enough upvotes to persist on the main page for long (maybe extremely large numbers of people didn't have their posts make the cut, so a few of them strong-downvoted it?). I definitely thought they should have been integrated into the site somehow.

I'll go over the archive when I have time to do some reading, since they really were a great way to find a post that interested me.

Thanks, this is great feedback to hear, even if things are closing shop for a while.

I was really appreciating these, but (as I said privately) this seems like a great opportunity, so congratulations! 

As an aside, there's also the monthly "EA Newsletter" (which I run, so I'm biased when I'm recommending it) and newsletters on the "Newsletters" topic page.

Thanks! And good call, sorry for missing that one - added it into the post :-)
