I run the non-engineering side of the EA Forum (this platform) and the EA Newsletter, and work on some other content-related tasks at CEA. Please feel free to reach out! You can email me. [More about my job.]
Some of my favorite of my own posts:
I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I've since switched to the Online Team. In the past, I've also done some (math) research and worked at Canada/USA Mathcamp.
Some links I think people should see more frequently:
Thanks for sharing this, I appreciate it! I'm really excited about the study.
I haven't read the full study yet, but I came across a Twitter thread by one of the authors, and I thought it was helpful: https://twitter.com/aaronrichterman/status/1663957463291265032?s=46&t=A7sa4lqau2E-U-pxX7DJWQ
Key points from the thread (on top of what you summarized in the post):
Table with results
We used difference-in-difference models to show these programs led to a 20% reduction in mortality for women, and an 8% reduction in risk of death for children under 5
Mortality reductions in different groups over time
Mortality reductions began within 2 years of program introduction and generally got larger over time
(Can someone make a more easily parsable version of this graphic? The data from the study is publicly available.)
I didn't know this term, but it's the method that I was imagining. From Wikipedia: [This method] calculates the effect of a treatment [...] on an outcome [...] by comparing the average change over time in the outcome variable for the treatment group to the average change over time for the control group.
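To make the Wikipedia definition concrete, here's a minimal sketch of the difference-in-differences calculation. The numbers are made up purely for illustration (they are not from the study):

```python
# Minimal difference-in-differences sketch with illustrative, made-up numbers.
# The DiD estimate compares the change over time in the treatment group
# to the change over time in the control group, netting out the shared trend.

# Hypothetical mean outcomes (e.g., deaths per 1,000) before/after the program
treat_pre, treat_post = 10.0, 7.0   # treatment group: fell by 3.0
ctrl_pre, ctrl_post = 10.0, 9.0     # control group: fell by 1.0

did_estimate = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
print(did_estimate)  # -2.0: the extra reduction attributed to the treatment
```

In practice the study would estimate this via a regression with group, time, and interaction terms (which also gives standard errors), but the interaction coefficient reduces to the same difference of differences in the simple two-group, two-period case.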
I appreciate this post, thanks for sharing it! I'm curating it. I should flag that I haven't properly dug into it (curating after a quick read), and don't have any expertise in this.
Another flag is that I would love to see more links for parts of this post like the following: "In an especially egregious example, one of the largest HVAC companies in the state had its Manual J submission admin go on vacation. The temporary replacement forgot to rename files and submitted applications named for their installed capacity (1 ton, 2 ton, 3 ton, etc.), revealing that the company had submitted copies of the same handful of designs for thousands of homes." I don't fully understand what happened or why (was the company cutting corners by passing off the same few designs as customized?), and a link would help me learn more (and see if I agree with your use of this example!). (Same with the example about building managers in schools in the early days of COVID.)
I'm really grateful that you've shared this; I think the topic is relevant, and I'd be excited to see more experts sharing their experiences and what various proposals might be missing. I particularly appreciated that you shared a bit about your background, that you used a lot of examples, and that the "Complexity and opacity strongly predict failure" heuristic is clear and makes sense. I think further work here would be great! I'd be particularly excited about something like a lit review on many of the interventions you listed as promising (which would also help collect more readings), and estimates for their potential costs and impacts.
Minor suggestion/question: would you mind if I made the headings in your post into actual headings? Then they'd show up in the table of contents on the side, and we could link to them. (E.g.)
No need to apologize, and thanks for making the topic page, Will! I batch-approve and remove new Wiki entries sometimes (and reorganize the Wiki more generally), but I'm not prioritizing this right now. I do hope that we'll get more attention on the Wiki soon, though (in the next couple of months). I've added a note to the Wiki FAQ — thanks for that suggestion!
Note that this was covered in the New York Times (paywalled) by Kevin Roose. I found it interesting to skim the comments. (Thanks for working on this, and sharing!)
Following up on this: we've expanded the Community section on the Frontpage to show 5 posts instead of 3. Nothing else should have changed with this section right now.
Re Library page: I agree with and appreciate this suggestion. I'd be excited for that to be a list you can sort in different ways. I think it's on the list of things to prioritize, but I'll make sure.
Re top right search bar: I think they do, but they're at the bottom of the results, and in some cases that might get cut off. But you can also use the full search page for this, e.g.: https://forum.effectivealtruism.org/search?contentType=Sequences&query=classic%20posts%20from%20the%20&page=1
I also find the following chart interesting (although I think none of this is significant) — particularly the fact that pausing training of dangerous models and security standards have more agreement from people who aren't in AGI labs, and (at a glance):
got more agreement from people who are in labs (in general, apparently "experts from AGI labs had higher average agreement with statements than respondents from academia or civil society").
Note that 43.9% of respondents (22 people?) are from AGI labs.
I'm surprised at how much agreement there is about the top ideas. The following ideas all got >70% "strongly agree" and at most 3% "strongly disagree" (note that not everyone answered each question, although most of these 14 did have all 51 responses):
The ideas that had the most disagreement seem to be:
(Ideas copied from here — thanks!)
In case someone finds it interesting, Jonas Schuett (one of the authors) shared a thread about this: https://twitter.com/jonasschuett/status/1658025266541654019?s=20
He says that the thread is to discuss the survey's:
Also, there's a nice graphic from the paper in the thread:
Thanks for sharing this! I might try to write a longer comment later, but for now just a quick note that I'm curating this post. I should note that I haven't followed any of the links yet.