
It’s only been around a month since our last update post, but we have a bunch of new features for everyone. A quick summary of the updates: 

  • We’ve significantly updated the Forum search feature ⬇️
  • Two pilot subforums have launched ⬇️
  • The site is now ~25% faster ⬇️
  • September was a month of record usage on the Forum ⬇️
  • Some other changes and updates ⬇️

As a reminder, if you have any feature requests or suggestions, you’re very welcome to share them on the feature suggestion thread (or to get in touch). If you have comments or questions about anything here, you can also just leave a comment on this post. 

An updated search feature

We have a new search UI! This is hopefully easier to use, and it also lets you filter by date, topic, and a couple of other things. You can also exclude terms by writing phrases like “Forum -update” (to search for things that include the word “Forum” but exclude the word “update”), and search for specific phrases by using quotation marks. 

For now, filtering is only possible on desktop. You can access the search feature by typing something into the search bar in the top right corner of the Forum and hitting enter, or by just going to this link.

This update doesn’t incorporate all the suggestions we’ve received, but it’s a start. If you have other suggestions – particularly if there are instances where the old UI was better for you – please let us know! We might not implement your suggestions right away, as we’re focusing on some unrelated projects right now, but we’d love to hear them and may prioritize them.

Two pilot subforums: bioethics and software engineering

As the amount of content on the Forum grows, we want to help people engage with the content that is most relevant and helpful for them. 

One approach we are exploring is subforums, and we launched our first pilot subforum (the bioethics subforum) last month. We later launched a software engineering subforum and will likely be rolling out more subforums over time. The direction we take will depend on how these pilots go.

We are continually adding features to the subforums, and your feedback is always appreciated!

Performance improvements

We’ve made a number of technical changes that improve the site’s performance. For instance, the Frontpage should now load around 25% faster.

We’ve also had a couple of brief outages recently, which caused some distress. We fixed an issue with our deployment process that should prevent this from happening in the future.

Record usage

The Forum has continued its rapid growth, and September set records in every single engagement metric we track: total views, number of logged-in users, number of comments, number of accounts created with at least 5 posts viewed, number of posts with at least 2 upvotes, number of unique voters, total votes, and number of monthly active users. The final day of the EA Criticism Contest also produced the highest single-day engagement in Forum history.

Thanks all for using the Forum!

Other changes and updates

The first draft of this post was written by Ben, who wanted to make it sound like me (Lizka). He inserted this image with the rationale: “How can I make this post sound like it’s in Lizka’s voice? Oh right, triceratops AI art.” This reasoning seems very good, and I really appreciate it. :) (Image generated by DALL-E)



Comments (12)



Subscribing to the new subforums adds a tag to the user's profile; see below.

This reads like a career/attribute tag.


I suspect people might read into this tag inappropriately, e.g. associating the user with the skills/experiences/perspectives of a SWE, when the person may not be a SWE at all or have any of those traits.

Also, as a UX matter, a publicly visible side effect of a subscription seems unexpected and out of the norm.



It's still there!

This association is harmful for my EA Forum experience. Pls fix senpai.

This week, someone correctly used "race condition" in a reply. 

If you click on your name in the top right corner, then click edit profile, you can scroll down and delete tags under "my activity" by clicking the x on the right side of each block.

Yes! Thank you!

Wait!

Unfortunately, this doesn't work; it just unjoined me.


Hooray. Love the yellow six-legged triceratops. (Or is that one a biceratops? hmm or diceratops?)

Important clarification: the bi?ceratops has five feet! Much like a Lamassu.

The welcome message for the subforums seems like it should be dismissible, as it is too much of an attention grabber to be there all the time.

Below is one case where EA Forum dark mode might be failing.

Might be a minor CSS / "missed one case" sort of thing.

https://forum.effectivealtruism.org/posts/A5mh4DJaLeCDxapJG/ben_west-s-shortform

The forum feels a bit cleaner these last few days. Maybe because there are fewer pinned / curated posts. I think I still like the <https://www.lesswrong.com/> or <https://ea.greaterwrong.com/> frontpages better (no pinned posts, or just one).

Personally, I've moved to mostly using <https://newsboat.org/> (an RSS reader) to browse new posts, rather than visiting the frontpage. 

I don't think that the above implies any particular course of action for forum maintainers, because they might be optimizing for the general majority. But for readers who really like clean interfaces, these other frontends might be worth looking into.
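
(For anyone who'd rather script this than install a dedicated reader, below is a rough sketch of the same idea in Python. It assumes the Forum serves an RSS feed at /feed.xml and uses the third-party feedparser package; both are assumptions worth double-checking.)

```python
# Rough sketch: print recent EA Forum posts from its RSS feed.
# Assumes the feed is served at /feed.xml (adjust the URL if the Forum uses a
# different path) and that feedparser is installed: pip install feedparser
import feedparser

FEED_URL = "https://forum.effectivealtruism.org/feed.xml"  # assumed feed location


def list_recent_posts(limit: int = 10) -> None:
    """Print the title and link of the most recent posts in the feed."""
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries[:limit]:
        print(entry.title)
        print(f"  {entry.link}")


if __name__ == "__main__":
    list_recent_posts()
```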

Excited about better search!

Two questions regarding license:

  1. Maybe the license/copyright info should be mentioned somewhere prominent, like at the bottom of every page? I can't see anything of the sort on mobile.
  2. What's the current license that we're transitioning away from?

Thanks! 

  1. The new license requirement doesn't start until December 1; when it does, we will think through how to display it. My current guess is that it shouldn't be very prominent because it's not something that most viewers will care about. Interested to hear pushback on that if you disagree.
  2. There is no license currently.