
TL;DR: “More posts like this one” recommendations at the bottom of posts, design changes to the Frontpage and to surface better info on users’ profiles, and more.

A longer summary:

We’re trialling recommendations at the end of posts as a way to help you find content that is more suited to your interests. This will only appear for a small fraction of users currently, but you can opt in if you want to try it (see below).

On the design side, we have made a few changes to give more context on other users. We have started showing more info about authors (including profile images!) when you hover over their username. We have also added cute little icons next to the names of new users and post authors in the comments of their posts.

It's been a while since our last feature update, so there are a fair few other changes to go through.

Recommendations at the end of posts

Currently on the Forum it's somewhat hard to find high quality posts that are relevant to your specific interests. The Frontpage is weighted by recency + karma, which tends to mainly surface new posts that everyone likes. Topic filters[1] help with this to an extent but:

  1. Not that many people use them
  2. You may not know ahead of time exactly which topics you will be interested in

We are trying to solve this in the only way tech companies know how: with a recommendation algorithm. We’re starting with post pages because the majority of traffic to the Forum goes directly to posts, and currently there isn’t an obvious “next thing to read” once you have finished a post.

This is being trialled for 10% of users initially. If you would like to opt in[2] you can go to this page and select "Recommendations on the posts page" in the final dropdown. We expect to make a version of this live for everyone soon.

The recommendations box we have added at the bottom of the page looks like this:

The recommendations are similar posts that you haven’t read before. The main factors that go into selecting them are:

  • Being upvoted by the same users that upvoted the post you are on
  • Being tagged with the same topics
  • Karma
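
For the curious, here is a rough sketch of how signals like these could be combined into a single ranking score. This is purely illustrative: the interfaces, weights, and similarity measure below are assumptions made for the example, not the Forum’s actual recommendation code.

```typescript
// Illustrative only: field names and weights are assumptions, not the Forum's code.
interface CandidatePost {
  id: string;
  karma: number;
  topicIds: string[];   // topics the post is tagged with
  upvoterIds: string[]; // users who upvoted the post
}

// Jaccard similarity: size of the overlap divided by the size of the union.
function jaccard(a: string[], b: string[]): number {
  const setA = new Set(a);
  const overlap = b.filter((x) => setA.has(x)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : overlap / union;
}

function scoreCandidate(current: CandidatePost, candidate: CandidatePost): number {
  const upvoterOverlap = jaccard(current.upvoterIds, candidate.upvoterIds); // shared upvoters
  const topicOverlap = jaccard(current.topicIds, candidate.topicIds);       // shared topics
  const karmaTerm = Math.log1p(Math.max(candidate.karma, 0));               // diminishing returns on karma
  return 3 * upvoterOverlap + 2 * topicOverlap + 0.5 * karmaTerm;           // weights are made up
}

// Recommend the highest-scoring posts the reader hasn't already read.
function recommend(
  current: CandidatePost,
  candidates: CandidatePost[],
  readIds: Set<string>,
  n = 3
): CandidatePost[] {
  return candidates
    .filter((p) => p.id !== current.id && !readIds.has(p.id))
    .sort((a, b) => scoreCandidate(current, b) - scoreCandidate(current, a))
    .slice(0, n);
}
```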

Design changes

Context on other users: icons by usernames and new profile hover previews

We have cleaned up and added more info to the preview of the user’s profile that appears when you hover over someone's name, including showing their profile image:

And we added these icons for new users (the green sprout) and the author of the post you are reading (the grey person-with-quill icon):

The two changes here are aimed at giving you more context on other users when you are casually scrolling around the Forum, and generally making the Forum seem (slightly) more friendly.

Frontpage changes (shortform!)

We have added a section for shortform posts to the Frontpage and simplified the “Classic posts” section (formerly called “Recommendations”):

Shortform has been a somewhat sidelined feature for a long time. Some people do use it and I think the things they post are great. But it was (and still is) relatively hard to find. We are experimenting with more changes to give shortform more prominence in the near future.

We also found that the “Recommendations” section was not being used much. We have simplified it, renamed it to "Classic posts" (which is closer to what it actually is), and hidden it by default for logged-in users (you can bring it back by toggling the arrow in the heading for that section).

A brief update on “Community” posts

A big theme of the previous few updates was that we were thinking about what to do with "Community" posts. Community posts tend to get systematically more karma (for reasons discussed here), which means that when they’re on the Frontpage together with other posts, they will crowd the other posts out whether people want to read those or not. People have also reported getting sucked into reading discussions on Community posts when they didn’t endorse this. We had moved them to a separate section, and later collapsed this section and removed comments on Community posts from the Recent Discussion feed under the Frontpage to try to address this.

Since then, the relative amount of engagement on community posts has gone down a lot from the peaks it had reached over the previous few months, possibly due to our change or possibly due to a natural lull in community discussions. We're not sure this is the best long-term solution though. Personally, many of my favourite posts of all time are community posts, and I’m a bit sad to see that they are now getting less attention.

We have added community posts with under 10 comments back into the "Recent discussion" feed as a way to try and keep community discussion as a central part of the Forum without it getting out of hand, and we may make some more changes in this direction in the near future. We're very interested in any thoughts you might have about this.

A separate site for bots

We have an open API on the Forum, and people can and do set up bots to scrape the site for various reasons. This has been causing a few performance issues recently (and in fact for a fairly long time), so we have set up a separate environment for bots to use. This is exactly the same as the regular Forum, with all the same data, just running on different servers: https://forum-bots.effectivealtruism.org/

Shortly after this is posted we’ll start blocking bots from the main site and redirecting them to this site instead.
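
If you run a scraper yourself, a minimal sketch of pointing it at the bots mirror might look like the following (TypeScript, using the built-in fetch). The GraphQL query shape here is an assumption for illustration; check the Forum’s GraphQL schema for the exact fields your use case needs.

```typescript
// Minimal sketch of querying the bots mirror directly instead of the main site.
const BOTS_GRAPHQL_URL = "https://forum-bots.effectivealtruism.org/graphql";

async function fetchRecentPostTitles(): Promise<void> {
  const response = await fetch(BOTS_GRAPHQL_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Identify your bot so the Forum team can contact you if needed.
      "User-Agent": "my-research-bot/1.0 (contact: you@example.com)",
    },
    body: JSON.stringify({
      // Illustrative query; confirm the actual schema before relying on it.
      query: `{ posts(input: { terms: { view: "new", limit: 5 } }) { results { _id title } } }`,
    }),
  });
  const { data } = await response.json();
  console.log(data);
}

fetchRecentPostTitles();
```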

Cookies

If you live in the UK or EU[3] you will now have to explicitly accept the use of cookies. This enables a few things, such as Google Analytics and remembering whether you have toggled various sections open or closed. You can read our full cookie policy here.

Assorted other changes

  • There is a rudimentary read history page(!) which shows the most recent posts you have clicked on
  • Comments on posts are now sorted by “new & upvoted” by default (rather than “top”)
  • Footnotes will now be collapsed if there are more than 3 of them
  • Lots of other design tweaks:
    • The top of core topic pages have been redesigned
    • The comment box is now a lot simpler
    • Buttons are now in sentence case rather than upper case

Please give us your feedback

We’re always interested in getting feedback on the changes we make! You can comment on this post with your thoughts or contact us another way.

  1. ^

     The “customize feed” button on the frontpage — see more here.

  2. ^

    This footnote is only here to be used as a link

  3. ^

     If you saw it outside these places, then I’m sorry

Comments (6)



I would suggest keeping the recommended posts optional. I like them a lot, but I worry they might keep me on the forum too long. They can definitely be on by default.

Thanks for the suggestion! We'll add a user setting for this 👍

We have an open API on the Forum, and people can and do set up bots to scrape the site for various reasons. This has been causing a few performance issues recently (and in fact for a fairly long time), so we have set up a separate environment for bots to use. This is exactly the same as the regular Forum, with all the same data, just running on different servers: https://forum-bots.effectivealtruism.org/

Also for the /graphql endpoint? Anyways, moved forum.nunosempere.com to that endpoint. But I'd expect this to be a small burden to others running their own stuff.

Yes, this applies to all requests, including /graphql. If the user agent of the request matches a known bot, we will return a redirect to the forum-bots site. Some libraries (such as Python requests and fetch in JavaScript) automatically follow redirects, so hopefully some things will magically keep working, but this is not guaranteed.

I appreciate that this is annoying, and we didn't really want to do it. But the site was being taken down by bots (for a few minutes) almost every day a couple of weeks ago so we finally felt this was necessary.
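
For bot authors who want to verify where their requests end up, here is a small sketch that follows the redirect and reports the final host, so you can update hard-coded URLs once and skip the extra round trip. It assumes a standard HTTP redirect and the built-in fetch; the exact bot-detection behaviour may differ from this.

```typescript
// Follow any redirect and report the origin the request finally landed on.
async function resolveForumHost(url: string): Promise<string> {
  const response = await fetch(url, {
    redirect: "follow", // fetch's default, shown here for clarity
    headers: { "User-Agent": "my-research-bot/1.0 (contact: you@example.com)" },
  });
  if (response.redirected) {
    console.warn(`Redirected to ${response.url}; point your bot there directly.`);
  }
  return new URL(response.url).origin;
}

// Example: resolveForumHost("https://forum.effectivealtruism.org/") might
// return "https://forum-bots.effectivealtruism.org" for a bot user agent.
```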

Following up on this: we've expanded the Community section on the Frontpage to show 5 posts instead of 3. Nothing else should have changed with this section right now. 
