I want the forum to remain a place with open discussion, where upvotes and attention track argument quality and usefulness. I suspect that seeing people's names doesn't help foster open discussion or an author-agnostic allocation of attention, though I have no concrete evidence or analysis to offer.

There are also different options for how to anonymize. It could be an optional add-on, it could be hidden text that unhides when you click, it could apply only to posts that present arguments, etc. We could also just run a week of anonymization and see how it goes.

One last point: if the concern is getting credit for your work, we could still all have accounts under our names that collect all of our comments, but you would have to click directly on a profile to see which comments and posts that person had made.

Interested to hear whether people feel similarly, or a steel-man for why names are important / why anonymization would be harmful.


I find names helpful to decide what to read and to associate ideas with people. If you want to hide names on your end, you can view the forum through https://ea.greaterwrong.com and click the anti-kibitzer button on the right side of the page.

Oh awesome thanks for the link, did not know that. 

I find names helpful for similar reasons. I'm curious how much more productive you feel it makes you (vs. a counterfactual where you have to click through to an account to see the author's identity), though it might be hard to give a concrete answer.

It's sort of interesting that the thing you like about names is also the thing I think could cause problems. 

Using names to sort content could improve epistemics if you sort people well, but it could also make them worse if you sort badly. Personally, I'm not confident my own views of the people in this community are well-founded. Plus, people with overall bad epistemics can write good arguments, and vice versa.

I also like to associate ideas with people; I'd be curious whether anyone has evidence on whether this actually helps you learn important things faster.

"If it is an issue of getting credit for your work"

For me it isn't an issue of credit; it's an issue of accountability. It's easy to write fringy or even harmful things when you're anonymous.

I also think EA is likely to be taken more seriously if outsiders see that EAs are willing to stake their real-world status on their comments.

I don't really see an issue with an optional add-on (on the reader side), but I think it's important that names are relatively easily viewable.

Agreed that accountability and bad behavior would be an issue if everything were fully anonymous. I definitely wouldn't be in favor of any sort of full anonymity, more so a very surface-level version, to give people a chance to assess things without preconceived notions. If you could click on the author and see who it was, it really wouldn't incentivize more bad behavior (I think).

I feel more neutral about your point about outsiders respecting the status-staking, because I don't think there are many "swing voters" spending time on the forum, nor do I think public names would be make-or-break for most of those people. But this response from me is complete speculation, and ultimately we would have to see the data.

I think curious looks from the outside will grow the more people, money, and power the movement has access to. But otherwise I mostly agree.

I use the LessWrong anti-kibitzer to hide names. All you have to do to make it work on the EA Forum is change the URL from lesswrong.com to forum.effectivealtruism.org.
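For anyone who would rather roll their own than adapt the anti-kibitzer, here is a minimal userscript-style sketch that hides author names client-side. The selector below is a placeholder guess, not the EA Forum's actual class name; you would need to inspect the page and substitute the real selectors, which can change between site updates.

```typescript
// ==UserScript==
// @name   Hide EA Forum author names (sketch)
// @match  https://forum.effectivealtruism.org/*
// @grant  none
// ==/UserScript==

// NOTE: ".UsersNameDisplay-userName" is a hypothetical selector used for
// illustration only; inspect the page and replace it with the real one.
const NAME_SELECTOR = ".UsersNameDisplay-userName";

// Blank out every matching element while keeping the page layout intact.
function hideNames(root: ParentNode): void {
  root.querySelectorAll<HTMLElement>(NAME_SELECTOR).forEach((el) => {
    el.style.visibility = "hidden";
  });
}

// Hide names already on the page, then keep hiding names in comments and
// posts that load dynamically.
hideNames(document);
new MutationObserver(() => hideNames(document)).observe(document.body, {
  childList: true,
  subtree: true,
});
```

Since this only hides names in your own browser, you can still click through to a profile when you actually want to know who wrote something.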

Summary: Doing this in practice is almost literally impossible.

The goal of having the EA Forum be a platform where discussion is open and arguments are evaluated on their quality and usefulness, impartially to whoever wrote them, can only be achieved by other means.

It's not a matter of steelmanning why it might be harmful. I would be okay with anonymizing all user accounts on the EA Forum too, but it's not a matter of opinion, because any attempt to do so is practically guaranteed to fail. The EA Forum serves other necessary functions that are (considered) just as important.

Every EA-affiliated organization or project uses the EA Forum for public communications and requests for feedback, fundraisers, job postings, and quarterly/annual reports. There is no way all that content can be posted on the EA Forum without most of the authors being identifiable as staff members of whichever organization a post is about.

If everyone stopped publishing those kinds of posts on the EA Forum, there would be almost no content left. Discussions and arguments on the EA Forum are almost all tied to posts whose authors can't be anonymized, on the EA Forum or anywhere else.

I've also thought about this as a way to minimize bandwagoning and "hero worshiping[1]." I remember seeing posts by well-known and influential people that I thought were fine/good, but not amazing/great, and within a few hours of posting they already had a lot more upvotes than a fine/good post by a less well-known author would get. I don't have specific examples off the top of my head, but I can easily imagine two posts of equivalent value/quality getting very different engagement based on who the author is.

  1. ^

    I really don't like this term, and my guess is that well-known and influential people in EA also don't like this term being used to refer to them. But I can't think of a word that means something roughly similar at a lower intensity.

This is a really cool idea and I think your proposed solutions are really neat! I've been thinking of ways to make posting on the forum less scary, and anonymous posting came up as a way of doing that, but I never thought about its value in de-clouting readers' perceptions of pieces.
