I was positively surprised to find out that I was able to edit my username in the forum to be my full name. As I was previously under the impression that this was impossible, I wanted to share this and encourage users to consider switching to their full names.

The suggestion in the how-to guide is:

In general, we think that real names are good for community bonding, and we encourage you to use yours. But it's not required.

I think this is a good policy. I can imagine cases where using a pseudonym might make it easier to communicate openly without people outside the community being able to connect the post to the author. But for most posters, especially the frequent ones, it seems relatively easy to find out who the author is anyway. After meeting several users at EAGs, I'm building up a mental database where I keep track of real names (used on Swapcard and in emails), forum usernames, and sometimes nicknames and Twitter handles. This extra bookkeeping seems unnecessary.

Connecting names seen in comments and posts to name tags at conferences makes it easier for people new to the community to start conversations based on what they have read. It's also easier to follow along when you hear others refer to people by their real names.

In a growing community that aspires to be welcoming, I think it's a good norm to make it easy for people to learn about the engaged participants. In addition to using a real name, I would also encourage adding a description to your profile. This can include your current organisation, group, university, cause area, or GWWC membership. As with Swapcard at EAG conferences, it helps readers understand where someone is coming from or currently active.

A counter-argument is that readers might defer too much to people with impressive affiliations instead of focussing on the content. I would agree with that. However, many pseudonyms currently seem to be known to engaged members anyway, which leads to unequal levels of knowledge across the community.

Looking at the posts with the highest karma, it's nice to see that many authors already use their real names, and I hope to see more in the future.

Comments (30)



I'd add that, as a person who has done recruiting for EA orgs, I like to try to hire from talented-seeming EA Forum posters, and it is a lot easier to recruit someone when their full name is accessible from their username or bio.

Did you ever search the forum for negative indicators of whether someone is unfit?

As someone who works in your org, I'm confused about how this works in practice fwiw. As far as I understand it:

  • In the main hiring rounds, we have application blinding at approximately all stages except the interview, so having an impressive EA Forum presence, real-name or otherwise, shouldn't matter much.
    • Sometimes we allow people who either performed well in past hiring rounds, or whose competence we're otherwise very confident in, to skip certain stages, but I'm not sure how relevant a real-name vs. pseudonymous EA Forum account is here.
  • Occasionally we try to recruit people off-cycle, but to the best of my knowledge (a) this is rarely due to EA Forum contributions (compared to other contributions) and (b) it's not hard to just ping a username expressing interest. The EA Forum has a messaging system!
  • I'm not aware of many (any?) research hires that were actually made off-cycle.

The main thing would be reaching out to invite people to apply to our hiring rounds.

I do concede we could invite anonymous people to apply though.

You're right that we don't do much off-cycle recruiting.

I do concede we could invite anonymous people to apply though.

I've done this before fwiw.

Can't people simply link to their EA Forum profile, just like linking to their GitHub profile?

I think you mean "your first name" or something like that, rather than necessarily "your full name"? 

My suggested default would be to write your full real name in your bio, fill in other info about you in your bio, and make your Forum name sufficiently related to your real name that people who at one point learned the connection will easily remember it. (As I've done.) 

If one does that, then also making one's Forum name one's full real name seems to add little value, and presumably adds some risk to one's 'real life' reputation if one later wants to pursue a policy/political career or something, since a lot of discussion on the Forum would look pretty weird to a lot of people. (Though I'm not sure how large that risk really is, or how much of it occurs anyway just via the sort of approach I've taken, where my name is in my bio.)

My policy on this, to the extent I have one, is a sort of soft lockdown: I don't mind sharing enough personal info on here that an EA who knows me in real life could figure out my identity, but I need to always have at least plausible deniability in the face of any malicious actor. 

As for the risks in policy careers, I think the risk is very high for appointed jobs and real but lower for elected ones. Politicians are more risk-averse than voters, and when they can pick from a pool of 100, they'll look for any reason to turn you down. When the voters have to pick one of two or a small handful of candidates, they have to make a decision by election day, and maybe they don't care so much about a few mildly controversial statements.

If EA-aligned employers are using people saying smart stuff on here as a basis for hiring, but only if they have a real-name account, I suggest they simply stop arbitrarily eliminating a major portion of their potential talent pool. It's pretty easy to reach out to someone and ask for their identity if you are interested in hiring them.

Hugely seconded. When I was signing up for an account, I considered going anonymous (what if I want to discuss controversial things!), but I figured the upside career & social potential of using my real name outweighed the downside risk that cancel culture might someday come for Effective Altruism. Since then, my decision has been totally vindicated -- numerous people have reached out to me for conversations about EA stuff, or even asked if I'd like to apply for a job at their EA org. I feel like this would have happened less if I weren't using my real name, since people wouldn't be able to take the intermediate getting-to-know-me step of googling my LinkedIn, visiting https://jacksonw.xyz/, etc. That intermediate step of internet research probably makes people more comfortable reaching out and making a connection.

Nah, I wanna be able to speak freely without it affecting my job. 

I changed my display name as a result of this post, thanks!

Me too!

Just throwing another comment here for support, read and changed.

Thanks for the post! We do encourage people to use their real names as their usernames.

Our current policy is that each user can change their own username once[1] - you can do this by going to the Edit Account page and updating your "Display Name".

After that, further changes to your username need to be done by moderators. Please contact us to ask to change your username. :)

  1. Unfortunately we had a bug that took this one chance away for many users. This should be fixed for new accounts going forward, but if you don't see this option in your Edit Account page, then please reach out to us and we will change your username for you.

I changed my username following the advice of Edo Arad. Trust him, he's the founder of Naming What We Can!

I'd be interested to hear if he has something more to say on top of the reasoning here. I was in a discussion with a group of EAs/rationalists recently, and they were all very opposed to the idea, which I tried to support with the arguments here. I came out still confident that it is the right choice for me, but I'm unsure whether I'm confident enough to be prescriptive about it for others.

It would probably help if you'd list out the reasoning (?)


Meta: Do you think this is a situation where one side is correct and the other side is wrong, and you'd better try together to find the "truth"?

Yeah so a big part of it is the simple and straightforward "I don't want a potential employer to be able to assess what I've said all over the forum". 

The second part was more interesting to me, because there was also this argument that a norm of anonymity has some strong benefits, like people feeling able to truly express how they feel on a topic. I think this connects to the sort of "always be polite" heuristic, where it seems like generally speaking the world could use more straightforward, honest responses, and that anonymity is likely to increase this sort of response and is thus good.

I threw out what I felt were the common replies to this: that you probably shouldn't be making a comment if it's at the point where a potential future employer would downrate your quality based on reading it; that anonymity gives free rein to people being inconsiderate, sometimes to a trollish level, which is the opposite of productively honest; and that connections to real people seem important, and using real names seems like it would foster that. But alas, it was all to no avail: the room was still overwhelmingly pro-anonymity in the case of the forum (they conceded that smaller virtual communities could probably drop the anonymity, as it becomes somewhat useless once you get to know all the specific people well).

On the meta note, I think this is a situation where, for maybe 99% of people on the forum, there is probably a better option they could opt for that would trend towards a healthier community. But I'm also very generally against the idea of lots of things just being up to individual circumstance, so this is a rather unsurprising response given my outside thoughts. What do you think, though?

Ah

I agree with the tradeoff of [feeling comfortable to post stuff] vs [being closer to others with their name].

For myself, I try to push myself slowly towards being "open", but I don't want to override my own comfort zone too strongly (also because I know I have 1000 things to fix and I can't work on them all at once).

I also wouldn't want to push others, for similar reasons.

I do endorse what Edo did for me - a small nudge which was no pressure but was enough to get me to think about the question.

I’ve reversed an earlier decision and have settled on using my real name. Wish me luck!

I'm sure you've all seen the EA Hub post that was put up about a month ago. But it's worth re-stating that it's sometimes hard to find a specific person in EA.

I sometimes use the forum when I'm trying to get in contact with people, primarily by searching their name! 

I'd add that having people use their real names adds to the forum looking like a platform for professional discussion, and adds transparency - both of which are important because of the impact and reach we wish to eventually achieve as a movement.

While pseudonyms have some use cases - the main one I can think of is when one may fear retaliation for reporting bad behaviour by another EA or organisation - they should indeed be otherwise extremely discouraged.

Edit: ok, this paragraph was in hindsight somewhat exaggerated, and I can think of a few use cases that may be more common. But I still think anyone using a pseudonym should at least have a good reason in mind.

"Extremely discouraged" seems a bit dramatic. Some of us would rather not have our heavy EA involvement be the first thing that shows up when people Google us.

I don't personally think that's a good reason to not use one's name, but I'll concede my phrasing was indeed a bit too dramatic. It's probably because my experience on the forum is that it's really frustrating not being able to connect other commenters to a human identity.

fair enough 

Have Fun

FYI you can contact the EA Forum team to get your profile hidden from search engines (see here).

I also thought this was impossible, so I ended up creating a new account with my name as my username. In fact, even now I can't see how to do it: I don't see an option in either my profile or my account settings?

Thanks for asking! It sounds like you were affected by our bug, so please contact us and we will update your username for you.
