
A week ago @Rockwell wrote this list of things she thought EAs shouldn't do. I am glad this discussion is happening. I ran a poll (results here) and here is my attempt at a list more people can agree with.

We can do better, so someone should feel free to write version 3.

The list

  • These are not norms for what it is to be a good EA, but rather some boundaries around things that would damage trust. When someone doesn't follow them, we widely agree it is a bad sign
  • EAs should report relevant conflicts of interest
  • EAs should not date coworkers they report to or who report to them
  • EAs should not use sexist or racist epithets
  • EAs should not date their funders/grantees
  • EAs should not retain someone as a full-time contractor or grant recipient for the long term, where this is illegal
  • EAs should not promote illegal drug use to their colleagues who report to them

Commentary

Beyond racism, crime, and conflicts of interest, the clear theme is "take employment power relations seriously".

Some people might want other things on this list, but I don't think there is widespread enough agreement to push those things as norms. Some examples:

  • Illegal drugs - "EAs should not promote illegal drug use to their colleagues" - 41% agreed, 20% disagreed, 35% said "it's complicated", 4% skipped
  • Romance during business hours - "EA should, in general, be as romanceless a place as possible during business hours" - 40% agreed, 21% disagreed, 36% said "it's complicated", 2% skipped
  • Housing - "EAs should not offer employer-provided housing for more than a predefined and very short period of time" - 27% agreed, 37% disagreed, 31% said "it's complicated", 6% skipped

I know not everyone loves my use of polls or my vibes as a person. But consensus is a really useful tool for moving forward. Sure, we can push aside those who disagree, but if we find things that 70%+ agree on, those tend to move forward much more quickly and painlessly. And it builds trust that we don't steamroll opposition.

So I suggest that, rather than a big list of things that some parts of the community think are obvious and others think are awful, we try to get a short list of things that most people think are pretty good/fine/obvious.

Once we have a "checkpoint" that is widely agreed, we can tackle some thornier questions.

Full poll results

[The full poll results were embedded here in the original post.]

Comments
What was the sample size?

59 people.

Thanks! And do you think the sample was representative?

Probably not that representative, no. I guess like 3-6/10

I think these polls would benefit from a clause along the lines of "On balance, EAs should X", because a lot of the discourse collapses into examples and corner cases about when the behaviour is acceptable (e.g. the discussion over illegal actions ending up being about melatonin). I think having a conversation centred on where the probability mass of these phenomena actually is would be important.

I dunno, we're talking about risks worth monitoring or things that are bad signals. That's all about the edge cases.

I don't want a big list of rules of things EAs should or shouldn't do.

I think the should/shouldn't-do list is too binary. To make it onto this list, something needs to be bad in almost all circumstances, which necessarily makes the list narrow.

A list of things that you should think carefully before doing, and attempt to mitigate the downside risks if you decide to proceed, is more useful IMO. This can be broader and cover more of the grey area issues.

Can you suggest how you'd word it?

"Here is a list of behaviors/circumstances that tend to be risky. You should give serious consideration to avoiding these circumstances unless you have reason to believe that the risks don't apply to you. Be very careful if you choose to engage with these."

In my mind I'm thinking that it is roughly parallel to certain sports or certain financial investments: plenty of people come out fine, but the risks are much more elevated compared to the average/norm in that field (compared to the sports that people normally play, or compared to similar investments). I think that the personal circumstances matter a lot: to continue the financial and sport analogy, some people have the discipline to not pull money out of a bear market, or have years of practice walking a tightrope, and thus they are less likely to be hurt/damaged from certain behaviors.

Something like the following (I don't like this wording, but it's the vibe I'm going for):

EAs and EA organizations taking actions on this list should perform a risk analysis. If they decide to proceed, they should put mitigations in place where reasonable and appropriate, and review the risk analysis if necessary; for example, if circumstances change or the situation lasts longer than expected.

I think Rockwell's list was a good basis for discussion, and this poll and post can help move that discussion - but a priori consensus is just one (albeit important) criterion for choosing norms. The expected effect from their adoption or rejection is another.

There should probably be some place and time where this can be discussed with more focus. Something akin to a conditional constitutional convention.

What would that convention look like?

Barring autocorrect (see edit to my comment), I imagine it'd be some collection of EAs who have discussion groups for a week or two on specific topics, and at the same time try to reach consensus in the full group on a set of norms.

I think until we choose that group this is a non-awful way of doing that?

This "not dating funders/grantees" item is a little strange to me as phrased, although I certainly strongly agree in the cases most people are imagining.

As phrased, it sounds like there is a problem (for example) with paying a girlfriend/boyfriend with your own funds to do an extended project, which is sort of weird and unusual, but what exactly is the problem with that? I think what this is getting at is that you shouldn't date a grantee you are deciding to pay with someone else's money or on behalf of a larger organization. Correct?

What "EAs think EAs should do" might not be a great way of dealing with these questions, but it is valuable information. People might also get thrown off by the title when the post seems to care more about signals pointing towards risks (as opposed to monitoring EA behaviour).

Sorry, what would you title it? I was trying to be in the same vein as the first post.

I’d be keen to hear your views and whether they differed from the poll results in any aspects.

I found the 17% of people who agreed that there shouldn't be discussion of polyamory a little upsetting. I doubt they really meant it the way it came across but it felt judgemental. 

I think in general I dislike much of the EA is too weird discussion tonally. As if weirdness is something that's cheap to change rather than very expensive.

I think it is extremely ambiguous what "talking about polyamory" means. For example, I imagine many people (tbh I'd guess more than 17% of EAs) would find it unpleasant if there were regular and unavoidable discussions of whether polyamory* is net bad for society, EA, etc., in their workplace. I'd personally be fine with it if other people are, but there's always going to be a part of me tracking whether people are likely to be non-visibly upset, etc.

Now whether a non-work topic being upsetting means people shouldn't discuss it in the workplace is debatable. I think it'd be too draconian to have workplace rules against it (at least by what I understood to be coastal American norms), but having soft norms against it seems probably preferable.

*other examples that might fall into this category: monogamy, body positivity, feminism, Christianity; I'm sure people can generate other examples.

To be clear, when I voted that talking about polyamory in the workplace is OK, I meant someone telling a coworker about their own life/preferences/experiences.

For context on my own vote: I’d give the same answer for talking about monogamy.

  • People should clearly be able to say "my partner(s) and I are celebrating my birthday tonight" and "it's my anniversary!" and "look at this cute picture of my metamour's dog!" and then answer questions if a colleague says, "what's a metamour?" Just like all colleagues should be able to talk about their families at work.

  • People should be aware that it's risky to spend work time nerding out about dating, romantic issues, sex, hitting on people, etc. People should be aware that mono people in the Bay have often reported feeling pressured or judged for not being poly. But just like with any relationship type, discussing romance at work is very likely to make someone feel uncomfortable, and junior people often won't feel like they can say so.

Maybe this would provide a little more context. Politics, sexual and romantic relationships, money, and religion are topics that are traditionally considered somewhat private in the USA, and are widely viewed as somewhat rude to talk about in public. I would feel fine talking about any of these topics with a close friend, but I wouldn't want to hear a colleague discuss the details of their romantic relationship any more than I want to hear the particulars of their money issues or their faith. Naturally, these norms can vary across cultures, but there is a fairly strong norm not to discuss these topics in a workplace in the USA, at least.

The other big factor that comes to mind for me is the difference between a mere mention in passing and a repeated/regular topic of conversation. On a very superficial level, we are there to work, not to talk about relationships. On a more social/conversational level, I don't want to be repeatedly badgered with someone else's relationship status or romantic adventures. I don't think that polyamory should be a prohibited topic any more than "do you want to have kids someday" or "I'm excited for a date this weekend" should be prohibited. But if any of those are repeatedly brought up in the workplace... Well, I'd like to have a workplace free from that type of annoyance. So (for me at least) it is less about "there shouldn't be discussion of polyamory in the workplace", and more about "there shouldn't be regular and extended discussions of people's personal relationships in the workplace".

  1. I'm assuming that the colleague is an acquaintance, rather than a friend.

I think this is something to be careful of but I think putting it on a risk register or saying people shouldn't do it is a big step. And not what people do with other relationships.

Seems more of a post hoc justification than a coherent position regardless of relationship type.

Talking about a partner's existence or day-to-day life with them is not widely considered private or rude (source: an American). Getting specific about feelings or sex is private, but serious partners come up in a lot of casual ways ("What'd you do this weekend?" "Went roller skating with my girlfriend").

Elizabeth, if the meaning coming across is that I am proposing that the mere acknowledgement of a partner's existence is rude, then I have phrased my writing poorly. I agree that talking about a partner's existence or day-to-day life with them is not widely considered private or rude. It seems that we both agree that mentioning it ("What'd you do this weekend?" "Went roller skating with my girlfriend") is fine, and getting into specifics is more private.

I think maybe the misunderstanding might be focused on what "talking" means. 
