
Context

Saulius commented that he was told off by Rethink Priorities’ management because he:

Wrote the sentence “I don’t think that EAs should fund many WAW [wild animal welfare] researchers since I don’t think that WAW is a very promising cause area” in an email to OpenPhil [context]

Feel free to check the thread for more context.

Questions

To better understand how common experiences like Saulius’ are, you can fill in this form to answer the following questions:

  • How much pressure do you feel against publicly expressing views which do not conform to those of your manager or organisation?
    • This is a multiple-choice question with 5 options: no, mild, moderate, significant and extreme pressure. Details about the meaning of each of these are in the form.
    • Strictly speaking, only individuals have views, but you can think of those of your organisation as the view of the median person working there, weighted by seniority (as proxied by e.g. annual salary; see the sketch after this list).
  • What is your name?
  • What is your position?
  • Who is your manager?
  • What is your organisation?
  • What is your experience?
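For illustration, below is a minimal sketch (in Python, with hypothetical names and numbers) of the aggregation suggested in the 1st question: proxying an organisation’s view as the salary-weighted median of its staff’s individual views.

```python
# Minimal sketch with hypothetical data: proxy an organisation's view as the
# salary-weighted median of individual views (here encoded on a 1 to 5 scale,
# e.g. from "strongly disagree" to "strongly agree" with some claim).
def weighted_median(values, weights):
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    cumulative = 0.0
    for value, weight in pairs:
        cumulative += weight
        if cumulative >= total / 2:
            return value

# Hypothetical individual views and annual salaries (as a proxy for seniority).
views = [2, 3, 4, 4, 5]
salaries = [40_000, 50_000, 60_000, 90_000, 120_000]
print(weighted_median(views, salaries))  # 4
```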

All questions are optional, so you can remain anonymous. That being said, the more you share, the easier it will be to identify you. To minimise selection bias, I encourage you to answer at least the 1st question. You are also welcome to share your responses below as answers to this post.

Feel free to check the submitted responses. You can update your own by submitting the form again.

For reference, you do not need to read the rest of the post to answer the questions.

My thoughts on transparency

Here are my thoughts on transparency:

  • Open truth-seeking and collaborative spirit are among the values at the heart of effective altruism, and openness, integrity and collaborative spirit are among the guiding principles of the Centre for Effective Altruism. I believe these values and principles are great heuristics to contribute to a better world, and that strong evidence is required to go against them.
  • I think pressure to withhold views that do not conform to those of one’s manager or organisation is mostly detrimental (although not always), as long as:
  • I suppose it is worth discussing controversial topics like The Meat Eater Problem given the above are satisfied (relatedly).
  • I would say people should be encouraged to share their views publicly and privately with funders, especially if these differ from those of their manager and organisation. In this case, the views will arguably be less widely known, and therefore update others more. Relatedly, I think:
    • Donor engagement should not fall solely under the purview of senior management.
    • Non-disparagement clauses in contracts should mostly apply in cases where it is clear that the person sharing the information is not being honest.
    • One should mostly focus on the truthfulness of the information being shared rather than how it will influence the funding and reputation of one’s organisation.
    • People will in expectation update towards the truth, and I expect this to be good.
      • In particular, I trust funders’ ability to account for the potential biases of controversial information.
      • Oliver Habryka, fund manager of the Long-Term Future Fund (LTFF), and very much involved in Lightspeed Grants, commented the following. “I feel very capable of not interpreting the statements of a single researcher as being non-representative of a whole organization, I expect other funders are similarly capable of doing that”[1].
  • I wish people working at the same organisation discussed their disagreements more publicly, as they are often quite knowledgeable about the topics at hand (see examples in the comments of this and this post).
  • I understand organisations having open discussions could harm their reputation, but I do not see this as necessarily bad[2]. For example, if it comes to light that:
    • An organisation has systematically overestimated the cost-effectiveness of its work, it makes sense to start trusting its cost-effectiveness analyses less. In turn, this will correct biases and contribute towards a better allocation of resources across different organisations.
    • There is wide disagreement about the cost-effectiveness of certain projects among people working at the organisation, the organisation will not look particularly coherent. On the other hand, not compressing all the views into one means there is more information, so funders will arguably be better positioned to assess which projects they find more impactful.
  • Open discussion may erode unity within organisations, thus distracting from work on their core mission. This is a drawback which has to be kept in mind. Yet, my hope is that unity can be extended from the organisational to the community and global level. Transparency is arguably key for this.
  • The usefulness of open discussions depends on power dynamics.
  • There is a coordination problem around sharing information which does not favour one’s own organisation, but I guess open discussions help solve it.
    • I appreciate organisations being unusually transparent about the potential downsides of their work could put themselves at an unwarranted disadvantage relative to others.
    • However, I would rather have a race to the top where organisations try to share the relevant information about their work than one where they try to maintain a non-ideal equilibrium by sharing as much as the typical organisation.
    • Open discussions will tend to bring about more open discussions, thus reinforcing truth-seeking norms, which I consider good. As an example, I am only posting this thanks to Saulius’ comment.
  • I wonder how organisations and individuals can be incentivised to share information which (naively) does not seem to benefit them. As a starting point, I encourage organisations to internally discuss and publicly share their formal/informal policy around sharing potentially controversial information, like Rethink Priorities’ co-CEO Marcus Davis did.
  • Anonymous accounts have been used to share (often thoughtful) controversial information (e.g. Omega’s posts).

My experience

I am a research associate at Alliance to Feed the Earth in Disasters (ALLFED), for which I work as a contractor. I have publicly expressed views which do not conform to those of my organisation nor manager. My sense is that both ALLFED and my manager would have preferred it if I had not:

I felt mild pressure to conform, which I guess came from status quo bias, “a preference for the maintenance of one’s current or previous state of affairs, or a preference to not undertake any action to change this current or previous state”. It is unusual for people to be critical of the work of their organisations. By doing so, I perceived myself as going against the external and internal status quo.

My guess is that the median monthly active user of the EA Forum would not have done any of the above conditional on having my object-level views, but with their own personality and takes on transparency. However:

  • I am glad there have been internal and public discussions about the above among people working at ALLFED.
  • I have not had any trouble renewing my contract.
  • I still enjoy working for ALLFED, although I now think the cost-effectiveness of work to decrease famine deaths due to global catastrophic food failures is much lower than when I joined.

Acknowledgements

Thanks to Anonymous Person 1, Anonymous Person 2, Anonymous Person 3, Anonymous Person 4, Anonymous Person 5, Farrah Dingal, Oliver Habryka, Pedro Amaral Grilo, Saulius Šimčikas and Sonia Cassidy for feedback on the draft[3]. Thanks to ChatGPT 4 and Pedro for feedback on whether I should publish the post.

  1. ^

     I think the “not” in this sentence was mistakenly included.

  2. ^

     I actually have a hard time coming up with examples where I thought an organisation had better not be discussing something in public. However, feel free to point to examples you are aware of.

  3. ^

     Names ordered alphabetically.

Answers

I incorrectly posted something as an answer

[This comment is no longer endorsed by its author]

Thanks for commenting, Juan! I think it would be better to share that as a comment instead of an answer.

Comments

As a government employee, I have a duty to speak candidly internally and not to share restricted information externally. I suspect most organisations have weaker but similar norms about the difference between how you speak to colleagues and externals.

Thanks for sharing, Kirsten!

Sorry, did this always say externally? Maybe I just need better reading comprehension!

I have not changed the text since I posted.

Okay, sorry for misreading, the poll makes much more sense now! I've edited the first part of my comment as it doesn't make much sense.

There is some nuance to the case that seems to get overlooked in the poll. I feel completely free to express opinions in a personal capacity that might be at odds with my employer, but I also feel that there are some things it would be inappropriate to say while carrying out my job without running it by them first. It seems like you're interested in the latter feeling, but the poll is naturally interpreted as addressing the former.

Thanks for commenting, Derek!

It seems like you're interested in the latter feeling, but the poll is naturally interpreted as addressing the former.

I think both the types of pressure you mention are interesting. Feel free to elaborate on your experience in the answer to the question "What is your experience?".

Small question: Do you want anyone to fill in the form or only people working for EA related organisations?

Thanks for asking, Jeroen! From my perspective, everyone is welcome to fill in the form. People can then specify the name of their organisation, just whether or not it is aligned with effective altruism, or neither of these.

Thank you, Vasco, for this, and also for the way you went about this. 

Transparent discussions around any issue are important to have and generally helpful, especially when motivated by truth-seeking and creating a better world.

It is something to be generally encouraged and not to limit or curtail. 

As you have yourself discovered, we are quite willing to engage in such conversations at ALLFED (on both personal and organisational level). You have suffered no repercussions for voicing not-quite-aligned opinions and, indeed, stimulated a whole bunch of healthy debates.

It is good to acknowledge that such conversations are rarely easy and usually uncomfortable. But the ability to gracefully engage in the difficult and the uncomfortable is in itself something to aspire to and in itself something of a measure of personal and organisational maturity.

In such conversations, a lot depends on the organisations’ - and one’s managers’ - willingness and ability to listen and to be disagreed with, and a lot also depends on the manner in which such issues are brought up. 

Speaking on behalf of ALLFED and with regard to this particular post, we appreciated the heads-up and not being surprised by it. This simple (?) act of care and courtesy can now, in turn, further facilitate conversations, and help build trust in everyone’s good will, and also in our collective ability and competence to have difficult conversations (on whatever subject). 

Personally - and here comes my “personal opinion” piece - I think that periodic shake-ups to any status quo are generally healthy and necessary for growth. I appreciate your courage to pursue your truth, especially given your awareness that this is not “how things normally are done.” 

Thanks, Sonia, I appreciate the way you handled the situation too!

One thing I think is missing from the form IMO -- there can be many forms of pressure. Those include pressure not to express views that don't align with the organization's, pressure not to express controversial views, and pressure not to express one's own personal views in domain-related fields at all.

You seem to be mainly focused on the first type, but the second and the third types often cause or imply the first. I suspect that what to do about a reluctance will differ based on whether it is fundamentally of the first, second, or third type. But if I filled out the survey, I don't think the results would convey that my non-EA org doesn't really want personal commentary on job-relevant issues at all (whether it diverges from the party line or not).

Nice points, Jason!

But if I filled out the survey, I don't think the results would convey that my non-EA org doesn't really want personal commentary on job-relevant issues at all (whether it diverges from the party line or not).

Feel free to elaborate on your experience in the answer to the question "What is your experience?".

Thanks for the context, Juan!

In this post he built a model that estimates that preventing people from starving in close to half the countries of the world after a food shock would be net negative for the long-term future.

I no longer endorse the methodology of that post. Currently, I think:

  • The nearterm extinction risk from nuclear war is so low that it is better to assess interventions which aim to decrease famine deaths in a nuclear winter based on standard cost-benefit analysis (CBA), where saving lives is assumed to be good.
  • From a neartermist perspective, one can assume saving lives is good despite the meat eater problem.

I plan to post about both of the above in the coming weeks or months.

Which, in his defense, is technically not exactly advocating for genocide.

I have changed my mind about many things, but I have always strongly opposed killing people. Deciding not to save statistical lives is very different from killing people:

  • By donating 10 % of one's income to GiveWell's top charities, which save a life for around 5 k$, one could save many lives over one's career (see the rough calculation after this list).
  • However, not doing the above is not as bad as being a serial killer.
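As a rough, illustrative calculation (the income and career length below are assumptions, not figures from the post; the 10 % donation rate and 5 k$ per life saved are from the 1st bullet above):

```python
# Rough illustration with assumed numbers: lives saved over a career by
# donating 10 % of one's income to GiveWell's top charities.
annual_income = 50_000    # $/year (assumption)
donation_fraction = 0.10  # 10 % of income (from the bullet above)
career_years = 40         # working years (assumption)
cost_per_life = 5_000     # $ per life saved (from the bullet above)

lives_saved = annual_income * donation_fraction * career_years / cost_per_life
print(lives_saved)  # 40.0
```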

I also find it interesting that many people support the continuation of wars, thus plausibly increasing deaths, while claiming that can be good longterm (e.g. by supposedly preventing authoritarian countries from gaining power). I generally oppose supporting the continuation of wars based on the simple heuristic that killing people is very bad on priors (relatedly).
