Epistemic status: Motivated by the feeling that there's something like a missing mood in the EA sphere.  Informed by my personal experience, not by rigorous survey.  Probably a bit scattershot, but it's already more than a month after I wanted to publish this.  (Minus this parenthetical, this post was entirely written before the Bostrom thing.  I just kept forgetting to post it.)

The last half year - the time since I moved to Berkeley to work on LessWrong, and consequently found myself embedded in the broader Bay Area rationality & EA communities - has been surprisingly normal.

The weeks following the FTX collapse were, admittedly, a little less so.

One thing has kept coming up, though.  I keep hearing that people are reluctant to voice disagreements, criticisms, or concerns they have, and each time I do a double-take.  (My consistent surprise is part of what prompted me to write this post: both those generating the surprise, and those who are surprised like me, might benefit from this perspective.)

The type of issue where one person has an unpleasant[1] interaction with another person is difficult to navigate.  The current solution of discussing those things with the CEA Community Health team at least tries to balance the concerns of reducing both false positives and false negatives; earlier and more public discussion of those concerns is not a Pareto improvement[2].

But most of the reluctance I hear about stems from other fears: that you will annoy an important funder by criticizing ideas that they support, or by raising concerns about their honesty given publicly-available evidence, or something similar.  And the degree to which these fears have shaped the epistemic landscape makes me feel like I took a wrong turn somewhere and ended up in a mirror universe.

Having these fears - probably common!  Discussing those fears in public - not crazy!  Acting on those fears?  (I keep running face-first into the fact that not everybody has read The Sequences, that not everybody who has read them has internalized them, and that not everybody who has internalized them has externalized that understanding through their actions.[3])

My take is that acting on those fears - by not publishing that criticism, or not raising those concerns, with receipts attached - is harmful[4].  For simplicity's sake, let's consider the Cartesian product of the options:

  • to publicize a criticism, or not
  • the criticism being accurate, or not
  • the funder deciding to fund your work, or not

The set of possible outcomes:

  1. you publicize a criticism; the criticism is accurate; the funder funds your work
  2. you publicize a criticism; the criticism is accurate; the funder doesn't fund your work
  3. you publicize a criticism; the criticism is inaccurate; the funder funds your work
  4. you publicize a criticism; the criticism is inaccurate; the funder doesn't fund your work
  5. you don't publicize a criticism; the criticism is accurate; the funder funds your work
  6. you don't publicize a criticism; the criticism is accurate; the funder doesn't fund your work
  7. you don't publicize a criticism; the criticism is inaccurate; the funder funds your work
  8. you don't publicize a criticism; the criticism is inaccurate; the funder doesn't fund your work

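For readers who think in code, here is a minimal Python sketch (purely illustrative; the variable names are mine, not the post's) that enumerates the same eight outcomes as the Cartesian product of the three binary choices:

```python
from itertools import product

# The three binary choices from the list above, in the same order.
publicize_opts = ["you publicize a criticism", "you don't publicize a criticism"]
accuracy_opts = ["the criticism is accurate", "the criticism is inaccurate"]
funding_opts = ["the funder funds your work", "the funder doesn't fund your work"]

# itertools.product yields the Cartesian product: 2 x 2 x 2 = 8 outcomes,
# with the rightmost option varying fastest, matching the numbering above.
for i, (p, a, f) in enumerate(product(publicize_opts, accuracy_opts, funding_opts), start=1):
    print(f"{i}. {p}; {a}; {f}")
```
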
What predicted outcomes are motivating these fears?  2 and 4 are the obvious candidates.

I won't pretend that these are impossible, or that you would necessarily see another funder step in if such a thing happened.  You could very well pay costs for saying things in public.  I do think that people overestimate how likely those outcomes are, or how high the costs will be, and underestimate the damage that staying silent causes to community epistemics.

But I will bite the bullet: assuming the worst, you should pay those costs.  In the long run, you do not achieve better outcomes by pretending to have beliefs other than those you have, in order to extract grant money from intolerant funding sources.

If your criticism is accurate, and a potential source of funding decides not to fund you when they otherwise would have because of it, the only way for others to orient and react to that defection is for them to see the criticism[5] and the subsequent lack of funding[6].

If your criticism is not accurate, and a potential source of funding decides to not fund you as a result, the details end up being pretty important.  From the funder's perspective, the "best" possible reason for that kind of decision is if the criticism betrays serious intellectual or epistemic failure by the critic.  This might happen in the least convenient possible world, but in practice I think most such fears, when coming from good-faith actors, are the product of imposter syndrome.  (Needless to say, grifters and other bad actors are correct to have such fears.  Making EA more robust to adversarial forces is another excellent reason for being forthright about one's honest opinions.)

Then there are criticisms one might fear would incur more indirect social costs.  Take as examples this and this.  Let me also take this opportunity to put my money where my mouth is, by making public a disagreement I have[7].  I do not think we should be inviting AI hardware capabilities organizations to EAG(x) career fairs.  The proposed theory of change[8] suffers, on priors, from being dominated by the first-order effects of having better AI hardware.  If you want to subsidize hardware for alignment research, doing it by starting a general AI hardware capabilities organization seems deeply perilous.  Just start ~~a crypto exchange~~ literally any other startup!

There are some practical takeaways from adopting this stance:

  • It is important to support someone who pays costs as a result of publishing rigorous, well-motivated criticism.  Relying on an after-the-fact process to catch you if you step on a broken stair is scary enough; knowing that there's no net at all renders this hardly more useful than yelling into the void.  There will be those laudable individuals willing to take the leap regardless, but it is simply good policy to support those who pay costs to generate positive externalities.  I'm not really sure what this support looks like, and there are obvious difficulties with trying to formalize anything here, but it would be good for something to exist.  Maybe after-the-fact prizes to those who proffered early EA-related criticisms, which were ignored/misunderstood/rabbit-holed at the time, but have since been integrated?  I believe this has happened at least once but can't currently find a reference.
  • You should carefully consider the price of your silence.  There are good reasons to be able to credibly promise that you will keep certain things secret, but many conversations end up happening in a totally unnecessary regime of secrecy out of social inertia.  NDAs are probably much more expensive than naive calculations would suggest.
  • Correspondingly, defaults are very important.  I claim that you should default to openness.  This forces you to be explicit about what you agree to keep secret, and reduces ambiguity about other people's expectations of you (which in turn reduces your own mental overhead for tracking those expectations).
  • Notice when you are flinching away from considering a specific course of action.
  • It helps a lot to be resilient to the "things go to shit because you decided you weren't going to stay quiet about something bad" scenario.  One reason I might be surprised by the reports I hear of self-suppressed criticisms is some mixture of the typical mind fallacy and the fundamental attribution error.  My realistic worst-case outcome, assuming I somehow managed to piss off everyone doing hiring and funding in domains I consider important, is that I have to give up on direct work entirely.  I switched to direct work as a mid-career software engineer with significant prior experience in industry, and if my former employer's fortunes have changed enough that they no longer want me back, I'm not concerned about my ability to find another industry role, nor am I under any meaningful time pressure to do so.  My friends and family in LA would also be quite glad to have me back.  Don't misunderstand: this would suck a lot.  But it would suck because of what it implied about my ability to effect the kind of change that motivated me to make the jump in the first place, not because I'd be totally bereft of social and professional opportunities as a result.  The same is not necessarily true of many others, and I expect that makes it much harder when such a situation arises.

The main thing I want this post to accomplish for readers is to raise to the level of conscious awareness the existence of these dynamics, and to hopefully let them notice in real-time if they ever run into them.  It's much easier to choose to do the right thing when you consciously notice the choice in front of you.

  1. ^

    Or worse!

  2. ^

    Though I can conceive of a case that it'd be a Kaldor-Hicks improvement.

  3. ^

    Not everybody needs to read The Sequences to understand why being honest and not submitting to blackmail & extortion are both critical to establishing healthy equilibria in communities.  I, personally, did become more scrupulously honest after internalizing those lessons.

  4. ^

    I'm not totally sure how I relate to setting the zero point, or to what things should be considered supererogatory.  In this case I chose the word "harmful" because it feels like the correct frame due to the background context, but I don't think a different choice would be crazy.

  5. ^

    Which has the advantage of being accurate, and therefore astute observers are more likely to consider it correct!

  6. ^

    Unfortunately probabilistic, but since "obviously good" ideas tend to be slam dunks across multiple funding sources, it wouldn't take many such cases to establish a pattern.

  7. ^

    The originating thought came from someone else, but I agree with it.

  8. ^

    Which, to be clear, is not coming directly from the organization, so may not be an accurate representation of their views.

Comments (13)

I do agree with you that silence can hurt community epistemics.

In the past I also thought that people who worried about missing out on job and grant opportunities if they voiced criticisms on the EA Forum were overestimating the risks. I am ashamed to say that I thought this was merely a result of their social anxiety, and pretty irrational.

Then last year I applied to an explicitly identified longtermist (central) EA org. They rejected me straight away with the reason that I wasn't bought into longtermism (as written up here, which is now featured in the EA Handbook as the critical piece on longtermism...). This was perfectly fine by me; my interactions with the org were kind and professional, and I had applied on a whim anyway.

But only later did I realise that this meant the people who say they are afraid to be critical of longtermism (and potentially other bits of EA) because they are worried about losing out on opportunities were more correct than I previously thought.

I still think it's harmful not to voice disagreements. But evidently there is more of a cost to individuals than I thought, especially to those who are financially reliant on EA funding or EA jobs, and I was unreasonably dismissive of this possibility.

I am a bit reluctant to write this. I very much appreciated being told the reason for the rejection and I think it's great that the org invested time and effort to do so. I hope they'll continue doing this in the future, even if insufficient buy-in to longtermism is the reason for rejection.

I’d also like to point out this post as related to the topic of speaking your mind etc: https://forum.effectivealtruism.org/posts/qtGjAJrmBRNiJGKFQ/the-writing-style-here-is-bad

To my mind the artificial academic tone here makes people feel the stakes are much higher than they should be for an online discussion forum. Also, people who have English as a second language likely have much more insight into the blunders EA is making, especially when it comes to messaging our ideas to the public.

By selecting for people who talk in strict academic tones and with an overwrought style, I’d imagine we lose a lot of legitimate opinions that could help us course correct.

To the extent that writing on the EA Forum matters to EA decision-making, it has high stakes. Opinions that would actually allow the EA community to course-correct have stakes worth millions of dollars.

Is there an EA org that makes sure anyone in a position of power within the movement understands the unbelievably immense value of Psychological Safety? If not, it should be a thing. Happy to help.
