
I’m a little confused about who this is for: I think it’s for anyone who might want thoughts on orienting to the FTX situation in ways they’d most endorse later, especially if they are in a position of leadership or have people relying on them for guidance. It might not be coherent; it's just some thoughts, in the spirit of Scattered Takes and Unsolicited Advice.

This is written in my personal capacity, not as an employee of CEA.

Something I’m thinking about a lot right now is how rationality, values, and judgment can be hardest to use when you need them most. My vision of a community is one that makes you most likely to be your best self in those times. I think I'm seeing a lot of this already, and I hope to see even more.

So for anyone thinking about FTX things, or talking about them with others, or planning to write things in that strange dialect known as “comms”, here’s my set of things I don’t want to forget. Please feel encouraged to add your own in the comments.

Integrity 

  1. There is no party line (I say by fiat) - I want EA to be ok after this, and it’s sure true that there are things people could say that would make that less likely, but I just really really don’t want EA to be a place where people can’t think and say things
    1. I want to give explicit okness to questions and wondering about the judgment, decision quality, or integrity of EA, EAs, and EA leaders, and I don’t want to have a missing mood about people’s understandable curiosity and concern
      1. That said, obviously people in sensitive situations may not answer all questions you’re curious about, and in fact many of them have legal reasons not to.
    2. It is not your job by dint of being an EA to protect “EA the brand”.[1] You may decide that the brand is valuable in service of its goals; you may also have your own opinions on what the brand worth protecting is.
    3. Sometimes I have opinions about what makes sense to share based on confidentiality or other things, but at a broad stroke, I tend to be into people saying the truth out loud (or if my system 1 says different, I want to be into it).
  2. Soldier-iness (here meaning the feeling of wanting to defend “your tribe”) is normal and some of it is tracking real and important things about the value of what we’ve built here. Integrity doesn’t mean highlighting every bad faith criticism. (But also don’t let the desire to protect what is valuable warp your own beliefs about the world)
    1. There are going to be a lot of incentives to pile on, especially if any particular narrative starts emerging, and I also want EA to be a place where you can say “this thing that looks bad doesn’t seem actually object level bad to me for these reasons”, or “Utilitarianism is good, actually” [2] or “EA is/isn’t worse than the reference class on this” or “I think the ways in which EA is different from other movements was a worthwhile bet, even if it added risk” or “I don’t think I know enough about this to have a take.”
      1. Updating your views makes sense, but for the moment you probably have most of the same views you had two weeks ago, and overupdating also lands you in the wrong place
      2. I would be sad if people jumped too quickly to repudiate their system of ethics, or all the unusual features of it that have let us aim at doing an unusual amount of good. I would also be sad if the vibe of our response felt disingenuous - aiming to appear less consequentialist than is the case (whatever that true case is), less willing to think about tradeoffs, etc.
    2. You don’t even need to have one take - you can just say a lot of things that seem true to you
  3. I want to say things here, on twitter, out loud, etc, that are filtered by “is this helping me and others think better and more clearly”. I might not always be maximizing “epistemic support”, but I certainly don’t want to be a burden to it.[3]
  4. I think people underestimate how valuable it can be to others to say what you think and why you think it, how you feel and whether you endorse it, and what you’re still uncertain about. I’d often recommend doing that rather than waiting to talk to people until you’re certain, especially if you’re not in a situation where legal advice is relevant.
  5. I worry about jumping quickly to good stories or framings
    1.  Like neat distinctions (Sam is the naive kind of utilitarian, not like us) being a new kind of EA judo
  6. An adaptation of a question I heard recently: “What’s a way you can especially live out your values in the next weeks and months?”

Interlude - a note of caution

  • The New York Times is quoting tweets and Bloomberg is reading the forum; be aware that anything you say might go viral or get misinterpreted on social media or in the news, with all the attendant stresses. I’m writing this in a personal capacity, but I happen to know that if that happens and you’d like support from the community health team, there’s a form for contacting the team here.
  • Some people are in tricky legal situations which inform what they choose to say. Decide for yourself how generous to be to that possibility.

Being in a position of supporting others

7. Worry, confusion, anger, stress, sadness, and betrayal are all normal, as is “I’m confused, I’m going to not think about this for a few weeks and see what shakes out” - people’s emotions don’t need to be changed or managed, though people might need support

8. I think honoring your own feelings - including all the above - is good, and can be done while being clear about which feelings you endorse and don’t 

9. Be honest with people, including meta-honesty or meta-transparency if that makes sense (i.e. tell people when there are things you won’t be able to talk openly about)

10. A possible failure mode is to want to throw yourself under the bus quickly - it’s good to reflect on your own thoughts and judgment, but jumping too early to certainty isn’t helpful even when you’re the one taking the hit

If you’re trying to figure out what’s true

11. I think “What do you think you know and why do you think you know it?” remains a crucial question, especially since we’re in a position where more information is likely to come out in the future, and much of the information that exists can’t yet be shared. It’s hard for me to think there is a person reading this who should reasonably believe they have the full story. Relatedly, making your epistemic status and reasoning transparent helps collaborative sensemaking happen.

12. Split and commit - when you’re confused about what’s true, figure out in advance what your views will be if different things turn out or you see different evidence.

  • E.g. how will you update if enough money is raised by FTX.com to make its customers whole? Or if it turns out there’s even less money than is currently thought?
  • E.g. how will you update if what FTX did was unethical but not illegal? Normal in crypto world but not in normal business world? Or the biggest financial scandal since 2008?
  • E.g. how will you update if EAs are more furious about this than you expect? Less?

13. People are going to disagree about what this situation means, how to react, how people should reflect, etc., and you get to have your own thoughts in contention with them, or rejigger your whole thinking if you get a big update. You might want to think about your models of the world, what they would have expected, what evidence would look different if your model is false, and what evidence would be equally likely regardless.

14. It’s normal for opinions to jostle a lot back and forth - notice when you’re not in reflective equilibrium and be honest with yourself and others about it. I, for instance, find my orientation and feelings toward this jumping around a lot, so the tone of my takes is likely to differ from day to day. I want to be really open about that and say that I, like so many others, am muddling through it, and trying to only make decisions I have agreed with for a continuous 48 hours.

15. I worry about hindsight bias being very prevalent, especially in judgments of others

I also think it is very sensible to think ahead to future difficult circumstances and try to extract a lot of learning from this

16. How do I think the world (of communities, of crypto, of finance, of scandals, in general) works? What am I surprised by? 

17. What predictions can I make now to see in days, weeks, months or years whether my model of the world is correct?

Fin

If you’re feeling pressured to take some party line, the community health team would love to hear from you, to support you, to get feedback and to know what’s happening. CEA is giving out media guidance and plans to give out more, but in the end you will decide how you want to act and what seems right to you.

I’m really motivated by a vision of supporting other people to act as they endorse in the next days and weeks and get that support myself, to get and give whatever advice and encouragement makes that possible even while it’s hard. I also expect to fuck up, and for other people to fuck up, and to keep trying. 


 

  1. ^

     I like this comment about PR and honesty

  2. ^

     I like Tyler Cowen’s line: "I do anticipate a boring short-run trend, where most of the EA people scurry to signal their personal association with virtue ethics. Fine, I understand the reasons for doing that, but at the same time grandma, in her attachment to common sense morality, is not telling you to fly to Africa to save the starving children (though you should finish everything on your plate). Nor would she sign off on Singer (1972). While I disagree with the sharper forms of EA, I also find them more useful and interesting than the namby-pamby versions."

  3. ^

     Example 1: I’m pretty happy that this post, “We must be very clear: fraud in the service of effective altruism is unacceptable”, exists, and also that its comment section contains arguments about whether it itself is too soldier-y, whether that’s an appropriate type of thinking for people trying to understand human rationality and solve important problems, whether sacred values are being pulled in that you’re not allowed to trade against and how bad that is, and what utilitarianism really means and says about the kind of behavior FTX might have engaged in. All of it - and especially the comments section - makes me feel safer to think for myself.


    Example 2: Ronny Fernandez’s policy for tweeting

Comments



I think in times like these we need good epistemics more than ever

Thanks. There is a lot of good advice in this post, and I appreciate it.

One thing I have tried to keep in mind is that EA is a bunch of different things. It refers to abstract principles, to concrete ideas, and to specific actions; to a broad community, to particular institutions, and to individual people. 

What the "FTX situation"  means for EA  varies across that landscape. I don't think any differently about Famine, Affluence, and Morality or the importance of funding projects focused on the future.  I do think differently about  the FTX Future Fund and how affluent donors affect moral & epistemic clarity.

That framing has helped me to make sense, at least in part, of many of the contradictory emotions and reactions of the past ~10 days.

Thank you for sharing your thoughts. This whole post is dense with super sensible and helpful generally applicable advice. I really enjoyed reading this.

I didn't mention this at the time, but I was grateful you wrote this post!

I also think about the onion test all the time, and generally admire you for modeling high integrity :)

Chana -- thanks for your wisdom and insights in this post.

To expand upon this issue of EA wanting to protect 'EA the brand', and feelings of soldier-iness and EA tribalism:

It's worth remembering that tribalism evolved for good game-theoretic reasons, in the context of group-vs-group competition.

As Darwin put it in The Descent of Man (1871): “A tribe including many members who, from possessing in a high degree the spirit of patriotism, fidelity, obedience, courage, and sympathy, were always ready to aid one another, and to sacrifice themselves for the common good, would be victorious over most other tribes; and this would be natural selection.”

Or, as Jonathan Haidt put it in this passage from The Righteous Mind, humans are sort of 90% chimpanzee and 10% bee (in the sense of having the capacity to act similarly to eusocial insects, for the good of the group, under some conditions). (NB the concept of 'group selection' was often rejected by evolutionary biologists from c. 1966 through about the mid-90s, but has been revived in the form of 'multi-level selection', and the evolutionary game theory of 'group selection' is now recognized as functionally interchangeable with 'selfish gene' thinking, when genes can form individual-level and group-level aggregates.)

So, humans evolved tribalism and soldier-iness, including tribal emotions, motivations, cognitions, and reactions, over millions of years of intensive group-vs-group competition.

Is this a defense of tribalism in modern EA, in response to crises and criticism? 

No. 

The dynamics of prehistoric group-vs-group competition, warfare, territory disputes, and resource competition don't map perfectly onto the dynamics of 21st century moral/social movements. There are some deep similarities that should not be discounted, but there are also important differences -- especially given the way that social media shapes PR narratives, and the fact that we're not engaged in physical, life-or-death warfare over land or resources, but in psychological wars of influence over beliefs and values.

I'm just trying to remind people not to feel too guilty or self-critical if we feel these 'protect our EA tribe at all costs!' kind of emotions bubbling up. Of course they will bubble up. We're hyper-social primates who evolved in clans and tribes.

Another thing to be cautious about is that, given human tribal psychology, people who are perceived as traitors or defectors to their group, especially in times of crisis, may suffer some heavy reputational costs in the future. This concern for tribal loyalty is partly something to guard against, but partly something to take pragmatically into account, when weighing whether, when, and how to 'speak up' with criticisms of EA culture and organizations. 

Note: I'm being descriptive about human tribal psychology here, not prescriptive or normative about what exact lessons we should take away from all this. I am concerned that EAs who have more exposure to moral philosophy, computer science, and cognitive biases research than to evolutionary psychology (my field) might become overly hard on themselves for feeling ordinary human tribalistic feelings in times of crisis.

Thanks for writing this, I think it's a valuable post with actionable suggestions.

Emotions are naturally running very high right now, and this is good both to remind people that yes, it is ok to have strong emotions about it and that these reactions are understandable and normal. 
