I work at CEA, usually on community epistemics and supporting high school outreach from a community health perspective; I'm currently interim manager of the team. (Opinions here are my own by default, though I will sometimes speak in a professional capacity.)
Personal website: www.chanamessinger.com
A dynamic I keep seeing is that it feels hard to whistleblow, report concerns, or make a bid for more EA attention on things that "everyone knows", because it feels like there's no one to tell who doesn't already know. It's easy to think that surely this is priced into everyone's decision making. Some reasons to do it anyway:
In short, if you're acting based on the belief that there’s a thing “everyone knows”, check that that’s true.
Relatedly: Everybody Knows, by Zvi Mowshowitz
[Caveat: There's an important balance to strike here between the value of public conversation about concerns and the energy that gets put into those public community conversations. There are reasons to take action on the above non-publicly, and not every concern will make it above people’s bar for spending the time and effort to get more engagement with it. Just wanted to point to some lenses that might get missed.]
Fwiw, I think we have different perspectives here - outside of epistemics, everything on that list is there precisely because we think it's a potential source of some of the biggest risks. It's not always clear where risks are going to come from, so we look at a wide range of things, but we are in fact trying to be on the lookout for those big risks. Thanks for flagging that it doesn't seem like we are; I'm not sure if this comes from miscommunication or a disagreement about where big risks come from.
Maybe another source of discrepancy is that we primarily think of ourselves as looking for high-impact gaps - places where someone should be doing something but no one is - and risks are a subset of that but not the entirety.
(To be clear I also agree with Julia that it’s very plausible EA should have more capacity on this)
I think as an overall gloss, it’s absolutely true that we have fewer levers in the AI Safety space. There are two sets of reasons why I think it’s worth considering anyway:
But both of these could be insufficient for a decision to put more of our effort there, and it remains to be seen.
I'm a little worried that my use of the word "pivot" was a mistake, in that it may imply more of a change than I expect; if so, apologies.
I think this is best understood as a combination of
In general I use and like this concept quite a lot, but someone else advocating it does give me the chance to float my feelings the other direction:
I think sometimes when I want to invoke missing moods as a concept to explain to my interlocutor what I think is going wrong in our conversation, I end up feeling like I'm saying "I'm demanding that you be sad because I would feel better if you were", which I want to be careful not to impose on people. It sometimes also feels like I'm assuming we have the same values, in a way I'd want to do more upfront work to establish before calling on it.
More generally, I think it's good to notice the costs of the place you're taking on some tradeoff spectrum, but also good to feel good about doing the right thing, making the right call, balancing things correctly etc.
Some more in-the-weeds thinking that also may be less precise.
It's absolutely true that the trust we have with the community matters for doing parts of our work well (other parts depend more on trust with specific stakeholders), and the topic is definitely on my mind. I'll say I don't have any direct evidence that overall trust has gone down, though I see it as an extremely reasonable prior. I say this only because I think it's the kind of thing that would be easy to have an information cascade about, where "everyone knows" that "everyone knows" that trust is being lost. Sometimes there are surprising results, like the finding that "overall affect scores [to EA as a brand] haven't noticeably changed post FTX collapse." There are some possibilities we're considering for tracking this more directly.
For me, I'm trying to balance the importance of trust with knowing that it's normal to get some amount of criticism, and trying not to overreact to it. For instance, in preparation for a specific project we might run, we did some temperature-checking, getting takes on how people felt about our plan.
I do want people to have a sense of what we actually do, what our work looks like, and how we think and approach things so that they can make informed decisions about how to update on what we say and do. We have some ideas for conveying those things, and are thinking about how to prioritize those ideas against putting our heads down and doing good work directly.
To say a bit about the ideas raised, in light of things I've learned and thought about recently:
Conscientious EAs are often keen to get external reviews from consultants on various parts of EA, which I really get, and which I think comes from an important impulse not to reinvent the wheel or assume we're oh so special. There are many situations in which that makes sense and would be helpful; I recently recommended an HR consultant to a group thinking of doing something HR-related. But my experience has been that it's surprisingly hard to find professionals who can give advice about the set of things we're trying to do, since they're often mostly optimizing for one thing (like minimizing legal risk to a company) or just don't have experience with what we're trying to do.
In the conversations I’ve had with HR consultants, ombudspeople, and an employment lawyer, I continually have it pointed out that what the Community Health team does doesn’t fall into any of those categories (because unlike HR, we work with organizations outside of CEA and also put more emphasis on protecting confidentiality, and unlike ombudspeople, we try more to act on the information we have).
When I explain the difference, the external people I talk to almost always say "oh, that sounds complicated" or "wow, that sounds hard". So I just want to flag that getting external perspectives (something I've been working on a bunch recently) is harder than it might seem. There just isn't much in the way of direct analogues of our work with already-known best practices. A good version, according to me, might look more like talking to different people and trying to apply their thinking to an out-of-distribution situation, as well as being able to admit that we might be doing an unusual thing and the outside perspectives aren't as helpful as I'd hoped.
If people have suggestions for external people it would be useful to talk to, feel free to let me know!
Appreciate the point about updating the community more often - this definitely seems really plausible. We were already planning some upcoming updates, so look out for those. Just to say something that's easy to lose track of: it's often much easier to talk or answer questions 1:1 or in small groups than publicly. Figuring out how to talk to many audiences at once takes a lot more thought and care. For example, readers of the Forum include established community members, new community members, journalists, and people who are none of the above. While conversations here can still be important, this isn't the only venue for productive conversation, and it's not always the best one. I want to empower people to feel free to chat with us about their questions, thoughts, or perspectives on the team, for instance at conferences. This is part of why I set up two different office hours at the most recent EAG Bay Area.
It’s also worth saying that we have multiple stakeholders in addition to the community at large [for instance when we think about things like the AI space and whether there’s more we could be doing there, or epistemics work, or work with specific organizations and people where a community health lens is especially important], and a lot of important conversations happen with those stakeholders directly (or if not, that’s a different mistake we’re making), which won’t always be outwardly visible.
The other suggestions
Don’t have strong takes or thoughts to share right now, but thanks for the suggestions!
For what it's worth, I'm interested in talking to community members about their perspective on the team [modulo time availability]. This already happens to some extent informally (and people are welcome to pass on feedback to me (directly or by form) or to the team (including anonymously)). When I went looking for people to talk to (somewhat casually/informally) to get feedback from people who had lost trust, I got relatively few responses, even anonymously. I don't know if that's because people felt uncomfortable, scared, or upset, or just didn't have the time, or something else. So I want to reiterate that I'm interested in this.
I wanted to say here that we've said for a while that we share information (that we get consent to share) about people with other organizations and parts of CEA [not saying you disagree, just wanted to clarify]. While I agree one could have concerns, overall I think this is a huge upside to our work. If it were hard to act on the information we have, we would be much more like therapists or ombudspeople, and my guess is that would hugely curtail the impact we could have. [This may not engage with the specifics you brought up, but I thought it might be good to convey my model.]
Just to add to my point about there still being a board, a director, and funders in a world where we spin out: I'll note that there are other potential creative solutions for gaining independence, e.g. diversifying funding, getting funding promises further in advance (like an endowment), and clever legal approaches such as those an ombudsperson I interviewed said they had. We haven't yet looked into any of those in depth.
For what it’s worth, I think on the whole we’ve been able to act quite independently of CEA, but I acknowledge that that wouldn’t be legible from the outside.
Again, a quick take: I'd be interested in more discussion (conditional on there being any board members) of what a good ratio of funders to non-funders on a board is in different situations.