
ChanaMessinger

3247 karma · Joined Oct 2019

Bio


I work at CEA, usually on community epistemics and on supporting high school outreach from a community health perspective; I'm currently interim manager of the team. (Opinions here are my own by default, though I will sometimes speak in a professional capacity.)

Personal website: www.chanamessinger.com

Comments


Again, a quick take: I'd be interested in more discussion of what a good ratio of funders to non-funders is in different situations (conditional on there being any board members at all).

I haven't thought hard about this yet, so this is just a quick take: I'm broadly enthused, but I don't feel convinced that experts would have an actual reason to get engaged. Can you flesh that out more?

But "everyone knows"!

A dynamic I keep seeing is that it feels hard to whistleblow, report concerns, or make a bid for more EA attention on things that "everyone knows", because it feels like there's no one to tell who doesn't already know. It's easy to think that surely this is priced into everyone's decision-making. Some reasons to do it anyway:

  • You might be wrong about what “everyone” knows - maybe everyone in your social circle does, but not outside. I see this a lot in Bay gossip vs. London gossip - what "everyone knows" is very different in those two places
  • You might be wrong about what "everyone knows" - sometimes people use a vague shorthand, like "the FTX stuff" and it could mean a million different things, and either double illusion of transparency (you both think you know what the other person is talking about but don’t) or the pressure to nod along in social situations means that it seems like you're all talking about the same thing but you're actually not
  • Just because people know doesn't mean the issue is salient at the right level
  • Bystander effect: People might all be looking around assuming someone else has the concern covered because surely everyone knows and is taking the right amount of action on it.

In short, if you're acting based on the belief that there’s a thing “everyone knows”, check that that’s true. 

Relatedly: Everybody Knows, by Zvi Mowshowitz

[Caveat: There's an important balance to strike here between the value of public conversation about concerns and the energy that gets put into those public community conversations. There are reasons to take action on the above non-publicly, and not every concern will make it above people’s bar for spending the time and effort to get more engagement with it. Just wanted to point to some lenses that might get missed.]

Fwiw, I think we have different perspectives here - outside of epistemics, everything on that list is there precisely because we think it's a potential source of some of the biggest risks. It's not always clear where risks are going to come from, so we look at a wide range of things, but we are in fact trying to be on the lookout for those big risks. Thanks for flagging that it doesn't seem like we are; I'm not sure if this comes from miscommunication or from a disagreement about where big risks come from.

Maybe another place of discrepancy: we primarily think of ourselves as looking for high-impact gaps - places where someone should be doing something but no one is - and risks are a subset of that, not the entirety.

(To be clear I also agree with Julia that it’s very plausible EA should have more capacity on this)

I think as an overall gloss, it’s absolutely true that we have fewer levers in the AI Safety space. There are two sets of reasons why I think it’s worth considering anyway:

  1. Impact - in a basic "high importance can balance out lower tractability" way, we don't want to look only under the streetlight, and it's possible that the AI Safety space will seem to us sufficiently high impact to aim some of our energy there
  2. Don't want to underestimate the levers - we have fewer explicit moves to make in the broader AI Safety space (e.g. disallowing people from events), but there is high overlap with EA, and my guess is that some people in a new space will appreciate those who have thought a lot about community management giving thoughts / advice / sharing models and so on.

But both of these could be insufficient for a decision to put more of our effort there, and it remains to be seen.

I'm a little worried that my use of the word "pivot" was a mistake, in that it maybe implies more of a change than I expect; if so, apologies.

I think this is best understood as a combination of

  • Maybe this is really important, especially right now [which I guess is indeed a subset of cause prioritization]
  • Maybe there are unusually high leverage things to do in that space right now
  • Maybe the counterfactual is worse - it’s a space with a lot of new energy, new organizations, etc, and so a lot more opportunity for re-making old mistakes, not a lot of institutional knowledge, and so on. 
    • I think this basically agrees with your point (1), but as a hypothesis, not a conclusion
    • In addition, there is an unusual amount of money and power flowing around this space right now, and so it might warrant extra attention
    • This is a small effect, but we’ve received some requests from within this space to pay more attention to it, which seems like some (small) evidence

In general I use and like this concept quite a lot, but someone else advocating for it does give me the chance to float my feelings in the other direction:

I think that sometimes, when I want to reach for missing moods as a concept to explain to my interlocutor what I think is going wrong in our conversation, I end up feeling like I'm saying "I demand that you be sad because I would feel better if you were", which I want to be careful about imposing on people. It sometimes also feels like I'm assuming we have the same values, in a way I'd want to do more upfront work before relying on.

More generally, I think it's good to notice the costs of the place you're taking on some tradeoff spectrum, but also good to feel good about doing the right thing, making the right call, balancing things correctly, etc.

I really like the frame of "fanfiction of your idea", I think it helpfully undermines the implicit status-thing of "I will steelman your views because your version is bad and I can state your views better than you can."

Some more in-the-weeds thinking, which may also be less precise.

Trust

It's absolutely true that the trust we have with the community matters to doing parts of our work well (other parts depend more on trust with specific stakeholders), and the topic is definitely on my mind. I'll say I don't have any direct evidence that overall trust has gone down, though I see it as an extremely reasonable prior. I say this only because I think it's the kind of thing that would be easy to have an information cascade about, where "everyone knows" that "everyone knows" that trust is being lost. Sometimes there are surprising results, like the finding that "overall affect scores [to EA as a brand] haven't noticeably changed post FTX collapse." We're considering some ways of tracking this more directly.

For me, I'm trying to balance trust being important with knowing that it's normal to get some amount of criticism, and trying not to overreact to it. For instance, in preparation for a specific project we might run, we did some temperature-checking and got takes on how people felt about our plan.

I do want people to have a sense of what we actually do, what our work looks like, and how we think and approach things so that they can make informed decisions about how to update on what we say and do. We have some ideas for conveying those things, and are thinking about how to prioritize those ideas against putting our heads down and doing good work directly.
 

Thoughts on the suggestions for improving trust 

To say a bit about the ideas raised, in light of things I've learned and thought about recently:
 

External people

Conscientious EAs are often keen to get external reviews from consultants on various parts of EA, which I really get, and which I think comes from an important place of not reinventing the wheel or thinking we're oh so special. There are many situations in which that makes sense and would be helpful; I recently recommended an HR consultant to a group thinking of doing something HR-related. But my experience has been that it's surprisingly hard to find professionals who can give advice about the set of things we're trying to do, since they're often optimizing mostly for one thing (like minimizing legal risk to a company) or just don't have experience with what we're trying to do.

In the conversations I’ve had with HR consultants, ombudspeople, and an employment lawyer, I continually have it pointed out that what the Community Health team does doesn’t fall into any of those categories (because unlike HR, we work with organizations outside of CEA and also put more emphasis on protecting confidentiality, and unlike ombudspeople, we try more to act on the information we have). 

When I explain the difference, the external people I talk to very frequently say "oh, that sounds complicated" or "wow, that sounds hard". So I just want to flag that getting external perspectives (something I've been working on a bunch recently) is harder than it might seem. There just isn't much in the way of direct analogues of our work with already-known best practices. A good version, according to me, might look more like talking to a range of people and trying to apply their thinking to an out-of-distribution situation, as well as being able to admit that we might be doing an unusual thing and that the outside perspectives aren't as helpful as I'd hoped.

If people have suggestions for external people it would be useful to talk to, feel free to let me know!

More updates

I appreciate the point about updating the community more often - this definitely seems really plausible. We were already planning some upcoming updates, so look out for those.

Just to note something it's easy to lose track of: it's often much easier to talk or answer questions 1:1 or in small groups than publicly. Figuring out how to talk to many audiences at once takes a lot more thought and care. For example, readers of the Forum include established community members, new community members, journalists, and people who are none of the above. While conversations here can still be important, this isn't the only venue for productive conversation, and it's not always the best one. I want to empower people to feel free to chat with us about their questions, thoughts, or perspectives on the team, for instance at conferences. This is part of why I set up two different office hours at the most recent EAG Bay Area.

It’s also worth saying that we have multiple stakeholders in addition to the community at large [for instance when we think about things like the AI space and whether there’s more we could be doing there, or epistemics work, or work with specific organizations and people where a community health lens is especially important], and a lot of important conversations happen with those stakeholders directly (or if not, that’s a different mistake we’re making), which won’t always be outwardly visible.

The other suggestions
I don't have strong takes or thoughts to share right now, but thanks for the suggestions!

For what it's worth, I'm interested in talking to community members about their perspectives on the team [modulo time availability]. This already happens to some extent informally (and people are welcome to pass on feedback to me, directly or by form, or to the team, including anonymously). When I went looking for people to talk to (somewhat casually/informally) to get feedback from people who had lost trust, I got relatively few responses, even anonymously. I don't know if that's because people felt uncomfortable, scared, or upset, or just didn't have the time, or something else. So I want to reiterate that I'm interested in this.

Privacy

I wanted to say here that we've said for a while that we share information (that we get consent to share) about people with other organizations and parts of CEA [not saying you disagree, just wanted to clarify]. While I agree one could have concerns, overall I think this is a huge upside to our work. If it were hard to act on the information we have, we would be much more like therapists or ombudspeople, and my guess is that would hugely curtail the impact we could have. [This may not engage with the specifics you brought up, but I thought it might be good to convey my model.]

Independence / spinning out

Just to add to my point about there still being a board, a director, and funders in a world where we spin out: there are other potential creative solutions for gaining independence, e.g. getting diverse funding, getting funding promises further in advance (like an endowment), and clever legal approaches such as those an ombudsperson I interviewed said they had. We haven't yet looked into any of those in depth.

For what it’s worth, I think on the whole we’ve been able to act quite independently of CEA, but I acknowledge that that wouldn’t be legible from the outside. 
