All of ChanaMessinger's Comments + Replies

This was a delight to read! I found the fact that an essay competition in 1837 was a successful activist move really striking!

This is mentioned here, but I want to double down on the value of "asking around about the organization and what the experiences of others were". 

I recently talked to someone in tech about whether there were good ways to find out whether working at a given tech organization was right for you, and he said basically no: it was hard to get an accurate picture, and the resources that had tried to do this in the field (like Blind) added some information but gave warped impressions because of who posted there. (That said, from a quick skim, it seems a lot bett... (read more)

I think people do not get karma from the baseline +1 or +2 that comes with making a new comment.

As the post says above, and as a manager on the team and the person who oversaw the internal review, I’d like to share updates the team has made to its policies based on the internal review we did following the Time article and Owen’s statement. (My initial description of the internal review is here.) In general, these changes were already progressing before we knew the boards’ determinations, though thinking from Zach and the EV legal team has been an important input throughout.
 

Changes 

Overall we spent dozens of hours over multiple calenda... (read more)

8
Tiresias
3mo
This is hard and that should be recognized. It seems you all are taking this extremely seriously, and that should be commended. The recent discussion around Nonlinear got me wondering about one aspect of CH reports I hadn't considered before. Has the CH team ever spoken negatively about the person who made a report to people outside the CH team? I'm thinking of a scenario like: Steve makes an accusation against Lina. The CH team interacts with Steve and through this interaction comes to view him as somewhat of an unstable character. Even though no one has reported Steve, the CH team advises other institutions within the EA space against hiring Steve, based on the interactions they had with him when he reported Lina's behavior.

Have you considered blinded case work / decision making? Like one person collects the key information, anonymises it, and then someone else decides the appropriate response without knowing the names / orgs of the people involved.

Could be good for avoiding some CoIs. Has worked for me in the past for similar situations.

I really don't know how the norms of professional investigative journalism work, but I imagine a lot hinges on whether the source of concern / subject of the piece is the repository of a large amount of relevant information about the central claims. 

e.g. the point is "how much work do you need to put in to make sure your claims are accurate" and then sometimes that implies "no need to get replies from the subject of the piece because you can get the information elsewhere" and sometimes that implies "you have to engage a bunch with the subject of the piece because they have relevant information."

8
Habryka
4mo
Yeah, I agree with these considerations. I am, however, not super sure how this is related to the specific claim at hand here, which was just about whether the journalists that TracingWoodgrains asked are accurately summarized as "disagreeing with [the decision to not delay]". It seems to me that at least one of the journalists thought it was a messy judgement call and didn't give a recommendation one way or another on the contested question. So it seems inaccurate to me to summarize them the way TracingWoodgrains did. (The other journalist's response I also don't think should be succinctly summarized in the same way.)

The judgement call is on giving time for "right to reply", not for "taking more time to verify claims", right? Those seem kind of different to me.

6
Habryka
4mo
That's right, though it seems to me that if someone thought it was reasonable to not give any right to reply in the first place, then they presumably must also think that giving people only a very short time to reply is OK, so one subsumes the other. I am not totally confident of this, but on the face of it, if you think it was a reasonable judgement call to give someone no notice before posting, then it must also be a reasonable judgement call to give someone 4/1 days before posting.

I don't think anyone here is arguing that in absolute terms Ben should have spent more time verifying claims non-adversarially? Like, Ben really did do a lot of that, and I think there might be many valid objections about how Ben went about it, but "more time" doesn't currently seem like something people are arguing for. My model is that people specifically wanted Ben to spend more time in the adversarial stage of a potential fact-finding process, but the above quote suggests that in some circumstances "0 hours" is an acceptable amount of time to spend on that, which I find hard to square with a perspective that "60 hours" is an unacceptable amount of time to spend on that, according to this specific source.

Effective giving quick take for giving season

This is quite half-baked because I think my social circle doesn't contain very many E2G folks, but I have a feeling that when EA suddenly came into a lot more funding and the word on the street was that we were “talent constrained, not funding constrained”, some people earning to give ended up pretty jerked around, or at least feeling that way. They may have picked jobs and life plans based on the earning-to-give model, where it would be years before the plans came to fruition, and in the middle, they lost status and ... (read more)

3
Carolina F Toth
5mo
Makes sense that there would be some jerk-around in a movement that focuses a lot on prioritization and re-prioritization, with folks who are invested in finding the highest-priority thing to do. Career capital takes time to build and can't be re-prioritized at the same speed. Hopefully as EA matures, there can be some recognition that diversification is also important, because our information and processes are imperfect, and so there should be a few viable strategies for doing the most good going at the same time. This is like your tail-risk point. Some diversity in thought will benefit the whole movement, and thoughtful people with many years of experience pursuing those strategies will result in better thinking, mentorship, and advice to share. I don't really see a world in which earning to give can't do a whole lot of good, even if it isn't the highest priority at the moment... unless perhaps the negative impacts of the high-earning career in question haven't been thought through or weighed highly enough.
1
Ebenezer Dukakis
5mo
Perhaps making a stronger effort to acknowledge and appreciate people who acted altruistically based on our guesses at the time, before explaining why our guesses are different now, would help? (And for this particular case, even apologizing to EtG people who may have felt scorned?)

I think there's a natural tendency to compete to be "fashion-forward", but that seems harmful for EA. Competing to be fashion-forward means targeting what others will approve of (or what others think others will approve of), as opposed to the object-level question of what actually works. Maybe the sign of true altruism in an EA is willingness to argue for boring conventional wisdom, or willingness to defy a shift in conventional wisdom if you don't think the shift makes sense for your particular career situation. 😛 (In particular, we shouldn't discount switching costs and comparative advantage. I can make a radical change to the advice I give an aimless 20-year-old, while still believing that a mid-career professional should stay on their current path, e.g. due to hedging/diminishing marginal returns to the new hot thing.)

BTW this recent post made a point that seems important: IMO, acknowledging and appreciating the effort people put in is the best way to prevent burnout. Implying that "your career path is boring now" is the opposite. Almost everyone in EA is making some level of sacrifice to do good for others; let's thank them for that! Thank you, whoever's reading this!

I think another example of the jerking-people-around thing could be the vibes from summer 2021 to summer 2022 that if you weren't exceptionally technically competent and didn't have the skills to work on object-level stuff, you should do full-time community building, like helping run university EA groups. And then that idea lost steam this year.

6
NickLaing
6mo
It's an interesting point about the potential for jerking people around and alienating them from the movement and its ideals. It could also (maybe) have something to do with having a lot of philosophers leading the movement. It's easier to change from writing philosophically about short-termism (Doing Good Better) to longtermism (What We Owe the Future), to writing essays about talent constraint over money constraint, but harder to psychologically and practically (although still very possible) switch from being a mid-career global health worker or earner-to-give to working on AI alignment. This isn't a criticism; of course it makes sense for the philosophy driving the movement to develop. I'm just highlighting the difference in "pivotability" between leaders and some practitioners, and the obvious potential for "jerking people around" collateral as the philosophy evolves.

Also, having lots of young people in the movement who haven't committed years of their life to things can make changing tack more viable for many and seem more normal, while perhaps it is harder for those who have committed a few years to something. This "willingness to pivot quickly, change their mind and their life plan intensely and often" could be as much about stage of career as it is about personality.

Besides earning-to-give people being potentially "jerked around", there are some other categories worth considering too:
1. Global health people, as the relative importance of global health within the movement seems to have slowly faded.
2. If (just possibilities) AI becomes far less neglected in general in the next 3 to 5 years, or it becomes apparent that policy work is far more important/tractable than technical alignment, then a lot of people who have devoted their careers to these may be left out in the cold.

Just some very low confidence musings!

Yeah, I think EA just neglects the downside of career whiplash a bit. Another instance is how EA orgs sometimes offer internships where only a tiny fraction of interns will get a job, or hire and then quickly fire staff. In a more ideal world, EA orgs would value rejected and fired applicants much more highly than non-EA orgs do, and so low-hit-rate internships and rapid firing would be much less common in EA than outside.

Am I understanding right that the main win you see here would have been protecting people from risks they took on the basis that Sam was reasonably trustworthy? 

I also feel pretty unsure but curious about whether a vibe of "don't trust Sam / don't trust the money coming through him" would have helped discover or prevent the fraud - if you have a story for how it could have happened (e.g., as you say, via people feeling more empowered to say no to him - maybe via his staff making fewer crazy moves on his behalf / standing up to him more?), I'd be interested.

"protect people from dependencies on SBF" is the thing for which I see a clear causal chain and am confident in what could have fixed it. 

I do have a more speculative hope that an environment where things like "this billionaire firehosing money is an unreliable asshole" are easy to say would have gotten better outcomes for the more serious issues, on the margin. Maybe the FTX fraud was overdetermined, and even if it wasn't, I definitely don't have enough insight to be confident in picking a correction. But using an abstract version of this case as an e... (read more)

Curious if you have examples of this being done well in communities you've been aware of? I might have asked you this before.

I've been part of an EA group where some emotionally honest conversations were had, and I think they were helpful but weren't a big fix. I think a similar group later did a more explicit and formal version and they found it helpful.

4
Nathan Young
6mo
I've never seen this done well. I guess I'd read about the truth and reconciliation commissions in South Africa and Ireland.

Really intrigued by this model of thinking from Predictable Updating about AI Risk.
 

Now, you could argue that either your expectations about this volatility should be compatible with the basic Bayesianism above (such that, e.g., if you think it reasonably likely that you’ll have lots of >50% days in future, you should be pretty wary of saying 1% now), or you’re probably messing up. And maybe so. But I wonder about alternative models, too. For example, Katja Grace suggested to me a model where you’re only able to hold some subset of the evidence in yo

... (read more)
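To spell out the "basic Bayesianism" constraint being referenced, here's a minimal sketch (my reconstruction, not taken from the quoted post) of why expecting lots of >50% days is in tension with saying 1% now, assuming you update only by conditionalizing on evidence:

```latex
% Let $p_0$ be your current credence and $p_T$ your credence at some future time $T$.
% Conservation of expected evidence makes credences a martingale:
\[
  \mathbb{E}[p_T] = p_0 .
\]
% Markov's inequality then bounds how likely ">50% days" can be:
\[
  \Pr(p_T \ge 0.5) \;\le\; \frac{\mathbb{E}[p_T]}{0.5} \;=\; 2\,p_0 .
\]
% With $p_0 = 1\%$, a future credence of at least 50% has probability at most 2%,
% so "lots of >50% days" doesn't sit well with a 1% credence today.
```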

I’m Chana, a manager on the Community Health team. This comment is meant to address some of the things Ben says in the post above as well as things other commenters have mentioned, though very likely I won’t have answered all the questions or concerns. 

High level

I agree with some of those commenters that our role is not always clear, and I’m sorry for the difficulties that this causes. Some of this ambiguity is intrinsic to our work, but some is not, and I would like people to have a better sense of what to expect from us, especially as our strat... (read more)

2
Defacto
7mo
A design decision to not have "justice" or "countering bullies" seems sort of big and touches on deep subjects. I guess this viewpoint above could be valid and deep (but I'm slightly skeptical the comm. health team has that depth). It seems possible that, basically, just pursuing justice or countering bullies in a straightforward way might be robustly good and support other objectives. Honestly, it doesn't seem that complicated, and it's slightly a yellow flag if it is hard in EA. I think writing on this would be valuable (like Julia W's writing on her considerations; not going to search it up, but it was good). Such a piece would (ideally) show wisdom and considerations that are illuminating. I'll try to produce something, maybe not under this name or an obvious form.

Hi KnitKnack - I’m really sorry to hear you had a bad experience with the CH team, and that it contributed to some especially bad moments in your life. I totally endorse that people should have accurate expectations, which means that they should not expect we’ll always be able to resolve each issue to everyone’s satisfaction. I think that even in worlds where we did everything quote-unquote “right” (in terms of fair treatment of each of the people involved, and the overall safety and functioning of the community), some people would be disappointed in how m... (read more)

we had a stronger community health team with a broad mandate for managing risks, rather than mostly social disputes and PR? Maybe, but CH already had a broad mandate on paper. Given EVF’s current situation, it might be a tall task. And if VCs and accountancies didn’t see FTX’s problems, then a beefed-up CH might not either. Maybe a CH team could do this better independently of CEA

(Context - I was the interim head of the Community Health team for most of this year)


For what it’s worth, as a team we've been thinking along similar lines (and having similar ... (read more)

Thanks for writing up your views here! I think it might be quite valuable to have more open conversations about what norms there's consensus on and which ones there aren't, which this helps spark.

Thanks for noticing something you thought should happen (or having it flagged to you) and making it happen!

I'd bid for you to explain more what you mean here - but it's your quick take!

2
Chris Leong
8mo
I'm very keen for more details as well.

Seems like there's room in the ecosystem for a weekly update on AI that does a lot of contextualization / here's where we are on ongoing benchmarks. I'm familiar with:
 

  • a weekly newsletter on AI media (that has a section on important developments that I like)
  • Jack Clark's substack, which I haven't read much of but seems more about going in depth on new developments (though it does have a "Why this matters" section). Also I love this post in particular for the way it talks about humility and confusion.
  • Doing Westminster Better on UK politics and AI / EA, which
... (read more)
2
Sean_o_h
8mo
Wout Schellart, Jose Hernandez-Orallo, and Lexin Zhou have started an AI evaluation digest, which includes relevant benchmark papers etc. It's pretty brief, but they're looking for more contributors, so if you want to join in and help make it more comprehensive/contextualised, you should reach out! https://groups.google.com/g/ai-eval/c/YBLo0fTLvUk Less directly relevant, but Harry Law also has a new newsletter in the Jack Clark style, but more focused on governance/history/lessons for AI: https://learningfromexamples.substack.com/p/the-week-in-examples-3-2-september
4
Lizka
8mo
I think I agree, but also want to flag this list in case you (or others) haven't seen it: List of AI safety newsletters and other resources
4
Quadratic Reciprocity
8mo
Another newsletter(?) that I quite like is Zvi's 

Some added context on the 80k podcasts:

At the beginning of the Jan Leike episode, Rob says:


Two quick notes before that:

We’ve had a lot of AI episodes in a row lately, so those of you who aren’t that interested in AI or perhaps just aren’t in a position to work on it, might be wondering if this is an all AI show now.

But don’t unsubscribe because we’re working on plenty of non-AI episodes that I think you’ll love — over the next year we plan to do roughly half our episodes on AI and AI-relevant topics, and half on things that have nothing to do with AI.

What

... (read more)

I liked this!

I appreciated that for the claim I was most skeptical of: "There’s also the basic intuition that more people with new expertise working on a hard problem just seems better", my skepticism was anticipated and discussed.

For me one of the most important things is:

Patch the gaps that others won’t cover

  • E.g., if more academics start doing prosaic alignment work, then ‘big-if-true’ theoretical work may become more valuable, or high-quality work on digital sentience. 
  • There’s probably predictable ‘market failures’ in any discipline – work that isn
... (read more)

I really loved this! I have basically no knowledge of the underlying context, but I think this summary gave me a feel for how detailed and complicated this is (reality has a lot of detail and a lot of societies for air conditioning engineers!), a bit of the actual science, as well as some of the players involved and their incentives.

It's helpful and interesting to look at what small scientific communities are like as analogues for EA research groups.

From Astral Codex Ten

FRI called back a few XPT forecasters in May 2023 to see if any of them wanted to change their minds, but they mostly didn’t.


 

2
Greg_Colbourn
9mo
Weird. Does this mean they predicted GPT-4's performance in advance (and also didn't let that update them toward doom)!?

I really like this concept of epistemic probation - I agree also on the challenges of making it private and of exiting such a state. Making it easier to exit criticism-heavy periods probably makes them easier to levy in the first place (since you know that they are escapable).

Did you mean for the second paragraph of the quoted section to be in the quote section? 

2
Nathan Young
9mo
I can't remember but you're right that it's unclear.

Thanks so much for this, I really enjoyed it! I really like this format and would enjoy seeing more of it.

This isn't the point, and there's likely so much behind each vignette that we don't see, but I so wish for some of these folks that they are able to find e.g. people/mentors who encourage their "dumb questions", people who want to talk about consciousness, people who can help figure out what to do with doomer-y thoughts, maybe telling aggregators of information about some of the things listed (community health is one for some topics including some case... (read more)

Right, right, I think on some level this is very unintuitive, and I appreciate you helping me wrap my mind around it - even secret information is not a problem as long as people are not lying about their updates (though if all updates are secret there's obviously much less to update on)

2
trammell
10mo
Yup!

I appreciate the reminder that "these people have done more research" is itself a piece of information that others can update on, and that the mystery of why they haven't isn't solved. (Just to ELI5, we're assuming no secret information, right?)

I suppose this is very similar to "are you growing as a movement because you're convincing people or via selection effects" and if you know the difference you can update more confidently on how right you are (or at least how persuasive you are).

2
trammell
10mo
Thanks! No actually, we’re not assuming in general that there’s no secret information. If other people think they have the same prior as you, and think you’re as rational as they are, then the mere fact that they see you disagreeing with them should be enough for them to update on. And vice-versa. So even if two people each have some secret information, there’s still something to be explained as to why they would have a persistent public disagreement. This is what makes the agreement theorem kind of surprisingly powerful.

The point I’m making here though is that you might have some “secret information” (even if it’s not spelled out very explicitly) about the extent to which you actually do have, say, a different prior from them. That particular sort of “secret information” could be enough to not make it appropriate for you to update toward each other; it could account for a persistent public disagreement. I hope that makes sense.

Agreed about the analogy to how you might have some inside knowledge about the extent to which your movement has grown because people have actually updated on the information you’ve presented them vs. just selection effects or charisma. Thanks for pointing it out!

I tried for a while to find where I think Oliver Habryka talked about this, but didn't find it. If someone else finds it, let me know!

I want to just appreciate the description you’ve given of interaction responsibility, and pointing out the dual tensions. 

On the one hand, wanting to act but feeling worried that by merely getting involved you open yourself up to criticism, thereby imposing a tax on acting even when you think you would counterfactually make the situation better (something I think EA as a concept is correctly really opposed to in theory). 

On the other hand, consequences matter, and if in fact your actions cause others who would have done a better job not to act, a... (read more)

The forum naming conversation feels like an example of something that’s been coming up a lot that I don’t have a crisp way of talking about, which is the difference between “this is an EA thing” as a speech act and “this is an EA thing” as a description. I’m supportive of orgs and projects not branding themselves EA because they don’t want to or want to scope out a different part of the world of possible projects or don’t identify as EA. But I’m also worried about being descriptively deceptive (even unintentionally), by saying “oh, this website isn’t reall... (read more)

I'm certainly not an expert in institutional design, but for what it's worth, it feels really non-obvious to me that:

It seems harder for a decentralised movement to centralise than it is for a centralised movement to decentralise. So, trying to be as centralised as possible at the moment preserves option value.

Like, I think projects find it pretty hard to escape the sense that they're "EA" even when they want to (as you point out), and I think it's pretty easy to decide you want to be part of EV or want to take your cues from the relevant OP team and do wh... (read more)

Thanks for this! Very interesting. 

I do want to say something stronger here, where "competence" sounds like technical ability or something, but I also mean a broader conception of competence that includes "is especially clear thinking here / has fewer biases here / etc"

Trust is a two-argument function

I'm sure this must have been said before, but I couldn't find it on the forum, LW, or Google.

I'd like to talk more about trusting X in domain Y or on Z metric rather than trusting them in general. People/orgs/etc have strengths and weaknesses, virtues and vices, and I think this granularity is more precise and is a helpful reminder to avoid the halo and horn effects, and calibrates us better on trust.
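A toy sketch of the two-argument framing, purely illustrative (the org name, domains, and numbers below are made up):

```python
from enum import Enum

class Domain(Enum):
    OPS = "operations"
    FORECASTING = "forecasting"
    COMMUNITY = "community judgement"

# Trust as a two-argument function: trust(who, domain) -> level in [0, 1],
# rather than a single trust(who) score that invites halo/horns effects.
trust_table: dict[tuple[str, Domain], float] = {
    ("Org A", Domain.OPS): 0.9,
    ("Org A", Domain.FORECASTING): 0.3,
}

def trust(who: str, domain: Domain, default: float = 0.5) -> float:
    """Domain-specific trust; falls back to an uninformative default."""
    return trust_table.get((who, domain), default)

print(trust("Org A", Domain.OPS))          # 0.9: high trust on operations
print(trust("Org A", Domain.FORECASTING))  # 0.3: much lower on forecasts
```

The point is just that the lookup key is (who, domain), so a high score in one domain doesn't automatically leak into the others.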

8
bruce
10mo
A commonly used model in the trust literature (Mayer et al., 1995) is that trustworthiness can be broken down into three factors: ability, benevolence, and integrity. Re: domain-specific, the paper incorporates this under "ability". There are other conceptions, but many of them describe something closer to trust that is domain-specific rather than generalised.
-2
Joseph Lemien
10mo
Strongly agree. I'm surprised I haven't seen this articulated somewhere else previously.

Ratio of descriptive: "this is how things are" to normative: "shape up"

To add to folks disagreeing with the "size of numbers", from my perspective:

Most respondents to Rethink's survey hadn't encountered EA. Of those who had (233), only 18 (1.1% of total respondents) referred to FTX/SBF explicitly or obliquely when asked what they think effective altruism means or where and when they first heard of it.

I think that number is importantly 7.7% of all the people who had heard of EA, which seems not that small to me (though way smaller than my immersed-all-the-time-in-meta/FTX-stuff brain might have generated on its own when that was where my head was at).
 

4
Ben_West
10mo
Thanks for pointing this out – this makes me realize I actually put the wrong number in. 18 people referred to FTX/SBF, but only 13 of them had encountered EA. So the relevant ratio is 13/233 = 5.6% (which maybe is still high). I have updated the post.

And neither question ("what they think effective altruism means", "where and when they first heard of it") is likely to capture all, or perhaps even most, respondents whose opinion of EA has been downgraded by the scandal, or who haven't heard about FTX yet but will downgrade once SBF's trial gives another big round of publicity.

More probative would be responses to something like: "Do you have any concerns or negative opinions about EA, and if so what are they?"

For collecting thoughts on the concept of "epistemic hazards" - contexts in which you should expect your epistemics to be worse. Not fleshed out yet. Interested in whether this has already been written about; I assume so, maybe in a different framing.

From Habryka: "Confidentiality and obscurity feel like they worsen the relevant dynamics a lot, since they prevent other people from sanity-checking your takes (though this is also much more broadly applicable). For example, being involved in crimes makes it much harder to get outside feedback on your decisions, si... (read more)

I like the point of waves within cause areas! Though I suspect there would be a lot of disagreement - e.g. people who kept up with the x-risk approach even as WWOTF was getting a lot of attention.

I like the distinction between overreacting and underreacting as being "in the world" vs. "memes" - another way of saying this is something like "object level reality" vs. "social reality".

If the longtermism wave is real, then that was pretty much about social reality, at least within EA, and changed how money was spent and things people said (as I understand it; I wasn't really socially involved at the time).

So to the extent that this is about "what's happening to EA" I think there's clearly a third wave here, where people are running and getting funded ... (read more)

I'd be interested in more thoughts, if you have them, on evidence or predictions one could have made ahead of time that would distinguish this model from others (like maybe a lot of what's going on is youth and will evaporate over time; youth still has to be mediated by things like what you describe, but as an example).

Also, my understanding is that SBF wasn't very insecure? Does that affect your model or is the point that the leader / norm setter doesn't have to be?

Yeah, I'm confused about this. Seems like some amount of "collapsing social uncertainty" is very good for healthy community dynamics, and too much (like having a live ranking of where you stand) would be wildly bad. I don't think I currently have a precise way of cutting these things. My current best guess is that the more you push to make the work descriptive, the better, and the more it becomes normative and "shape up!"-oriented, the worse, but it's hard to know exactly what ratio of descriptive:normative you're accomplishing via any given attempt at transparency or common knowledge creation.

1
Nathan Young
10mo
Sorry, what do you mean here? With my poll specifically? The community in general?

I strongly resonate with this; I think this dynamic also selects for people who are open-minded in a particular way (which I broadly think is great!), so you're going to get more of it than usual.

Thanks for writing this! I'm not sure how I'd feel if orgs I worked for went more in this direction, but I did find myself nodding along to a bunch of parts (though not all) of what you wrote.

One thing I'm curious about is whether you have thoughts on avoiding a "nitpick" culture, where every perk or line item becomes a big discussion among leadership of an org, or the org broadly - that seems to me like a big downside of moving in this direction.

Just because, things I especially liked:

1.

We should try to be especially virtuous whenever we find ourselves setting a

... (read more)

I don't know if this is right, but I take Lincoln to be (a bit implicitly, but I see it throughout the post) taking the default cultural norm as a pretty strong starting point, and aiming to vary from it when you have a good reason (I imagine because variations from what's normal are what send the most salient messages), rather than thinking about what a perk is from first principles, which explains the dishwashing and toilet cleaning.

Reminds me of C.S. Lewis's view on modesty

The Christian rule of chastity must not be confused with the social rule of ‘modest

... (read more)
14
[anonymous]
10mo

The default cultural norm varies a lot across offices within countries. Should we anchor to Google, hedge funds, Amazon, academia, Wave, Trajan House, the nonprofit sector, the local city council, etc.? So I don't understand which cultural norm the post is anchoring to, and so I don't understand the central claim of the post.

One of the examples given in the post is the implicit judgement that EA doesn't want to be like Google - Google is an extremely successful company that people want to work for. I don't get why it is an example of excessive pe... (read more)

Thanks for this! I feel like I have a bunch of thoughts swirling in my head as a result of reading this :)

Again, quick take: I would be interested in more discussion on (conditional on there being any board members) what a good ratio of funders to non-funders is in different situations.

I haven't thought hard about this yet, so this is just a quick take: I'm broadly enthused but don't feel convinced that experts have actual reason to get engaged. Can you flesh that out more?

But "everyone knows"!

A dynamic I keep seeing is that it feels hard to whistleblow or report concerns or make a bid for more EA attention on things that "everyone knows", because it feels like there's no one to tell who doesn't already know. It’s easy to think that surely this is priced in to everyone's decision making. Some reasons to do it anyway:

  • You might be wrong about what “everyone” knows - maybe everyone in your social circle does, but not outside. I see this a lot in Bay gossip vs. London gossip - what "everyone knows" is very different in those two
... (read more)

Fwiw, I think we have different perspectives here - outside of epistemics, everything on that list is there precisely because we think it's a potential source of some of the biggest risks. It's not always clear where risks are going to come from, so we look at a wide range of things, but we are in fact trying to be on the lookout for those big risks. Thanks for flagging that it doesn't seem like we are; I'm not sure if this comes from miscommunication or a disagreement about where big risks come from.

Maybe another place of discrepancy is that we primarily think... (read more)

11
[anonymous]
10mo

My understanding was that community health to some extent carries the can for catastrophe management, along with other parts of CEA and EA orgs. Is this right? I don't know whether people within CEA think anyone within CEA bears any responsibility for any part of the past year's catastrophes. (I don't know as in I genuinely don't know - it's not a leading statement.) Per Ryan's comment, the actions you have announced here don't seem at all appropriate given the past year's catastrophes.

Yeah, I'm not trying to stake out a claim on what the biggest risks are.

I'm saying assume that some community X has team A that is primarily responsible for risk management. In one year, some risks materialise as giant catastrophes - risk management has gone terribly. The worst. But the community is otherwise decently good at picking out impactful meta projects. Then team A says "we're actually not just in the business of risk management (the thing that is going poorly), we also see ourselves as generically trying to pick out high impact meta projects. So ... (read more)

2
Jason
1y
I imagine that, for a number of reasons, it's not a good idea to put out an official, full CHSP List of Reasonably-Specific, Major-to-Catastrophic Risks complete with current and potential evaluation and mitigation measures. And your inability to do so likely makes it difficult to fully brief the community about your past, current, and potential efforts to manage those kinds of risks. My guess is that a sizable fraction of the major-to-catastrophic risks center around a fairly modest number of key leaders, donors, and organizations. If that's so, there might be benefit to more specifically communicating CHSP's awareness of that risk cluster and high-level details about possible strategies to improve performance in that specific cluster (or to transition responsibility for that cluster elsewhere).