@ Rethink Priorities
Working (6-15 years of experience)
17,597 karma · Joined Dec 2015


"To see the world as it is, rather than as I wish it to be."

I'm a Research Manager on the General Longtermism team at Rethink Priorities. Right now, my team and I are re-orienting toward figuring out what the best things to do are; see here.

I also volunteer as a fund manager for EA Funds' Long-Term Future Fund.


I directionally agree with you. However, they do have a few other levers. For example, local EA groups can ban people based on information from CH. Grantmakers can also consult CH about people they hear rumors about through the grapevine, outsourcing that side of investigations to them.

Some of this relates to what I call "mandate" in the earlier shortform I linked.

I agree that they can't make many decisions about private events, take legal action, or fire people they do not directly employ.

To give a concrete example, my (non-EA) ex was from Europe, and she had a relative who disliked both that she had two partners and that I was non-white. My understanding was that the "poly" dimension was seen as substantially worse than the racial one. The relative's attitude didn't particularly affect our relationship (we both thought it was kind of funny). But at least in Western countries, I think your bar for outing poly people who don't want to be outed should be at least as high as your bar for outing interracial couples who don't want to be outed, given the relative levels of antipathy people in Western countries hold toward the two.

(I may want to delete this comment later).

Morality is hard in the best of times, and now is not the best of times. The movement may or may not be a good fit for you. I'm glad you're still invested in doing good regardless of perceived or actual wrongdoing of other members of the movement to date, and I hope I and others will do the same.

I guess I'm imagining that, from the perspective of either Open Phil or other large funders, the risk of value misalignment or incompetence among Open Phil staff is already priced in, and they've already paid the cost of evaluating Claire.

It's hard to imagine (purely from the perspective of reducing auditing costs) Holden or Cari or Dustin preferring an unknown quantity to Claire. There might be other good reasons to prefer a more decentralized board[1], but this particular reason seems wrong.

Likewise, from the perspective of future employees of, or donors to, EVF, the risk of value misalignment or incompetence in EVF's largest donor is already a cost they necessarily have to pay if they want to work for or fund EVF. So adding a board member (and another source of conflicts of interest, COIs) not associated with Open Phil can only increase the number of COIs, not decrease it.

  1. ^

    for example, a) you want a diversity of perspectives, b) you want to reduce the risk of being beholden to specific entities, c) you want to increase the number of potential whistleblowers

Your argument here cuts against your prior comment.

Why was this comment downvoted?

Funnily enough, the "pigeon flu" example may soon cease to be hypothetical. Pretty soon, we may need to look at the track record of various agencies and individuals to assess their predictions on H5N1.

Imagine that a forecaster you haven't previously heard of tells you there's a high probability of a novel pandemic ("pigeon flu") next month, and that their technical arguments are too complicated for you to follow.[1]

Suppose you want to figure out how much to defer to them, and you dig around and find the following facts:

a) The forecaster previously made consistently and egregiously bad forecasts about monkeypox, COVID-19, Ebola, SARS, and 2009 H1N1.

b) The forecaster made several elementary mistakes in a theoretical paper on Bayesian statistics.

c) The forecaster has a really bad record at videogames, e.g. bronze tier in League of Legends.

I claim that the general-competency argument technically goes through for a), b), and c). However, for a practical answer on deference, a) is much more damning than b), and especially c): you'd expect domain-specific ability at predicting pandemics to be much stronger evidence about whether the pigeon-flu prediction is reasonable than general competence as revealed by mathematical ability/conscientiousness or videogame skill.
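The intuition above can be sketched as a simple Bayesian odds update. All of the likelihood ratios below are made up purely for illustration; the point is only that the domain-specific evidence in a) warrants a much larger update than b) or c):

```python
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    return prior_odds * likelihood_ratio

prior = 1.0  # even prior odds that the forecaster is worth deferring to

# Hypothetical likelihood ratios for "worth deferring to" given each fact.
# The exact numbers are invented; only their ordering matters here.
lr_bad_pandemic_forecasts = 0.05  # a) domain-specific: strongly damning
lr_bad_math_paper = 0.5           # b) general competence: mildly damning
lr_bad_at_videogames = 0.9        # c) very weak evidence either way

for label, lr in [("a", lr_bad_pandemic_forecasts),
                  ("b", lr_bad_math_paper),
                  ("c", lr_bad_at_videogames)]:
    print(label, posterior_odds(prior, lr))
```

Even granting that each fact is technically evidence against deferring, the posterior after a) is an order of magnitude lower than after c).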

With a quote like

> Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be. I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.

The natural interpretation to me is that Cowen (and, by quoting him, the authors of the post) is trying to say that FF's failure to predict the FTX fraud, and thus the "existential risk to FF," is akin to a): a dispositive, domain-specific bad forecast that should be indicative of their ability to predict existential risk more generally. This is analogous to how much you should trust someone predicting pigeon flu when they've been wrong on past pandemics and pandemic scares.

To me, however, this failure, while significant as evidence about general competence, is more similar to b). It's embarrassing, and evidence of poor competence, to make elementary errors in math. Similarly, it's embarrassing, and evidence of poor competence, to fail to consider all the risks to your organization. But using the phrase "existential risk" to tie the two together is just a semantics game (in the same way that "why would I trust the Bayesian updates in your pigeon-flu forecasting when you've made elementary math errors in a Bayesian statistics paper" is a bit of a semantics game).

EAs do not, to my knowledge, claim to be experts on all existential risks, broadly and colloquially defined. Some subset of EAs do claim to be experts on global-scale existential risks like dangerous AI or engineered pandemics, which is a very different proposition.

[1] Or, alternatively, you think their arguments are inside-view correct but you don't have a good sense of the selection biases involved.

Answer by Linch, Feb 02, 2023

I'm bullet-pointing because I have a distinctive writing style amongst my friends that I'm trying to avoid in these posts.

You can consider using a large language model to do things like style transfer, summarization, and anonymization. However, I would not be surprised if the companies you interface with are poor custodians of your data: they may train models on your data and/or store it in plaintext.
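A minimal sketch of what that might look like, with the caveat that the endpoint URL and response shape here are entirely hypothetical placeholders (substitute your actual provider's API), and that, per the caveat above, anything you send may be logged, stored in plaintext, or used for training:

```python
import json
import urllib.request

# Placeholder endpoint, not a real service; swap in your provider's API.
API_URL = "https://example.invalid/v1/generate"

def build_anonymization_prompt(text: str) -> str:
    """Prompt asking the model to strip distinctive stylistic features."""
    return (
        "Rewrite the following text to remove distinctive stylistic "
        "features (word choice, punctuation habits, formatting) while "
        "preserving its meaning:\n\n" + text
    )

def rewrite_for_anonymity(text: str, api_key: str) -> str:
    """Send the prompt to a (hypothetical) text-generation endpoint."""
    payload = json.dumps(
        {"prompt": build_anonymization_prompt(text)}
    ).encode()
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        # Assumed response shape; real APIs differ.
        return json.loads(resp.read())["text"]
```

The same prompt pattern works for summarization or style transfer; only the instruction text changes.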
