Will Payne

I’ve been engaging with EA in some way since 2018: first by helping run EA Oxford, then by running the group and setting up remote ‘fellowships’. Most recently I was on the CEA groups team, looking into ways to support university groups.

I’m now looking for new things to work on, hoping to shift towards work which both:

  • Gives me more sustained motivation (possibly building systems and developing code), and
  • Shifts my focus more sharply towards urgent sources of existential risk.

Comments

Many groups metrics have grown by 100% to 400% in the past year

Hi Thomas, great question. I’ve included a list below from our records as of today (mid-November).

It’s worth noting that we think any of these groups could absorb at least 2 FTE, so I’d like people looking at these numbers not to be put off applying for the Campus Specialist Internship or Campus Specialist Programme based on the amount of FTE a group currently has (although if you’d want to work on some of the under-supported groups, that would be amazing).

  • Berkeley 1.9 FTE (of which 1 FTE is EAIF funded)
  • Brown 0.25 FTE
  • Caltech 0 FTE
  • Cambridge 3.5 FTE (grant transfer in progress)
  • Columbia 1 FTE
  • Georgetown 0.25 FTE
  • Harvard 1.375 FTE, of which:
    • Harvard Undergrad 0.925 FTE
    • Harvard Grad 0.25 FTE
    • Harvard Law 0.2 FTE
  • LSE 0.75 FTE
  • MIT 1.125 FTE
  • Oxford 2 FTE
  • Princeton 0 FTE
  • Stanford 1.3 FTE
  • Swarthmore College 0.75 FTE
  • UChicago 0.15 FTE
  • UHong Kong 0 FTE
  • UPenn 0.875 FTE
  • Yale 1.9 FTE

Thoughts on "A case against strong longtermism" (Masrani)

Also worth noting that there are a bunch of other, more accessible descriptions of longtermism out there; this one is specifically a formal definition aimed at an academic audience (by virtue of being a GPI paper).

A Toy Model of Hingeyness

Once again, I really like this model, Bob, and I'm pretty excited to see how it changes with even more time to iterate. I'd never come across the formalised idea of slack before, and I think it describes a lot of what was in my head when responding to your last post!

I'm wondering how you've been thinking about marginal spending in this model? I.e. if we're patient philanthropists, which choices should we spend money on now, and for which should we save, once we factor in that some choices are easier to affect than others? For example, one choice might be particularly hingey under any of your proposed definitions but be very hard for a philanthropist to affect; e.g. the decisions made by one person who might not have heard of EA (either a world leader or someone who is just coincidentally important). We probably won't get a great payoff from spending a load of money/effort identifying that person, and would prefer to avoid that path down the decision tree entirely.

I guess the thrust of the question here is how we might account for the tractability of affecting choices in this model. Once tractability is factored in, we might prefer to spend a little money affecting lots of small choices which aren't as hingey under this definition rather than spending a lot affecting one very hingey choice. If this is the case, I think we'd want to redefine hingeyness to match our actual decision process.

It seems like each edge on the tree might need a probability or cost rating to fully describe real-world questions of tractability, but I'd be very interested in your or others' thoughts.


(maybe call it 'hinge precipiceness'?)

(Or maybe precipicipality?)

I really like this model and will probably use it to think about hingeyness quite a lot now!

I'll make an attempt to give my idea of hingeyness; my guess is that hingeyness is a new enough idea that there isn't really a correct answer out there.

You can think of every choice in this model as changing the distribution of future utilities (not just at the next time step but the sum across all time). Hingier choices are those which change this distribution more than others do. For example, a choice where one future branch includes -1000 and a bunch of 0s and the other includes 1000 and a bunch of 0s is really hingey, as it changes the portfolio from [-1000, many 0s, 1000] to either [-1000, many 0s] or [many 0s, 1000]. A choice between [many 0s, a 10] and [more 0s, another 10] is not hingey at all and has essentially no effect on history. A good rule of thumb is to think of a choice as hingier the more it reduces the range of possible utilities.
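To make that rule of thumb concrete, here's a minimal sketch in Python (my own illustration, not something from the post): it treats a choice as a node whose options lead to sets of reachable future utilities, and scores hingeyness as how much the spread of reachable utilities shrinks once an option is taken. Averaging over the options is just an assumption I've made to keep the example simple.

```python
# A rough sketch of the "range reduction" idea above (my own formalisation).
# A node is either a leaf utility (standing in for the summed utility of that
# whole future) or a dict mapping option names to child nodes.

def reachable(node):
    """All total utilities reachable from this point in the tree."""
    if isinstance(node, (int, float)):
        return [node]
    return [u for child in node.values() for u in reachable(child)]

def spread(utilities):
    return max(utilities) - min(utilities)

def hingeyness(choice):
    """How much the range of possible futures narrows once this choice is made."""
    before = spread(reachable(choice))
    after = sum(spread(reachable(child)) for child in choice.values()) / len(choice)
    return before - after

# The example from the comment: one option locks in the -1000 branch,
# the other locks in the +1000 branch.
big_choice = {"A": {"x": -1000, "y": 0, "z": 0},
              "B": {"x": 1000, "y": 0, "z": 0}}
small_choice = {"A": {"x": 0, "y": 10},
                "B": {"x": 0, "y": 10}}

print(hingeyness(big_choice))    # 2000 - 1000 = 1000 -> very hingey
print(hingeyness(small_choice))  # 10 - 10 = 0        -> not hingey at all
```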

As an extreme example, choosing between powerful world governments where one values utopias and the other values torture seems very hingey: before the choice we had a large range of possible futures, and afterwards we're irreversibly in a really positive or really negative state.

I'll apply this model to some of your questions below.

"All older years have more ripple effects, but does that make 1 more hingey?"

I think in the diagram above, choice 1 is coincidentally quite hingey, because you're choosing between a world where you're guaranteed a utility of 7 or 8 down the line and a world where you have a range between 0 and 6. The range of possible outcomes for one option is very different from the range for the other. You can imagine a similar timeline where the hingey moments are somewhere else (I've done an ugly drawing of one such world). In that timeline, choice 1 doesn't matter at all in the long run, because the full range of final options is still open to you, but the second-tier choices (and the one labelled 0 in the third tier) matter a lot, as once you make them your range changes in big ways.

"The absolute utility that 1 and 2 could add are the same, but the relative utility is very different. So, what is more important for the hingeyness?"

I think neither is the key value here. Hingeyness is about leverage over the whole human trajectory, so the immediate changes in utility are not the only thing we should consider: we care more about how a choice affects aggregate expected utility over all the remaining future states. This is why irreversible choices seem so concerning.


One last thought here is that hingeyness should probably also include some measure of tractability. It could be that one choice has a large effect on the future but we don't have much capacity to affect that choice; for example, if we discovered an asteroid heading towards Earth which we couldn't stop. There's no point in considering something the hinge of history if we can't operate the hinge! Currently I don't think that's in the model, but maybe you could add it by imposing costs on each choice (a rough sketch of what I mean is below)? My guess is this model could become pretty mathematically rigorous and useful for thinking about hingeyness.
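One way the cost idea might look, as a toy sketch (my own made-up numbers and scoring, not anything from the post): weight each choice's hingeyness by how cheaply we can actually influence it, and compare the leverage per unit of money/effort.

```python
# Toy sketch of cost-adjusted hingeyness (illustrative numbers only).
# Each choice gets a hingeyness score (e.g. from the range-reduction sketch
# above) and a rough cost to meaningfully influence it.

choices = {
    # name: (hingeyness score, cost to influence the choice)
    "unreachable world leader's decision": (1000, 10_000_000),
    "one of many small community decisions": (5, 1_000),
}

def leverage(hinge, cost):
    """Hingeyness bought per unit of money/effort spent."""
    return hinge / cost

for name, (hinge, cost) in choices.items():
    print(f"{name}: {leverage(hinge, cost):.4f} hinge per unit cost")

# With these made-up numbers, the many small, cheap choices come out ahead,
# which is the kind of reallocation the tractability point above suggests.
```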