I manage a team of data people and do projects and operations stuff for Greenpeace. https://www.linkedin.com/in/rj-mitchell/
Long-time giver to GiveWell charities, looking to get more directly involved.
'This should recognise that more reliable motivation comes from norm-following rather than from individual willpower'
I think this is right, and it's even more true and important when the positive impacts you might have are distant in time, space, or both. If you're doing something to help your local community, you should be able to see the impact yourself fairly quickly, and willpower could well be enough to get you out picking litter or whatever. It falls down a bit if your beneficiaries are halfway round the world, in the future, or both.
It seems like certain principles have a 'soft' and a 'hard' version - you list a few here. The soft ones are slightly fuzzy concepts that aren't objectionable, and the hard ones are the tricky outcomes you reach if you push them. Taking a couple of your examples:
Soft: We should try to do as much good with donations as possible
Hard: We will sometimes guide time and money away from things that are really quite important, because they're not the most important
Soft: Long-term impacts are more important than short-term impacts
Hard: We may pass up interventions with known, highly visible short-term benefits in favour of those with long-term impacts that may not be immediately obvious
This may seem obvious, but for people who aren't already familiar, leading with the soft versions - on the basis that the hard ones will come up soon enough if someone is interested or does their research - will give a more positive impression than jumping straight to the hard stuff. Yet I see a lot more jumping than seems justified. I can see why, but if you were trying to persuade someone to join or think well of your political party, would you lead with 'we should invest in public services' or 'you should pay more taxes'?
Yes, in practice interview questions should vary a lot between different roles, even if on paper the roles are fairly similar, so I'm not sure they could be coordinated, beyond possibly some entry-level roles.
In a situation where someone is good but doesn't quite fit a role, the referral element might be useful. I've often interviewed someone and thought 'they're great, but not the best fit for this role', even when they match on paper, and being able to refer that person on to another organisation would be a mutual benefit.
I'd heard of Peter Singer in an animal rights context years before I knew anything about his association with EA or his wider philosophy. I wonder if a lot of people who have heard of him are in the same place I was.
I don't think approaching this as 'why not to pursue a path' is helpful. It's more about helping people be aware of things they may not know, so they can make an educated decision - and that decision may then be 'it's not for me'. Think of the numbers showing how few people become professional athletes. The framing isn't 'don't do it because you won't make it'; it's 'few people make it, so decide in full knowledge'.
Celebrate all the good actions that people are taking (not diminish people when they don't go from 0 to 100 in under 10 seconds flat).
I'm uncomfortable doing too much celebrating of actions that are much lower impact than other actions
I think the following things can both be true:
I didn't read the OP as saying that we should settle with lower impact actions if there's the potential for higher impact ones. I read it as saying that we should make it easier for people to find their level - either helping them to reach higher impact over time if for whatever reason they're unable or unwilling to get there straight away, or making space for lower impact actions if for whatever reason that's what's available.
Some of this will involve shouting out and rewarding less impactful actions beyond their absolute value - not for its own sake, but because it may be the best way of supporting that progression. I've definitely noticed the '0-100' thing, and if I were younger and less experienced it might have bothered me more.
They said that computers would never beat our best chess player; suddenly they did. They said they would never beat our best Go player; suddenly they did. Now they say AI safety is a future problem that can be left to the labs. Would you sit down with Garry Kasparov and Lee Se-dol and take that bet?
Thanks Jordan. I wanted to pick up on the Turo element. You mention that this is something you only recently stumbled across, that you don't have prior experience or training in the area, and that you aren't especially passionate about it. You also say that you could make $200k a year from it working a 40-hour week. Where did you get these figures? There aren't many opportunities you can go into without experience and start earning $200k a year.
It may be possible, but I'd suggest it's a high bar to reach, as such opportunities are rare, so I'd be interested to see more analysis here. You also mention risks, but it doesn't look like these are explored in great detail. I would really look for some maximally rational analysis of this aspect first.
'why seeing options other than the expected one would make me less likely to follow through'
I think the key is that 'following through' can mean several things that are similar from the perspective of GWWC but quite different from the perspective of the person pledging.
In my case I'd already been giving >10% for quite a while, but thought it might be nice to formalise it. If I hadn't filled in the pledge, it wouldn't have made any difference to my giving, so the value of the pledge to me was relatively low. If the website had been confusing or off-putting, I might have given up.
Then there are others who have already decided to give 10% but haven't yet started. The pledge has a bit more value here, since there's a chance it could prevent backsliding, but assuming the person had already fully committed to giving at this level, the GWWC pledge still wouldn't be crucial to them.
Finally, there are people who for whatever reason come across the website without yet having decided to give 10% (or even 1%) and make a decision to sign up when they're there. This is where the more standard marketing theory comes into play.
For the first two groups, non-conversion looks something like 'I can't even see what I'm meant to be signing up for. Never mind, it's not going to affect how I actually give anyway.' Friction in this case is anything that makes it harder to identify what the 10% pledge is and how to sign up to it. I spent a couple of seconds looking between the three options, but it was ultimately pretty easy to work out which one I wanted. It would be even easier if it were the one main option.
For the third, it could well be 'There's too much choice - maybe I don't want to do this after all.' At any rate, the effect will be very different from people who had already committed to giving 10%.
The 'loss' to GWWC looks the same for all three groups, but there's only a substantial loss to the wider world with the third.
I know that people misremembering their own intentions can be an issue, but I doubt it would be a problem for a question like 'did you intend to give 10% when you arrived on the GWWC website?', and certainly not for 'have you already been giving 10%?' The groups are so different that it would be really helpful to at least get an indication of how they split out.
I was thinking something similar reading some comments around funds giving (or not giving) feedback. There does seem to be a missed equilibrium:
I might not jump to assuming it would all be coming off existing staff's plates though.
Anyway, great post.