All of Rob Mitchell's Comments + Replies

On being ambitious: failing successfully & less unnecessarily

One potential solution could involve explicitly funding such public goods. For example, funders could give an organisation additional funding to allow their staff to contribute more to effective altruism public goods, despite competing priorities.

I was thinking something similar reading some comments around funds giving (or not giving) feedback. There does seem to be a missed equilibrium:

  • It's in everyone's interests if there is more feedback, support, coordination etc.
  • It's not in the interests or capability of any one organisation to take this on themselves.
... (read more)
A Problem with Motivation

This should recognise that more reliable motivation comes from norm-following rather than from individual willpower

I think this is right, and it's more true and important when the positive impacts you might have are distant in time, space or both. If you're doing something to help your local community then you should be able to see the impact yourself fairly quickly, and willpower could well be the best thing to get you out picking litter or whatever. This falls down a bit if your beneficiaries are halfway round the world, in the future, or both.

1James Aitchison1mo
Yes, it is harder to care for distant or statistical people even if it is normatively the right thing to do. We shouldn't overestimate how much we can do by will power alone, but changing norms may be effective.
EA can sound less weird, if we want it to

It seems like there are certain principles that have a 'soft' and a 'hard' version - you list a few here. The soft ones are slightly fuzzy concepts that aren't objectionable, and the hard ones are some of the tricky outcomes you come to if you push them. Taking a couple of your examples:

Soft: We should try to do as much good with donations as possible

Hard: We will sometimes guide time and money away from things that are really quite important, because they're not the most important


Soft:  Long-term impacts are more important than short-term impac... (read more)

EA Common App Development Further Encouragement

Yes, in practice interview questions should vary a lot between different roles, even if on paper the roles are fairly similar, so I'm not sure they could be coordinated, beyond possibly some entry level roles.

In a situation where someone is good but doesn't quite fit in a role the referral element might be useful. Often I've interviewed someone thinking 'they're great but not as good a fit for the role' even if they match on paper, and being able to refer that person on to another organisation would be a mutual benefit.

I think that some questions can be used universally across seniority levels and cause areas. For example, something on 'describe an important problem that you resolved in the past few months.' Other questions can be applicable to similar types of roles (e.g. research manager) even in different fields (maybe 'a researcher has a great idea that another one disagrees with, how do you go about making a decision'). Then, some questions can be applicable to any job within a cause area ('what draws you to hen welfare?') and some particular to a type of organization ('what interests you about research'). It could be noted what role type, cause area, and/or organization type the question is pertinent to. Then, organizations could see responses of candidates who interviewed for that role/cause/organization type.

Bias could be introduced by candidates tailoring their responses to a particular position. This can be mitigated either by having questions independent of position or by recruiters looking beyond the context to the actual skills (e.g. if someone resolved a disagreement in ML research, they could also resolve a disagreement in math research).

Ok, that is great. What do you think about giving some of these pieces of feedback:

  • (Unique) skillset perspective
  • Skills that you would recommend gaining if they apply for a similar position
  • Description of a position that could be ideal for the candidate (including cause area, role, environment, management collaboration/style) (with organization tips, if known)
  • What is different about the candidate 'on paper' vs. 'live?'

This alone can direct candidates to better roles and provide feedback on presentation while adding only a few minutes per candidate and, in conjunction with other application material, can inform referrers what to recommend more accurately.
2Yonatan Cale2mo
Nice! So here's my #1 piece of user research: Would you like to add a checkbox for that in your own application form?
How many people have heard of effective altruism?

I'd heard of Peter Singer in an animal rights context years before I knew anything around his EA association or human philosophy in general. I wonder if a lot of people who have heard of him are in the same place I was.

5Matthew Yglesias2mo
Similarly, I’d heard of Peter Singer as a result of campus controversies over his (alleged) views on disability long before I heard anything else about him. But it was actually learning about that controversy that prompted me to go see him speak some time in 2001 or so and I was surprised by what I heard.
Thoughts on requesting reasoning or examples to not pursue fields/positions

I don't think approaching this as 'why not to pursue a path' is helpful. I think it's more about helping people be aware of things they may not know so they can make an educated decision. That decision may then be 'it's not for me'. Think of the numbers showing how few people become professional athletes. The framing isn't 'don't do it because you won't make it'. It's 'few people make it, decide in full knowledge.'

"Decide in full knowledge"; I think that's exactly what I was aiming for, thank you. My impression (and maybe I just haven't seen the articles) is that there is more focus on "these are the traits of people who have done well in this path" and something along these lines would attempt to balance that. I may also be biased as I have a lot of interests, and narrowing down feels much better than adding more options to my plate.
"Big tent" effective altruism is very important (particularly right now)

Celebrate all the good actions that people are taking (not diminish people when they don't go from 0 to 100 in under 10 seconds flat).


I'm uncomfortable doing too much celebrating of actions that are much lower impact than other actions

I think the following things can both be true:

  • The best actions are much higher impact than others and should be heavily encouraged.
  • Most people will come in on easier but lower impact actions and if there isn't an obvious and stepped progression to get to higher impact actions and support to facilitate this then many will fa
... (read more)
8Luke Freeman2mo
Thanks Rob. I think you just made my point better than me! 😀
[$20K In Prizes] AI Safety Arguments Competition

They said that computers would never beat our best chess player; suddenly they did. They said they would never beat our best Go player; suddenly they did. Now they say AI safety is a future problem that can be left to the labs. Would you sit down with Garry Kasparov and Lee Se-dol and take that bet?

Help Me Choose A High Impact Career!!!

Thanks Jordan. I wanted to pick up on the Turo element. You mention that this is something you only recently stumbled across, and it doesn't sound like you have prior experience or training in this area, and that you aren't especially passionate about it. You also say that you could make $200k a year on it working a 40 hour week. Where did you get these figures? There aren't many opportunities you can go into without experience and start earning $200k a year.

It may be possible, but I'd suggest it's a high bar to reach as such opportunities are rare, so I'd... (read more)

Giving What We Can - Pledge page trial (EA Market Testing)

'why seeing options other than the expected one would make me less likely to follow through'

I think the key is that 'following through' can mean several things that are similar from the perspective of GWWC but quite different from the perspective of the person pledging.

In my case I'd already been giving >10% for quite a while but thought it might be nice to formalise it. If I hadn't filled in the pledge it wouldn't have made any difference to my giving. So the value of the pledge to me was relatively low. If the website had been confusing or offputting ... (read more)

Organizational alignment

Well, it looks like I'm hijacking a thread about organisational scaling with some anxieties around referring to people in overly utilitarian ways that I've talked about elsewhere. Which is fair enough; interestingly I've done the opposite and talked about org scaling on threads that were fairly tangentially related and got quite a few upvotes for it. All very intriguing and if you're not occasionally getting blasted, you're not learning as much as you might, getting enough information about e.g. limits, etc...

Organizational alignment

Every person in your company is a vector. Your progress is determined by the sum of all vectors.

'Hey! I'm not a vector!' I cried out to myself internally as I read this. I mean, I get it and there's a nice tool / thought process in there, but this feels somewhat dehumanising without something to contextualise it. There are loads of tools you might employ to make good decisions that might involve placing someone in a matrix or similar, but hopefully it's obvious that it's a modelled exercise for a particular goal and you don't literally say 'people are math... (read more)

Not sure why you got downvoted. First para is valid, second seems a bit off context. (Like yes, it's related but is it related enough to actually further the goals of the OP?)
Giving What We Can - Pledge page trial (EA Market Testing)

Thanks everyone, this is very interesting and well worth having a look through the attached Gitbook.

Around the intuitive interpretation:

Perhaps giving people more options makes them indecisive. They may be particularly reluctant to choose a “relatively ambitious giving pledge” if a less ambitious option is highlighted.

It's possible that this is the reason, but there's an alternative interpretation based around the fact that GWWC is already quite well-known and referenced as 'the place you go to donate 10% of your income'. So if a lot of people are coming o... (read more)

Thanks, that makes sense to me in a general sense. I was thinking in this direction too but having a hard time putting into words 'why seeing options other than the expected one would make me less likely to follow through'. Can you dive a little deeper into what the actual 'friction' is, or 'what about seeing pledges other than the one I was planning to do would make me less likely to continue?'

I guess my thought was that the mechanism would be indecision, a need to take more time to think about it, or maybe a sort of 'hey, I am over-achieving here, do I really need to signal that I'm a 10-percenter when I could much more easily be a 1-percenter' ... but then I need to think about it more so I don't decide in the moment.

This would be interesting, I agree. I think we would get some information from this. (AFAIK we don't have it but I could ask.) I'm not convinced it would be 'fully informative', because of the usual caveats about selection bias and people not always knowing/remembering what was in their minds. But still, it seems worth doing!
Effective Developers: The CV Blind Spot

This is good advice and can be expanded outside software developers as you say. It's also great to see you offering CV help!

As someone who's hired a decent number of people, the one caveat I would add is that this will be really useful to follow as above if you are applying for a job where there is a degree of discretion among decision-makers around what they're assessing.  It's less immediately applicable, but still potentially valuable, if the initial selection is based solely on scoring against predefined criteria. Sometimes this will be explicit (... (read more)

Charlotte's Shortform

For all that I've read and done with ToCs and critical path analysis, the first thing that comes to my mind is still 'avoiding this':

(I genuinely find thinking 'make sure you don't do this' at all stages is more effective than any theory I've read.)

Also, anything that has 2-3 paths to a potential goal that are at least partially independent will usually leave you in a better place than one linear path.  Then it's not so much 'backchaining' as switching emphasis ('lobbying seems to have stalled, so let's try publicity/behaviour change... then who knows... (read more)

How close to nuclear war did we get over Cuba?

Thanks for the detailed response and for linking to that other post. I've been dealing with chickenpox in the house so this is probably later and briefer than the analysis deserves.

+1 to 'Command and Control' and 'Nuclear Folly' as well worth reading - between them, enough to dispel any illusions that the destructive power of nuclear weapons was matched with processes to avoid going wrong, whether by accident or human folly. I'll check out 'The Bomb'.

The worrying aspect for me is the combination of leeway for particular commanding officers combined with en... (read more)

How close to nuclear war did we get over Cuba?

there were no American war plans for instance that escalated from the use of tactical nuclear weapons by the Soviets to firing nuclear missiles

What's your source for this?

I'd also comment that this misses the wider global context. There were tensions over Berlin, and China and India briefly went to war alongside the Cuban missile crisis; potential overlaps between these conflicts raised the risk of nuclear exchange considerably, possibly not even beginning around Cuba, and at any rate expanding beyond it if it got going. 

I have no specific source saying explicitly that there wasn't a plan to use nuclear weapons in response to a tactical nuclear weapon. However, I do know what the decision-making structure for the use of nuclear weapons was. In a case where there hadn't been a decapitating strike on civilian administrators, the President was presented with plans from the SIOP (the US nuclear plan), which were exclusively plans based around a strategy of destruction of the Communist bloc. The SIOP was the US nuclear plan, but triggers for nuclear war weren't in it anywhere. When individual soldiers had tactical nuclear weapons their instructions weren't fixed - they could be instructed explicitly not to use tactical nukes - but in general the structure of the US armed forces was to let the commanding officer decide the most appropriate course of action in a given situation.

Second thing to note - tactical nukes were viewed as battlefield weapons by both sides. Neither viewed them as anything special because they were nuclear, in the sense that they should engender an all-out attack. So maybe I should clarify that by saying that there was no plan that required the use of tactical nuclear weapons in response to a Soviet use of them. Probably the best single text on US nuclear war plans is The Bomb by Fred Kaplan. Probably the best source on how tactical nukes were used is Command and Control by Eric Schlosser.

On the second one, I have a post here that serves to give the wider strategic context: [] But it's not clear to me how Berlin is relevant. It's relevant insofar as it's an important factor in why the crisis happened, but it's not clear to me why Berlin increased the chance of escalation into nuclear war beyond the fact that the Soviet response to a US invasion of Cuba could be to attempt to take Berlin. W
Bad Omens in Current Community Building

I haven't come across this yet... is it what I think it is?

Yep. It seems pretty easy to optimise for consequentialist impact and still be more virtuous and principled than most people. Maybe EA can lead to bad moral licensing effects in some people.
Intro and practical ideas around Salesforce within EA

Hi Eli! I'm glad those orgs are using Salesforce. It's powerful and scales very well. Annoyingly Salesforce themselves can be a massive sales and hype machine though, so it's not always easy to get the best advice from them directly. So freelance can be doubly useful.

1Eli Kaufman2mo
True, independent advice can save time and costs.
Bad Omens in Current Community Building

Very interesting. I haven't come into contact with any student groups, so can't comment on that. But here's my experiences of what's worked well and less well coming in as a longtime EA-ish giver in my late 30s looking for a more effective career:


(Free) books:  I love books - articles and TED talks are fine for getting a quick and simple understanding of something, but nothing beats the full understanding from a good book. And some of the key ones are being given away free! Picking out a few, the Alignment Problem, The Precipice and Scout Mindset ... (read more)

Occasionally, apparent coldness to immediate suffering:  I've only seen this a bit, but even one example could be enough to put someone off for good.

I would really like to ban the term "rounding error".

2Peter Elam2mo
I really like that piece that you linked to. Thanks for including it.
EA and the current funding situation

Definitely agree that networks will become worse predictors and ultimately grants, job offers etc. will become more impersonal. This isn't entirely a bad thing. For example personal and network-oriented approaches have significant issues around inclusivity that well-designed systems can avoid, especially if the original network is pretty concentrated and similar (see: the pic in the original post...)

As this happens this may also mean that over time people who have been in EA for a while may feel that 'over time the average person in the movement feels less... (read more)

a UBI-generating currency: Global Income Coin

The White Paper is fascinating as an example of some smart people trying to identify and crack problems around global UBI - it is worth a look whatever your position on this post and/or solution.

For what it's worth, the $2.8tn figure that much of this hangs off seems 'blithely optimistic' as already commented, the link to M0 plucked out of thin air, and the verification system cumbersome and of doubtful viability. There is the germ of something here though, and I'm glad to see so many different organisations and approaches trying to deal with the issue.

1Jasper Driessens2mo
Thanks, appreciate it!

  • I agree, replacing all M0 is 'optimistic'. Plus, even in the most successful outcome it's not at all sure that that's what happens—it could very well be that GLO replaces a part of commercial bank created money (i.e. M2 but not M0) instead. The reason we use the number is to have some figure illustrative of the size of the potential. By basing it on M0, this max potential figure is not totally arbitrary, and it also allows for a simple back-of-the-envelope estimate since it assumes a world in which credit-based money creation by commercial banks would simply continue as normal.
  • Most important is that any level of adoption—as long as it's self-sustained—leads to some amount of seigniorage that can be used for UBI.
  • Which part of verification seems cumbersome? Our goal is to delegate this to crypto exchanges, who already KYC their clients. The user experience of verifying for Global Income Coin will be similar to opening an account at an exchange, broker, or neobank (take a selfie, scan passport etc).
EA and the current funding situation

For many months, they will sit down many days a week and ask themselves the question "how can I write this grant proposal in a way that person X will approve of" or "how can I impress these people at organization Y so that I can get a job there?"

I would flip this and say, it's inevitable that this will happen, so what do we do about it? There are areas we can learn from:

  • Academia, as you mention - what do we want to avoid here? Which bits actually work well?
  • Organisations that have grown very rapidly and/or grown in a way that changes their nature. On a for-
... (read more)
4Nathan Young2mo
I think the question is predictivity. How can you run the most predictive systems possible for selecting good grants/employing suitable people? I guess over time, networks will be worse predictors and the average trustworthiness of applicants will fall slightly, to which we should respond accordingly. Though I guess we have to acknowledge that some grants will be misspent and that the optimal amount of bad grants may not be 0.
EA and the current funding situation

It's useful to separate out consultancy/advice-giving versus the actual doing. I would say though that a successful management/operations setup should be able to at least ameliorate the feedback issue you mention (e.g. by identifying leading and/or more quickly changing metrics that are aligned and gaining value from these). 

2Charles He2mo
I think your comment and sentiment is great. My response wasn't directly related.

I guess I'm more concerned about 'bycatch' or overindexing. For example, activity and discussions that are wobbly about getting into management and scaling, 'Great Leap Forward' sort of style.

Honestly, the root issue here is that I have some distrust related to the causes and processes behind this post and the NB post, all of which seem to be related to discussion and concerns that might have originated on or closely involve the EA forum. I don't think these have the best relationship to reality[1]. It seems healthy for the issues to settle down.

1. ^ I think the discourse on funding/optics has been slightly defective or tinged. This caused Will and SBF to pop onto the forum. This presence is fantastic, great and should continue, but maybe in this instance, different processes or events could have occurred, so they could have used their valuable time and public presence to communicate to EA about something else.
EA and the current funding situation

I agree (and have formerly resembled this type...)  This is quite embedded in a lot of nonprofit culture. Part of it is what motivates the individual and their personality, part of it is the concept of supporters' money. 'Would the person who gave you £5 a month want you to be spending your money on that?' In practice this leads to counterproductive underspending. I remember waiting weeks to get maybe £100 worth of extra memory so I could crunch numbers at a reasonable speed without crashing the computer. The concept of taxpayers' money works similarly. 

There's probably a good forum post in there somewhere about how the psychology of charity affects perceptions of EA...

EA and the current funding situation

Really interesting, and something I'll need to come back to. Just to pick out one bit:

Often, it’ll involve people doing things that just aren’t that enjoyable: management and scaling organisations to large sizes are rarely people’s favourite activities; and, it will be challenging to incentivise enough people to do these things effectively.

I've seen variations on this theme in a few posts, and it doesn't resonate with my own experience. In a genuinely influential management/ops role, there's a great deal of satisfaction to be had in seeing your organisatio... (read more)

5Charles He2mo
As a caution, onlookers should know that there tends to be a large supply of would-be management advice or scaling advice whose quality is often mixed. This is because:

  • It is attractive to supply because this advice is literally executive or senior managerial work, so appears high status/impact/compensation.
  • It is attractive to supply because it moves into organizations where often the hard operational work and important niches have been developed successfully. In reality, it is often this object-level activity that is hard and in 'short supply'.
  • Even in successful organizations, staff are often working around management/CEO or succeeding at their tasks despite leadership (it's not that leadership is bad, it's that it provides many things in complicated ways).
  • Like other meta work, it can be (extremely) difficult to understand if you're good or bad. In particular, for scaling, the feedback loops can be very long.
  • Like other meta work, there can only be so many 'cooks in the kitchen'. In general, it is normal to scale or add more object-level work, but for meta or managerial work the slots are limited (think of the reasons orgs only have 1-2 CEOs) and more management can be negative.

To see this, look at the general opinion of 'management consulting' and how rarely these services are actually used by small, highly effective companies and organizations that I think are similar in profile to EA orgs. I suspect that when they are used, it's because of great respect and trust for specific principals, and not because 'management' can be easily sprinkled onto an existing organization.

Another issue is that 'management' is a word that means many different things. As a very positive thing, and to an unusual degree, even junior EAs perform major management roles in EA organizations. Maybe the source of more management talent or activity would be to promote or 'tap on the shoulder' inside EAs. There sh

Couldn't agree more, Rob. Perhaps my perception is coloured by my own experience and circle of friends, but there certainly seems to be a subset of people out there who genuinely enjoy scaling organisations. I think this is particularly the case in the for-profit sphere, where feedback loops are sometimes instantaneous thus leading to increased satisfaction among the scale-up types.  

I was also surprised to see management and scaling organisations described as "rarely people's favourite activities"; this seems a strong claim. For me, it's the most motivating activity and I'm trying to find an organisation where I can contribute in this area.

Tentative Reasons You Might Be Underrating Having Kids

Liked 'the big picture' bit, the tone change makes this.

I do feel though that this and other posts are less focussed on one of the key aspects beyond the effect on parents and the instrumental value of kids when they're grown up, namely the inherent value of a new, independent consciousness. Whether that's a positive experience of the world is a huge consideration, which you do mention; personally I would err on the side of optimism given human progress.

I'm also concerned around valuing children based on their chance of having a big impact when adult. This... (read more)

Do you offset your carbon emissions?

Thanks for the link to the Cool Earth post. I don't offset for two reasons:

Climate offsets are frequently ineffective, for reasons discussed in the Cool Earth post and, more journalistically, here;

Focussing on policy change to reduce emissions such as a frequent flyer tax or mandating cleaner fuel or better fuel efficiency will have higher impact than focussing on individual carbon footprints, especially as the latter individual focus may take attention away from the systemic changes needed.

While many organisations offering offsets offer quite ineffectual ... (read more)

Mid-career people: strongly consider switching to EA work

As a fellow mid-career person looking at moving into EA, and agreeing that ‘EA career advice for mid-career people is undersupplied at the moment’, I found this post and the comments below really valuable - thanks for taking the time to write it up!


I wanted to pick up on Patrick’s point around specialist vs generalist, as to me this seems a key part of the issue. Much as it is the case that EA tends younger, but seems inclusive of older people, it does also seem to skew specialist. This is understandable, given there are a lot of practitioner roles t... (read more)

1Patrick Gruban2mo
As I was only looking for operations roles I don't know if there is a difference for specialists. At the moment there seems to be a lot of dynamism, with orgs getting new funding and being able to expand quickly. People at the orgs might be able to tell you they are in the process of writing a job post, or they might already have a document but not have posted it publicly. Also for some jobs I assume it might be easier to approach people or networks before posting them and then dealing with many applications. But this is only speculation.

My impression is that often co-founders of organisations don't know themselves what a generalist might be doing in a year, as everything is changing quickly. This seems to be very similar to startups. When hiring I would always point out that a job title in a contract should be seen as a starting point and might have little overlap with the actual job a few months in. The upside is that as a generalist in a small and growing organisation you can bring your specific talents to the table and have the chance to change the role so that it fits your strengths. You can then help outsource or hire talent that can cover your weaknesses.

In terms of giving up something, you might try to get a sabbatical at your current company to try out direct EA work for a year. If this doesn't work out you might discuss quitting on good terms so that they'd be willing to hire you again if they have a job open after a year. It might be useful to research how likely this would work out for you.

For the general framing of impact, I personally ask myself: how can I increase the expected value of the EA community having a bigger impact? Especially in longtermist organisations, the additional dollar donated might be much less useful at the moment than being a co-founder or an early employee of a new organisation. This can be still true if the organisation has a high risk of failure but might do a lot of good if it succeeds. I see that this can make it hard fo
1Ben Snodin2mo
Thanks for sharing your thoughts! If you're thinking purely about maximising impact, you probably want to go for the highest expected value thing, in which case accepting a bit more uncertainty in your lifetime impact to explore other options is (in the kind of situation you described) maybe well worth it in many cases. Of course, one important factor is how easy it is to return to the current career path after (say) a year of trying other stuff. (if this is more of a gut level concern, maybe it's a different story of course)