Jamie_Harris

Managing Director @ Leaf
Working (6-15 years of experience)

Bio

Jamie is Managing Director at Leaf, a nonprofit that introduces intelligent and altruistically motivated 16-18 year olds to ideas about how they can contribute to tackling some of the world’s most pressing problems, especially those that might have lasting effects on the long-term future.

Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.

Give Jamie anonymous advice or feedback here: https://forms.gle/t5unVMRci1e1pAxD9

Comments
251

Topic Contributions
1

Thanks! This reply is very helpful.

If the bottleneck is essentially about people with relevant expertise not 'getting it', then I tentatively suspect that the ideal model for this path for relevant orgs would look like a consultancy. E.g. advising on how to manage and onboard contractors, rather than trying to ~do the work itself.

If that's right, then it suggests that we need relatively few people actually developing this skillset.

(Similarly to how mental health is instrumentally very important for doing good, and it's great that there are people thinking specifically about mental health in the context of maximising positive impact, but I still wouldn't recommend 'psychiatrist/counsellor' for (m)any people who hadn't already built up a bunch of relevant expertise.)

"You can have an outsized impact relative to another potential hire by working for a high-impact organisation where you understand their cause area. This is because information security can be challenging for organisations that are focussed on social impact, as industry standard cybersecurity advice is built to support profit motives and regulatory frameworks. Tailoring cybersecurity to how an organisation is trying to achieve its mission — and to prevent the harmful events the organisation cares most about — could greatly increase your effectiveness."

This part feels pretty crucial for the argument that this is a high-impact career path; otherwise orgs working on AI safety, biosecurity, etc. can presumably just hire professionals without much context or interest in their cause area.

But I find it surprising.

Do orgs report struggling with this? Can't they just draw their hires/contractors' attention to the specific issues they're most concerned about, and explain how their needs differ from the norm?

Yeah, no particular purpose other than to 

(1) reduce the chance that effective altruism ends up co-opting this work and/or incorrectly taking credit for it. (I don't expect that Shakeel was intentionally trying to do this.)

(2) Lower priority, but I was intrigued about how the phrase "in EA" was being used more generally. Context: I think that what gets counted as "EA" or not often rests a lot on self-identification, which I don't see as a particularly important or useful consideration. I'm more interested in whether projects seem cost-effective (in expectation), or at least whether people seem to actually be putting the 'core principles' of EA to good use. (Here's CEA's list on that.) I suspect what's going on here, though, is more about whether the projects have been Open Phil funded.

Those all seem like reasonable criteria! Again focusing on the animal welfare examples, my guess is that several of them wouldn't meet any of those criteria, though it would depend on how loosely several things are defined.

I suspect some of the advocates involved in the animal welfare victories listed here might be taken aback to see them listed as "in EA". The movements for animal rights and animal welfare long predate effective altruism. What makes these things "in EA"?

Nice! Very cool research idea and very interesting findings. From a skim and without having thought too hard about it, the methodology and setup seem more careful and higher quality than what I remember of your previous survey too, so nice job on that!

This roughly fits with my somewhat indirect inferences from some of the research I did on how media coverage and public opinion influence each other and policy-making. (Other relevant points here.) I'd expect more radical tactics to be good at forcing an issue onto the public and policy agenda, but not good (possibly counterproductive) at persuading unsupportive members of the public to change their mind. Your evidence on polarisation supports this, I think. And if such protests do indeed increase support for more moderate groups, as your study suggests, then that gives them more influence to negotiate with policy-makers.

My inferences are getting even more indirect now, but an implication of this is that radical campaigns targeted towards issues that the public are already supportive of would be helpful for forcing change, but not so helpful, and perhaps counterproductive, on issues where there isn't already popular support. (This might be bad news for groups like Animal Rebellion.)

These were also two questions that jumped to mind for me as I read this post.

Hello! Since I saw this post, I've switched a couple of things over to using ACSI. I always thought NPS seemed pretty bad, and mostly only included it for comparison with groups like CEA who were using it.

Do you have any data you're able to share publicly yet?


Additionally:

The American Customer Satisfaction Index is an alternative which has stronger empirical grounding, as well as a huge number of publicly available benchmarks. It uses 3 questions, on a 10 point scale, whose scores are averaged and normalized to a 0-100 scale:[1]

How exactly are you calculating it? The Wikipedia formula seems wrong to me, unless I'm misunderstanding it.

(I have 9 answers for each of the three questions. The average responses are 9.4, 9.6, and 9.3. So I think what I'm supposed to do is =((9.4*1+9.6*1+9.3*1)-1)/9*100 . This gives me "303.7037037" which clearly seems wrong.)

My interpretation of what it should be: 

=(((9.4+9.6+9.3)-3)/27)*100

Which equals 93.8. The simpler but slightly less accurate =((9.4+9.6+9.3)/3)*10 comes out similarly, at 94.4.

Which seems very good. E.g. "Full-Service Restaurants", "Financial Advisors", and "Online News and Opinion" all seem to hover around 70-80, while government services range a bit more widely from 60 to 90.

(Caveat that I didn't realise that you were supposed to include labels on 1 and 10 for each of the questions until I checked the Wikipedia entry just now to calculate it, and I'm not sure how this would affect the results. The labels seem pretty weird to me, so I suspect it does affect it somehow.)
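For what it's worth, here's a minimal sketch of that interpretation in Python (the `acsi_score` helper is my own naming, nothing official, and it assumes equal question weights). Note that the averages above are rounded to one decimal place, so plugging them in gives ~93.7 rather than the 93.8 you get from the exact values:

```python
def acsi_score(averages, scale_max=10):
    """Rescale mean responses on a 1..scale_max scale to ACSI's 0-100 range.

    `averages` is the list of per-question mean responses (three questions
    for the standard ACSI); equal question weights are assumed.
    """
    n = len(averages)
    # Shift each mean down by 1 (so the minimum maps to 0), then divide by
    # the maximum possible shifted total and scale to 0-100.
    return (sum(averages) - n) / (n * (scale_max - 1)) * 100

# Rounded per-question averages from the comment above:
print(round(acsi_score([9.4, 9.6, 9.3]), 1))  # → 93.7 (unrounded averages give 93.8)
```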

Thanks!

I think that the "A narrative of EA as a cult" section is helpful for steelmanning this narrative/perception. I also appreciate your suggestions and ideas in the "How to make EA less cultish" section.

As far as I can see, you don't explore any substantive reasons or evidence why "the cult-like features of EA pose a real issue" beyond optics; you note that "the impression of a cult could also explain why some of the recent media attention on EA hasn’t been very positive", but this is about optics. So I'd be interested to hear/read you try to flesh out the "other negative consequences" that you allude to. 

The easier option is to remove "(and it's not just optics)" from the title and rename "It’s not just optics" to "It might be more than optics" or similar, but if you do have thoughts on the other negative consequences, these could be valuable to share.

(For context, I'm one of the people involved in reaching out to high school students, and I'm keen to understand the full implications -- pros and cons -- of doing so, and if there's anything we can do to mitigate the downsides while retaining the benefits.)

Thanks for this helpful post. Strong upvoted.

Animal Advocacy Careers' "skilled volunteering" board has a few things that might be relevant in the "other technical" section.

https://www.animaladvocacycareers.org/skilled-animal-volunteer-opportunities
