Jamie_Harris

Jamie Harris is a researcher at Animal Advocacy Careers, a charity he co-founded that seeks to address the career and talent bottlenecks in the animal advocacy movement, and at Sentience Institute, a social science think tank focused on social and technological change, especially the expansion of humanity's moral circle.

As well as hosting The Sentience Institute Podcast, Jamie takes on a range of small projects and tasks to help grow and support the wider effective animal advocacy community. He works on whatever he thinks are the best opportunities for him to improve the expected value of the long-term future.

Give Jamie anonymous advice or feedback here: https://forms.gle/t5unVMRci1e1pAxD9

Comments

On the longtermist case for working on farmed animals [Uncertainties & research ideas]

Thanks for this post, Michael; I think I agree with everything here! Though if anyone thinks we can "confidently dismiss the above longtermist argument for farmed animal welfare work, without needing to do this research", I'd be interested to hear why.

I won’t be pursuing those questions myself, as I’m busy with other projects

I just wanted to note that Sentience Institute is pursuing some of this sort of research, but (1) we definitely won't be able to pursue all of these things any time soon, and (2) not that much of our work focuses specifically on these cause prioritisation questions -- we often focus on working out how to make concrete progress on the problems, assuming you agree that moral circle expansion (MCE) is important. That said, I think a lot of research can achieve both goals. E.g. my colleague, Ali, is finishing up a piece of research that fits squarely in "4a. Between-subjects experiments... focused on the above questions", currently titled "The impact of perspective taking on attitudes and prosocial behaviours towards non-human outgroups." And the more explicit cause prioritisation research would still fit neatly within our interests. SI is primarily funding constrained, so if any funders reading this are especially interested in this sort of research, they should feel free to reach out to us.

Contact the Sentience Institute and/or me to discuss ideas

Thanks for this note! Agreed. My email is jamie@sentienceinstitute.org if anyone does want to discuss these ideas or send me draft writeups for review.

Case studies of self-governance to reduce technology risk

Cool post! I like the methodology; it bears a lot of similarities to the case studies summary and analysis I'm writing for Sentience Institute at the moment. What do you think about the idea of converting those low / moderate / high ratings in the RQ2 table into numerical scores (e.g. out of 5, 10, or 100) and testing for statistically significant correlations between various scores and the "level of success" score?
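To make the suggestion concrete, here's a minimal sketch of that conversion-and-correlation idea. Everything in it is hypothetical: the low/moderate/high-to-1/3/5 mapping, the case-study data, and the variable names are all made up, and a real analysis would likely use a rank correlation with a significance test (e.g. `scipy.stats.spearmanr`, which returns both the coefficient and a p-value) rather than this bare Pearson coefficient.

```python
# Hypothetical sketch: convert ordinal ratings to scores and correlate
# them with a "level of success" score. All data below are made up.
from statistics import mean

RATING_SCORE = {"low": 1, "moderate": 3, "high": 5}  # assumed 1-5 mapping

# Hypothetical case studies: (RQ2 rating, level-of-success score out of 5).
cases = [
    ("low", 2), ("low", 1), ("moderate", 3),
    ("high", 4), ("high", 5), ("moderate", 2),
]

scores = [RATING_SCORE[rating] for rating, _ in cases]
success = [s for _, s in cases]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(scores, success)
print(round(r, 2))  # → 0.91
```

With only a handful of case studies the p-value will carry most of the information, since a strong-looking correlation can easily arise by chance at small n.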

EA Debate Championship & Lecture Series

I saw after writing this comment that Jonas Vollmer's recent "Some quick notes on 'effective altruism'" post is filled with people calling for more empirical testing on EA messaging. So perhaps there is both more interest and intent to carry out this sort of research than I previously believed.

EA Debate Championship & Lecture Series

This is very cool. Seems like high fidelity outreach to a highly promising group, done well.

My main concern: how often can this general debate theme be repeated at major debate tournaments? I wonder whether it will now be unrepeatable for several years.

<<Lesson 5: It may be helpful to design a formal EA-advocacy framework and research agenda. Debate can be a useful case-study for EA-advocacy for the reasons mentioned in this post.>> I have often thought this. There is a lot of research that seems like it could be useful for EA outreach, e.g. testing the effectiveness of various messaging strategies. There's some cause area-specific research, but not much that I'm aware of relating to more general EA principles.

<< However, even with the help of fellow EAs, it took us a while to understand how best to measure engagement with EA content. >> I have also thought this! I created some questions to use in an RCT we are running at Animal Advocacy Careers and lamented that I had a scale to use for "Animal Farming Opposition" (based on factor analysis of Sentience Institute's surveys) but not "Effective Altruism Inclination" or something similar.

Would be happy to discuss the EA outreach research ideas a bit more if anyone reading this is interested in pursuing (/ collaborating on?) that.

The Importance of Artificial Sentience

Hey, glad you liked the post! I don't really see a tradeoff between extinction risk reduction and moral circle expansion, except insofar as we have limited time and resources to make progress on each. Maybe I'm missing something?

When it comes to limited time and resources, I'm not too worried about that at this stage. My guess is that by reaching out to new (academic) audiences, we can actually increase the total resources and community capital dedicated to longtermist topics in general. Some individuals might have tough decisions to face about where they can have the most positive impact, but that's just in the nature of there being lots of important problems we could plausibly work on.

On the more general category of s-risks vs extinction risks, people focused on s-risks seem pretty unanimous in advocating cooperation between these groups. E.g. see Tobias Baumann's "Common ground for longtermists" and CLR's publications on "Cooperation & Decision Theory". I've seen less about this from people focused on extinction risks, but I might just not have been paying enough attention.

Total Funding by Cause Area

Cool post, I enjoyed seeing these numbers like this. I share your takeaway that global health seems more overrepresented than I expected.

"Do you feel that the numbers I'm using are misrepresentative?" One consideration here is whether the appropriate figures to consider are "EA funding" or "all funding." What's the case for the former? Just that you expect EA funding to be substantially more cost-effective? Maybe. But even then you'd ideally include non-EA funding with some sort of discount, e.g. each non-EA dollar is only counted as 0.5, 0.1, or 0.01 EA dollars. I appreciate also that EA dollars are easier to count.
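As a toy illustration of that discounting idea (the figures, the cause area, and the function name are all hypothetical):

```python
# Toy sketch of counting non-EA funding at a discount; all figures are made up.

def ea_equivalent(ea_dollars, non_ea_dollars, discount):
    """Count each non-EA dollar as `discount` EA dollars."""
    return ea_dollars + discount * non_ea_dollars

# Hypothetical cause area receiving $100m of EA and $900m of non-EA funding.
for discount in (0.5, 0.1, 0.01):
    total = ea_equivalent(100.0, 900.0, discount)
    print(f"discount {discount}: {total:.0f} EA-equivalent $m")
```

Even a steep discount like 0.1 can change the picture substantially when non-EA funding is much larger than EA funding, which is why the choice between "EA funding" and "all funding" matters here.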

The Importance of Artificial Sentience

Oh "staff" might have just been the wrong word. I just meant "team members" or something else non-prescriptive. (They commented anonymously so I couldn't thank an individual.) They confirmed to me that they are currently inactive.

Many (many!) charities are too small to measure their own impact

Some other, partly overlapping reasons:

  • In rushing to measure their impact to meet requests for impact evaluation, they might just focus on the wrong things, e.g. proxy metrics that sound like good impact evaluation but aren't actually good indicators. If measuring on their own timelines, rather than when asked, charities might have more scope and time to do it carefully.
  • I think there's something to be said for just trying to do something really well and only subsequently stopping to take stock of what you have or haven't achieved. (We've taken pretty much the opposite approach at Animal Advocacy Careers, and I periodically wonder whether that was a mistake.)
  • If you're doing something that seems pretty clearly likely to be cost-effective, given the available evidence, spending resources on further evaluation might just be a waste.
  • Similarly, unless conducting and disseminating research is an important part of your theory of change, a research focus might be a distraction if it doesn't seem likely to affect your decision-making.

How can non-biologists contribute to wild animal welfare?

I hosted a podcast episode substantially (not exclusively) about this question! https://www.sentienceinstitute.org/podcast/episode-13.html

Main categories of possible actions discussed:

  • fund Animal Ethics or Wild Animal Initiative. They're massively underfunded.
  • spread the word, especially to audiences that seem likely to be receptive to the idea.
  • get a career in an area that might help, like in policy.