Jamie_Harris

Jamie Harris is a researcher at Animal Advocacy Careers, a charity he co-founded that seeks to address career and talent bottlenecks in the animal advocacy movement, and at Sentience Institute, a social science think tank focused on social and technological change, especially the expansion of humanity's moral circle.

As well as hosting The Sentience Institute Podcast, Jamie does a number of small projects and tasks to help grow and support the effective animal advocacy community more widely. He works on whatever he thinks are the best opportunities for him to improve the expected value of the long-term future.

Give Jamie anonymous advice/feedback here: https://forms.gle/t5unVMRci1e1pAxD9

Comments

EA movement building: Should you run an experiment?

Thanks Peter! 

We're actually planning to do some online ads around the re-launch of the course, and we literally just received our 501(c)(3) status, so we'll have some Google Ad Grant money available soon. But I assume this is all too short notice to be put into effect before the launch of the course next week :P

Something to bear in mind for later cohorts though, perhaps!

Evidence from two studies of EA careers advice interventions

Thanks Peter!

<<I'd like to see a more rigorous study exploring how these interventions affect career choice.>>

I'd love to know more detail, if you're happy to share.

<<However, I am not aware of any research on this.>>

Likewise. I did do some digging for this; see the intro of the full paper for the vaguely relevant research I did find.

Evidence from two studies of EA careers advice interventions

Thanks David! And thanks again for all your help. I agree with lots of this, e.g. differential attrition being a substantial problem and follow-ups being very desirable. More on some of that in the next forum post that I'll share next week.

(Oh, and thanks for recording!)

Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

Well, I think moral circle expansion is a good example. You could introduce s-risks as a general class of things, and then talk about moral circle expansion as a specific example. If you don't have much time, you can keep it general and talk about future sentient beings; if animals have already been discussed, mention the idea that if factory farming or something similar were spread to astronomical scales, that could be very bad. If you've already talked about risks from AI, I think you could reasonably discuss some content about artificial sentience without that seeming like too much of a stretch. My current guess is that focusing on detailed simulations as an example is a nice balance between (1) intuitive / easy to imagine and (2) the sorts of beings we're most concerned about. But I'm not confident in that, and Sentience Institute is planning a survey for October that will give a little insight into which sorts of future scenarios and entities people are most concerned about. If by "introductions" you're looking for specific resource recommendations, there are short videos, podcasts, and academic articles depending on the desired length, format, etc.

Some of the specifics might be technical, confusing, or esoteric, but if you've already discussed AI safety, you could quite easily discuss the concept of focusing on worst-case / “fail-safe” AI safety measures as a promising area. It's also nice because it overlaps more with extinction risk reduction work (as far as I can tell) and seems like a more tractable goal than preventing extinction via AI or achieving highly aligned transformative AI.

A second example (after MCE) that benefits from being quite close to things that many people already care about is reducing risks from political polarisation. I guess that explaining the link to s-risks might not be that quick, though. Here's a short writeup on this topic, and I know that Magnus Vinding of the Center for Reducing Suffering is publishing a book soon called Reasoned Politics, which I imagine includes some content on this. It's all a bit early-stage though, so I probably wouldn't pick this one at the moment.

Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

I agree with 2. Not sure about 3, as I haven't reviewed the Introductory fellowship in depth myself.

But on 1, I want to briefly make the case that s-risks don't have to be (or seem) much weirder than extinction risk work. I've sometimes framed it as: the future is vast, and it could be very good or very bad, so we probably want both to try to preserve it for the good stuff and to improve its quality. (Although perhaps CLR et al. don't actually agree with the preserving bit; they just don't vocally object to it for coordination reasons, etc.)

There are also ways it can seem less weird. E.g. you don't have to make complex arguments about wanting to ensure that a thing that hasn't happened yet continues to happen, or about missed potential; you can just say: "here's a potential bad thing. We should stop that!!" See https://forum.effectivealtruism.org/posts/seoWmmoaiXTJCiX5h/the-psychology-of-population-ethics for evidence that people, on average, weigh (future/possible) suffering more than happiness.

Also consider that one way of looking at moral circle expansion (one method of reducing s-risks) is that it's basically just what many social justicey types are focusing on anyway -- increasing protection and consideration of marginalised groups. It just takes it further.

What we learned from a year incubating longtermist entrepreneurship

<<There are very few people with longtermist & entrepreneurial experience (e.g., 2-3 years experience in both) that we trust to execute ambitious projects in specific areas of longtermism (bio, AI, etc.).>>

Do you have any reflections or recommendations about what people who meet one but not both of these criteria could be doing to become great potential longtermist entrepreneurs (LEs)? I appreciate that there is an obvious answer along the lines of "try the other one out!", but I'm wondering if you have any specific suggestions beyond that.

I.e. 

What could people with longtermist experience but negligible entrepreneurship experience be doing to bridge that gap? Are there any specific resources (books, articles, courses, internships, etc.) you'd recommend for people to start testing their personal fit with this and building relevant skills?

And the same question again for people with entrepreneurship experience but negligible longtermist experience.

(Further to hrosspet's question, I'd also be interested in roughly how you were defining/conceptualising those two categories, and whether you have any general comments about the ways in which people tended to be insufficiently developed in one or the other.)

What we learned from a year incubating longtermist entrepreneurship

Thanks for this post. It's great to see the writeup and to be able to learn from the experience, even though it didn't work out for you guys in this iteration of the idea.

I sense a slight potential tension between the comment that "EA operations generalists, often with community-builder backgrounds, who would be interested in working on EA meta projects" seem like a promising group to work with and the comment that "There are very few people with longtermist & entrepreneurial experience (e.g., 2-3 years experience in both) that we trust to execute ambitious projects in specific areas of longtermism (bio, AI, etc.)." I would imagine that the former group would tend to not have much experience in "specific areas of longtermism". I'd love any clarity you can shed on this:

  • Am I just wrong? I.e. do some/many of these people have substantial experience in specific areas?
  • Is it that you see this group as being promising specifically for various meta projects that don't require deep expertise in any one area?
  • Is it that you think that this gap could potentially be bridged as part of a longtermist entrepreneurship incubator's role, e.g. by getting promising-seeming potential future LEs placed into jobs where they can build some domain specific knowledge before revisiting the idea of LE, or some such?
  • Something else?

What are the EA movement's most notable accomplishments?

Apologies for a quick answer rather than a thorough one where I've looked up all the links and details, but here's one potential source:

I believe Charity Entrepreneurship partly see one of their key outputs as creating tangible achievements for the EA community. I guess a lot of it is still pretty new, but to the extent you can find any impressive achievements from CE-incubated orgs, those are pretty clearly attributable to EA. Fish Welfare Initiative have some impressive commitments from producers in India, I think, and my impression was that some of the global health charities have achieved quite a lot in a small space of time.

Buck's Shortform

Maybe there's some lesson to be learned. And I do think that EAs should often aspire to be more entrepreneurial.

But maybe the main lesson is for the people trying to get really rich, not the other way round. I imagine both communities have their biases. I imagine that lots of people try entrepreneurial schemes for similar reasons to why lots of people buy lottery tickets. And I'd guess that this often has to do with scope neglect, excessive self-confidence / sense of exceptionalism, and/or desperation.

Lessons from Running Stanford EA and SERI

Certainly some impressive achievements here, and a lot that resonated with topics I've been thinking about, e.g. the entrepreneurial attitude in EA movement building; I have an outline for a forum post specifically on that topic, but I'm not sure if I'll get round to writing it up.

<<As for the importance of personalized programming - when Stanford EA first ran our fellowship, we ran one big section of 15-20 fellows in one big weekly discussion... I then decided to switch our model to 2:1 to 5:1 (fellow:organizer) small groups (largely depending on capacity as I’ve had the most success with 3:1 groups so far). This increased attendance, reading completion and engagement, the ability to personally address questions, criticisms, and key takeaways from the material, and also led to fellows befriending organizers.>>

I would love to know any more detail about this that you're happy to share, especially on attendance + retention as the fellowship progressed.

I would intuitively feel worried about setting up such small support groups: if one person drops out, I would imagine it would be pretty demoralising for the remaining 1/2/3/4 people, because it would feel like such a big chunk of the group dropping out at once. The effect on perceived social norms / value would presumably be quite high. And do you ever end up with very small groups, or whole support groups disbanding?

I'd also be intrigued to know if you have problems with people falling behind on reading / prep before their scheduled meeting times. In Animal Advocacy Careers' online course, the completion rates were higher than I had worried they might be, but we had quite a few people who fell behind on the weekly deadlines and then rushed through the content in a short space of time. (I don't mean they didn't pay attention to it, but cramming it in is likely worse for remembering the content; perhaps also worse for reflection + implementation, though that's just a hunch.)
