Jamie_Harris

@ Leaf / Animal Advocacy Careers
Working (6-15 years of experience)

Bio

Jamie Harris works at two nonprofits:

Leaf, which seeks to introduce intelligent and altruistically motivated 16-18 year olds to ideas about how they can contribute to tackling some of the world’s most pressing problems, especially those that might have lasting effects on the long-term future. Involved since Leaf's inception in 2021, Jamie is now directing the project. https://leaf.courses/

Animal Advocacy Careers (which Jamie co-founded), which seeks to address the career and talent bottlenecks in the animal advocacy movement. https://www.animaladvocacycareers.org/

He previously worked as a teacher for 3 years, then as a researcher at the longtermist think tank Sentience Institute for 3.5 years.

Give Jamie anonymous advice/feedback here: https://forms.gle/t5unVMRci1e1pAxD9

Comments

Hello! I'm wondering if it's possible to share (an anonymised version of) the full dataset by country? I'm trying to use the number of EAs in any given country as an input into thinking about how easy it would be to hire someone for an EA talent search programme there. But at the moment I can't easily compare between, say, India, South Africa, Saudi Arabia, and Pakistan.

Thanks!

I'm not sure if this is helpful or annoying to hear at this stage, but I found a hack to search resolved comments:

Open the full comments list in the top right, then manually open the search function via the three-dots menu at the top of the page (i.e. don't just use Ctrl+F). This opens the ordinary search function in your browser, rather than the search function within Google Docs specifically.

I'm not sure if this makes sense. I can record a screen grab or take multiple screenshots to show this if it's helpful. Presumably this is browser-dependent, too.

You might be right that we lose some people due to the form, but I expect the cost is worth it to us. The info we gather is helpful for other purposes and services we offer.

Regarding more info about the course on the sign-up page: of course there's plenty of info we could add to that page, but I worry about an intimidatingly large wall of text deterring people as well.

Thanks! I'm keen to stay in the loop with any coordination efforts.

Although I'll note that AAC's course structure is quite different from EAVP's. It's content + questions/activities, not a reading group with facilitated meetings. (The Slack and surrounding community engagement is essentially an optional, additional support group.) I would hazard a guess that the course would score more highly on your system than most or all of the other items reviewed here, but I haven't gone through the checklist carefully yet.

This seems really helpful, and I look forward to reviewing your comments when we next decide how to modify/update Animal Advocacy Careers' online course, which may be in a week or so's time.

(A shame we weren't reviewed, as I would have loved to see your ranking + review! But I appreciate that our course is less explicitly/primarily focused on effective altruism.)

I'm a fan of the profile, especially the section on "What do we think are the best arguments we're wrong?". I thought this was well done and clearly explained.

One important category that I don't remember seeing is wider arguments against existential risk being a priority. E.g. in my experience with 16-18 year olds in the UK, a very common response to Will MacAskill's TED talk (which they saw in the application process) was disagreement that the future was actually on track to be positive (and hence worth saving).

More anecdotally, something that I've experienced in numerous conversations, with these people and others, is that they don't expect/believe they could be motivated to work on this problem (e.g. due to it feeling more abstract and less visceral than other plausible priorities).

Maybe you didn't cover these because they're relevant to much work on x-risks generally, rather than to AI safety specifically?

It's interesting to see these lists! It does seem like there are many examples here, and I wasn't aware of many of them.

Many of the given examples relate to setbacks and restraints in one or two countries at a time. But my impression is that people don't doubt that various policy decisions or other interruptions could slow AGI development in a particular country; it's just that this might not substantially slow development overall (just handicap some actors relative to others).

So I think the harder and more useful next research step would be more detailed case studies of individual technologies at the international level, to get a sense of whether restraint meaningfully delayed development overall.

I'm a bit confused by this bit:

"We presently have to turn down some large commissions due to lack of staff capacity, and lack of funds in place to expand our team (or to maintain the team at its current size)."

Do you charge for your commissions? I'm struggling to get my head around why the ability to take commissions could be constrained by both lack of funding and staff capacity.

Thoughts I have about what might explain it / what you might mean:

  • you don't actually charge, and so more commissions just means more work for free. (Or you accept low-paid commissions.)
  • commissions don't always come at convenient times so sometimes there are bursts of too much work to do / too many requests, compared to some quieter periods where researchers have to focus more on their own independently generated projects.
  • you have both the research talent and the funding; it's just that there's a time delay for hiring, onboarding, etc., before you can convert both components into increased capacity.

Clarification on which of these, if any, seems closest to RP's situation would be welcome. Thanks!

I also put my intuitive scores into a copy of your spreadsheet.

In my head, I've tended to simplify the picture into essentially the "Value Through Intent" argument vs the "Historical Harms" argument, since these seem like the strongest arguments in either direction to me. In that framing, I lean towards the future being weakly positive.

But this post is a helpful reminder that there are various other arguments pointing in either direction (which, in my case, overall push me towards a less optimistic view). My overall view still seems pretty close to zero at the moment, though.

It's also interesting how wildly different our scores are. Partly I think this might be because I was quite confused/worried about double-counting, and maybe also because I'm not fully grasping some of the points listed in the post.
