Jamie_Harris

Grantmaking @ Polaris Ventures and EA Infrastructure Fund
2809 karma · Joined · Working (6-15 years) · London N19, UK

Bio

Participation: 5

Jamie is a Program Associate at Polaris Ventures, doing grantmaking to support projects and people aiming to build a future guided by wisdom and compassion for all. Polaris' focus areas include AI governance, digital sentience, and reducing risks from fanatical ideologies and malevolent actors.

He also spends a few hours a week as a Fund Manager at the Effective Altruism Infrastructure Fund, which aims to increase the impact of projects that use the principles of effective altruism by increasing their access to talent, capital, and knowledge.

Lastly, Jamie is President of Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history. (Most of the hard work is being done by the wonderful Jonah Boucher though!)

Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.
 

Comments: 359

Topic contributions: 1

Random idea on the random idea: such an event (or indeed similar social opportunities for EtGers) could charge for participation and aim to fully cover costs, or even make a profit that gets donated.

EtGers have money they want to give away, and this is clearly a service that would support them in addressing a need they have, so they should be willing to pay for it.

Also, if the service just focused on providing EtGers with fun, social connections, and a great community rather than 'overfitting' to what seems directly relevant to impact, I think it might be easier to make it successful and grow it. But then a bunch of the money would be spent on things that are quite disconnected from impact, and arguably shouldn't be funded by Open Philanthropy, EAIF, etc.; it would be better coming from EtGers' personal/social/fun budgets than out of their donations.

 

Arguments against: 

  • Separate fuzzies and utilons: this might blur the boundaries and make it hard to optimise for either, or make it confusing for EtGers whether they should see it as a donation or not.
  • EtGers might underestimate the benefits of investing in themselves in this way (in the same way people often underinvest in their own mental health, productivity systems, etc.), and offering it free or subsidised might better set incentives that accurately represent its value.

Thanks for the useful post Marcus!

If people reading might be a good fit for running a project helping to improve funding diversification, I encourage them to apply to the EA Infrastructure Fund. We are keen to receive applications that help with this (and aren't currently very funding constrained ourselves).

As for ideas for projects; Marcus lists some above, I list some on my post, and you might have ideas of your own.

I don't know all the details since it's a governance/operational thing but I don't think we expect this to be an issue, thankfully!

I didn't write that wording originally (I just copied it over from this post), so I can't speak exactly to their original thinking.

But I think the phrasing includes the EA community; it just uses the plural to avoid excluding others.

Some examples that jump to mind:

  • EA
  • Rationality, x-risk, s-risk, AI Safety, wild animal welfare, etc to varying degrees
  • Org-specific communities, e.g. the fellows and follow-up opportunities on various fellowship programmes.

 

I would like to more clearly understand what the canonical "stewards of the EA brand" in CEA and the EAIF have in mind for the future of EA groups and the movement as a whole?

I think this suggests more of a sense of unity/agreement than I expect is true in practice. These are complex things and individuals have different views and ideas!

 

Thanks for thinking this stuff through and coming up with ideas!

Hi Daniel! I don't have a lot to elaborate on here; I haven't really thought much about the practicalities, I was just flagging that proposals and ideas relating to regranting seem like a plausible way to help with funding diversification.

Also, just FYI on the specific intervention idea, which could be promising: that would fall within the remit of EA Funds' Animal Welfare Fund (which I do not work at), not the Infrastructure Fund (which I work at). I didn't check with fund managers there whether they endorse things I've written here.

Based on this information alone, EAIF would likely prefer a later application (e.g. once some event affecting the uncertainty has passed), to avoid wasting our time.

But I don't think this would particularly affect your chances of application success. And maybe there are good reasons to want to apply sooner?

And I wouldn't leave it too long anyway, since applications can take around two months to be approved. Usually less, and very occasionally more.

I think fairly standard EA retreats / fellowships are quite good at this

Maybe. To take cause prio as an example, my impression is that the framing is often a bit more like: 'here are lots of cause areas EAs think are high impact! Also, cause prioritisation might be v important.' (That's basically how I interpret the vibe and emphasis of the EA Handbook / EAVP.) Not so much 'cause prio is really important. Let's actually try and do that and think carefully about how to do this well, without just deferring to existing people's views.'

So there's a direct version like that that I'd be excited about.

Although perhaps contradictorily I'm also envisaging something even more indirect than the retreats/fellowships you mention as a possibility, where the impact comes through generally developing skills that enable people to be top contributors to EA thinking, top cause areas, etc. 

I don't know much about LW/ESPR/SPARC but I suspect a lot of their impact flows through convincing people of important ideas and/or the social aspect rather than their impact on community epistemics/integrity?

Yeah I think this is part of it. But I also think that they help by getting people to think carefully and arrive at sensible and better processes/opinions.

Seems fair. I do work there, I promise this post isn't an elaborate scheme to falsely bulk out my CV.

Mm, they don't necessarily need to be small! (Of course, big projects often start small, and our funding is more likely to look like early/seed funding in these instances.) E.g. I'm thinking of LessWrong or something like that. A concrete example of a smaller project would be ESPR/SPARC, which have a substantial (albeit not sole) focus on epistemics and rationality and have shown some good evidence of positive effects, e.g. in Open Phil's longtermism survey.

But I do think the impacts might be more diffuse than other grants. E.g. we won't necessarily be able to count participants, look at quality, and compare to other programmes we've funded.

Some of the sorts of outcomes I have in mind are just things like altered cause prioritisation, different projects getting funded, generally better decision-making.

I expect we would in practice judge whether these seemed on track to be useful by a combination of (1) case studies/stories of specific users and the changes they made (2) statistics about usage.

(I do like your questions/pushback though; it's making me realise that this is all a bit vague and maybe when push comes to shove with certain applications that fit into this category, I could end up confused about the theory of change and not wanting to fund.)

Thanks! Sorry to hear the epistemics stuff was so frustrating for you and caused you to leave EA.

Yes, it's very plausible that the example interventions don't really get to the core of the issue -- I didn't spend long creating those, and they're meant more as examples to help spark ideas than as confident recommendations on the best interventions. Perhaps I should have flagged this in the post.

Re "centralized control and disbursion of funds": I agree that my example ideas in the epistemics section wouldn't help with this much. Would the "funding diversification" suggestions below help here?

And I'd be interested to hear, if you're up for elaborating, why you don't think the sorts of "What could be done?" suggestions would help with the other two problems you highlight. (They're not optimised for addressing those two specific concerns, of course, but insofar as they all relate back to bad/weird epistemic practices, then things like epistemics training programmes might help?) No worries if you don't want to or don't have time though.

Thanks again!
