I currently lead EA Funds.
Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.
Unless explicitly stated otherwise, opinions are my own, not my employer's.
You can give me positive and negative feedback here.
Matching campaigns get a bad rap in EA circles,* but it's entirely reasonable for a donor to worry that if they put a lot of money into an area, other people won't donate. Matching campaigns preserve the incentive for others to donate, crowding in funding.
* I agree that campaigns claiming you'll have twice the impact because your donation will be matched are misleading.
Thanks, this is a great response. I appreciate the time and effort you put into this.
I'm not sure it makes sense to isolate 2b and 3b here - 1a can also play a role in mitigating failure (and some combination of all three might be optimal).
I isolated these because I thought you were most interested in EA orgs improving on 2b/3b, but noted.
I'd be curious to see a specific fictional story of failure that you think:
* is realistic (e.g. you'd be willing to bet at unfavourable odds that something similar has happened in the last year)
* seems very bad (e.g. worth, say, 25%+ of the org's budget to fix)
* is handled well at more mature charities with better governance
* stems from things like 2b and 3b
I'm struggling to come up with examples that I find compelling, but I'm sure you've thought about this a lot more than I have.
Though I don't think it's as big a deal as x-risk or factory farming. The main crux is probably the effect on factory farming, as is the case with many interventions that influence economic growth.