Kestrel🔸

1402 karma · Working (0-5 years) · Lancaster, UK

Bio

I work as a researcher in statistical anomaly detection in live data streams. I work at Lancaster University and my research is funded by the Detection of Anomalous Structure in Streaming Settings group, which is funded by a combination of industrial funding and the Engineering and Physical Sciences Research Council (ultimately the UK Government).

There's a critical research problem that's surprisingly open: if you are monitoring a noisy system for a change of state, how do you ensure that you detect any change as soon as possible, while keeping your monitoring costs as low as possible?

By "low", I really do mean low - I am interested in methods that take far less power than (for example) modern AI tools. If the computational cost of monitoring is high, the monitoring just won't get done, and then something will go wrong and cause a lot of problems before we realise and try to fix things.
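The kind of low-cost stream monitoring described above is classically handled by sequential schemes such as CUSUM, which keeps just one running number per stream and does constant work per observation. A minimal illustrative sketch, not the author's actual research method, with arbitrary parameter values:

```python
def cusum_detector(stream, target_mean, drift=0.5, threshold=8.0):
    """One-sided CUSUM: return the first index at which the running
    statistic crosses `threshold`, or None if it never does.
    Constant memory and O(1) arithmetic per observation."""
    s = 0.0
    for i, x in enumerate(stream):
        # Accumulate evidence of an upward mean shift; `drift` discounts
        # ordinary noise so the statistic resets towards zero in-control.
        s = max(0.0, s + (x - target_mean - drift))
        if s >= threshold:
            return i
    return None

# 200 in-control points around mean 0, then a shift to mean 2, with a
# simple alternating +/-0.9 "noise" pattern kept deterministic for clarity.
stream = [(0 if i < 200 else 2) + (0.9 if i % 2 == 0 else -0.9)
          for i in range(300)]
alarm = cusum_detector(stream, target_mean=0.0)  # flags shortly after index 200
```

The per-observation cost here is a handful of additions and comparisons, which is the sense in which such monitoring can run far cheaper than model-based tooling.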

This has applications in a lot of areas and is valued by a lot of people. I work with a large number of industrial, scientific and government partners.

Improving the underlying mathematical tooling behind figuring out when complex systems start to show problems reduces existential risk. If for some reason we all die, it'll be because something somewhere started going very wrong and we didn't do anything about it in time. If my research has anything to say about it, "the monitoring system cost us too much power so we turned it off" won't be on the list of reasons why that happened.

I also donate to effective global health and development interventions and support growth of the effective giving movement. I believe that a better world is eminently possible, free from things like lead pollution and neglected tropical diseases, and that everyone should be doing at least something to try to genuinely build a better world.

Comments (171)

Many people will compromise their morals for money. That's life. I try not to hold it against them.

For what it's worth, EA's donor core of pledgers/EtGers probably aren't going anywhere or compromising anything much, being a group of people who constantly could just decide to have more money and don't. So maybe they'd be a nicer group for you to hang out with if this kind of moral compromise really bugs you?

Personally I'd quite like to hear about your new charity.

I think you're not realising the key difference a non-EA observer might see between

  • Shrimp on a farm (of which EAs have welfare projects going)
  • Wild, non-farmed non-fished shrimp (which are not to my knowledge the target of any EA welfare projects)

Which means a lot of this is about human intervention in the lives of animals, and therefore about human treatment of animals - cruel or not.

A simple "Some EAs believe we shouldn't be cruel to animals, no matter how small, especially the animals we eat" generally works as a short answer. I generally find people don't want to take the moral position of "humans should be cruel to the animals we eat".

This is a great point.

For anyone reading this who doesn't know the crux here: GiveWell prioritise saving the lives of people in poverty, because that's where lifesaving is cost-effective - but those people's subjective wellbeing is low (they are in poverty, and will still be in poverty after surviving malaria). Therefore the increase in subjective wellbeing from life-saving work is nowhere near as high as it could be for, e.g., mental-health interventions.

EA diluting its message to expand would result in more unqualified people applying for jobs on the EA job boards, which would make them worse job boards.

Thanks for being so clear (and easy to read) about your future strategic ambitions for the EA community.

Are there any plans for CEA to tackle parts of the EA community (such as the EA forum comment section) that are font-constrained?

A great post. I agree - nuclear advocacy just isn't all that effective in a world where costs of renewables and batteries have fallen so much and continue to fall.

I think more widely, what is judged "the most effective climate philanthropy intervention" will shift rapidly over time due to technological/economic/societal progress on climate and it's going to be a constant scramble to keep up with that. This is different to the situation GiveWell is in, and GiveWell have far more money for their analysis operations than Giving Green do.

I encourage continued donations to Giving Green's operations costs. They need to be able to pay staff for good analysis.

I encourage continued red-teaming of climate interventions and pointing out where interventions that might have been judged a high-EV bet in the past have ceased to be so.

Hi! There are no labels on the slider bar, so it's initially unclear which side is agree and which is disagree.

https://www.coursera.org/learn/sciwrite is a great course for someone looking to make their writing clearer.

I do believe that, from a purely expected-impact-maximising perspective, CG should scale up faster than they are currently doing by directing more of their money from GiveWell charities -> fundraising organisations for GiveWell charities. There's a whole bunch of opportunities above 1x they are intentionally missing out on, and also opportunities above their 5x funding bar they are trying to create and then intentionally miss out on. I believe that their current limit here is primarily reputational, and that altruism at this scale is not an efficient market.

The reputational considerations being that CG does not want to be seen using too much of its global health allocation paying for fundraisers, because someone could write a hitpiece on "a billionaire wants you to give money to help the extreme poor but won't give any himself".

Anyone who is not CG is not bound by the reputational considerations of CG, and can take advantage of a significant arbitrage opportunity.

I am confident that CG running more RFPs, committing multi-year scale-up funding, branching out into diverse initiatives, and other such things with its increased EGI budget allocation is a very clear sign that it believes there is both high impact and absorbency here. And knowing that their previous RoI has been 5x, I doubt this one will end up substantially lower. I reckon they'll use about the same judging criteria, the pot just won't run out so fast. Which means that I do think funding an EGI funded by CG is more cost-effective than GiveWell.
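The multiplier argument above can be made concrete with a toy calculation. The 5x figure is the RoI mentioned in the comment; the donation size and function name are illustrative only:

```python
# Toy cost-effectiveness comparison. `multiplier` is the return on
# investment of an effective-giving initiative (EGI): pounds reaching
# effective charities per pound spent. 5x is the RoI cited above.
def pounds_to_charities(donation, multiplier):
    return donation * multiplier

direct = pounds_to_charities(1_000, 1.0)   # £1,000 straight to a GiveWell charity
via_egi = pounds_to_charities(1_000, 5.0)  # the same £1,000 routed through a 5x EGI
```

On these toy numbers, any fundraising opportunity returning more than 1x beats giving directly, which is the arbitrage the comment describes.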
