I'm a researcher at GPI. I have broad interests, but they often involve the intersection of bounded rationality and longtermism. I also like dogs.
Happy to chat about global priorities research, Oxford, or academia. I have some degree of skepticism about effective altruism, and I'm happy to talk about that too.
Thanks mhendric! I appreciate the kind words.
The honest truth is that prestige hierarchies get in the way of many people writing good critiques of EA. For any X (= feminism, Marxism, non-consequentialism, ...), there's much more glory in writing a paper about X than a paper about X's implications for EA, so really the only way to get a good sense of what any particular X implies for EA is to learn a lot about X. That's frustrating, because EAs genuinely want to know what X implies for EA, but don't have years to spend learning it.
Some publications (the Good It Promises volume; my blog) aim to bridge the gap, but there are also some decent papers if you're willing to read them in full. The Pettigrew, Heikkinen, and Curran papers in the GPI working paper series are worth reading, and GPI's forthcoming longtermism volume will have many others.
In the meantime ... I share your frustration. It's just very hard to convince people to sit down and spend a few years learning about EA before they write critiques of it (just like it's very hard to convince EAs to spend a few years learning about some specific X just to see what X might imply for EA). I'm not entirely sure how we will bridge this gap, but I hope we do.
I'll try to write more on the regression to the inscrutable and on AI papers. Any particular papers you want to hear about?
Thanks Vasco! I appreciate your readership, and you've got my view exactly right here. Even a 1% chance of literal extinction in this century should be life-alteringly frightening on many moral views (including mine!). Pushing the risk a fair bit lower than that should be a part of most plausible strategies for resisting the focus on existential risk mitigation.
Often, to be honest, it goes the other way. The average engaged EA knows a tremendous amount about EA, whereas many educated readers (including academics, who are another key part of my audience) know relatively little.
I guess one key audience of mine is academic philosophers. This audience often wants to see discussions of philosophical issues in population ethics, decision theory, and the like at a level that assumes quite a high level of background (often, alas, more than I have!).
I think in practice I often don't provide the second audience (academics, especially philosophers) with as much content as I'd like for them, and I'm trying to do what I can to grow my audience a bit more evenly.
Thanks for the kind words, Jamie!
I always appreciate engagement with the blog, and I'm happy when people want to discuss my work on the EA Forum, including cross-posting anything they might find interesting. I also do my best to engage where I can on the EA Forum: I posted this blog update after several EA Forum readers suggested it.
I'm hesitant to post my blog posts outright as EA Forum posts. Although this is in many senses a blog about effective altruism, I'm not an effective altruist, and I need to keep some distance, both in terms of the readership I answer to and in terms of how I'm perceived.
I wouldn't complain if you wanted to cross-post any posts that you liked. This has happened before and I was glad to see it!