Kevin Kuruc

Research Fellow and Managing Director @ Population Wellbeing Initiative, UT Austin
408 karma · Joined Apr 2021 · Working (0-5 years) · Austin, TX, USA · sites.google.com/view/kevinkuruc/home


I'm an academic economist doing global priorities research. I work full time at the University of Texas at Austin and am an affiliate of the Global Priorities Institute at Oxford. I work on macro-, welfare, and population economics.

How I can help others

I'd be happy to talk with anyone planning on pursuing a research career, especially in economics! Just message me :) 


At what time horizon? For anything over a year, I'd default to the quantity theory of money: inflation should roughly equal the rate of money supply growth (i.e., a central bank choice) minus the real rate of economic growth. Increasing the money supply at 30% per year is easy, so if the Fed wanted to avoid deflation it seems like it could. The short run during such a dramatic regime change could get wacky.
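That back-of-the-envelope arithmetic can be sketched as follows (the numbers here are my own illustrative picks, not from the comment, and this holds money velocity constant as the quantity theory's long-run approximation does):

```python
def qtm_inflation(money_growth: float, real_growth: float) -> float:
    """Long-run inflation implied by the quantity theory of money,
    holding velocity constant: inflation ~= money growth - real growth."""
    return money_growth - real_growth

# Illustrative: 30% annual money growth with 3% real growth
# implies roughly 27% inflation over the long run.
print(round(qtm_inflation(0.30, 0.03), 4))
```

The point of the sketch is just that the central bank's choice of money growth dominates at long horizons; picking a money growth rate above trend real growth rules out sustained deflation.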

+1. I found it to be an extremely thought-provoking, informative, and high-quality post. Really well done. [FWIW: I had very weak priors over AGI timelines (I'm too confused to form a coherent inside view) and this seems like a much more reliable outside view than the one I was defaulting to.]

My first impression is that this is a really exciting strategy to explore! Thanks for doing this, and please post updates :)

FWIW, I don't expect it to be that hard to find common ground with a cattle rancher about chicken welfare standards even with disagreement over the end goal, for all the great reasons you laid out in this post. (Though, it makes me wonder why ranchers wouldn't have already started this sort of lobbying without the nudge you're hoping to provide -- given how straightforward your points were, someone in the beef industry must have thought of this?)

I don't know nearly enough about headhunting to say anything definitive. But if we think they're misleading -- rather than informing -- maybe the argument should be 'EA orgs shouldn't use headhunters' for the reasons you laid out in these comments. It feels counterproductive from the org's side to trick someone into a job they wouldn't have taken with full information (*especially* for a community trying to operate with integrity).

That seems like a distinct point from 'EA orgs shouldn't poach from one another' (which is what it seemed like the post was about). In general, my prior is that norms should be the same for hiring the EA-employed and the non-EA-employed, whether that's using headhunting services or not.

Hi! I have absolutely no expertise in this, but it seems long-term good to maximize the quality of matches between employers and employees. So, formally, I suppose I disagree with the statement:

Clearly, if a headhunter eases a bottleneck at a high-impact organization while creating a bottleneck at another equally high-impact organization, they are not having a positive effect.

If an employee takes a job at another org, presumably they expect it to be a better match for them going forward. I'd count that as a positive effect, assuming (on average) it increases their effectiveness, decreases their chances of burnout, etc. Even if it's just for money or location, it's hard to know what intra-household bargains have been made to do EA work, etc.

There might also be positive general equilibrium effects: an expectation of a robust EA job market (with job-to-job transitions) increased my willingness to leave a non-EA job (academia) and enter this ecosystem. I would have been more hesitant had I felt there was a norm against hiring from other orgs. Though I'll flag that I'm not confident I accurately understand the term 'headhunting' here, as opposed to recruiting, as opposed to hiring. In any case, a strong 'no headhunting/recruiting' norm seems like it would weakly pressure orgs not to hire from other orgs (since they wouldn't want to be seen as recruiting from them).

I get that there are costs associated with re-hiring, re-training, and re-integrating that would be avoided if the original org just hired directly from the non-EA-employed camp. Maybe I'm underestimating these! My uninformed guess is that they are small relative to the benefits of increasing match quality.

Curious about others' thoughts on this, though! Thanks for writing it.

+1. I'm super impressed by people who do this.

Hi Alene!

  1. I like this! In accordance with some of the past discussions on the name 'EA,' I've always felt a bit awkward leaning into the group name for most of the reasons you note above. I was also struck by a recent episode of Bad Takes (Laura McGann and Matt Yglesias' new podcast) about SBF, where McGann describes not wanting to like EA because of this 'we -- a bunch of nerds -- have figured it out' vibe. She was eventually positive after learning more, but it seems really bad to be screening out people like that.
  2. Scale seems to be one of the most important drivers of the things EAs care about (factory farming, malaria prevention, future generations). +1 for 'Mass' capturing that in an intuitive way. Though, to be fair, I haven't spent any time exploring other name ideas.
  3. Minor point: 'Mass Good' struck me as having a religious undertone (maybe just because of the word 'mass'). I actually kind of liked it for that reason! As much as some want to avoid it, EA really does feel like a secular religion to me -- it's a community with shared values, supporting one another in pursuit of living those values. What's not to like?

I'm not confident this is the right rebranding, but a community shake-up might be the right time to be thinking seriously about one. So I'm glad you wrote this!

Thanks for working so hard on this! Great stuff.

Thanks a lot for sharing a rejection story and for all of the effort you've put into making the world a better place! I would have really appreciated meeting you at EAG. 

One thing I was surprised to read in the comments on Scott Alexander's post is this description of EAG:

EA Global is designed for people who have a solid understanding of the main concepts of effective altruism, and who are making decisions and taking significant actions based on them.

I can second the vibe of Zach's 'Data point' comment. I know or met a few students at EAG SF (<5, though based on my sampling I suspect more were there) who had only recently engaged with EA ideas and had not (yet) taken any 'significant action' based on them. This isn't their fault -- they're young! I enjoyed meeting these people and remain glad they were there.

My sense was that the admissions committee wanted to connect bright, prospective EAs with direct-work employers. That could be a reasonable goal, but it doesn't track the above description, which sounds like it's about experience acting on EA principles.
