Jordan Taylor
Pursuing a doctoral degree (e.g. PhD)

PhD student, coding new algorithms to simulate quantum systems. Working on Tensor Networks, which can be used to simulate entangled quantum materials, quantum computers, or to perform machine learning. Looking to get into AI safety or generally do good with my life.

Comments

Expected ethical value of a career in AI safety

Yeah, your first point is probably true: 100 may be unreasonable even as a lower bound (in the rightmost column). I should change it.

--

Following your second point, I changed:

Upon entering the field you may receive sufficiently strong indications that you will not be able to be a part of the most efficacious fraction of AI safety researchers. 

to

Upon entering the field (or just on reviewing your own personal fit) you may receive sufficiently strong indications that you will not be able to be a part of the most efficacious fraction of AI safety researchers. 

Expected ethical value of a career in AI safety

Important point. I changed 

... AI safety research seems unlikely to have strong enough negative unexpected consequences to outweigh the positive ones in expectation.

to

... Still, it's possible that there will be a strong enough flow of negative (unforeseen) consequences to outweigh the positives. We should take these seriously, and try to make them less unforeseen so we can correct for them, or at least have more accurate expected-value estimates. But given what's at stake, they would need to be pretty darn negative to pull the expected values down enough to outweigh a non-trivial risk of extinction.

Expected ethical value of a career in AI safety

Thanks! Means a lot :)
(I promise this is not my alt account flattering myself)

I'll be attending MLAB2 in Berkeley this August so hopefully I'll meet some people there.

Expected ethical value of a career in AI safety

Yes, that is true. I'm sure those other careers are also tremendously valuable. Frankly I have no idea if they're more or less valuable than direct AI safety work. I wasn't making any attempt to compare them (though doing so would be useful). My main counterfactual was a regular career in academia or something, and I chose to look at AI safety because I think I might have good personal fit and I saw opportunities to get into that area. 

Expected ethical value of a career in AI safety

This is a good point I hadn't considered. I've added a few rows calculating a marginal correction factor to the Google Sheet, and I'll update the table if you think they're sensible.

The new correction factor is based on integrating an exponentially decaying function from N_researchers to N_researchers+1, with the decay rate set by a question about the effect of halving the size of the AI alignment community. Make sure to expand the hidden rows in the doc if you want to see the calculations.
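For readers who'd rather see the idea in code than dig through hidden spreadsheet rows, here's a rough sketch of the kind of calculation I mean. This is my own illustrative reconstruction, not a transcript of the sheet: I assume the marginal value of the x-th researcher decays as exp(-λx), fit λ from a hypothetical answer to the halving question (the fraction of total value that would remain if the community were half its size), and compare the integral from N to N+1 against the average per-researcher value. The function names and the bisection solver are just for illustration.

```python
import math

def solve_decay_rate(n_researchers, half_value_fraction, lo=1e-9, hi=1.0, iters=200):
    """Find the decay rate lam such that V(n/2) / V(n) == half_value_fraction,
    where V(n) = (1 - exp(-lam*n)) / lam is the total value of n researchers
    whose marginal value decays as exp(-lam*x). Solved by simple bisection
    (the ratio increases monotonically from 0.5 toward 1 as lam grows)."""
    def ratio(lam):
        n = n_researchers
        return (1 - math.exp(-lam * n / 2)) / (1 - math.exp(-lam * n))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if ratio(mid) < half_value_fraction:
            lo = mid  # decay too shallow: halving loses too much value
        else:
            hi = mid
    return (lo + hi) / 2

def marginal_correction_factor(n_researchers, half_value_fraction):
    """Value of the (N+1)-th researcher relative to the average researcher:
    integral of exp(-lam*x) from N to N+1, divided by V(N) / N."""
    lam = solve_decay_rate(n_researchers, half_value_fraction)
    n = n_researchers
    marginal = (math.exp(-lam * n) - math.exp(-lam * (n + 1))) / lam
    average = (1 - math.exp(-lam * n)) / lam / n
    return marginal / average
```

For example, with (hypothetically) 300 researchers and a belief that halving the community would preserve 70% of its value, the marginal researcher comes out worth roughly a third of the average one; the steeper the assumed diminishing returns, the smaller the factor.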
