Aryeh Englander is a mathematician and AI researcher at the Johns Hopkins University Applied Physics Laboratory. His work focuses on AI safety and AI risk analysis.
Meta-comment: I noticed while reading this post and some of the comments that I had a strong urge to upvote any comment that was critical of EA and had some substantive content. Introspecting, I think this was partly from wanting to signal-boost critical comments because I don't think we get enough of those, partly because I agreed with some of those critiques... but I think mostly because it feels like part of the EA/rationalist tribal identity that self-critiquing should be virtuous. I also found myself feeling proud of the community that a critical post like this gets upvoted so much - look how epistemically virtuous we are, we even upvote criticisms!
On the one hand that's perhaps a bit worrying - are we critiquing and/or upvoting critiques because of the content or because of tribal identity? On the other hand, I suppose if I'm going to have some tribal identity then being part of a tribe where it's virtuous to give substantive critiques of the tribe is not a bad starting place.
But back on the first hand, I wonder if this would be upvoted as much if it came from someone outside of EA, didn't include reassurances that the author really agrees with EA overall, and was written in a more polemical style. Are we only virtuously upvoting critiques from fellow tribe members? If the critique came as an attack from outside, would our tribal defense instincts kick in and lead us to fight the perceived threat?
[EDIT: To be clear, I am not saying anything about this particular post. I happened to agree with a lot of the content in the OP, and I have voiced these and related concerns several times myself.]
This seems correct and a valid point to keep in mind - but it cuts both ways. It makes sense to reduce your credence when you recognize that expert judgment here is less informed than you originally thought. But by the same token, you should probably reduce your credence in your own forecasts being correct, at least to the extent that they involve inside-view arguments like, "deep learning will not scale up all the way because it's missing xyz." The correct response will, of course, depend on how much your views rest on inside-view arguments about deep learning. But I suspect that for a lot of people the correct response is to become more agnostic about any timeline forecast, their own included, rather than to think, "since the experts aren't so reliable here, I should just trust my own judgment."
Part-time work is an option at my workplace. Less than half-time loses benefits though, which is why I didn't want to drop down to lower than 50%.
I did not have an advisor when I sent the original email, but I did have what amounted to a standing offer from my undergrad ML professor that if I ever wanted to do a PhD he would take me as a grad student. I spent a good amount of time over the past three months deciding whether I should take him up on that or if I should apply elsewhere. I ended up taking him up on the offer.
I did not discuss it with my employer before sending the original email. It did take some work to get it through bureaucratic red tape though (conflict of interest check, etc.).
Does this look close to what you're looking for? https://www.lesswrong.com/posts/qnA6paRwMky3Q6ktk/modelling-transformative-ai-risks-mtair-project-introduction
If yes, feel free to message me - I'm one of the people running that project.
Also, what software did you use for the map you displayed above?
In your 80,000 Hours interview you talked about worldview diversification. You emphasized the distinction between total utilitarianism vs. person-affecting views within the EA community. What about diversification beyond utilitarianism entirely? How would you incorporate other normative ethical views into cause prioritization considerations? (I'm aware that in general this is basically just the question of moral uncertainty, but I'm curious how you and Open Phil view this issue in practice.)
True. My main concern here is the lamppost issue (looking under the lamppost because that's where the light is). If the unknown unknowns affect the probability distribution, then personally I'd prefer to incorporate that or at least explicitly acknowledge it. Not a critique - I think you do acknowledge it - but just a comment.
Shouldn't a combination of those two heuristics lead to spreading out the probability but with somewhat more probability mass on the longer-term rather than the shorter term?
What skills/types of people do you think AI forecasting needs?
I know you asked Ajeya, but I'll add my own unsolicited opinion: we need more people with professional risk analysis backgrounds, and if we're going to do expert judgment elicitations as part of forecasting, then we need people with professional elicitation backgrounds. Properly done elicitations are hard. (Relevant background: I led an AI forecasting project for about a year.)
For thinking about AI timelines, how do you go about choosing the best reference classes to use (see e.g., here and here)?