I want to flesh out my impressions of the crucial considerations for strong longtermist EA priorities; I may eventually turn some of these into a fuller writeup. I think there's still a pressing need to further evaluate the stronger claims about the value of AI alignment research, so I've put my thoughts below on why I currently think the value of a pretty wide range of what I call medium-term interventions rests on those strong claims, especially since I'm personally quite interested in many of those medium-term interventions.