Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
Social media recommendation algorithms are typically based on machine learning and generally fall under the purview of near-term AI ethics.
I'm giving a ∆ to this overall, but I should add that conservative AI policy think tanks like FAI are probably accelerating the AI race on net, which should worry both AI x-risk EAs and near-term AI ethicists.
You can formally mathematically verify a programmable calculator. You just can't formally verify every possible programmable calculator. On the other hand, if you can't formally verify a given programmable calculator, that might be a sign your design is a horrible sludge. On the other other hand, deep-learned neural networks are definitionally horrible sludge.
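A minimal sketch of what "formally proving a given calculator" looks like, in Lean 4: a toy expression language with an evaluator, plus a machine-checked theorem about it. The `Expr` type, `eval` function, and the commutativity property are all illustrative choices, not anything from the original post.

```lean
-- A toy "programmable calculator": expressions over naturals.
inductive Expr where
  | lit : Nat → Expr
  | add : Expr → Expr → Expr

-- The calculator's semantics.
def eval : Expr → Nat
  | .lit n   => n
  | .add a b => eval a + eval b

-- A machine-checked property of this specific calculator:
-- swapping the operands of a top-level addition never changes the result.
theorem eval_add_comm (a b : Expr) :
    eval (.add a b) = eval (.add b a) := by
  simp [eval, Nat.add_comm]
```

This works precisely because `eval` is a small, explicitly structured program; the point of the original quip is that no such proof strategy exists uniformly for arbitrary programs, and a trained neural network offers no comparable structure to induct on.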
1) physical limits to scaling, 2) the inability to learn from video data, 3) the lack of abundant human examples for most human skills, 4) data inefficiency, and 5) poor generalization
All of those except 2) boil down to "foundation models have to learn once and for all through training on collected datasets instead of continually learning for each instantiation". See also AGI's Last Bottlenecks.
But the environment (and animal welfare) is still worse off in post-industrial societies than in pre-industrial ones, so you cannot credibly claim that moving societies from pre-industrial to industrial (which is what we generally mean by global health and development) is an environmental cause (or an animal welfare cause). It's unclear whether helping societies go from industrial to post-industrial is tractable, but that would typically fall under progress studies, not global health and development.
Ilya Sutskever today on X: