
NthOrderVices

22 karma · Joined Feb 2021

Comments (4)

Here's a relevant set of estimates from a couple of years ago, which has a Guesstimate model you might enjoy. Your numbers seem to be roughly consistent with theirs. They were trying to make a broader argument:

1. EA safety is small, even relative to a single academic subfield.
2. There is overlap between capabilities and short-term safety work.
3. There is overlap between short-term safety work and long-term safety work.
4. So AI safety is less neglected than the opening quotes imply.
5. Also, on present trends, there's a good chance that academia will do more safety over time, eventually dwarfing the contribution of EA.

I'm not sure what campus EA practices are like, but in between pamphlets and books, there are zines: low-budget, high-nonconformity, high-persuasion. It's easy for students to write their own, or make personal variations, instead of treating them like official doctrine. E.g., https://azinelibrary.org/zines/

What are some practical/theoretical developments that would make your work much less/more successful than you currently expect? (That's four questions in one, but feel free to just answer whichever is most salient for you.)

To some limited degree, some people have some beliefs that are responsive to the strength of philosophical or scientific arguments, and have some actions that are responsive to their beliefs. That's about as weak a claim as you can make without denying any intellectual coherence to things. So the question becomes: is that limited channel of influence enough to drive major societal shifts?

Or actually, there might be two questions here: could an insight in moral philosophy alone drive a major societal shift, so that society drifts toward whichever argument is better? And to what extent has actual moral progress been caused by intellectual catalysts like that?