
I'm considering doing another pilot "epistemic map", but I'm trying to decide what topic or set of research questions I should do it on, and am thus soliciting suggestions.

Whereas my last pilot/test map focused on the relationship between poverty and terrorism (and the associated literature), I want to do this one on something EA-relevant. FWIW, I think epistemic mapping would probably be most valuable for topics that are both important and unsettled/divisive (but where progress still seems possible). Secondarily, it's probably also more valuable when the topic is dynamic (e.g., assumptions or technological capabilities may change over time), has lots of interconnected assumptions/arguments, and/or has a sizable literature base (among a few other considerations). Additionally, it is probably more practical to address a specific research question within a field (e.g., "does poverty lead to an increase in terrorism?") than to map an overall field.

Some of my ideas so far include the controversial Democratising Risk paper and its claims/context (or something else within the X-risk literature), the Worm Wars debate, something in biosecurity (e.g., the potential value or risk of certain emerging technologies), or maybe something about AI (e.g., the viability and impact of small-sample learning). But I'd love to hear any other suggestions (or feedback on the ideas I listed)!

(Edit 3/6/2022: this post was updated to clarify that the intended focus is more on mapping the research related to specific questions within a field, rather than mapping an overall field)  

Answers

Your epistemic maps seem like a useful idea, since they would make it easier to visualize the most important cause areas to push on. Alexey Turchin has created a number of roadmaps related to existential risks and AI safety, which seem similar to what you're talking about creating. You might consider making an epistemic map of S-risks, or risks of astronomical suffering. Tobias Baumann and Brian Tomasik have written a number of articles on S-risks, which might help you get started. I also found this LessWrong article on worse-than-death scenarios, which breaks down some of their possible sources and ways to prevent them. S-risks are a highly neglected cause area, since longtermist/AI safety research generally focuses on reducing extinction risks and preserving human values rather than averting worse-than-death scenarios. The Center on Long-Term Risk and the Center for Reducing Suffering have done significant research on S-risk prevention, which might be useful if you want to know the most promising research areas for reducing S-risks.

Thanks for the suggestion and links; I'll look into those further! Is there a specific question within the S-risk literature that you think would be good to focus on?
