How might EA be under-investing in "on the ground" work related to its cause areas?
If there's a spectrum running from abstract research to small-scale, on-the-ground work, where could EA cause areas be helped by more on-the-ground effort on their current, concrete challenges?
- The on-the-ground version might involve working with local governments and businesses to set up thermal scanning for detecting fevers, to detect and communicate food safety events at restaurants, or to detect and communicate local COVID outbreaks.
It's kind of like how you could work on AI safety in the abstract, or you could work on the current challenges AI is already creating. You could work on biosecurity in the abstract, or you could work on the current challenges of ongoing COVID spread. Both seem important. How might we be under-investing in the latter kind of activity? It perhaps seems less high-leverage, though I'd argue it massively increases the effectiveness of all the other EA work on the problem. Some things seem basic and unimportant but make a big difference in real-world effectiveness, in particular hands-on experience solving similar challenges in the real world.
It's kind of like translational research: taking EA cause areas and working on the most current instantiations of those challenges, as a way of seeing what skills are actually needed, developing expertise, and so on.
If one version of EA has organizations that have successfully reduced flu spread in specific communities, and another version has only done more abstract, high-level, large-scale government work, the one with the former as well would be much more effective at addressing the next pandemic.
My fear is that EA groups like high-leverage things and dislike low-leverage things, and on-the-ground basic work often appears low-leverage, low-status, and boring. But without a pipeline for turning high-level, high-leverage EA work into on-the-ground results, EA's effectiveness will be hampered.
Boring, not-yet-catastrophic problems related to top EA cause areas that, if worked on, would likely make EA more effective at preventing the catastrophic versions:
- AI risk: deepfakes, AI astroturfing, AI-powered social engineering, addiction to AI-powered feeds, body-image harms from AI filters and photo modification
- Bio risk: flu, STIs, COVID variants, food poisoning outbreaks
My sense is that the things in the bullet list above get far fewer EA resources than higher-level, less "on the ground" work. Is that right?
What are the pros and cons of allocating much more EA effort to this kind of "boring," translational, on-the-ground application work?
What are the current challenges related to top EA cause areas like bio risk and AI risk?