Last major update: July 8, 2022

This post seeks to gather AI and impact opportunities that are not yet popular in EA. Further opportunities that use AI to increase the positive impact of various benevolent actors and to mitigate risks likely exist.

  • Drones for wild animal welfare (based on 80k podcast)
  • Wellbeing-optimizing social media development (based on 80k podcast)
  • OpenAI and DeepMind constructive critique (based on this and this post)
  • Supporting effective charities in accepting crypto (inspired by Effective Crypto)
  • Digital governance for peaceful and morally inclusive nations (for example, land records digitization in Bangladesh has an estimated social and economic return on investment of over 600×)
  • Alignment research coordination among tech companies (assuming that companies seek positive impact while keeping profit)


It seems a bit misleading to call many of these “AI alignment opportunities”. AI alignment has to do with the relatively narrow problem of solving the AI control problem (i.e. making it so very powerful models don’t decide to destroy all value in the world) and increasing the chances that society decides to use that solution.

These opportunities are more along the lines of using ML to do good in a general sense.

OK, AI and impact. Although, what about these ways of developing institutions so that human actors apply increasingly powerful AI toward better-aligned objectives, generating content from which AI can learn methods that reliably do good, and advancing systems that would prevent even a superintelligent AI from being harmful (e.g. mutual accountability checks)?

It depends on what you mean. If you mean trying to help developing countries achieve the SDGs, then this won't work, for a variety of reasons. The most straightforward is that using data-based approaches to build statistical models is different enough from cutting-edge machine learning or alignment research that it will very likely be useless to the task, and the vast majority of the benefit from such work lies in the standard benefits to people living in developing countries.

If you mean advocating for policies which subsidize good safety research, or advocating for interpretability research in ML models, then I think a better term would be "AI governance" or some other term which specifies that it is non-technical alignment work, focused on building institutions that are more likely to use solutions rather than on finding those solutions.

OK, makes sense. Since this is mostly about benefiting individuals, it is more like AI and impact. On interpretability: sure, some of the areas can relate to that, such as social media wellbeing optimization. Yes, the level of thinking is probably at the 'governance' level, not technical alignment (e.g. we are not quite at a place where a poorly coded drone could decide to advance selfish objectives instead of the SDGs).

  • most of these aren't aimed at solving the alignment problem
  • some that are, are incredibly vague ("outer alignment research coordination")
  • various mistakes; e.g. OpenAI and DeepMind are in fact extremely competitive

The title should not mention AI alignment at all, since the post covers a variety of objectives.

On whether these are aimed at solving the alignment problem, going one by one:

 1. Yes, because you need to develop AI that does good in order to check for such aspects in other AI; although, sure, the SDGs may not exactly define impartial good.
 2. No, because it just uses already developed tools to advance the SDGs.
 3. No, it just uses drones for the SDGs rather than research.
 4. No, it uses drones for wild animal welfare, not research.
 5. Yes, AI policy in the EU can contribute to global AI alignment.
 6. Yes, social media optimization objectives may determine public interest in those objectives, so if wellbeing is offered by AI it may be demanded by humans; it is like human RL, which we should get right first.
 7. Yes, alignment potential.
 8. Yes, outer and inner alignment potential, if outer alignment is understood by the other institutions that focus on it.
 9. Yes, by definition: AI safety in any location contributes to alignment.
 10. No, this uses technology to solicit donations.
 11. No, this is personal advocacy.
 12. No, this is objectives advancement.
 13. No, this is also objectives advancement.
 14. No, this also supports goals with already developed technology.
 15. Yes, one needs to understand values in order to align for them.
 16. No, it is a use of technology rather than research; you are defining alignment by research, not by the actual advancement of alignment.
 17. Yes, this is the research, though you could argue no because it is coordinating rather than researching.
 18. No, it is gaining the interest of researchers and people who can deploy AI.
 19. No, it is data collection, not its use for research on inner alignment; though perhaps outer alignment, if you have a global governance AI that makes decisions.
 20. No, it is learning, not executing (although it can contribute to alignment in general).

So, you are right: most of this is not alignment if you define it by research, but I would have to re-run my analysis if alignment is also defined by the deployment of objectives using AI.

Yes, these are vague. I wish I had more specific recommendations: working in this or that position and adding exactly one of these pieces of code, running the existing code through checking software or an organization of humans, or looking for particular outcomes in the present and employing sound practices to see whether any outcomes may occur in the future.

Yes, these are competitive. I did not realize the framing when I was writing this; I was just thinking about some cool ideas. I have edited this to "DeepMind and OpenAI adjacent research" - anyone can do that.

Yes, the title is now "AI and impact" to better reflect the content.
