Within the field of technical alignment research, some researchers are calling for a more diverse range of perspectives to be applied to the problem. The idea is that greater diversity of academic backgrounds could generate novel insights, leading to new research agendas and paths to progress.

However, current attempts to encourage individuals with a wider set of perspectives to work on the problem have struggled. As the organisers of Refine noted, both their programme and PIBBSS had only limited success in doing so, despite explicitly searching for candidates from a broad range of backgrounds.

Based on my involvement with the EA student community over the last few years, I think this desire for breadth conflicts with how many students decide whether to work on Technical AI Safety. My intuition is that even if someone viewed AI risk as the most important problem facing society, the impression they might receive from the EA community is that they should avoid working on Technical AI Safety unless they have a traditional maths/ML background.

I think this is also implicit in the current problem profile from 80,000 Hours. There, the listed approaches to Technical AI Safety are all existing agendas, with no mention (as far as I can see) of forming new ones. Someone without the skills those agendas call for (i.e., maths or CS) might conclude there is nothing they can contribute to the field.

My biggest uncertainty here is how many current Technical AI Safety researchers agree that the lack of academic diversity is an issue. I don’t think this is a fringe opinion in the alignment community, but I could be wrong!

At the very least, there seems to be a discrepancy here: some people working in alignment are asking for broader diversity in the field, but the wider EA community has yet to take note. It may be that the community is right not to act on this request yet, but it seems an important conversation to have regardless.

Essentially, even if your skillset is not currently in ML, some members of the Technical AI Safety community think there are highly promising routes you could explore. Don’t rule yourself out from working on alignment prematurely!
