Interested in AI Alignment and its connections to data ethics/"responsible" data science, public policy, and global development.
Author of Responsible Data Science (https://www.wiley.com/en-us/Responsible+Data+Science-p-9781119741640)
The debate on this subject has been ongoing among individuals within or adjacent to the EA/LessWrong communities (see the posts that other comments have linked, and other links that are sure to follow). However, these debates are often highly insular, taking place primarily between people who share core assumptions about:
Many other AI researchers, along with individuals from relevant adjacent disciplines, would disagree with most or all of these assumptions. Debate between that group and people within the EA/LessWrong community who would mostly agree with the above assumptions is sorely lacking, save for some mud-flinging on Twitter between AI ethicists and AI alignment researchers.
Interesting idea for a competition, but I don't think the contest rules as designed, and more specifically the information hazard policy, are well thought out for submissions that follow the below line of argumentation when making the case for longer timelines:
Personally, I find the above arguments among the more compelling cases for longer timelines. However, a crux of these arguments is that the critical components in question are in fact largely ignored, or deemed intractable, by current researchers. Making that claim necessarily involves explaining the technology, component, or method in question, which could justifiably be deemed an information hazard, even if the submission only describes why this element may be critical rather than how it could be built.
It seems likely that this type of submission would be disqualified, despite containing exactly the kind of information needed to make informed funding decisions, no?