
Grant Fleming

Senior Data Scientist
37 karma · Joined Aug 2022 · Working (0-5 years)

Bio


Interested in AI Alignment and its connections to data ethics/"responsible" data science, public policy, and global development. 

Author of Responsible Data Science (https://www.wiley.com/en-us/Responsible+Data+Science-p-9781119741640) 

Comments (3)

I have been considering writing a somewhat technical post arguing that “large Transformer models are shortcut finders” is a more clarifying abstraction for these sorts of artifacts than treating them as simulators, shoggoths, utility maximizers, etc. Empirical research on the challenges of out-of-distribution generalization, path dependency in training, lottery tickets/winning subnetworks, training-set memorization, and other areas appears to lend credence to this as a more reasonable abstraction.
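To make the intuition concrete, here is a minimal sketch of the shortcut-learning failure mode on a toy synthetic task (not a Transformer; the feature names and parameters are illustrative assumptions): a spurious feature tracks the label perfectly during training, the model leans on it, and accuracy collapses once that correlation is broken out of distribution.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_correlation):
    """Binary labels, a weak 'core' feature, and a spurious 'shortcut' feature."""
    y = rng.integers(0, 2, size=n)
    core = y + rng.normal(0.0, 2.0, size=n)                    # weak but genuinely predictive
    keep = rng.random(n) < shortcut_correlation
    shortcut = np.where(keep, y, rng.integers(0, 2, size=n))   # spurious cue
    return np.column_stack([core, shortcut]).astype(float), y

X_train, y_train = make_data(5_000, shortcut_correlation=1.0)   # shortcut always valid
X_iid, y_iid = make_data(5_000, shortcut_correlation=1.0)       # same distribution
X_ood, y_ood = make_data(5_000, shortcut_correlation=0.0)       # shortcut broken

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("in-distribution accuracy:     ", clf.score(X_iid, y_iid))
print("out-of-distribution accuracy: ", clf.score(X_ood, y_ood))
# The fitted model puts most of its weight on the shortcut feature, so it looks
# excellent in distribution and degrades sharply once the spurious correlation
# no longer holds.
```

The point is not that Transformers are logistic regressions, only that "find whatever cue minimizes training loss" is the default behavior this abstraction highlights.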

Beyond allowing for a more accurate conception of their function, I believe that seeing Transformer models through this lens naturally leads to another conclusion: the existential risk posed by AI in the near term, at least as presented within EA and adjacent communities, is likely overblown.

The debate on this subject has been ongoing between individuals who are within or adjacent to the EA/LessWrong communities (see the posts that other comments have linked, and other links that are sure to follow). However, these debates are often highly insular and take place primarily between people who share core assumptions about:

  1. AGI being an existential risk with a high probability of occurring
  2. Extinction via AGI having a significant probability of occurring within our lifetimes (the next 10-50 years)
  3. Other extinction risks (e.g. pandemics or nuclear war) being unlikely to manifest before AGI and curtail AI development to the point that AGI risk is no longer relevant on any near-term timeline
  4. AGI being a more deadly existential risk than other existential risks (e.g. pandemics or nuclear war)
  5. AI alignment research being neglected and/or tractable
  6. Current work on fairness- and transparency-improving methods for AI models not being particularly useful for solving AI alignment

There are many other AI researchers and individuals from relevant, adjacent disciplines who would disagree with all or most of these assumptions. Debates between that group and people within the EA/LessWrong community who mostly agree with the above assumptions are sorely lacking, save for some mud-flinging on Twitter between AI ethicists and AI alignment researchers.

Interesting idea for a competition, but I don't think the contest rules as designed, and more specifically the information hazard policy, are well thought out for submissions that follow the line of argumentation below when attempting to make the case for longer timelines:

  • Scaling current deep learning approaches in both compute and data will not be sufficient to achieve AGI, at least within the timeline specified by the competition
  • This is because some critical component is missing from the design of current deep neural networks
  • Supposing that this critical component is being ignored by current lines of research and/or has otherwise been deemed intractable, AGI development is likely to proceed more slowly than the currently assumed status quo
  • The Future Fund should therefore shift some portion of their probability mass for the development of AGI further into the future

Personally, I find the above one of the more compelling cases for longer timelines. However, a crux of the argument holding true is that these critical components are in fact largely ignored or deemed intractable by current researchers. Making that claim necessarily involves explaining the technology, component, method, etc. in question, which could justifiably be deemed an information hazard, even if we are only describing why this element may be critical rather than how it could be built.

Seems like this type of submission would likely be disqualified despite being exactly the kind of information needed to make informed funding decisions, no?