This is a special post for quick takes by Quinn McHugh (he/him).

I recently came across this great introductory talk from the Center for Humane Technology, discussing the less catastrophic but still significant risks of generative large language models (LLMs). It might be a valuable resource to share with those unfamiliar with the staggering pace of AI capabilities research.

A key insight for me: Generative LLMs have the capacity to interpret an astonishing variety of languages. Whether those languages are traditional (e.g. written or spoken English) or abstract (e.g. images, electrical signals in the brain, Wi-Fi traffic) doesn't necessarily matter. What matters is that the events in that language can be quantified and measured.
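To make that insight concrete: any measurable signal can be discretized into a sequence of integer tokens, which is the same kind of input a language model consumes. Here's a toy sketch (my own illustration, not from the talk, and a deliberate simplification of how real multimodal tokenizers work) that quantizes an arbitrary 1-D signal, whether audio samples, an EEG trace, or Wi-Fi signal strength, into a token sequence:

```python
def tokenize_signal(samples, vocab_size=256, lo=-1.0, hi=1.0):
    """Map continuous measurements onto integer token IDs via uniform quantization."""
    step = (hi - lo) / vocab_size
    tokens = []
    for x in samples:
        # Clamp to the measurable range, then bucket into one of vocab_size bins.
        x = min(max(x, lo), hi)
        bin_id = min(int((x - lo) / step), vocab_size - 1)
        tokens.append(bin_id)
    return tokens

signal = [-1.0, -0.5, 0.0, 0.5, 0.999]  # e.g. normalized sensor readings
print(tokenize_signal(signal))  # → [0, 64, 128, 192, 255]
```

Once a signal looks like a token sequence, the same next-token machinery that models English text can, in principle, model it too, which is why the range of "languages" these systems can ingest is so broad.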

While this opens the door to numerous fascinating applications (e.g. translating animal vocalizations into human language, enabling blind individuals to see), it also raises serious concerns about privacy of thought, mass surveillance, and the further erosion of truth, among others.
