Quick takes

Lizka · 16h

When I try to think about how much better the world could be, it helps me to sometimes pay attention to the less obvious ways that my life is (much) better than it would have been had I been born in basically any other time (even if I had been born among the elite!). So I wanted to make a quick list of some “inconspicuous miracles” of my world. This isn’t meant to be remotely exhaustive, and is just what I thought of as I was writing this up. The order is arbitrary.

1. Washing machines

It’s amazing that I can just put dirty clothing (or dishes, etc.) into a machine that handles most of the work for me. (I’ve never had to regularly hand-wash clothing, but from what I can tell, it is physically very hard and takes a lot of time. I have sometimes not had a dishwasher, and I really notice the lack whenever that’s the case; my back tends to start hurting pretty quickly when I’m handwashing dishes, and it’s easier to start getting annoyed at people who share the space.)

[Image: Illustrations from a Teenage Holiday Journal (1821)]

2. Music (and other media!)

Just by putting on headphones, I get better music performances than most royalty could ever hope to see in their courts. (The first recording was made in the 19th century. Before that, and before widespread access to radios, few people would get access to performances by “professional” musicians, except maybe at church or other gatherings. Although I kind of expect many people were somewhat better at singing than most are today.)

(A somewhat silly intuition pump: apparently Franz Liszt inspired a mania. But most 19th-century concert attendees wouldn’t hear him more than once, or maybe a handful of times! So imagine listening to a piece that rocks your world, and then only being able to remember it by others’ recreations or memorabilia you’ve managed to get your hands on.)

In general, entertainment seems so much better today. Most people in history were illiterate; their entertainment might come from their own …
Notes on some of my AI-related confusions[1]

It’s hard for me to get a sense for stuff like “how quickly are we moving towards the kind of AI that I’m really worried about?” I think this stems partly from (1) a conflation of different types of “crazy powerful AI”, and (2) the way that benchmarks and other measures of “AI progress” decouple from actual progress towards the relevant things. Trying to represent these things graphically helps me orient/think.

First, it seems useful to distinguish between the breadth or generality of state-of-the-art AI models and their level of ability on some relevant capabilities. Once I separate these out, I can plot roughly where some definitions of “crazy powerful AI” apparently lie on these axes:

[Figure: definitions of “crazy powerful AI” plotted on generality vs. capability axes]

(I think there are too many definitions of “AGI” at this point. Many people would make that area much narrower, but possibly in different ways.)

Visualizing things this way also makes it easier for me[2] to ask: Where do various threat models kick in? Where do we get “transformative” effects? (Where does “TAI” lie?)

Another question that I keep thinking about is something like: “What are key narrow (sets of) capabilities such that the risks from models grow ~linearly as they improve on those capabilities?” Or maybe: “What is the narrowest set of capabilities for which we capture basically all the relevant info by turning the axes above into something like ‘average ability on that set’ and ‘coverage of those abilities’, and then plotting how risk changes as we move the frontier?” The most plausible sets of abilities like this might be something like:

* Everything necessary for AI R&D[3]
* Long-horizon planning and technical skills?

If I try the former, how does risk from different AI systems change? And we could try drawing some curves that represent our guesses about how the risk changes as we make progress on a narrow set of AI capabilities on the x-axis. This is very hard; I worry that companies focus on benchmarks in ways that …
Holden Karnofsky has joined Anthropic (LinkedIn profile). I haven't been able to find more information.
Intro fellowship sign-ups at EA groups participating in Early OSP doubled this Fall.

CEA’s University Groups Team is increasingly focusing its marginal efforts on piloting more involved support for a subset of EA university groups. This pilot program – Early OSP, or EOSP[1] – includes early mentorship (starting in the summer), a semester planning retreat in August, and a workshop around EAG Boston, among other initiatives.

With the Fall semester now complete, we are analyzing initial outcomes. One standout result is that intro fellowship applications at EOSP groups averaged 32 per group, up from 14 the prior year[2]. Although we lack full baseline data, there are promising indicators – two groups, for instance, went from zero applications in Fall 2023 to meaningful engagement this year.

Of course, this is just one metric among the many that matter[3]. It is, however, an encouraging signal that we’re hoping to build on as we continue to build out principles-first EA.

1. ^ Early OSP (EOSP) is modeled after our regular Organizer Support Program. EOSP kicked off with organizers from these groups: Harvard, Yale, Stanford, MIT, Columbia, UC Berkeley, UChicago, UPenn, Oxford, and Cambridge. They have been anonymized on the graph.
2. ^ Thank you to everyone who made this happen, especially the group organizers at all these universities!
3. ^ We’re still collecting data on other metrics, and hope to share a more all-things-considered take in the future.
On Friday I gave a talk to the APART research fellows about writing on the EA Forum. The talk included a few tips on writing a banger Forum post. The corresponding section from my handout is below - LMK if any of the advice is useful, or strikes you as wrong:

3 rules for (Forum) writing

Be Engaging
* Write with your reader's limited time and attention in mind.
* Use concrete examples and analogies to illustrate points.
* Be concise and avoid repetition.

Be Honest
* Express uncertainty when you have it.
* Don’t aim to persuade, aim to explain.
* Make your reasoning transparent and easy to follow; this helps people disagree with you.

Be Clear
* Put a TL;DR at the beginning.
* Use descriptive headings that summarise each section.
* Write concise, uncluttered sentences (Claude is very helpful for this).

Common Mistakes
* Don't assume readers are automatically interested in your topic: explain its relevance.
* Don't assume specialised knowledge: define jargon and provide necessary background information. People are familiar with basic AI risk arguments, but more niche topics need some explanation. As a rule of thumb, if you are about to use an acronym, you’re probably talking about something niche.

Extra tips
* [highly recommended] Julian Shapiro’s handbook. TL;DR:
  * It’s worth rewriting things.
  * Optimise for succinctness and clarity.
  * No more than one idea per sentence.
* On Writing Well: if you’ve heard writing advice before, you might not get much from this, but it’s often recommended. The main useful takeaway was how much the author hates “clutter”. He’s right. Remove unnecessary words and phrases as you write, and when you do your final edit.
* General Forum advice: a post on the Forum is the start, not the end, of a conversation. Don’t feel you have to answer every critique or explore every angle. You’ll have the opportunity to do that in the comments.