JP Addison

Lead developer for the EA Forum

Comments

EA Forum update: New editor! (And more)

I wish there were a community-led way of deciding about tags. I think LW is making the calls on the tag classification they've introduced. (See image.) So maybe it makes sense for us to be more opinionated.

EA Forum update: New editor! (And more)

The History tag is for posts that are strongly focused on historical events or trends which don't necessarily connect to other tags (e.g., a post on the history of nuclear weapons should go in that tag instead), or that discuss or make heavy use of historical research methods. 

I feel like it's fine for them to overlap.

EA Forum update: New editor! (And more)

[Low confidence – I'm hashing out my own opinion in public, not trying to apply admin pressure]

I like the tags you've listed there. If you'd asked me to think about concepts in EA and write a (long) list, I'd hope I would have found those. I feel like Political Polarization is maybe more niche than I would go for? There's a key difference between us and LW here: LW is investing a large amount of time in building a whole ontology out of their tagging system and organizing things hierarchically, which allows the highlighting of broader tags. We couldn't match the hours they've devoted even if Aaron and I both worked on it full time.

omnizoid's Shortform

I like this! I would recommend polishing it into a top level post.

omnizoid's Shortform

Here's an EA Global talk on the subject. I find it uncompelling. It's extraordinarily expensive, and does little to protect against the X-risks I'm most concerned about, namely AI risk and engineered pandemics.

Lukas_Gloor's Shortform

I know it might not be what you're looking for, but congratulations!

The Importance of Unknown Existential Risks

Admin: The "Pamlin & Armstrong (2015)" link was broken — I updated it.

Max_Daniel's Shortform

I would expect advanced AI systems to still be improvable in a way that humans are not. You might lose all ability to see inside the AI's thinking process, but you could still make hyperparameter tweaks. You can make hyperparameter tweaks to humans too, but unless you think AIs will take 20 years to train, tweaking an AI still seems easier than comparable human improvement.
