Off-and-on projects in epistemic public goods, AI alignment (mostly interested in multipolar scenarios, cooperative AI, ARCHES, etc.), and community building. I've probably done my best work as a research engineer on a cryptography team; I'm pretty bad at product engineering.
I don't know why people overindex on loud grumpy twitter people. I haven't seen evidence that most FAccT attendees are hostile and unsophisticated.
I would like the XPT to be assembled with the sequence functionality, please: https://forum.effectivealtruism.org/users/forecasting-research-institute I like the sequence functionality for keeping track of my progress and the order in which I read things.
This also brings time to mind: it seems like projects and roles are uncorrelated enough right now that it's fine to date, but two years of unforeseen career developments between the two of you could create something like a formal power asymmetry. Are you obligated to redteam dates with respect to where your respective careers might end up in the future?
Yes! To be clear, reading or many forms of recommending is not the red flag; the curiosity, or a DADA-like view of the value prop of books like that, makes sense to me. But the specific way it comes across in the passage on the Adorian Deck saga makes hiding behind "defensive cynicism" look very weak, almost dishonest. The broader view is more charitable toward Emerson in this particular way (see this subthread).
I wrote my comment while I was still mid-read through the OP. Earlier in the essay there's an account of the Adorian Deck situation, then the excerpts from the book, which is as far as I got before I wrote this comment. Only later in the OP does the case that Emerson is interested in literature like this for DADA reasons become clearer and more defensible.
I apologize for commenting before I got to the end of the post.
But like. It seems that the tide is turning toward "oh, flooding the EA forum with anonymous sniping from the sidelines is the Cool And Correct Thing To Do Now" and that seems like two or three distinct kinds of bad.
Yes, this tends to bug me a lot. I think Ben is being different here, because
48 Laws of Power sounds like quite the red flag of a book! It's usually quite hard to know whether someone begrudgingly takes on zero-sum worldviews for tactical reasons or is predisposed / looking for an excuse to be cunning, but an announcement like this (in the form of just being excited about this book) seems like a clear way to forfeit others' obligation to act cooperatively toward you.
Replaceability misses the point (of why EAs skew heavily toward disliking protests). It's much more an epistemics issue: messaging and advocacy are just deeply corrosive under any reasonable way of thinking about uncertainty.
In my sordid past I did plenty of finding the three people up for nuanced, logical, mind-changing discussions amidst dozens of "hey hey ho ho, outgroup has got to go", so I'll do the same here (if I'm in town). But the selection effects seem deeply worrying. For example, you could go down to the soup kitchen or punk music venue and recruit all the young volunteers who are constantly sneering about how gentrifying techbros are evil and can't coordinate on whether their "unabomber is actually based" argument is ironic or unironic, but you oughtn't. The fact that this is even a question, that a "mass movement" theory of change constantly tempts you to lower your standards in this way, is so intrinsically risky that no one should be comfortable that ML safety or alignment is resorting to this sort of thing.