Thanks for posting this Vicky! It's a super interesting line of thought and I'd love to hear more about your research and how you view its path to effecting change in the world.
I'm commenting to flag, for any future readers, one typo which threw me off the first time I read Vicky's comment: I believe Malone and Bernstein's Handbook of Collective Intelligence was published in 2015, rather than 2005.
It feels like CI has been coming into its own as an actual field of research over the last 10-15 years. It'd seem much less promising to me if there had been a handbook published in 2005 without any major synthesizing efforts since.
I have been thinking about this a lot recently, and it seems like a perennial topic on the forum. This post raises good points--I'm especially interested in the idea that EA might be sitting at something like a local maximum in terms of cause prioritization, such that if you "reran" EA you'd likely end up somewhere else--but there are many ways to come at this issue. The general sentiment, as I understand it, is that the EA cause prioritization paradigm seems insufficient given how core it is.
For anyone who's landed here, here are a few very relevant posts, in reverse chronological order:
The "Meta Cause" - Aug '22
The Case Against "Cause Areas" - July '21
Why "cause area" as the unit of analysis? - Jan '21
The Case of the Missing Cause Prioritization Research - Aug '20
The ITN framework, cost-effectiveness, and cause prioritisation - Oct '19
On "causes" - June '14
Paul Christiano on Cause Prioritization research - March '14
The case for cause prioritization as the best cause
The apparent lack of a well-organized, public "cause ontology"--a world model that tries to integrate the main EA theories of change systematically and extensibly--seems like a research gap, and I've been unable to find any writing that satisfactorily resolves this line of inquiry for me.
Note that this is relevant not just to "cause prioritization" per se, but to any cause ontology or framework for integrating and comparing theories of change. It bears on the bigger question of how EA ought to structure its epistemic systems, and is thus also relevant to, e.g., metascience and collective behavior studies, to name a couple of preparadigmatic proto-disciplines.
I think a lot of people have spent time on this. I currently use Obsidian for longer-form note-taking and for building up more complex thoughts, though mainly I just use backlinks; the Roam-style graph view is hip, but I don't find it particularly useful. I use Apple Notes for on-the-fly, unimportant jots that may or may not get ingested into Obsidian in a more structured form when I'm back at my computer. I manage all of my productivity and to-do lists in one Apple note and my Apple calendar. Longer works of writing that have taken on a single cohesive shape get their own Google Docs. Finally, I manage all my sources in Zotero, with a few folders for broad subject areas and one big messy folder where I dump forum and blog posts I've read or skimmed and might want to find later. I also read and annotate all PDFs within Zotero.
I'm pretty happy with this system currently, although I wish I had a good, easy-to-set-up Zotero-Obsidian integration, and I wish it were easier to copy-paste links between Markdown and plain-text editors. If anyone has suggestions on either of these, that'd be great.
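On the Markdown-links side, one stopgap (just a minimal sketch, assuming links in the standard `[text](url)` form, not part of any existing tool's setup) is a small script that rewrites them as plain text before pasting:

```python
import re

# Matches standard Markdown links of the form [text](url).
MD_LINK = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def md_links_to_plain(text: str) -> str:
    """Rewrite Markdown links as 'text (url)' for plain-text editors."""
    return MD_LINK.sub(r"\1 (\2)", text)

print(md_links_to_plain("See [my site](https://example.com) for more."))
# → See my site (https://example.com) for more.
```

You could wire something like this into a clipboard manager or a shell alias, though it won't handle nested brackets or reference-style links.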
In general, I think this is a well-explored space, and imo nobody has come up with anything that's convincingly a large productivity multiplier; for me, it doesn't seem like a promising place to put much thought at the moment. This comment thread on LessWrong raises some more interesting points.
Location: Vermont, USA
Remote: Yes
Willing to relocate: Yes
Skills:
- Research, especially in computational and quantitative disciplines
- Data analytics & data science. A decent theoretical and practical background in statistical and machine learning--things like support vector machines, which were state-of-the-art ~10 years ago; a theoretical grasp of e.g. neural networks but zero implementation experience.
- Programming (web development, data science with R, scientific computation) - See my website: https://tobyweed.herokuapp.com/
- Math (undergrad degree, see CV). Undergrad thesis on applications of Reproducing Kernel Hilbert Spaces to machine learning.
- Scientific communication and writing
Résumé/CV/LinkedIn:
- Website: https://tobyweed.herokuapp.com
- CV: https://tobyweed.herokuapp.com/pics/tobyweed_CV.pdf
Email: email@example.com
Notes: I'm a recent math undergrad who's long had an interest in EA and existential risk. Hoping to contribute to ambitious research agendas related to AI governance, (technical) AI safety & alignment, or macrostrategy.
Needed to be said. I'm someone who gravitates to a lot of EA ideas, but I've avoided identifying as "an EA" for just this reason. Recently went to an EAG, which quelled some of my discomfort with EA's cultishness, but I think there's major room for improvement.
My lightly held hypothesis is that the biggest reason for this is EA's insularity. I think that using broader means of communication (publishing in journals and magazines, rather than just the EA Forum) would go a long way toward enabling people to be merely inspired by EA, rather than being "EAs" themselves. I like EA as a set of ideas and a question, not so much as a lifestyle and an all-consuming community. People should be able to publicly engage with (and cite!) EA rhetoric without having to hang out on a particular forum or to have read the EA canon.