As always, my Forum-posting 'reach' exceeds my time-available 'grasp', so here are some general ideas I have floating around in various states of scribbles, notes, Google Doc drafts etc. Please don't view them as in any way finalised, or as a promise to write them up fully:
- AI Risk from a Moderate's Perspective: Over the last year my AI risk vibe has gone down, probably to a level lower than that of many other EAs who work in this area. However, I'm also more concerned about it than many other people (especially people who think most of EA is good but AI risk is bonkers). I think my intuitions and beliefs make sense, but I'd like to write them down fully, answer potential criticisms, and identify cruxes at some point.
- Who holds EA's Mandate of Heaven: Trying to look at the post-FTX landscape of EA, especially amongst the leadership, through a 'Mandate of Heaven' lens. Essentially, various parts of EA's leadership have lost the 'right to be deferred to', but while some of that previous leadership (and community emphasis) has stepped back, nothing has stepped in to fill the legitimacy vacuum. This post would look at potential candidates, and at whether the movement needs something like this at all.
- A Pluralist Vision for 'Third Wave' EA: Ben's post has been in my mind for a long time. I don't at all claim to have the full answer to this, but I think some form of pluralism that counteracts latent totalism in EA may be a good thing. I think I'd personally tie this to proposals for EA democratisation, but I don't want to make that a load-bearing part of the piece.
- An Ideological Genealogy of e/acc: I've watched the rise of e/acc with a mixture of bewilderment, amusement, and alarm over the last year-and-a-half. It seems like a new ideology for a new age, but as Keynes said "Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back." I have some academic scribblers in mind, so it would be interesting to see if anything coherent comes out of it.
- EA EDA, Full 2023 Edition: Thanks to cribbing the work of other Forum users, I have metadata for (almost) every EA Forum post and comment published in 2023, along with tag data. I've mostly got it cleaned up, but I need to structure it into a readable product that tells us something interesting about the state of EA in 2023, rather than just chucking lots of graphs at the reader (a rough code sketch of the kind of summary I mean follows this list).
- Kicking the Tires on 'Status': The LessWrong community and broader rationalist diaspora use the term 'status' a lot to explain the world (this activity is low/high status, this person is doing this activity to gain high status, etc.), and yet I've almost never seen anyone define what it actually means, or compare it to alternative explanations. I think one of the primary LW posts grounds it in a book about improv theatre? So I might do a deep dive on it, taking an eliminativist/deflationary stance on status and proposing a more idea-focused paradigm for understanding social behaviour.
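For the EDA item above, here's a minimal sketch of the kind of monthly summary I have in mind, rather than "lots of graphs". It assumes the cleaned metadata sits in a CSV with hypothetical columns `posted_at`, `karma`, `comment_count`, and a comma-separated `tags` field; none of these names are from my actual dataset, they're purely illustrative:

```python
# A minimal sketch of a "state of EA 2023" summary, NOT my actual pipeline.
# Assumes cleaned Forum metadata in a CSV with hypothetical columns:
# posted_at, karma, comment_count, and a comma-separated tags field.
import pandas as pd

posts = pd.read_csv("forum_posts_2023.csv", parse_dates=["posted_at"])

# One row per month: posting volume, typical karma, and discussion level.
monthly = (
    posts.assign(month=posts["posted_at"].dt.to_period("M"))
         .groupby("month")
         .agg(n_posts=("karma", "size"),
              median_karma=("karma", "median"),
              total_comments=("comment_count", "sum"))
)
print(monthly)

# Which tags dominated the year? Explode the comma-separated tag field.
top_tags = (
    posts["tags"].str.split(",")
                 .explode()
                 .str.strip()
                 .value_counts()
                 .head(20)
)
print(top_tags)
```

The idea is that a handful of compact tables like these (volume, engagement, tag concentration over time) can anchor a narrative about the year, with graphs used sparingly to support it.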
Finally, updates to the Criticism of EA Criticism sequence will continue intermittently so long as bad criticisms continue or until my will finally breaks.
Hi Toby! I have a very lay perspective on this, so if anyone would like to collaborate on a post, I would love that. Or for someone to just take the idea and run with it.
On not being able to do anything: I imagine myself in various super-powerful positions and ask whether, from there, I could see e-waste stop being an issue. The main reason I think even those people won't do anything is this:
Basically, given current deaths from e-waste, I think we are seeing the beginning of how AI will simply push out humans by taking their resources. In this case, the resource being grabbed from humans is a clean environment. Imagine the scale of e-waste in a world where even 40% of current labour is replaced by AI + robots. It should be possible to estimate this initially in terms of tons, and since we have some idea of the number of deaths caused by current volumes of e-waste, we could scale those deaths up linearly for a first approximation. AI does not need to be very agentic to cause issues. It only needs to be something the economic system demands, that politicians are unable to stop, and that operates at large scale. It's more like natural evolution: a new species that is super fit and adaptable simply pushing out other species.
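To make that first approximation concrete, here is a tiny sketch of the linear scaling I mean. Every number below is a placeholder I've made up for illustration, not a sourced estimate; the point is the structure of the calculation:

```python
# First-approximation scaling of e-waste deaths, as described above.
# EVERY number below is a made-up placeholder, not a sourced estimate;
# the point is the linear structure of the calculation.

current_ewaste_tons = 60e6       # placeholder: annual e-waste tonnage today
current_ewaste_deaths = 30_000   # placeholder: annual deaths attributed to it
deaths_per_ton = current_ewaste_deaths / current_ewaste_tons

# Scenario: AI + robots replace 40% of current labour, and the hardware
# behind that replacement multiplies e-waste by some assumed factor.
labour_replaced = 0.40
ewaste_factor = 2.5              # placeholder: e-waste scale-up per unit of labour replaced
projected_tons = current_ewaste_tons * (1 + labour_replaced * ewaste_factor)

# Linear first approximation: deaths scale proportionally with tonnage.
projected_deaths = deaths_per_ton * projected_tons
print(f"Projected annual e-waste deaths: {projected_deaths:,.0f}")
```

With real inputs substituted in, this would give a defensible lower-bound-style estimate before refining the harm model.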
I am totally open to AI having other, even more devastating impacts, but I think the current focus on accidents/harm from AI is too Western-focused, and we forget our lessons from e.g. malaria: that most (and the first) victims are poor and far away from us. I think the problem with AI will look much worse if we also include non-Western perspectives on harm from AI.
Not sure how the numbers will turn out, but I would not be surprised if we are already seeing thousands of deaths due to GPT/AI e-waste. Extrapolating out, e.g. based on Nvidia's share price or other publications on the proliferation of AI and its datacenters, we might expect hundreds of thousands if not millions of deaths attributable to AI within just a few years, on current trajectories and current harm models. That should perhaps get the alarm bells ringing even louder, and could bring environmentalists, global health people, etc. on board in an effort to make AI go well.