Hi, I'm Ben, currently a final-year medical student at the University of Sydney. I studied an undergraduate double degree (BA, BSc), triple-majoring in philosophy, government & international relations, and neuroscience.
I've spent my MD doing bits and bobs in global health. I've also conducted some research projects at the Future of Humanity Institute, the Stanford Existential Risk Initiative, the vaccine patch company Vaxxas, and the Lead Exposure Elimination Project.
If misaligned managers tend to increase with organisation age and size, to what extent would keeping orgs separate and (relatively) smaller help defend against this? That is, would we prefer work/funding in a particular cause-area to be distributed amongst several smaller, independent, competing orgs rather than one big super-org? (What if the super-org approach was more efficient?)
Or would EA be so cohesive a movement that even separate orgs function more like departments, such that an analogous slide to misaligned managers happens anyway?
I don't know enough to judge, but my impression is that the big EA orgs have a lot of staff moving between them, and talk to each other a lot. Would we be worried enough by sclerosis that we would intentionally drive for greater independence and separation?
Haha what a crossover!
Ah fair, thanks. I figured there might be a reason you didn't mention it in passing.
Mike Cannon-Brookes, an Australian tech billionaire, recently bought an 11% stake in AGL, a huge energy company, with the stated intention of accelerating its transition away from coal. This followed two rejected takeover bids earlier this year.
They're good attempts though - I think this is just a tricky needle to thread.
To me 'contemporary altruists' suggests people who are alive today and altruistic, in contradistinction to historical altruists in the past, e.g. Katharine McCormick or John D. MacArthur.
Thanks for linking that! I couldn't remember where I had read the framing first
This isn't a substantive answer like those above - but I think you can get a lot of Effective Altruism off the ground with two premises that most moral philosophies widely agree on but generally under-attend to:
1) Consequences matter (which any moral philosophy worth its salt agrees with, though they differ on what else matters and to what extent)
2) Pay attention to scope, i.e. saving 100 times as many lives is way, way better than saving one life.
There's a lot more complexity and nuance to views in Effective Altruism, but I think this is a common core (in addition to lives having equal moral value etc.) that is robust across almost all plausible ethical approaches.
Bottom line up front