Below is a list of EA project ideas which I've been thinking a bit about over the last few months. I'd be interested to hear whether people think I should turn any of these ideas into top-level posts or shortform posts (and, in future, whether posts for individual ideas are better than a long list like this). Thanks!
I expect I'll write a couple more 'project ideas list' posts like this by the end of the year. Note that the order is mostly arbitrary.
If you could see yourself actually working on one of these, please do let me know. I might be able to connect you to other people you could work with, elaborate on the details, or help you find funding.
Imagine a ‘timeline of everything’, showing major events (astronomical, geological, historical) from the Big Bang to the end of time. Users can zoom in and out, much like existing apps that show the scale of the universe.
Appreciating the vast amount of time ahead of us, and the relatively brief period of time that all of recorded human history makes up, is a key underlying intuition for longtermist arguments. The website could explain longtermist ideas and link to relevant reading, like The Precipice.
Various timelines of the universe have been made in video or graphic form, such as here. But I suspect being able to navigate through different scales of time yourself might be a very different experience.
Ultimately, you could imagine a website hosting a series of visualisations illustrating various longtermist ideas. These visualisations and graphics, some interactive, some updated with live data, could be tied together with essays about key longtermist topics, amounting to a kind of undirected, highly visual introduction to ideas from effective altruism and longtermism. You could imagine a handful of ‘tracks’ (e.g. big history, existential risks, technological progress, human progress) which tie together these graphics, once enough are made. I am excited by the prospect of Our World In Data adding new charts which could be of special interest from a longtermist perspective. But I do think this project could be different enough to warrant being separate, because the visualisations could be more creative, more varied in form, and perhaps less squarely data-driven.
I made a small start on this idea last year, and made plans to hire for it, but left it by the wayside because I got distracted.
Meta book ideas
As I understand it, a lot of involved work is required to secure a book deal, and therefore to get started on actually writing the book. Before you can get an advance, you need to shop an idea around publishers, or find an outside agent you can trust to do so. The author typically needs to put a lot of their own time into this process, and in any case will need to wait until a deal goes through before they can write with the confidence that their time is being put to good use. First-time or more obscure authors have it especially bad, since they have little to show to prospective publishers. This is presumably bad: at best, it uses up the time of people whose time is valuable and could be spent just writing the book; at worst, it makes potentially impactful books less likely to happen in the first place.
In this new context for EA where money appears to be much less of a constraining factor, I wonder if this problem can be fixed. Imagine a group of evaluators with (i) a good amount of context on EA ideas, and (ii) a decent understanding of the world of publishing. As a prospective EA author, you apply with your book idea to this team, and if the pitch meets a basic threshold, then you quickly receive an advance. After that, the work of finding a publisher falls to this team of specialists, rather than the author herself. But the author retains the rights to the book if/when it is eventually published. If the group cannot find a publisher after some period of time, they have the option to self-publish, e.g. as a free ebook.
The major effect of such a scheme is that it would very likely make more valuable books happen.
Primarily, this is because the bar for which books get approved would be lower: more valuable book ideas would be quickly approved by this group than by the average publisher. The main reason the bar would be lower is that the group would not prioritise profits, up to and including the point where it could make sense to commission books which will not turn a profit in expectation (but would nonetheless benefit the world as e.g. a free ebook). Another reason is that some ideas related to effective altruism might be hard to properly convey to a publisher without much context — the book could be saying something very important, and important enough that the idea could catch on and sell well, but there's a greater chance that a conventional publisher misses this. Superintelligence might count as an example — it ended up catching on in a way that was very hard to predict; a grantmaker with a lot of context on the ideas might have anticipated that better than a mainstream publisher.
The other reason this scheme might help more valuable books happen is because it just eliminates much of the administrative faff of each prospective author figuring out on their own how to get started.
When it comes to books and other media like films and social media accounts, I think a hits-based approach is best. I would guess that (i) the impact of books is roughly power-law distributed, and (ii) the expected impact of a book doesn't scale linearly with the amount of money you put into it. These two things would suggest that it would make sense to roll the dice on many book ideas which could plausibly do well.
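As a rough illustration of why those two assumptions favour a hits-based approach, here is a toy simulation — every parameter is an illustrative assumption, not an estimate. It draws book impacts from a Pareto distribution (a simple stand-in for "roughly power-law distributed") and checks how much of the portfolio's total impact comes from the top 1% of books.

```python
import random

random.seed(0)  # deterministic, so the illustration is reproducible

# Toy model: each book's impact is one draw from a Pareto distribution.
# alpha is an illustrative shape parameter, not an empirical estimate.
def simulate_portfolio(n_books: int, alpha: float = 1.5) -> list[float]:
    return [random.paretovariate(alpha) for _ in range(n_books)]

impacts = sorted(simulate_portfolio(1000), reverse=True)
top_1_percent_share = sum(impacts[:10]) / sum(impacts)
print(f"Top 1% of books account for {top_1_percent_share:.0%} of total impact")
```

Under a heavy-tailed distribution like this, a large share of the total comes from a handful of outliers — which is the basic case for rolling the dice on many plausible bets rather than concentrating everything on a few 'safe' ones.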
Centrally, then, the idea is to install a middleman between EA authors and publishers, capable of smoothing out the risks for the authors by handling multiple books at once and being less sensitive than individual authors to losing money.
What's the case against? Maybe I've overrated how much hassle it is to find a publisher in order to get started on a book. Maybe I have also overrated how many potentially very valuable EA books there are waiting to happen, but which don't happen for the reasons discussed. It might also just be the case that nearly all the most promising potential authors would do better to spend their time another way, and that there aren't so many other promising potential authors.
I consider a couple similar ideas below.
Buying back rights
10 years after the publication of Peter Singer's The Life You Can Save, the organisation of the same name bought back the copyrights to the book. As a result, they could distribute the book for free, and record and release a free (and star-studded) audiobook.
As I understand it, some of the legal/administrative aspects of the deal posed a major and time-consuming difficulty. But the difficulty per book will decrease the more books we do this for (assuming there are ways to learn from and systematise the process). So perhaps we should try doing this for more books for which it would be really valuable to hand out free copies (or hand out physical copies with much less hassle).
The Precipice doesn't have a Spanish translation. It should.
As I understand it, the way books typically get released in new languages is that a publisher in a new language will make a bid on the book and supply the translator(s) themselves. If the bid makes sense, the book's agent (sometimes in consultation with the author) will sell the copyrights for that region, and the book gets republished.
Translating important books just seems extremely worthwhile. Try thinking of your list of the ten most important books of the last couple decades — the books you wish everyone would read. Imagine one of those books lacks a translation in a language with > 50 million first-language speakers. What's the expected impact, as a fraction of the impact of the original book? Is it greater than 1%? That seems plausible in at least a handful of cases. Now how much should an altruistic planner have been prepared to pay to make the original book exist? What is 1% of that cost? Likely still very high.
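To make that chain of questions concrete, here is the back-of-envelope arithmetic in code — every number below is a made-up illustration, not an estimate:

```python
# Hypothetical numbers throughout: suppose an altruistic planner would have
# paid $10M for the original book to exist, and a translation captures 1%
# of that value.
original_book_value = 10_000_000   # assumed willingness to pay, in $
translation_fraction = 0.01        # translation's impact as a fraction of the original's
translation_value = original_book_value * translation_fraction

translation_cost = 30_000          # rough guess at commissioning a high-quality translation, in $
benefit_cost_ratio = translation_value / translation_cost

print(f"Value of the translation: ${translation_value:,.0f}")   # $100,000
print(f"Benefit-to-cost ratio: {benefit_cost_ratio:.1f}x")      # 3.3x
```

Even with a conservative 1% fraction, the implied value comfortably exceeds plausible translation costs — which is the sense in which '1% of that cost' is likely still very high.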
So the default course of waiting for offers to come around seems improvable — partly because we could help make translations happen which might never happen otherwise, but also to accelerate their arrival where they would have happened anyway. But that's not all: the impact of a translated book can depend on the accuracy of its translation. Terms of art and arguments are typically crafted with some care, and fragile to small errors in translations. So in taking matters into our own hands, we might also select translators trusted to get the finer and more idiosyncratic details right.
In practice, I'm imagining something like proactively approaching major publishers in (typically) non-English speaking countries and introducing them to a translator who we trust will do an excellent job, and who we can commit to paying ourselves. But I'm sure there are plenty of variations.
An EA publishing house
The question that spawned these book-related thoughts was: what if EA started a publishing house?
I'm not sure this is a good proximate aim, because it's unclear why you'd want to replace many of the functions that established publishers already serve. For instance, many publishers are well-known, and getting your book published by them counts as free credibility and publicity. And as mentioned, publishers operate through relationships which take a long time to establish.
But it would be very cool to have close to an end-to-end publishing operation. To begin with, this would get rid of some of the friction and frustration associated with grappling with an outside editor over content, cover imagery, and so on. And it would be very easy to reprint classics in the public domain, in order to bring them to a wider audience. Collected blog posts, also.
Plus, you could spin up your own badass brand, and shape it however you like. You could quickly earn 1,000 true fans who automatically buy the next book you release. I think that the real impact you get from books is heavy-tailed in the sense that almost all the impact comes from a small number of readers. If you can cultivate the brand to appeal to those few readers, you don't need a very large audience to get the 'impact-adjusted reach' of a well-established publishing house.
Some people did successfully publish some highlights from LessWrong, and they just did it again. But I imagine this would have been easier, and potentially reached more people, if a specialised initiative had existed with some of the infrastructure and expertise already in place.
The obvious and standout inspiration here is the (hopefully not) inimitable Stripe Press.
A spin on this idea could be to start an 'imprint' instead — a new 'trade name' for an existing publisher. For instance, Viking Press is an imprint of Penguin Random House.
In the meantime, I expect it would also be obviously good for prospective EA authors to pitch ambitious book ideas to existing badass publishing houses like Stripe Press.
If you wanted to spread ideas you thought mattered, and you happened to have $250 million burning a hole in your pocket, you might consider buying a media company like a newspaper, such as The Atlantic. I expect there are far more cost-effective ways of spreading good ideas, because this strategy is so untargeted. When you buy a newspaper, you are buying all the showbiz and health and beauty columns which you can't do anything especially useful with. Your plan might be (i) to begin shifting the editorial line on certain verticals of the newspaper in the direction of the ideas you cared about, and (ii) to promote and open-access the articles that come out of (i). But buying the whole company to do this is like buying a cruise ship to execute a beach landing.
But maybe there’s a more targeted version that could work.
Some newspapers feature sponsored articles, a kind of ‘native advertising’ where a company pays to either commission or write an article, which is released open-access on the newspaper’s website, along with a sticker indicating that “this post was sponsored by company X”. By 'open-access', I mean removing the paywall for that specific article.
What about a philanthropic version of this, where someone sponsors stories about EA topics to be published and open-accessed in popular newspapers like The Guardian or The Atlantic? Philanthropic organisations do support content on major media outlets, but I'm not aware of them paying for articles to be open-accessed. This strategy might be used to quickly spread good ideas about effective altruism.
An initial version could involve open-accessing existing articles. But you could also imagine creating an ongoing relationship with a media company where it is increasingly possible to commission articles, because the company makes more from open-accessing the ‘good’ articles than the revenue from ads they would otherwise have made.
I don’t think many articles have been ‘sponsored’ like this for philanthropic reasons, but I can’t think of why it wouldn’t be possible to do more (if not actually a good idea).
An overlay journal is a journal (almost always exclusively online) that does not produce its own content, but selects from texts that are already (freely) available online. The selection process can look just like that of a ‘real’ journal, including a board of editors and peer review.
Fields within existential risk, AI safety, global priorities research, and other aspects of EA lack dedicated journals. Yet, it’s often difficult to get such work published in decent journals because of the interdisciplinary nature of the work, because the work is unusually speculative, or because of its perceived weirdness.
That’s an issue, because research exerts influence through prestige and citations, and prestige and citations are more likely if you are published in a top journal, in part because top journals are read more.
This tentatively suggests the idea of establishing new academic journals for EA fields. But I'm not sure about that. Prestige is hard to quickly bootstrap with money, setting up a functional journal actually just sounds like a lot of hard administrative work, and any such journal would need an editorial board of prominent researchers in the field, whose time would likely be better spent doing anything other than editing a niche journal.
But maybe there is a neighbouring idea which could work: an online overlay journal, which could more quickly and easily become widely read and earn prestige or acclaim. This is because overlay journals do not compete with proper journals, but rather select from them. You could imagine the website being really attractive, and each issue could be filled with commentary from the authors and editors. Illustrations even. The product is a collection of articles designed to actually interest people in or adjacent to the field; but it is not entering the game of competing with established journals for status.
Most overlay journals select from open-access research. But I think it could also be possible to share articles from paywalled journals, by paying an 'article processing charge'. My (outsider) impression is that it's somewhat rare to pay to 'liberate' your article from a paywalled journal, because you can often access it for free if you belong to an academic institution which is paying a subscription to the journal, or otherwise you can use evil and nefarious means like Sci-Hub. But paying to free high-quality research from their paywall silos could make sense if it meant that the research became just a bit more well-known, or easier to cite. If you think the research is really important, and you have the money to do so, then this could be a good buy.
Done well, this could help important research get eyeballs of researchers in adjacent fields, and project a certain amount of credibility. It could also provide a strong ‘worst case’ or fallback home for really good research which doesn’t have a natural home. To the extent that research is sometimes not done for this reason, or less effort is put into it, an overlay journal could incentivise quality EA research.
Take Toby Ord's recent paper ‘The Edges of Our Universe’. It’s valuable, fun work; and Toby was able to write it in part because he was not worrying about climbing the academic ladder with publications. It currently lives on arXiv, an open-access repository which anyone can post papers to without review. If someone more junior than Toby were considering whether to write this, they might well have decided not to.
A related idea is an open-access online magazine which rewrites technical work in e.g. philosophy, economics, or biology into accessible and engaging articles. Distill exemplifies how to do this for some AI research. I think I would be at least as excited about this idea as the overlay journal idea.
A related idea is to flat-out buy a big journal, or one of the publishing companies that owns them. Maybe I'll write about that in the next iteration!
Funding criticism of effective altruism
I think one of the most valuable features of EA is its epistemic culture: the way EAs reason about hard problems. I feel unusually free to discuss close to anything that seems important; I worry unusually little about offending my superiors, saying something that could easily be taken out of context, accidentally offending my peers by disagreeing with them, or saying naive things with the aim of being corrected. I do not believe this culture is guaranteed to persist without effort to maintain it, which I think should include continuing to foster a culture of openly questioning and criticising crucial assumptions. In particular, I don't want anyone within or beyond EA to have good reason to worry about the consequences of voicing good-faith criticisms. Positively encouraging and funding those criticisms, and similar 'red-teaming' activities, is a clear way to address that.
Even without any reason to worry about saying critical things, useful criticisms are still probably going to be underprovisioned. This is for obvious reasons: where are the naturally occurring incentives to point out where something is going wrong, unless we deliberately set them up?
Plus, clearly, critical work is often just highly valuable, because:
- The set of ideas that makes up effective altruism and longtermism is still relatively new, and still being developed. It would not be surprising if the ‘consensus’ has missed certain crucial considerations, or if some parts of that consensus were mistaken.
- Sometimes it’s possible to make mistakes in implementing a project which aren’t obvious from the inside, but can be pointed out by outside observers. This could be especially true for longtermist projects, which may have fewer obvious feedback loops.
- Points of confusion go under-reported. It can be embarrassing to announce that you're simply confused about some assumption that everyone else seems to regard as obvious. I expect there's some amount of pluralistic ignorance at play here, and the less of this the better.
- The epistemic state of longtermist EA could be much improved. Longtermism strikes me as an area where it may be easy to perceive more consensus and clarity than really exists. So we should be excited about suggestions about what the longtermist research community has missed so far.
- Because (especially longtermist) EA has grown so rapidly, big projects now might be grounding their theories of change in a relatively small amount of early research. It makes sense to check that research for Wizard of Oz style citation trails — where the key claims bottom out in citations that aren't much better than guesswork.
- A good deal of research which informs EA decision-making is conducted by generalists rather than subject experts. We might benefit from employing subject experts to review that research and point out blind spots.
I think the best way to collect low-hanging fruit here is with a prize, probably announced and conducted on the EA Forum. The idea would be to solicit pieces critically assessing some existing work, ideas, directions, or empirical claims. 1,000–10,000 words, say; 2–20 hours of work. A prize pool of (say) $20,000 to $150,000, to be disbursed between multiple winners.
The prize announcement should be clear about the criteria; and the criteria might include: reasoning transparency, novelty, action-relevance, and focus (well-defined scope). I think the EA Forum guidelines are an excellent and relevant resource also.
But you could also imagine offering grants for individuals to write critical pieces. This might involve disagreeing with an influential view or piece ("I dispute X"), or it could involve engaging with a neglected perspective or discipline ("X seems to have missed Y, here's how you can add it in"). There could be a fund people could apply to with research proposals, and/or that fund can proactively reach out to individuals who might be a good fit. There could also be some means by which anyone can recommend some other person for consideration.
But useful criticisms don't only go unwritten because it's hard to get paid for them: just paying people doesn't address concerns about prestige and status. To address those worries, EA (research) organisations might be encouraged to create a /critique-us page on their website, where they list key claims that they would most like to see scrutinised, and positively encourage people to critically investigate them.
Also, it should be easy to write pseudonymously or anonymously. The Journal of Controversial Ideas allows this, and is commendable in that respect.
Who decides what criticism gets funded, published, and awarded? That's tricky. Some selection would be needed, because we specifically want to reward (and so incentivise) thoughtful, useful, good-faith criticism. Just boosting critical voices across the board would drown the useful signal in attention-draining noise. One model is to have a board composed of insiders and outsiders to EA, who vote on applications for funding or winners of a prize. Whatever the model ends up being, it seems unusually important for the decision-making process to be transparent: it should be publicly knowable who was not recommended for funding (if the rejected applicant so wishes).
Lastly, what should happen to this critical work once it's written? How do we make sure decision-relevant criticism changes decisions? One answer is to try collating the work in a single place, such that criticisms are easy to search and find. I also think it would be useful to have some means by which it's easy to identify responses to, changes caused by, and discussions of any piece of criticism, as far as there are any. This could just live on the forum — there are already some relevant tags: for criticism of effective altruist organizations, criticism of effective altruist causes, and criticism of effective altruism. To post criticism there would be a good start. But I expect we could do more. For instance, the project could involve a dedicated website, which lists each piece with some meta-commentary, links to discussion on the EA Forum, related pieces, and so on. It could even become a journal, although I suspect this might be jumping the gun given the absence of a journal for any part of EA itself.
Some of the above ideas were informed and inspired by conversations with Joshua Monrad, with whom I’ve discussed some more concrete project ideas in this domain.
Space governance research centre
I am entirely biased, because I recently wrapped up a preliminary bit of research on space governance. But I think the governance of outer space could end up becoming something like a new research area for longtermist EAs.
When an extremely nascent field is just coalescing, I think it makes sense to find everyone who could help establish it, and put them in the same place (figuratively or literally). Minimally, this could look like reaching out to people who seem to know a bit about space and a bit about the principles of longtermist EA, and hooking them into an informal network of researchers.
A more ambitious version could look like a proper research centre — i.e. actually registering a nonprofit, having people on the payroll, having a bunch of branding and operational support and maybe even affiliations with established academic institutions.
One reason is that this could bootstrap the field very quickly, and indeed increase the chance a field emerges at all.
In general, I think specific book ideas aren't wildly valuable, because almost all the work, and all the value, comes from the execution rather than the idea. Regardless, here are some specific EA-relevant books I would love to read.
A (New) Introduction to Effective Altruism
Doing Good Better is a superb book, and probably remains the best book-length introduction to effective altruism. But in part because the book did so much to grow EA, the movement has outgrown the book. This is neither a surprise nor a mark against the book — nearly 7 years have passed since it was published.
In particular, the book came before AI alignment, existential risks, and longtermism really emerged as key threads in EA thinking, so the experience of reading the book and then getting up to speed with cutting-edge EA could be jarring; even confusing. The issue isn't simply that longtermism happened — the community itself has properly grown up since 2015, along with a constellation of organisations, ideas, cause areas, and cause area candidates. If your aim is to excite people about getting involved with EA, failing to describe all the exciting stuff which has happened in the past 5 years would be to miss a wide-open goal.
What should a new introduction to EA look like? Well, I'm confident that it shouldn't look like a redo of Doing Good Better (Doing Doing Good Better better...). One cool version could be a whirlwind survey of organisations and concepts, where the aim is just to blow the reader's mind as efficiently as possible. Or it could in fact be multiple books, where the arguments and framings in each are more specifically tested and optimised for specific audiences, such as high-school kids looking for inspiration about what to do with their lives.
I also happen to think this is unusually important to get right, since the reputation of the book would be difficult to separate from EA's overall reputation. So I think a lot of care should be taken in writing it, and likely the idea should be entrusted to someone with a track record as a communicator.
Utilitarianism: a Modern Introduction and Defence
I'm still a bit confused about why there are not more popular books about utilitarianism. It does seem clear that utilitarian views are decidedly out of vogue in the philosophical establishment, and in the humanities writ large. On more obscure ethical views you will find enough books, trade and academic, to fill flea-market bookshelves. But the best philosophical explanation and defence of utilitarianism may literally be Mill's (1861) Utilitarianism.
I claim this is bad, and not in a way which depends on a full-blooded utilitarianism being correct. I think of 'utilitarianism' as now mostly referring to a somewhat diffuse bundle of attitudes; centrally scope sensitivity, impartiality, some kind of aggregative principle, a focus on consequences, some kind of total view with respect to population ethics, and some kind of Bayesian mindset. Taken independently, it just seems obviously good if more people had a better understanding of the virtues of each of these components.
I'm less interested in defending utilitarianism against edge cases and restrictions, because I think the important core of the view is rarely undermined if you allow those edge cases and restrictions. If most people thought the Earth was flat, and you claimed it was round, you wouldn't mind conceding that it wasn't exactly spherical.
Outside the academy, 'utilitarianism' and 'utilitarian' connote bad things, and I think it's time to reclaim those words. For instance, 'utilitarian' suggests 'cold and austere', but utilitarian-as-in-ethically-utilitarian design would surely be really fun. More seriously, utilitarianism is associated with a calculating approach, as opposed to a loving one. But there's a sense in which utilitarianism is love axiomatised: the most principled way to spread and enact all the things that utilitarianism is unfavourably contrasted with.
I'm also curious to read some properly thoughtful and ingenious objections, beyond the familiar ones.
History of Philanthropy
If you're a philanthropic movement and you want to avoid the mistakes and retrace the successes of your predecessors, a good start might be learning about the history of philanthropy. So I'm pro more books that tell the story of (e.g. 20th c.) philanthropy, and which try to draw out actionable lessons.
Open Philanthropy have made very good inroads on researching the history of philanthropy. They report that this research "has contributed significantly to our picture of what great giving looks like". In particular, they say that it (i) inspired ambition; (ii) suggested the value of creating rather than delegating new nonprofits; and (iii) suggested "the possibility of creating change by helping a nascent field grow even when there’s no apparent political opportunity". They also say that the most useful book they found was Casebook for The Foundation: A Great American Secret. I would add that Philanthropy: From Aristotle to Zuckerberg also looks relevant.
Some people who know their stuff on this subject are Benjamin Soskis and Rhodri Davies. HistPhil is a web publication on the history of the philanthropic and nonprofit sectors.
Stripe Press commissioning editor Tamara Winter also recently tweeted about this question.
Beyond philanthropy, I'd also love to read a book about something like social movements which achieved an outsized impact. For example, the group of early neoliberals clustered around Chicago achieved a wild amount of influence, partly through the so-called 'Chicago boys'. How did that happen?
In a meta turn, I'd also be interested to read about the books that did the most to change the world, especially in a positive way. Less interested in "this famous person wrote a book, so that book must've been important", and more interested in "this book you likely haven't heard of appears to have actually influenced one or many important and consequential decisions".
A Verbal History of EA
Effective altruism has reached a point where more people will be asking about its history. But everything happened too quickly for anyone to take notes. The records are there, but scattered across forum and blog posts. As EA grows, having a canonical source for the early history of EA will become more important.
But that's not the real reason I want to see a history of EA. The real reason is that it's an awesome story. It's funny, and surprising, and heartwarming, and inspiring.
One way this could get written is if a relative outsider (a journalist or writer) does some interviews, reads a few Wikipedia pages, and writes a story based on their view from the outside (like Tom Chivers did for AI safety). This could be very good, but I am more compelled by the idea of a verbal history — a collection of voices woven together by an editor. Valley of Genius does this for Silicon Valley and it's just epic. I hear Hackers is very similar.
To be specific: imagine something like 50–150 interviews with people who were close to different parts of EA — who were in Oxford when Giving What We Can got started, who were around the Bay when things spread west and collided with rationalist people and 80,000 Hours got started, and so on. The author narrates the broad strokes at the beginning of each chapter, and 90% of the remaining text is quotes from those interviews. Strange, amusing, nerdy minutiae are appreciated.
Would this book be impactful? I'm not convinced — seems more like a time-consuming exercise in navel-gazery. But for the somewhat narrow audience that might appreciate such a book, I think it would be completely delightful.
If you take a 'utopia' to mean something like "a future radically better than the world today"; and you think radically improving our lot is achievable or even likely conditional on surviving a 'time of perils', then a decent first approximation of certain flavours of longtermism is something like 'maximise P(Utopia)'. Longtermists and effective altruists are right to point out that we do not need a granular picture of 'utopia' to aim towards, so long as we can fix problems today and secure a future in which we eventually have the space to do that thinking. But writing about utopia could still make sense now, for a few reasons:
- Some of the most destructive social and political movements from history can be described as 'utopian', and it could pay to understand how those utopian visions translated into bloodshed and disarray.
- On a smaller and less calamitous scale, history is littered with failed experiments in communal/utopian living. I'd love to learn more about them, and learn about cross-cutting reasons they failed.
- As Holden Karnofsky writes: "When thinking about the value of ensuring that humanity continues to exist and/or successfully navigating what could be the most important century, it seems important to consider how good things could be if they go well, not just how bad things could be if they go poorly [...] In particular, I think it's liable to make us fail to feel the full importance of what's at stake."
- Yet, it's very difficult to describe utopias in any kind of compelling or convincing detail. I'd like to see more writing about why, and also more efforts to do better.
- The history of attempts to describe utopia is going to roughly track a history of how we used to imagine the (actual) future, and knowing about this intellectual history could help us do that imagining better.
- I have spoken to smart people who just don't believe life can get much better than the standard of living for healthy people in relatively wealthy, stable parts of the world. A book about utopia might be aimed at persuading people that there is something more to hold out for. Alternatively, maybe those people have a point. In which case, a book about utopia could try understanding why life can't feasibly get much better than this.
You could imagine a book which begins with the intellectual history, and mixes in stories about the practical experiments inspired by them, plus the events which shaped the intellectual history in turn. Then it could turn to actually sincerely trying to fill in some details about what an actual, achievable, conceivable utopia could look like. This means digging into some social science and speculative engineering.
Megaprojects idea contest
Recently, it has become possible to consider EA projects which could scale to absorb hundreds of millions of dollars, because very large donors have stepped onto the scene and may be able to fund them. This effectively opens up a new category of project which previously was mostly not worth taking very seriously (Michael Aird has written an excellent post nuancing exactly what this implies). Some ideas which could scale hugely are floating around, but perhaps there hasn't yet been enough of a systematic push to generate and (especially) sort them.
Like with ideas for a book, generating ideas for such massively scalable projects might be the easiest part — most of what matters lies in refining the ideas and then executing on them. But ideas are also cheap, so to pass over an idea for a highly impactful project of this kind would amount to an egregious missed opportunity.
One natural suggestion, then, is to announce a contest for 'megaproject' ideas. Entries should include the case for impact, key uncertainties, and (crucially) possible harms. I'm not sure who would make a great judge, but perhaps folks with grant evaluation experience, or experienced forecasters. Or perhaps there is a way to have the community score the submissions. I also wonder about creative incentive models here, such as by paying submissions a bonus if the suggestion is both appreciably novel, and an appreciably similar version ends up being funded.
Are all the best 'megaproject' ideas fairly obvious? Or are some ideas, or variants on them, hiding from plain view? In other words: how expansive is the space of ideas? I'm not sure. And to the extent I'm not sure, it seems worth really checking for excellent but previously hidden ideas, especially since (again) ideas are cheap.
After soliciting ideas, I'd be interested in fleshing out, say, a dozen of the top proposals. Then you could imagine a round of evaluation, where perhaps teams of forecasters could independently score each proposal along various metrics, such as "by how many percentage points could this reduce the chance of existential catastrophe this century?".
Since originally writing this idea down, the FTX Future Fund have of course announced a very similar prize. This is great news. I'm unsure if it makes this idea redundant, or whether there could still be reason and interest to run a contest on the EA Forum.
One-on-one advice matchmaking platform
If you have been following developments in the world of effective altruism recently, you will have noticed that money has suddenly become a whole lot cheaper relative to other things. Among other things, this means that people with the appropriate skills and context have become far more valuable (in monetary terms); and at current margins we're bottlenecked by skills and people far more than money.
A rapid way to bring in new people, to vet them, and ultimately to begin trusting them to work autonomously, is to connect them with experienced EAs. I think even one-off calls can prove extremely influential in this way: a smart, motivated person can read a bunch about EA online, but the thing which can enable them to begin doing useful work is often speaking with other human beings doing that kind of work. However, the way in which more experienced EAs are matched up with new people is a little unsystematic and ad-hoc. It is possible to reach out to experienced EAs on one's own initiative, but relying on initiative like this favours (i) confident and outgoing people over people who underrate their own ability or are otherwise shy about asking for advice; and (ii) people who are already well-connected (e.g. already went to a university with lots of EA alumni).
One exception is that the 80k careers advising team do a great job at connecting their advisees with experienced people in their network. But this is limited: the 80k team can't be expected to keep track of every potential connection.
From the perspective of more experienced folks, it's nice to have ways to do good outside of your job, and give back to the community that gave you a leg up. The default way to do this has been to give ≈ 10% of your income through Giving What We Can. But I just said that money has become relatively less valuable, and getting new (skilled, trustworthy) people involved has become relatively more valuable. So perhaps we could consider a new norm of giving on the order of 2–5% of your work time to speaking with junior people.
Now here's a concrete idea to enable all that: an online platform where advisees and advisors can sign up and input their interests and focuses. Advisors can tell the platform how much time they're prepared to spend mentoring others, plus what times they are free to speak, and the platform can work some magic and schedule calls automatically. Think of it kind of like scalable, distributed 80k advising.
One nice feature of this idea is that it wouldn't require much 'core EA' time to build — it sounds like a fairly standard job for a few developers. But it could streamline much of the ad-hoccery and inefficiency of arranging calls like this, and help leverage a small amount of experienced time to generate a lot of new EAs.
[Here I have cut a section about nuclear power advocacy]
Quadratic funding pools for EAs
Quadratic funding is an extremely neat method for allocating funding between different public goods, according to a collective decision-making procedure. In short, you have a pool of money and a group of people, each with their own personal pot of money. Each person can suggest and personally give to a proposed public good, and others can also give to that project according to how much they (would) value it. Then the quadratic funding rule decides how the larger pool gets spent, in the following way: for each project, take the sum of the square roots of each individual contribution, and square it.
That is, a project's total funding comes to (√c₁ + √c₂ + … + √cₙ)², where each cᵢ is an individual contribution.
Roughly speaking, this means the pool of money subsidises each project, in addition to the individual donations, in a way which gives you a certain kind of optimal allocation — namely, optimal for the group if everyone distributed their money to get the most bang-for-buck for themselves once the funding pool subsidises that project according to the rule. I am no good at explaining this in a short and less obscure way, so if you're curious you could read Vitalik Buterin's excellent explainer on quadratic voting and quadratic funding. Here is an excerpt:
In any situation where Alice contributes to a project and Bob also contributes to that same project, Alice is making a contribution to something that is valuable not only to herself, but also to Bob. When deciding how much to contribute, Alice was only taking into account the benefit to herself, not Bob, whom she most likely does not even know. The quadratic funding mechanism adds a subsidy to compensate for this effect, determining how much Alice "would have" contributed if she also took into account the benefit her contribution brings to Bob.
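The matching rule described above can be sketched in a few lines of code. This is a minimal illustration, not part of the original post: the function name `quadratic_match` and the choice to scale subsidies down proportionally when the pool can't cover every project's full top-up are my own assumptions (real implementations like Gitcoin's handle this capping in various ways).

```python
from math import sqrt

def quadratic_match(contributions, pool):
    """Split a matching pool across projects using the quadratic funding rule.

    contributions: dict mapping project name -> list of individual donations.
    A project's 'ideal' funding level is (sum of sqrt(c_i))^2; the pool tops
    up each project from its raw donations toward that level, scaled down
    proportionally if the pool is too small to cover every top-up in full.
    """
    ideal = {p: sum(sqrt(c) for c in cs) ** 2 for p, cs in contributions.items()}
    raw = {p: sum(cs) for p, cs in contributions.items()}
    needed = {p: ideal[p] - raw[p] for p in contributions}
    total_needed = sum(needed.values())
    # Scale subsidies to fit the pool (one pragmatic way to handle a small pool).
    scale = min(1.0, pool / total_needed) if total_needed > 0 else 0.0
    return {p: raw[p] + needed[p] * scale for p in contributions}
```

Note how the rule rewards breadth of support: four donors giving $1 each produce an ideal level of (4 × √1)² = $16, whereas a single $4 donor produces only (√4)² = $4, i.e. no subsidy at all.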
I'm suggesting that perhaps we could experiment with something similar as a mechanism for funding projects in EA. People opt in, perhaps to multiple pools with different themes. They can suggest projects — by describing the idea and listing it — and then vote on each project by spending some of their pot of money. Perhaps this could be the person's own money, or perhaps everyone could be allocated the same amount of 'credits' at the start. How to allocate 'credits' is tricky — on one hand, you want people to feel like they're invested in the process going well, and to feel like they have a responsibility to do a certain amount of due diligence on each project. On the other hand, you want everyone to feel empowered to participate even if they can't afford to pitch in as much of their own money: this is not primarily or even secondarily supposed to be a fundraising mechanism.
Setting up the infrastructure for this seems like the heftiest task here — maybe a few months of dev time at least, even for an MVP.
As a point in favour, this kind of QF mechanism gives people an influence over where funds are allocated that is greater than their individual donation, and this may make individual donations more attractive or meaningful. Also, it works as a really nice way to allocate funds democratically — it's probably close to an ideal rule for a funding pool to be said to fairly represent the views of its donors.
What about considerations against? One issue is that the subsidising pool needs to be fairly large compared to the sum of individual donations, but that actually doesn't seem like a dealbreaker in a world where money is not especially constrained.
More importantly, using quadratic funding as a way to decide what to fund doesn't quite line up with the original motivation for quadratic funding, which is where the people who provide for the public goods are the same people who benefit from them. This is the case where you can show that the allocation ends up being optimal in a certain sense. But in the EA version, the beneficiaries barely overlap with the providers (think animals, future generations, recipients of bed nets). The game is less one where people each provide information about what they'd prefer, and more one where people weigh in on what they think is best overall. And nobody claimed QF is the best way to share and aggregate that kind of information — it's normally going to be more (time and effort) efficient to trust a few decision-makers to act on behalf of many donors.
As mentioned, email me if you're interested in helping join or start any of these projects! I'll aim to post a few more if people think that lists like these are useful. Footnotes (footnote, I guess) below.
I'd also like to clarify that I'm not trying to claim credit for these ideas, or to claim that I was in some sense the first person to think of any of them. In some cases this should be obvious!
Certainly book sales are heavy-tailed, and sales are going to track impact decently well.
My own feeling is that e.g. Superintelligence and The Precipice are each worth > $1 billion in this sense. $10 million is indeed still very high.
Like Mill's On Liberty or the collected works of Alan Turing, or whatever else.
Such as the Journal of High Energy Physics, Logical Methods in Computer Science, and Geometry & Topology, which are all overlays for arXiv.
You could definitely do this without making it part of the broader overlay journal idea.
Note that 'red teaming' is standard (and important) practice outside of the nonprofit context, especially in cybersecurity & intelligence.
How much expertise do you have? How confident are you about the claims you’re making? What would change your mind? If your work includes data, how were they collected?
It’s fine to point out where something is going wrong; even better to be constructive, by suggesting a concrete improvement.
Though not close to the scale of technical AI safety, AI governance, or bio.
FHI in the Wild West days.
Where, to be clear, the field is something like 'longtermist considerations on space governance'.
The Precipice and What We Owe The Future (forthcoming) do focus on the longtermist aspect, but they are not centrally about effective altruism the project/community/bundle of ideas.
Here's a list of other options. Also worthy of mention are J.J.C. Smart's Utilitarianism: For and Against, Peter Singer's writing (especially Practical Ethics), Katarzyna De Lazari-Radek's Very Short Introduction and the writing of Joshua Greene, especially Moral Tribes.
Utilitarianism.net is a superb online introduction to utilitarianism which only continues to improve. However, I suspect books can achieve things which are harder to achieve with web resources (of arbitrarily high quality), such as landing invitations to popular podcasts. So I do think a gap remains for a book!
For what it's worth I don't think this makes those versions of longtermism objectionably 'utopian', because they don't prescribe some particular utopia, they're just committed to a view along the lines of "the future could be radically better, let's protect that potential, and leave it to our descendants to build that future in all of its details."
As in, thorough understanding of effective altruism and the field in which they work. Specifically the understanding that is hard to glean from the 'outside'. Context, after all, is that which is scarce.
Thanks for a really valuable post - there are some great ideas and it's well written. That said, I don't know why "I expect this will end up being part 1 of N, where E(N)≈3 this year."
is phrased like that, instead of something like:
"I will probably write several posts with more ideas later this year."
I think this is a case of unnecessary jargon - notable mainly because the rest of the post is a model of clarity.
Thanks very much for the pointer, just changed to something more sensible!
(For what it's worth, I had in mind this was much more of a 'dumb nerdy flourish' than 'the clearest way to convey this point')
I'd like to see a Portuguese translation of The Precipice, and would love to contribute to it. Is there any project along these lines yet? If not, perhaps I could get some acquaintances in an academic publishing house interested (I have recently organised a translation of SEP entries on Philosophy and Economics) or apply for funding. I could try getting a bigger company interested, too - but that's likely harder and would take longer.
Amazing! Just sent you a message.
Excellent post! I really appreciate your proposal and framing for a book on utilitarianism. In line with your point, William MacAskill, Richard Yetter Chappell and I also perceived a lack of accessible, modern, and high-quality resources on utilitarianism (and related ideas). This is what motivated us to create utilitarianism.net, an online textbook on utilitarianism. The website has been getting a lot of traction over the past year, and we are still expanding and improving its content (including plans to experiment with non-text media and translations into other languages). We encourage anyone to reach out to us with ideas for additional content we could create or any other ways to improve the website.
Big fan of utilitarianism.net — not sure how I forgot to mention it!
Thanks for working on this website, it's a great idea!
Possible additions to your list of books (I've only read the first one so forgive me if they aren't as good/relevant as I think they are):
EDIT: the Very Short Introduction book recommends the other books listed (with the exception of The Methods of Ethics)
Embarrassingly wasn't aware of the last three items on this list; thanks for flagging!
Neither was I before I looked at the bibliography in the first book!
I guess that kind of confirms the complaint that there isn't an obvious, popular book to recommend on the topic!
As a beginner exploring normative ethics this looks very helpful. Thanks!
I love these ideas and would happily fund some of them!
I've thought of using quadratic funding to allocate funds to EA projects before - see here and here.
Re: the publishing house idea, how would it work as an imprint - how would you ensure that the imprint is committed to maximizing impact rather than only its parent company's ROI? Could it be seeded with a grant like Future Perfect?
Oh cool, wasn't aware other people were thinking about the QF idea!
Re your question about imprints — I think I just don't know enough about how they're typically structured to answer properly.
My intuition is that the priority for funding criticism of EA/longtermism is low, because there will be a lot of smart and motivated people who (in my opinion, because of previously held ideological commitments; but the true reason doesn’t matter for the purpose of my argument) will formulate and publicize criticisms of EA/longtermism, regardless of what we do.
I'm not sure about this. People outside EA who have a good criticisms might just decide it's not worth writing up at length - they should just ignore EA and get on with their preferred projects. People inside EA might worry about making themselves unpopular ('getting cancelled') and conclude it's not worth the risk.
I disagree somewhat; if we directly fund critiques, it might be easier to make sure a large portion of the community actually sees them. If we post a critique to the EA Forum under the heading "winners of the EA criticism contest," it'll gain more traction with EAs than if the author just posted it on their personal blog. EA-funded critiques would also be targeted more towards persuading people who already believe in the idea, which may make them better.
While critiques will probably be published anyway, increasing the number of critiques seems good; there may be many people who have insights into problems in EA but wouldn't have published them due to lack of motivation or an unargumentative nature.
Holding such a contest may also convey useful signaling to people in and outside the EA community and hopefully promote a genuine culture of open-mindedness.
Feels false, from quick googling:
[worked with the Future of Humanity Institute to form the Global Priorities Project](https://www.centreforeffectivealtruism.org/history).
My guess is that when you factor in lead times on writing a book, this starts to feel a lot more plausible. The book could easily have been finished nine months before it came out. It could easily have been started a year before that. And its basic shape could have been mostly settled six months before that. So I think we could easily be talking about a book the shape of which should be dated to sometime in 2013.
Which isn't to say none of those threads were starting to emerge in 2013 (or, indeed, quite a lot earlier), but my sense is that they lacked anything like the prominence they have now.
Thanks, this is a useful clarification. I think my original claim was unclear. Read as "very few people were thinking about these topics at the time when DGB came out", then you are correct.
(I think) I had in mind something like "at the time when DGB came out it wasn't the case that, say, > 25% of either funding, person-hours, or general discussion squarely within effective altruism concerned the topics I mentioned, but now it is".
I'm actually not fully confident in that second claim, but it does seem true to me.
AI alignment and existential risks have been key components from the very beginning. Remember, Toby worked for FHI before founding GWWC, and even from the earliest days MIRI was seen as an acceptable donation target to fulfill the pledge. The downweighting of AI in DGB was a deliberate choice for an introductory text.
Thanks, that's useful to know.
Another idea would be, before the book is published, to propose to the publisher to give them a lump sum in exchange for the rights after ~3 years. My impression is books usually make most of the money right after they are published, so such a deal may be attractive to the publisher. Also if you're an in-demand author you have a lot of leverage at the beginning.
Nice list! On Utilitarianism books I would just point out Utilitarianism: A Very Short Introduction - jointly written by Katarzyna de Lazari Radek and Peter Singer. It's brilliant and accessible, and even has a section on EA. I would love to see this book find its way into the hands of more people who may respond well to its content.
Also, you mention Mill's book may be the best defence and explanation of utilitarianism. I don't have a view myself not having read it, but I know Singer far prefers Henry Sidgwick's The Methods of Ethics and wrote The Point of View of the Universe: Sidgwick and Contemporary Ethics as a discussion and extension of Sidgwick's ideas.
(One thing worth noting is that Sidgwick's 'The Methods of Ethics' is very long and dense. I think it's more philosophy than most people will be interested in reading in their entire lives, and for this reason I don't generally recommend it to EAs, similar to why I don't generally recommend that people read Reasons and Persons in full.)
Thanks, I suspected as much. Peter Singer’s Sidgwick book is probably a much better recommendation to make!
Like Max mentioned, I'm not sure The Methods of Ethics is a good introduction to utilitarianism; I expect most people would find it difficult to read. But thanks for the pointer to the Very Short Introduction, I'll check it out!
Also just copying from my kindle version of the very short introduction book:
That's fair enough. I haven't read the Singer book based on Sidgwick, but I suspect it would be far more accessible and a good book for someone to read if they are already familiar with the key ideas of utilitarianism.
Interestingly the books I mentioned aren't in the utilitarianism.net list of books. Not sure why.
Thanks for writing this post Fin!
I want to express my support for the 'Space governance research centre' idea. I've published a little on Space Policy/Governance and had some very positive feedback from professionals working within the field (e.g. within ESA, Raytheon, companies engaged in EO and so on) supporting the need for proactive policymaking and governance of space activities.
It seems like a natural area for a research centre/think tank/policy lab aimed at policy research and implementation aligned with principles of longtermism & EA. I would also argue that the current context (right before space policy/governance is really picked up - if indeed that's what will happen) is exactly the right moment to start something like this... an org could have an outsized impact by taking advantage of a relatively neglected environment and shaping the field from the ground up.
Would love to talk to you about this - if you're keen then feel free to reach out to me at email@example.com
Not sure if you're aware but Open Phil does sponsor a segment of the Guardian focused on farmed animals and has done so since 2017.
I was aware but should have mentioned it in the post
— thanks for pointing it out :)
I knew about this, but having a closer look now, I was surprised to see several articles that seem unaligned with common effective altruist positions. For example:
I haven't read most of the post yet, but already I want to give a strong upvote for (1) funding critiques of EA, and (2) the fact that you are putting up a list of projects you'd like to see. I would like to see more lists of this type! I've been planning to do one of them myself, but I haven't gotten to it yet.
I think funding good criticism is a really good idea.
As a meetup organizer, I'm becoming very aware that preserving a culture of criticism is in tension with building a strong social fabric, or making friends. Maintaining the culture is really hard. It would help a bit to have this very clear signal that we materially value good criticism, and that we protect our critics: even though we're normally so moderate and agreeable when we meet in person, do not be fooled, we know the value of disagreement too.
Another thing is, I think this prize would convince a lot of bad critics to work a little harder, and many of them would consequently turn right into good critics, and, I don't think it harms our culture of criticism to admit that bad critics cause more harm than good. Bad critics misrepresent things, they make everyone who reads them less informed and more confused, they take up time to respond to. It creates noise and rifts that heal slower than they're torn. I genuinely wouldn't wish disingenuous critics on anyone, I wouldn't even wish them on disingenuous critics (alas, by social adjacency).
And, I think this would cure a lot of them. This proposition that you might be able to deliver a criticism so objectively good that the targets of your ire are committed to paying you for it, officially recognizing that you were right and they were wrong, actioning your advice, and changing. Imagine the level of satisfaction you'd get out of that. The world would be made right.
A lot of people would be moved by that offer.
I don't know if you'd really need to do anything special to keep the criticism good. On some level, people know whether their criticism is going to be useful to the people they're talking to; they act like it's deeply ambiguous, and that it should not be for us (or anyone) to decide whether they're criticizing in good faith, but it really isn't ambiguous. Bad criticism is trivial to identify: you can tell it's bad because it does not move you and visibly wasn't intended to. It will seem to be driven by ignorance or intentional misrepresentation. It's oriented around lowering the target, rather than reforming them. Good criticism shows time and care, and if you've ever heard the Litany of Gendlin then you'll have no difficulty taking it with relief. Good criticism liberates you from a mistake that you want to stop making. Bad criticism doesn't.
Even if there were some rigid, "fair" set of rules forcing you to smile and say thank you and pay for bad criticism, this would not make us any healthier, because bad criticism isn't actionable, it wasn't intended to be. It has no use.
Seeing that only genuinely good criticism can win these prizes, many would be convinced to put down the whip and pick up the scalpel, be more careful in checking their assumptions, citing sources, arguing for the sake of the target rather than some disinterested audience.
By the way, I love this loose definition of utilitarianism:
This is a great list, thanks for sharing! Wanted to throw one other idea into the pile of potentially promising causes to fund and work on. Sorry if this distracts from the post, looking to share the idea and I would be very interested in any feedback!
[Okay, thousand word comments should probably be their own posts. Moved this discussion of "nuclear weapons security engineering" to my shortform here.]
Thanks for sharing — you should post this as a shortform or top-level post, otherwise I'm worried it'll just get lost in the comments here :)
So true. Scared of being stupid on the front page I guess. Compromised by moving to my shortform, thanks again for the inspiration!
This is a great list of projects! I've been compiling a list like this myself (with a bigger focus on software/tech initiatives), and will aim to share that out sometime.
Re Quadratic funding pools for EAs: Sam Harsimony and I are working on a Manifold Markets initiative to build this out, as part of exploring Charity Prediction Markets. Be on the lookout for this - we should have a prototype out within the next month! If you or anyone else would like to help, get in touch with firstname.lastname@example.org. We're interested in talking to:
On the topic of starting a publishing house/imprint - I recall seeing a suggestion from Ben Pace that an EA could buy Blackwell's in Oxford and steer it in an EA direction...
There are several authors who are somewhat EA-aligned, like Sam Harris, Steven Pinker, etc., who we should work on recruiting to write books about EA in the future. Also, I would love a verbal history of EA simply to settle the question of, if it got optioned for movie rights, who would play Will MacAskill. I have my bets on Liam Hemsworth.