I discovered this recently and enjoyed the brevity and clarity of the presentation, so I'm sharing it on the Forum.
I couldn't find a video or recording of the actual talk from the Conference on the Ethics of Giving. If you happen to know of one, please share it in the comments!
Written in haste. All inaccuracies are mine.
This post originally appeared on LessWrong. It has been very lightly edited.
Megaproject management is a new-ish subfield of project management. Originally considered the special case of project management where budgets were enormous (billions of dollars), it is developing into a separate specialization because of the high complexity of such projects and their tradition of failure. The driving force behind treating it as a separate field appears to be Bent Flyvbjerg, previously known around here as the first person to develop an applied procedure for Reference Class Forecasting. That procedure was motivated by megaprojects. For context, these projects are things like power plants, chip fabs, oil rigs, et cetera; in other words, the building blocks of modernity.
I will summarize the paper "What you should know about megaprojects, and why: an overview" from 2014. For casual reading, there is an article about it from the New Yorker here.
History
Megaprojects...
I've seen and heard many discussions about what EAs should do. William MacAskill has ventured a definition of Effective Altruism, and I think it is instructive. Will notes that "Effective altruism consists of two projects, rather than a set of normative claims." One consequence of this is that if there are no normative claims, any supposition about what ought to happen based on EA ideas is invalid. This is a technical point, and one which might seem irrelevant to practical concerns, but I think there are some pernicious consequences of some of the normative claims that get made.
So I think we should discuss why it is often harmful to treat "Effective Altruism" as implying that there are specific, clearly preferable options for "Effective Altruists." Will's careful definition avoids that harm, and I think it should be taken seriously in that regard.
Claiming something normative given moral uncertainty, i.e. that we may be incorrect, is...
Strongly endorsed.
Thanks for the write-up. A few quick additional thoughts on my end:
Moral circle expansion is the attempt to expand the perceived boundaries of the category of moral patients. It has been proposed as a priority cause area and as a heuristic for discovering cause X.
Metaethics is the study of the language, knowledge and nature of morality. Together with normative ethics and applied ethics, it is one of the three main branches of moral philosophy.
Metaculus is a reputation-based prediction solicitation and aggregation engine. It was founded in November 2015 by astrophysicist Anthony Aguirre, cosmologist Greg Laughlin and data scientist Max Wainwright (Mann 2016; Shelton 2016).
Mercy for Animals is an animal protection organization that conducts undercover investigations of factory farms and engages in outreach activities to promote veganism.
I've recently started interning with Charity Entrepreneurship and have been reading some key articles that I found really useful for understanding the organisation better and for actually doing my work better, such as:
I'd be curious whether other EA organisations have some "must-read" articles that encapsulate the organisation's approach to doing good - the high-level strategy, assumptions about welfare or suffering, perspective on the future, theory of change, or anything else.
Here’s a list of alternative high level narratives about what is importantly going on in the world—the central plot, as it were—for the purpose of thinking about what role in a plot to take:
Really, thanks for the post. I think it's quite important to have such a list.
I’m not sure we could say “very likely,” though the odds are surely relevant. I'm no expert, but I guess the case for any particular solution to the Fermi Paradox is still open, with answers depending on everything from what probability distribution one uses to model the problem to our location in the Milky Way. For instance, being “close” to ...
Thanks for the post, Aaron. It's a good lecture and a very interesting subject.
I wonder if there’s a more general problem of “tipping points” here. And though I think there’s no real necessary conflict between individual & collective action for EAs, there’s a relevant issue when it comes to analyzing how neglected a cause area is – i.e., deciding if an additional contribution increases the probability of effective change.
I should remark that I’m not sure that the “expected badness of buying one chicken” is roughly equivalent to the death of one chicken...