(Not mine) This post looks at ghostwriting and other misleading/dishonest behavior in EA. Maybe some people who have accounts here can clarify if it was intentional or not.
I haven't been convinced by anything I've read, but I also haven't read much.
I'm concerned that unless you use preferences, you couldn't justify any kind of tradeoff rate between (and hence the commensurability of) suffering and happiness/pleasure, because they are fundamentally different. Then, by using an exclusively hedonistic view of value, haven't you already rejected the moral relevance of preferences, and, if so, how would you justify referring to them to defend hedonism? Even if you could set a tradeoff rate based on preferences, how would you justify usin... (Read more)
Following the recent debate on the effectiveness of systemic interventions, I argue that investments in global trade may be effectively altruistic. If quantified, the impact of investing in the facilitation of world commerce may exceed that of funding GiveWell’s charities per unit of spending.
Unlike investments in GiveWell’s charities, financing trade development in emerging economies enables the individuals living there to gain commercial competitiveness and thus join a virtuous cycle of income growth. An increased income enables the beneficiaries to purchase he... (Read more)
Catherine Hollander, How this year’s winners of the Nobel Prize in Economics influenced GiveWell’s work, The GiveWell Blog, October 18, 2019.
On Monday, the Royal Swedish Academy of Sciences announced that the development economists Abhijit Banerjee, Esther Duflo, and Michael Kremer are this year’s recipients of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel.
Banerjee, Duflo, and Kremer’s work to understand the global poor has influenced our research in myriad ways over the years. Some GiveWell staff cite Banerjee and Duflo... (Read more)
Stuart Russell, professor of Computer Science at UC Berkeley and Director of the Center for Human-Compatible Artificial Intelligence (CHAI), has a new book out today: “Human Compatible: Artificial Intelligence and the Problem of Control”.
In the book, he explains why he has come to consider his own discipline an existential threat to our species, and lays out how we can change course before it's too late. The book explores the idea of intelligence in humans and in machines, describes the benefits we can expect (from intelligent personal assistants to vastly accelerated scientific research... (Read more)
(Crossposted on LessWrong)
Absolute negative utilitarianism (ANU) is a minority view despite the theoretical advantages of terminal value monism (suffering is the only thing that motivates us “by itself”) over pluralism (there are many such things). Notably, ANU doesn’t require solving value incommensurability, because all other values can be instrumentally evaluated by their relationship to the suffering of sentient beings, using only one terminal value-grounded common currency for everything.
Therefore, it is a straw man argument that NUs don’t value life or positive states, because NUs value ... (Read more)
Cross-posted from the Animal Ethics blog.
The term “wild animal suffering” is a general term that can be defined as follows:
Wild animal suffering: the harms that animals living outside direct human control suffer due partly or totally to natural causes
In 3.1: Hard and Soft Skills, we discussed the possibility that the Germy Paradox exists because bioweapons aren’t actually easy to make. Today, we go into the past and discuss another possibility – that whether or not they’re effective, there’s some kind of taboo or cultural reason they aren’t used.
This is not a new idea, although there’s no real consensus. I separate scholarly explanations for the Taboo Filter into two schools: the humaneness hypothesis and the treachery hypothesis. In the humaneness hypothesis, people reject BW because they a... (Read more)
Let's imagine that establishment liberals dominated funding councils across the world and time and again made poor decisions with regard to maximising wellbeing. It would then be worth thinking about whether there is a specific way to help them make better ones. Are there ideologies which time and again cause people to make significantly worse choices than a typical person?
When discussing cognitive enhancement research as a potential EA cause area, a frequent counter-argument goes along the following lines:
"Higher cognitive performance is better. Thus, evolution already optimised for cognitive performance. Thus, it's unlikely that simple changes to brain chemistry could improve cognitive performance. Thus, cognitive enhancement research (and particularly research into nootropics) has low tractability."
I find the argument fairly weak for a number of reasons. Iodine supplementation seems to have worked great, and so does drinking coffee. But there are also some th... (Read more)
I think there are many questions whose answers would be useful for technical AGI safety research, but which will probably require expertise outside AI to answer. In this post I list 30 of them, divided into four categories. Feel free to get in touch if you’d like to discuss these questions and why I think they’re important in more detail. I personally think that making progress on the ones in the first category is particularly vital, and plausibly tractable for researchers from a wide range of academic backgrounds.
Studying and understanding safety problems
[Epistemic status: Pretty confident. But also, enthusiasm on the verge of partisanship]
One intuitive function which assigns impact to agents is the counterfactual, which has the form:
CounterfactualImpact(Agent) = Value(World) - Value(World/Agent)
which reads "The impact of an agent is the difference between the value of the world with the agent and the value of the world without the agent".
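The formula above can be sketched in code. This is a toy illustration only: the world model, the agents' names, and the value function below are all hypothetical, chosen to make the arithmetic easy to follow.

```python
# Toy sketch of the counterfactual impact function described above.
# A "world" is modelled as the set of agents present in it; the value
# function is a made-up example in which a joint project yields 10
# units of value only if both hypothetical agents participate.

def value(world):
    """Value of a world (a set of agent names)."""
    return 10 if {"alice", "bob"} <= world else 0

def counterfactual_impact(agent, world):
    """Value(World) - Value(World without the agent)."""
    return value(world) - value(world - {agent})

world = {"alice", "bob"}
print(counterfactual_impact("alice", world))  # 10
print(counterfactual_impact("bob", world))    # 10
print(value(world))                           # 10
```

Note that each agent's counterfactual impact is 10, yet the world's total value is only 10: naively summing individual impacts double-counts. This is one way the multi-stakeholder pitfalls mentioned in the post can arise.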
It has been discussed in the effective altruism community that this function leads to pitfalls, paradoxes, or to unintuitive results when considering scenarios with multiple stakeholders. See: (Read more)
The Procreation Asymmetry consists of these two claims together:
However, if a bad existence can be an "existential harm" (according to c... (Read more)
I don’t claim originality for any content here; people who’ve been influential on this include Nick Beckstead, Phil Trammell, Toby Ord, Aron Vallinder, Allan Dafoe, Matt Wage, and, especially, Holden Karnofsky and Carl Shulman. Everything tentative; errors all my own.
Here are two distinct views:
Strong Longtermism := The primary determinant of the value of our actions is the effect of those actions on the very long-run future.
The Hinge of History Hypothesis (HoH) := We are living at the most influential time ever.
It seems that, in the effective altruism community as it currently... (Read more)