Twenty-nine economists and philosophers, including leading researchers, argue in an article published today in Utilitas: "avoiding the Repugnant Conclusion is not a necessary condition for a minimally adequate... approach to population ethics." The link at the top of this post is to my own summary of the article and how we reached it, posted on Medium.
Population ethics asks how to evaluate policies and social trends that change the size of the global population. For decades, research has focused on whether to accept “the Repugnant Conclusion.” The Repugnant Conclusion is a hypothetical claim about how to compare populations of well-off people against imaginable, enormous populations of worse-off people. The Stanford Encyclopedia of Philosophy explains the Repugnant Conclusion and calls it “one of the cardinal challenges of modern ethics”. In a new publication in the journal Utilitas (link to open access paper), 29 philosophers, economists, and demographers agree: “avoiding the Repugnant Conclusion should no longer be...
"Insider giving" is sad to learn about and certainly inflates donation figures.
Quoting from the abstract of 'Insider Giving' (71 Duke Law Journal (forthcoming 2021); UCLA School of Law, Law-Econ Research Paper No. 21-02):
Corporate insiders can avoid losses if they dispose of their stock while in possession of material, non-public information. One means of disposal, selling the stock, is illegal and subject to prompt mandatory reporting. A second strategy is almost as effective and it faces lax reporting requirements and legal restrictions. That second method is to donate the stock to a charity and take a charitable tax deduction at the inflated stock price. “Insider giving” is a potent substitute for insider trading. We show that insider giving is far more widespread than previously believed.
In Human Compatible, Stuart Russell makes an argument that I have heard him make repeatedly (I believe on the 80K podcast and in the FLI conversation with Steven Pinker). He makes a pretty bold and surprising claim:
[C]onsider how content-selection algorithms function on social media... Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user's preferences so that they become more predictable. A more predictable user can be fed items that they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they will click on... Like any rational entity, the algorithm learns how to modify the state of its environment—in this case, the user's mind—in order
In this post I provide a brief sketch of The case for strong longtermism as put forward by Greaves and MacAskill, then raise and address possible misconceptions that people may have about strong longtermism. Some of these misconceptions I have come across myself; others I simply suspect may be held by some people in the EA community.
The goal of this post isn't to convert people, as I think there remain valid objections to strong longtermism to grapple with, which I touch on at the end of this post. Instead, I simply want to address potential misunderstandings, or point out nuances that may not be fully appreciated by some in the EA community. I think it is important for the EA community to appreciate these nuances, which should hopefully aid the goal of figuring out how we can do the most good.
EDIT: I realise this is a long post....
Each year, Animal Charity Evaluators (ACE) publishes a list of goals. This year, to align with our new operating model, we will share our top-level goals for 2021 and offer some of the potential activities we may undertake to achieve these goals. In an effort to stay agile in our work, we will set quarterly goals internally, assess our progress on those goals at the end of each quarter, and adjust goals accordingly. Here we present our top-level goals for 2021:
After publishing our 2020 charity recommendations, our researchers held a series of retrospective meetings. The outcome of those retrospectives—paired with feedback from all staff, board members, and the charities who participated in our evaluation process—helped us...
Negative utilitarianism (NU) is a version of utilitarianism whose standard account holds that an act is morally right if and only if it leads to less suffering than any of its alternatives. NU was originally presented as an alternative to classical utilitarianism, which regards suffering and happiness as equally important, and is a leading example of a suffering-focused view, a broader family of ethical positions that assign primary—though not necessarily exclusive or overriding—moral importance to the alleviation of suffering.
Negative utilitarianism is a version of utilitarianism whose paradigmatic account holds that the only determinant of whether an act is right is whether it minimizes expected suffering. It may be contrasted with classical utilitarianism, which does not give higher weight to reducing suffering over promoting happiness. Negative utilitarianism is an example of a suffering-focused view, a broader family of ethical positions that assign primary—though not necessarily exclusive—moral importance to the alleviation of suffering. Negative preference utilitarianism is another prominent version, according to which we ought to minimize the frustration of preferences.
Ord, Toby (2013) 'Why I'm not a negative utilitarian', Toby Ord's Blog - Unpolished Ideas, March.
We hereby announce a new meta-EA institution - "Naming What We Can".
We believe in a world where every EA organization and any project has a beautifully crafted name. We believe in a world where great minds are free from the shackles of the agonizing need to name their own projects.
To name and rename every EA organization, project, thing, or person. To alleviate any suffering caused by name-selection decision paralysis.
Using our superior humor and language articulation prowess, we will come up with names for stuff.
We are a bunch of revolutionaries who believe in the power of correct naming. We have translated over a quintillion distinct words from English to Hebrew. Some of us have read all of Unsong. One of us even read the whole Bible. We spent countless fortnights debating the ins and outs of our own org's title - we Name What We Can.
We're here for...
In March 2020, I wondered what I’d do if - hypothetically - I continued to subscribe to longtermism but stopped believing that the top longtermist priority should be reducing existential risk. That then got me thinking more broadly about what cruxes lead me to focus on existential risk, longtermism, or anything other than self-interest in the first place, and what I’d do if I became much more doubtful of each crux.
I made a spreadsheet to try to capture my thinking on those points, the key columns of which are reproduced below.