Working to reduce extreme suffering for all sentient beings. Author of 'Suffering-Focused Ethics: Defense and Implications' & 'Reasoned Politics'.

Tips for overcoming low back pain

Update: I tried taking curcumin supplements to boost my general health, and after taking them for some weeks, I began to notice that I didn't get low back pain even when I did things that usually triggered it. This was much to my surprise, since I didn't expect the supplements to have any effect on my back. Where before I needed to be careful not to do things that would give me low back pain, I now feel like I would have to make an active effort to make the pain come back. So it feels like a really big difference.

This is anecdotal, of course, but it turns out that positive effects of curcumin on low back pain are also supported by medical studies.

So it might be worth trying if you're dealing with low back pain. (If you do, note that it may take weeks for the full effects to kick in.)

The effective altruist case for parliamentarism

Tiago writes the following in response to a similar comment made on Overcoming Bias:

That is a common hypothesis, which is why studies usually include legal origins as a control. Others do not need to do it, because they used a fixed effects approach such that any invariant characteristic such as colonizer will be automatically controlled for. But endogeneity might always be an issue, which is why the book also deals with theory, and auxiliary evidence from companies and municipalities. I think you would like it!

As Tiago notes, the evidence goes beyond just national governments; the first chapter of his book has sections on national governments, corporations, and local government, and the latter two are not subject to this confounder. And as Hanson writes in his review of the book, one may argue that the evidence from cities (i.e. local governance) is most convincing:

Finally, and to me most persuasive, there is evidence on U.S. cities ...

As usual, the studies of variation across nations have a small N problem; if you try to include too many controls, you run out of data. In contrast, for firms N is huge, but one worries that their problems are too different, as boards of directors are rarely elected directly by shareholders. But the problem of city [governance] seems close enough to nations, and there N is large. For example, in this study N = 12,238.

See e.g. the studies on local governance cited above: Carr, 2015; Nelson & Afonso, 2019.

The effective altruist case for parliamentarism

I think the cause of promoting parliamentarism is potentially quite promising, and something that deserves considerably more attention than it has received so far (besides the OP, I believe this post is the only post related to parliamentarism on this forum).

Unfortunately, I don't feel the OP does justice to the cause or to the arguments in its favor. And I suspect Tiago himself would agree; the more convincing case is found in his book on the subject (a direct link to the book can be found on his website).

The following is an excerpt on "Parliamentarism vs. Presidentialism" (from a forthcoming book of mine) that provides a summary of some of the reasons in favor of parliamentarism, based in large part on Tiago's book:

Among modern democracies, one can broadly distinguish two different systems of government: parliamentarism and presidentialism. In a parliamentary system, the executive branch of government is appointed by — and may also be dismissed by — the legislative branch, and the ministers of the government carry a collective responsibility. In a presidential system, by contrast, the head of government, i.e. the president, is elected directly by citizens. The president has the power to appoint and dismiss ministers, and is responsible for the entire executive branch (Santos, 2020, p. 2).

A number of scholars have argued that parliamentarism has proved superior to presidentialism across a wide range of important metrics. In the words of political scientist and diplomat Tiago Santos, “political science analysis of the different systems is close to a consensus on the superiority of parliamentarism, economic models almost unanimously point in the same direction, and empirical evidence supports it” (Santos, 2020, p. xii).

In particular, countries with parliamentary systems are generally better at protecting individual liberties, including freedom of the press, and income inequality is 12-24 percent higher in presidential countries compared to parliamentarist ones (McManus & Ozkan, 2018; Santos, 2020, p. 1, p. 11). Parliamentary systems also appear to have significantly lower levels of political polarization (Casal Bértoa & Rama, 2021), and to be more stable, more peaceful, and less prone to coups (Santos, 2020, p. 1, ch. 1), all of which seem desirable features in relation to the proxy aims of securing cooperation, improving our values, and increasing our overall capacity to reduce suffering [and to achieve other altruistic aims].

Parliamentary systems are also associated with “better corruption control, bureaucratic quality, rule of law, …, infant mortality, and literacy” (Santos, 2020, p. 47; Gerring et al., 2009). In terms of more general measures, parliamentarism is associated with higher scores on the UN Development Program’s Inequality-Adjusted Development Index; for instance, not a single presidential country is in the top 20 of this index (Santos, 2020, p. 11).

Evidence pertaining to corporations and local governance likewise supports the overall effectiveness of parliamentary models over presidential ones. In terms of corporate governance, most corporations choose a structure similar to parliamentarism, whereas virtually none opt for a presidentialist structure, which suggests that parliamentarist structures have considerable advantages for effective and adaptive governance (Santos, 2020, 1.2.2). At the level of local government, it turns out that cities that opt for more parliamentary structures, such as by electing a city council that appoints a council manager, tend to do better on various measures compared to cities that opt for a more presidential structure, such as a “strong mayor” model in which a city mayor and council are elected separately. For instance, cities with the council manager model tend to have less corruption and less conflict among senior officials (Carr, 2015; Nelson & Afonso, 2019; Santos, 2020, 1.2.3).

What might account for this apparent superiority of parliamentarist structures over presidentialist ones? Political theorists have pointed to a number of mechanisms. Santos argues that the difference can be understood as an algorithmic one: parliamentarist systems implement a decision algorithm that is generally better suited for making good decisions (Santos, 2021). In more specific terms, parliamentary systems hold popular elections only for the legislature, whose members become the sole representatives of the people, whereas presidential systems elect both a legislature and a president as representatives of the people. And since presidential systems have no mechanism for aligning the majority of the legislature with the head of government, the president is likely to diverge in significant ways from a majority of the legislature on key political issues. In addition, parliamentary systems can more easily replace incompetent leaders, and they likewise tend to have less concentration of power than presidential systems, in which the president holds all of the executive power (Linz, 1990; Santos, 2020, 1.1.1).

If there is indeed such a strong case in favor of opting for parliamentarism over presidentialism, across so many relevant measures, should we not expect parliamentarism to be more popular? First, parliamentarism arguably is quite popular, especially among political scientists, as hinted by Santos above, and as evidenced by an elaborate literature defending its superiority compared to presidentialism (see e.g. Linz & Valenzuela, 1994; Riggs, 1997; Selinger, 2019; Santos, 2020).

Second, it is not surprising if parliamentarism has a difficult time gaining widespread popularity, in part because the case for parliamentarism can sound vaguely technical and boring, and in part because any vision to advance parliamentarist change is unlikely to stir our primal political motivations. It fails to inspire a struggle against a political outgroup — there is no clear “anti-parliamentarist” coalition to oppose — and hence being in favor of parliamentarism fails to signal any clear partisan loyalties, just as it fails to be a signal of altruistic traits.

Perhaps also see this interview with Tiago. [Edited to add links to some of the studies.]

New book — "Suffering-Focused Ethics: Defense and Implications"

The book is now also available in audiobook and hardcover formats, and is free on Kindle as well.

Forecasting Transformative AI: Are we "trending toward" transformative AI? (How would we know?)

I must admit that I’m quite confused about some of the key definitions employed in this series, and, in part for that reason, I’m often confused about what claims are being made. Specifically, I’m confused about the definitions of “transformative AI” and “PASTA”, and find them to be more vague and/or less well-chosen than what sometimes seems assumed here. I'll try to explain below.

1. Transformative AI (TAI)

1.1 The simple definition

The simple definition of TAI used here is "AI powerful enough to bring us into a new, qualitatively different future". This definition seems quite problematic given how vague it is. Not that it is entirely meaningless, of course, as it surely does give some indication as to what we are talking about, yet it is far from meeting the bar that someone like Tetlock would require for us to track predictions, as a lot of things could be argued to (not) count as “a new, qualitatively different future.”

1.2 The Industrial Revolution definition

A slightly more elaborate definition found elsewhere, and referred to in a footnote in this series, is “software (i.e. a computer program or collection of computer programs) that has at least as profound an impact on the world’s trajectory as the Industrial Revolution did.” Alternative version of this definition: “AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution.”

This might be a bit more specific, but it again seems to fall short of the Tetlock bar: what exactly do we mean by the term “the world’s trajectory”, and how would we measure an impact on it that is “at least as profound” as that of the Industrial Revolution?

For example, the Industrial Revolution occurred (by some definitions) roughly from 1760 to 1840, about 80 years during which the world economy got almost three times bigger, and we began to see the emergence of a new superpower, the United States. This may be compared to the last 80 years, from 1940 to 2020, what we may call “The Age of the Computer”, during which the economy has doubled almost five times (i.e. it’s roughly 30 times bigger). (In fact, by DeLong’s estimates, the economy more than tripled, i.e. surpassed the relative economic growth of the IR, in just the 25 years from 1940 to 1965.) And we saw the fall of a superpower, the Soviet Union; the rise of a new one, China; and the emergence of international institutions such as the EU and the UN.
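The arithmetic behind this comparison can be checked directly. The following sketch uses the rough multipliers stated above (the ~3x and ~30x figures are approximate estimates, not precise data):

```python
import math

# Industrial Revolution (roughly 1760-1840): world economy ~3x bigger
ir_years, ir_factor = 80, 3
ir_doublings = math.log2(ir_factor)          # ~1.6 doublings
ir_annual = ir_factor ** (1 / ir_years) - 1  # ~1.4% average annual growth

# "Age of the Computer" (1940-2020): economy doubled almost five times
ac_years, ac_doublings = 80, 5
ac_factor = 2 ** ac_doublings                # ~32x bigger
ac_annual = ac_factor ** (1 / ac_years) - 1  # ~4.4% average annual growth

print(f"Industrial Revolution: {ir_doublings:.1f} doublings, {ir_annual:.1%}/yr")
print(f"1940-2020: {ac_factor}x larger, {ac_annual:.1%}/yr")
```

So over an equal 80-year window, the later period saw roughly three times the average annual growth rate of the Industrial Revolution itself.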

So doesn’t “The Age of the Computer” already have a plausible claim to having had “at least as profound an impact on the world’s trajectory as the Industrial Revolution did”, even if no further growth were to occur? And by extension, could one not argue that the software of this age already has a plausible claim to having “precipitated” a transition comparable to this revolution? (This hints at the difficulty of specifying what counts as sufficient “precipitation” relative to the definition above: after all, we could not have grown the economy as much as we have over the last 80 years were it not for software, so existing software has clearly been a necessary and even a major component; yet it has still just been one among a number of factors accounting for this growth.)

1.3 The growth definition

A definition that seems more precise, and which has been presented as an operationalization of the previous definition, is phrased in terms of growth of the world economy, namely as “software which causes a tenfold acceleration in the rate of growth of the world economy (assuming that it is used everywhere [and] that it would be economically profitable to use it).”

I think this definition is also problematic, in that it fails in significant ways to capture what people are often worried about in relation to AI.

First, there is the relatively minor point that it is unclear in what cases we could be justified in attributing a tenfold acceleration in economic growth to software (were such an acceleration to occur), rather than to a number of different factors that may all be similarly important, as was arguably the case in the Industrial Revolution.

For instance, if the rate of economic growth were to increase tenfold without software coming to play a significantly larger role in the economy than it does today, i.e. if its share of the world economy were to remain roughly constant, yet with software still being a critical component for this growth, would this software qualify as TAI by the definition above? (Note that our software can get a lot more advanced in an absolute sense even as its relative role in the economy remains largely the same.) It’s not entirely clear. (Not even if we consult the more elaborate “Definition #2” of TAI provided here.) And it’s not entirely irrelevant either, since economic growth appears to have been driven by an interplay of many different factors historically, and so the same seems likely to be true in the future.

But more critical, I think, is that the growth definition seems to exclude a large class of scenarios that would appear to qualify as “transformative AI” in the qualitative sense mentioned above, and scenarios that many concerned about AI would consider “transformative” and important. It is, after all, entirely conceivable, and arguably plausible, that we could get software that “would bring us into a new, qualitatively different future" without growth rates changing much. Indeed, growth rates could decline significantly, such that the world economy only grows by, e.g., one percent a year, and we could still — if such growth were to play out for another, say, 150 years — end up with “transformative AI” in the sense(s) that people are most worried about, and which could in principle entail a “value drift” and “lock-in” just as much as more rapidly developed AI.

I guess a reply might be that these are just very rough definitions and operationalizations, and that one shouldn’t take them to be more than that. But it seems that they often are taken to be more than that; for instance, the earlier-cited document that provides the growth definition appears to claim that it “best captures what we ultimately care about as philanthropists”.

I think it is worth being clear that the definitions discussed above are in fact very vague and/or that they diverge in large and important ways from the AI scenarios people often worry about, including many of the scenarios that seem most plausible.


2. PASTA

PASTA was defined as: “AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement.”

This leaves open how much of a speed-up we are talking about. It could be just a marginal speed-up (relative to previous growth rates), or it could be a speed-up by orders of magnitude. But in some places it seems that the latter is implicitly assumed.

One might, of course, argue that automating all human activities related to scientific and technological progress would have to imply a rapid speed-up, but this is not necessarily the case. It is conceivable, and in my view quite likely, that such automation could happen very gradually, and that we could transition to fully or mostly automated science in a manner that implies growth rates that are similar to those we see today.

We have, after all, already automated or outsourced much of science; past scientists might well say that, relative to their perspective, the vast majority of science has been automated, with science-related calculations, illustrations, simulations, manufacturing, etc. now mostly done by computers and other machines. And this trend could well continue without being more explosive than the growth we have seen so far. In particular, the step from 90 percent to 99 percent automated science (or across any similar interval) could happen over years, at a familiar and fairly steady growth rate.

I think it’s worth being clear that the intuition that fully automated science is in some sense inevitable (assuming continued technological progress) does not imply that a growth explosion is inevitable, or even that such an explosion is more likely to happen than not.

Forecasting transformative AI: what's the burden of proof?

Thanks for your reply :-)

Most of your post seems to be arguing that current economic trends don't suggest a coming growth explosion.

That's not quite how I'd summarize it: four of the six main points/sections (the last four) are about scientific/technological progress in particular. So I don't think the reasons listed are mostly a matter of economic trends in general. (And I think "reasons listed" is an apt way to put it, since my post mostly lists some reasons to be skeptical of a future growth explosion — and links to some relevant sources — as opposed to making much of an argument.)

This post is arguing not "Current economic trends suggest a growth explosion is near" but rather "A growth explosion is plausible enough (and not strongly enough contraindicated by current economic trends) that we shouldn't too heavily discount separate estimates implying that transformative AI will be developed in the coming decades."

I get that :-) But again, most of the sections in the cited post were in fact about scientific and technological trends in particular, and I think these trends do support significantly lower credences in a future growth explosion than the ones you hold. For example, the observation that new scientific insights per human have declined rapidly suggests that even getting digital people might not be enough to get us to a growth explosion, as most of the insights may have been plucked already. (I make some similar remarks here.)

Additionally, one of the things I had in mind with my remark in the earlier comment relates to the section on economic growth, which says:

My main response is that the picture of steady growth - "the world economy growing at a few percent per year" - gets a lot more complicated when we pull back and look at all of economic history, as opposed to just the last couple of centuries. From that perspective, economic growth has mostly been accelerating, and projecting the acceleration forward could lead to very rapid economic growth in the coming decades.

In relation to this point in particular, I think the observation mentioned in the second section of my post seems both highly relevant and overlooked, namely that if we take a nerd-dive into the data and look at doublings, we have actually seen an unprecedented deceleration (in terms of how the growth rate has changed across doublings). And while this does not by any means rule out a future growth explosion, I think it is an observation that should be taken into account, and it is perhaps the main reason to be skeptical of a future growth explosion at the level of long-run growth trends. So that would be the kind of reason I think should ideally have been discussed in that section. Hope that helps clarify a bit where I was coming from.

Forecasting transformative AI: what's the burden of proof?

I don't feel this post engages with the strongest reasons to be skeptical of a growth explosion. The following post outlines what I would consider some of the strongest such reasons:

MagnusVinding's Shortform

An argument in favor of (fanatical) short-termism?

[Warning: potentially crazy-making idea.]

Section 5 in Guth, 2007 presents an interesting, if unsettling, idea: on some inflationary models, new universes continuously emerge at an enormous rate, which in turn means (maybe?) that the grander ensemble of pocket universes consists disproportionately of young universes.

More precisely, Guth writes that, "in each second the number of pocket universes that exist is multiplied by a factor of exp{10^37}." Thus, naively, we should expect earlier points in a given pocket universe's timeline to vastly outnumber later points — by a factor of exp{10^37} per second!

(A potentially useful way to visualize the picture Guth draws is in terms of a branching tree, where for each older branch, there are many more young ones, and this keeps being true as the new, young branches grow and spawn new branches.)

If this were true, or even if there were a far weaker universe generation process to this effect (say, one that multiplied the number of pocket universes by two for each year or decade), it would seem that we should, for acausal reasons, mostly prioritize the short-term future, perhaps even the very short-term future.
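To get a feel for how strongly even the far weaker process would weight earlier times, here is a toy calculation (the doubling-per-decade rate is the hypothetical weaker process mentioned above, not Guth's exp{10^37} figure):

```python
# Toy model: if the number of pocket universes doubles every decade,
# then across the ensemble, moments that occur `years_earlier` in a
# universe's timeline are 2^(years_earlier / 10) times more numerous
# than the corresponding later moments.
def relative_weight(years_earlier, doubling_period_years=10):
    """How many times more common a moment `years_earlier` is."""
    return 2 ** (years_earlier / doubling_period_years)

print(relative_weight(10))    # 10 years earlier: 2x more common
print(relative_weight(100))   # 100 years earlier: 1024x more common
print(relative_weight(1000))  # 1000 years earlier: ~1e30x more common
```

Even at this vastly slower rate, the weighting toward earlier (i.e. near-term) moments grows exponentially, which is what drives the short-termist implication; at Guth's stated rate the skew would be unimaginably stronger.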

Guth tentatively speculates whether this could be a resolution of sorts to the Fermi paradox, though he also notes that he is skeptical of the framework that motivates his discussion:

Perhaps this argument explains why SETI has not found any signals from alien civilizations [because if there were an earlier civ at our stage, we would be far more likely to be in that civ], but I find it more plausible that it is merely a symptom that the synchronous gauge probability distribution is not the right one.

I'm not claiming that the picture Guth outlines is likely to be correct. It's highly speculative, as he himself hints, and there are potentially many ways to avoid it — for example, contra Guth's preferred model, it may be that inflation eventually stops, cf. Hawking & Hertog, 2018, and thus that each point in a pocket universe's timeline will have equal density in the end; or it might be that inflationary models are not actually right after all.

That said, one could still argue that the implication Guth explores — which is potentially a consequence of a wide variety of (eternal) inflationary models — is a weak reason, among many other reasons, to give more weight to short-term stuff (after all, in EV terms, the enormous rate of universe generation suggested by Guth would mean that even extremely small credences in something like his framework could still be significant). And perhaps it's also a weak reason to update in favor of thinking that as yet unknown unknowns will favor a short(er)-term priority to a greater extent than we had hitherto expected, cf. Brian Tomasik's discussion of how we might model unknown unknowns.

AMA: Tobias Baumann, Center for Reducing Suffering

Concerning how EA views on this compare to the views of the general population, I suspect they aren’t all that different. Two bits of weak evidence:


Brian Tomasik did a small, admittedly unrepresentative and imperfect Mechanical Turk survey in which he asked people the following:

At the end of your life, you'll get an additional X years of happy, youthful, and interesting life if you first agree to be covered in gasoline and burned in flames for one minute. How big would X have to be before you'd accept the deal?

More than 40 percent said that they would not accept it “regardless of how many extra years of life” they would get (see the link for some discussion of possible problems with the survey).


The Future of Life Institute did a Superintelligence survey in which they asked, “What should a future civilization strive for?” A clear plurality (roughly a third) answered “minimize suffering” — a rather different question, to be sure, but it does suggest that a strong emphasis on reducing suffering is very common.

1. Do you know about any good articles etc. that make the case for such views?

I’ve tried to defend such views in chapters 4 and 5 here (with replies to some objections in chapter 8). Brian Tomasik has outlined such a view here and here.

But many authors have in fact defended such views about extreme suffering. Among them are Ingemar Hedenius (see Knutsson, 2019); Ohlsson, 1979 (review); Mendola, 1990; 2006; Mayerfeld, 1999, p. 148, p. 178; Ryder, 2001; Leighton, 2011, ch. 9; Gloor, 2016, II.

And many more have defended views according to which happiness and suffering are, as it were, morally orthogonal.

2. Do you think such or similar views are necessary to prioritize S-Risks?

As Tobias said: No. Many other views can support such a priority. Some of them are reviewed in chapters 1, 6, and 14 here.

3. Do you think most people would/should vote in such a way if they had enough time to consider the arguments?

I say a bit on this in footnote 23 in chapter 1 and in section 4.5 here.

4 For me it seems like people constantly trade happiness for suffering ... Those are reasons for me to believe that most people ... are also far from expecting 1:10^17 returns or even stating there is no return which potentially could compensate any kind of suffering.

Many things to say on this. First, as Tobias hinted, acceptable intrapersonal tradeoffs cannot necessarily be generalized to moral interpersonal ones (cf. sections 3.2 and 6.4 here). Second, there is the point Jonas made, which is discussed a bit in section 2.4 in ibid. Third, tradeoffs concerning mild forms of suffering that a person agrees to undergo do not necessarily say much about tradeoffs concerning states of extreme suffering that the sufferer finds unbearable and is unable to consent to (e.g. one may endorse lexicality between very mild and very intense suffering, cf. Klocksiem, 2016, or think that voluntarily endured suffering occupies a different moral dimension than does suffering that is unbearable and which cannot be voluntarily endured). More considerations of this sort are reviewed in section 14.3, “The Astronomical Atrocity Problem”, here.

AMA: Tobias Baumann, Center for Reducing Suffering

[Warning: potentially disturbing discussion of suicide and extreme suffering.]

I agree with many of the points made by Anthony. It is important to control for these other confounding factors, and to make clear in this thought experiment that the person in question cannot reduce more suffering for others, and that the suicide would cause less suffering in expectation (which is plausibly false in the real world, also considering the potential for suicide attempts to go horribly wrong, Humphry, 1991, “Bizarre ways to die”). (So to be clear, and as hinted by Jonas, even given pure NU, trying to commit suicide is likely very bad in most cases, Vinding, 2020, 8.2.)

Another point one may raise is that our intuitions cannot necessarily be trusted when it comes to these issues, e.g. because we have an optimism bias (which suggests that we may, at an intuitive level, wholly disregard these tail risks); because we evolved to prefer existence almost no matter the (expected) costs (Vinding, 2020, 7.11); and because we intuitively have a very poor sense of how bad the states of suffering in question are (cf. ibid., 8.12).

Intuitions also differ on this matter. One EA told me that he thinks we are absolutely crazy for staying alive (disregarding our potential to reduce suffering), especially since we have no off-switch in case things go terribly wrong. This may be a reason to be less sure of one's immediate intuitions on this matter, regardless of what those intuitions might be.

I also think it is important to highlight, as Tobias does, that there are many alternative views that can accommodate the intuition that the suicide in question would be bad, apart from a symmetry between happiness and suffering, or upside-focused views more generally. For example, there is a wide variety of harm-focused views, including but not restricted to negative consequentialist views in particular, that will deem such a suicide bad, and they may do so for many different reasons, e.g. because they consider one or more of the following an even greater harm (in expectation) than the expected suffering averted: the frustration of preferences, premature death, lost potential, the loss of hard-won knowledge, etc. (I say a bit more about this here and here.)

Relatedly, one should be careful about drawing overly general conclusions from this case. For example, the case of suicide does not necessarily say much about different population-ethical views, nor about the moral importance of creating happiness vs. reducing suffering in general. After all, as Tobias notes, quite a number of views will say that premature deaths are mostly bad while still endorsing the Asymmetry in population ethics, e.g. due to conditional interests (St. Jules, 2019; Frick, 2020). And some views that reject a symmetry between suffering and happiness will still consider death very bad on the basis of pluralist moral values (cf. Wolf, 1997, VIII; Mayerfeld, 1996, “Life and Death”; 1999, p. 160; Gloor, 2017; 1, 4.3, 5).

Similar points can be made about intra- vs. interpersonal tradeoffs: one may think that it is acceptable to risk extreme suffering for oneself without thinking that it is acceptable to expose others to such a risk for the sake of creating a positive good for them, such as happiness (Shiffrin, 1999; Ryder, 2001; Benatar & Wasserman, 2015, “The Risk of Serious Harm”; Harnad, 2016; Vinding, 2020, 3.2).

(Edit: And note that a purely welfarist view entailing a moral symmetry between happiness and suffering would actually be a rather fragile basis on which to rest the intuition in question, since it would imply that people should painlessly end their lives if their expected future well-being were just below "hedonic zero", even if they very much wanted to keep on living (e.g. because of a strong drive to accomplish a given goal). Another counterintuitive theoretical implication of such a view is that one would be obliged to end one's life, even in the most excruciating way, if it in turn created a new, sufficiently happy being, cf. the replacement argument discussed in Jamieson, 1984; Pluhar, 1990. I believe many would find these implications implausible as well, even on a purely theoretical level, suggesting that what is counterintuitive here is the complete reliance on a purely welfarist view — not necessarily the focus on reducing suffering over increasing happiness.)
