Thanks. I don’t agree with your interpretation of the survey data. I'll quote another sentence from the essay that makes my position on this clearer:
The majority of the population of Taiwan simply want to be left alone, as a sovereign nation—which they already are, in every practical sense.
The position "declare independence as soon as possible" is unpopular for an obvious reason that I explained in the post. Namely, if Taiwan made a formal declaration of independence, it would potentially trigger a Chinese invasion.
"Maintaining the status quo" is, for t... (read more)
I like that you admit that your examples are cherry-picked. But I'm actually curious what a non-cherry-picked track record would show. Can people point to Yudkowsky's successes?
While he's not single-handedly responsible, he led the movement to take AI risk seriously at a time when approximately no one was talking about it, and it has now attracted the interest of top academics. This isn't a complete track record, but it's still a very important data point. It's a bit like if he were the first person to say that we should take nuclear war seriously, and then five years later people are starting to build nuclear bombs and academics realize that nuclear war is very plausible.
What I view as the Standard Model of Longtermism is something like the following:
This model doesn't predict tha... (read more)
I take issue with your statement that longtermists neglect suffering because they just maximize total (symmetric) welfare. I don't think this statement is actually true, though I agree if you just mean that, pragmatically, most longtermists aren't suffering-focused.
Hilary Greaves and William MacAskill loosely define strong longtermism as, "the view that impact on the far future is the most important feature of our actions today." Longtermism is therefore completely agnostic about whether you're a suffering-focused altruist, or a traditional welfarist in line wit... (read more)
There is an estimate of 24.9 million people in slavery, of which 4.8 million are sexually exploited! Very likely these estimates are exaggerated, and the conditions are not as bad as one would think hearing those words; even if they were, the conditions might not be as bad as battery cages. But my broader point is that the world really does seem very broken: there are problems of huge scale even restricting to human welfare, and you still have to prioritize, which means ignoring some truly massive problems.
I agree, there is already a ... (read more)
I want to understand the main claims of this post better. My understanding is that you have made the following chain of reasoning:
High inflation is bad for two reasons: (1) real wages decline, especially among the poor, (2) inflation causes populism, which may cause Democrats to lose the 2022 midterm elections.
It's quite nuanced. There are real wage declines on average, but some poor people might have seen small real wage increases, while others saw real wage losses. For instance, older people's wages seem stickier according to the wage tracker here. OpenPhil's grantee EmployAmerica has an interesting analysis of the various factors, and one could perhaps reasonably disag... (read more)
Here are some of my reasons for disliking high inflation, which I think are similar to the reasons of most economists:
Inflation makes long-term agreements harder, since they become less useful unless indexed for inflation (see the sketch after this list).
Inflation imposes costs on holding wealth in safe, liquid forms such as bank accounts, or dollar bills. That leads people to hold more wealth in inflation-proof forms such as real estate, and less in bank accounts, reducing their ability to handle emergencies.
Inflation creates a wide variety of transaction costs: stores need to change the... (read more)
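To make the indexation point in the first item concrete, here is a minimal sketch; the contract amount and CPI values are illustrative, not drawn from any source:

```python
# Sketch: indexing a long-term payment to inflation. CPI values are illustrative.
def indexed_payment(base_payment: float, cpi_at_signing: float, cpi_now: float) -> float:
    """Scale a contracted payment by cumulative inflation since signing."""
    return base_payment * (cpi_now / cpi_at_signing)

# A $1,000/month agreement signed when CPI was 260, revisited when CPI is 280:
print(round(indexed_payment(1000.0, 260.0, 280.0), 2))  # 1076.92, ~7.7% more nominal dollars
```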
More voters have seen their real wages go down than up (mostly in the lower income brackets).
What is your source for this claim? By contrast, this article says,
Between roughly 56 and 57 percent of occupations, largely concentrated in the bottom half of the income distribution, are seeing real hourly wage increases.
And they show this chart,
Here's another article that cites economists saying the same thing.
The most recent source for this is Jason Furman on Twitter in March:
"Average hourly earnings have been declining for more than a year as inflation has outpaced nominal wage gains. This is larger than any 12-month pre-pandemic decline since 1980*. VERY serious composition issues affecting the exact trajectory so PLEASE read next tweets too.
The first article you cite only covers data through October '21, and six months can make a difference. I also agree with the claim of the second article you cite, "Inflation is high, but wage gains for low-income workers... (read more)
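As background for the disagreement above, a minimal sketch of the real-wage arithmetic that the articles and Furman's tweet both rely on (all numbers below are made up for illustration):

```python
# Sketch: real wage growth deflates nominal wage growth by inflation.
def real_wage_growth(nominal_growth: float, inflation: float) -> float:
    return (1 + nominal_growth) / (1 + inflation) - 1

# 5% nominal wage gains against 8% inflation: real wages fall...
print(f"{real_wage_growth(0.05, 0.08):+.2%}")  # -2.78%

# ...while a low-wage occupation with 10% nominal gains sees real wages rise:
print(f"{real_wage_growth(0.10, 0.08):+.2%}")  # +1.85%
```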
Here's a quote from Wei Dai, speaking on February 26th, 2020:
... (read more)
Here's another example, which has actually happened 3 times to me already:
- The truly ignorant don't wear masks.
- Many people wear masks or encourage others to wear masks in part to signal their knowledge and conscientiousness.
- "Experts" counter-signal with "masks don't do much", "we should be evidence-based" and "WHO says 'If you are healthy, you only need to wear a mask if you are taking care of a person with suspected 2019-nCoV infection.'"
- I respond by citing actual evidence in the form of a m
Thanks for the continued discussion.
If I'm understanding correctly the main point you're making is that I probably shouldn't have said this:
There is little room for improvement here...
I think I'm making two points. The first is that, yes, there is substantial room for improvement here. But the second point still stands: analyzing the situation with Taiwan is crucial if we seek to effectively reduce nuclear risk.
I do not think it was wrong to focus on the trade war. It depends on your goals. If you wanted to promote quick, actionable and robust a... (read more)
My reason for focusing on the trade war, though, is that trade de-escalation would have very few downsides and would probably be a substantial positive all on its own, before even considering the potential positive effects it could have on relations with China and possibly nuclear risk.
I agree. I think we're both on the same page about the merits of ending the trade war, as an issue by itself.
The optimal policy here is far from clear to me.
Right. From my perspective, this is what makes focusing on Taiwan precisely the right thing to do in a high-level analysis.... (read more)
You mention ending the trade war as the main mechanism by which we could ease US-China tensions. I agree that this policy change seems especially tractable, but it does not appear to me to be an effective means of avoiding a global conflict. As Stefan Schubert pointed out, the tariffs appear to have a very modest effect on either the American or Chinese economy.
The elephant in the room, as you alluded to, is Taiwan. A Chinese invasion of Taiwan, and subsequent intervention by the United States, is plausibly the most likely trigger for World War 3 in the ne... (read more)
One question I have is whether this is possible, and if so, how difficult it would be.
I think it would be very difficult without human assistance. I don't, for example, think that aliens could hijack the computer hardware we use to process potential signals (though, it would perhaps be wise not to underestimate billion-year-old aliens).
We can imagine the following alternative strategy of attack. Suppose the aliens sent us the code to an AI with the note "This AI will solve all your problems: poverty, disease, world hunger etc.". We can't verify that the AI will actually... (read more)
I don't find the scenario plausible. I think the grabby aliens model (cited in the post) provides a strong reason to doubt that there will be many so-called "quiet" aliens that hide their existence. Moreover, I think malicious grabby (or loud) aliens would not wait for messages before striking, which the Dark Forest theory relies critically on. See also section 15 in the grabby aliens paper, under the heading "SETI Implications".
In general, I don't think there are significant risks associated with messaging aliens (a thesis that other EAs have argued for, along these lines).
I think failing to act can itself be atrocious. For example, the failure of rich nations to intervene in the Rwandan genocide was an atrocity. Further, I expect Peter Singer to agree that this was an atrocity. Therefore, I do not think that deontological commitments are sufficient to prevent oneself from being party to atrocities.
My interpretation of Peter Singer's thesis is that we should be extremely cautious about acting on a philosophy that claims that an issue is extremely important, since we should be mindful that such philosophies have been used to justify atrocities in the past. But I have two big objections to his thesis.
First, it actually matters whether the philosophy we are talking about is a good one. Singer provides a comparison to communism and Nazism, both of which were used to justify repression and genocide during the 20th century. But are either of these philosop... (read more)
I'm happy with more critiques of total utilitarianism here. :)
For what it's worth, I think there are a lot of people unsatisfied with total utilitarianism within the EA community. In my anecdotal experience, many longtermists (including myself) are suffering-focused. This often takes the form of negative utilitarianism, but other variants of suffering-focused ethics exist.
I may have missed it, but I didn't see any part of the paper that explicitly addressed suffering-focused longtermists. (One part mentioned, "Preventing existential risk is not prima... (read more)
Another strange implication is that enough worlds of utopia plus pinprick would be worse than a world of pure torture.
I view this implication as merely the consequence of two facts: (1) utilitarians generally endorse torture in the torture vs. dust specks thought experiment, (2) negative preference utilitarians don't find value in creating new beings just to satisfy their preferences.
The first fact is shared by all non-lexical varieties of consequentialism, so it doesn't appear to be a unique critique of negative preference utilitarianism.
The second ... (read more)
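To make the aggregation point in fact (1) concrete, here is a minimal formalization; ε and T are hypothetical disvalue magnitudes I'm introducing for illustration, not figures from the discussion:

```latex
% Sketch: why any non-lexical aggregative view yields the implication above.
% Let \epsilon > 0 be the disvalue of one pinprick and T the disvalue of a
% world of pure torture.
\[
  \underbrace{N \cdot \epsilon}_{\text{$N$ utopia-plus-pinprick worlds}} > T
  \quad\Longleftrightarrow\quad
  N > \frac{T}{\epsilon},
\]
% which holds for some finite N whenever \epsilon > 0. A lexical view blocks
% the implication by refusing to let pinpricks sum to torture-level disvalue.
```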
Moving from our current world to utopia + pinprick would be a strong moral improvement under NPU. But you're right that if the universe was devoid of all preference-having beings, then creating a utopia with a pinprick would not be recommended.
World destruction would violate a ton of people's preferences. Many people who live in the world want it to keep existing. Minimizing preference frustration would presumably give people what they want, rather than killing them (something they don't want).
I'm curious whether you think your arguments apply to negative preference utilitarianism (NPU): the view that we ought to minimize aggregate preference frustration. It shares many features with ordinary negative hedonistic utilitarianism (NHU), such as,
But NPU also has several desirable properties that are not shared with NHU:
For a long time, I've believed in the importance of not being alarmist. My immediate reaction to almost anybody who warns me of impending doom is: "I doubt it". And sometimes, "Do you want to bet?"
So, writing this post was a very difficult thing for me to do. On an object level, I realized that the evidence coming out of Wuhan looked very concerning. The more I looked into it, the more I thought, "This really seems like something someone should be ringing the alarm bells about." But for a while, very few people were predicting anything big on respectable f... (read more)
It was much less disruptive than revolutions like those in France, Russia, or China, which attempted to radically re-order their governments, economies, and societies. In a sense, I guess you could think of the US revolution as a bit like a mutiny that then kept largely the same course as the previous captain anyway.
I agree with the weaker claim here that the US revolution didn't radically re-order "government, economy and society." But I think you might be exaggerating how conservative the US revolution was.
The United States is widely considered to be ... (read more)
The main way I could see an AGI taking over the world without being exceedingly superhuman would be if it hid its intentions well enough so that it could become trusted enough to be deployed widely and have control of lots of important infrastructure.
My understanding is that Eliezer's main argument is that the first superintelligence will have access to advanced molecular nanotechnology, an argument that he touches on in this dialogue.
I could see breaking his thesis up into a few potential steps,
The first radically superhuman AGI will have the unique ability to deploy advanced molecular nanomachines, capable of constructing arbitrary weapons, devices, and nanobot swarms.
Why believe this is something an AGI could efficiently achieve, and that it likely would?
Potential ways around this that come to mind:
Good ideas. I have a few more,
What is the likely market size for this platform?
I'm not sure, but I just opened a Metaculus question about this, and we should begin getting forecasts within a few days.
Eliezer Yudkowsky wrote a sequence on ethical injunctions where he argued why things like this were wrong (from his own, longtermist perspective).
And it feels terribly convenient for the longtermist to argue they are in the moral right while making no effort to counteract or at least not participate in what they recognize as moral wrongs.
This is only convenient for the longtermist if they do not have equivalently demanding obligations to the longterm. Otherwise we could turn it around and say that it's "terribly convenient" for a shorttermist to ignore the longterm future too.
Regarding the section on estimating the probability of AI extinction, I think a useful framing is to focus on disjunctive scenarios where AI ends up being used. If we imagine a highly detailed scenario where a single artificial intelligence goes rogue, then of course these types of things will seem unlikely.
However, my guess is that AI will gradually become more capable and integrated into the world economy, and there won't be a discrete point where we can say "now the AI was invented." Over the broad course of history, we have witnessed numerous instance... (read more)
A trip to Mars that brought back human passengers also has the chance of bringing back microbial Martian passengers. This could be an existential risk if microbes from Mars harm our biosphere in a severe and irreparable manner.
From Carl Sagan in 1973, "Precisely because Mars is an environment of great potential biological interest, it is possible that on Mars there are pathogens, organisms which, if transported to the terrestrial environment, might do enormous biological damage - a Martian plague, the twist in the plot of H. G. Wells' War of the ... (read more)
I recommend the paper The Case for Strong Longtermism, as it covers and responds to many of these arguments in a precise philosophical framework.
It seems to me that there's a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else.
If this is true, is there a post that expands on this argument, or is it something left implicit?
I've since added a constraint into my innovation acceleration efforts, and now am basically focused on "asymmetric, wisdom-constrained innovation."
I think Bostrom has talked about something similar: namely, differential technological development (he talks about technology rather than... (read more)
Growth will have flowthrough effects on existential risk.
This makes sense as an assumption, but the post itself didn't argue for this thesis at all.
If the argument was that the best way to help the longterm future is to minimize existential risk, and the best way to minimize existential risk is by increasing economic growth, then you'd expect the post to primarily talk about how economic growth decreases existential risk. Instead, the post focuses on human welfare, which is important, but secondary to the argument you are making.
This is somethin... (read more)
I'm confused what type of EA would primarily be interested in strategies for increasing economic growth. Perhaps someone can help me understand this argument better.
The reason presented for why we should care about economic growth seemed to be a long-termist one. That is, economic growth has large payoffs in the long-run, and if we care about future lives equally to current lives, then we should invest in growth. However, Nick Bostrom argued in 2003 that a longtermist utilitarian should primarily care about minimizing existential risk, rather than inc... (read more)
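To make the structure of Bostrom's comparison concrete, here is a toy calculation; V, p, the speedup, the risk reduction, and the time horizon are all made-up magnitudes, not figures from the post or the paper:

```python
# Toy sketch of Bostrom's (2003) comparison between accelerating growth and
# reducing existential risk. All magnitudes below are illustrative assumptions.
V = 1e35                 # assumed total value of the reachable future (arbitrary units)
p = 0.10                 # assumed baseline probability of existential catastrophe
years_remaining = 1e9    # assumed horizon over which that value is realized

# Intervention A: advance all future value by 10 years (a pure speedup).
speedup_gain = (1 - p) * V * (10 / years_remaining)

# Intervention B: cut existential risk by one percentage point.
risk_reduction_gain = 0.01 * V

print(f"speedup gain:        {speedup_gain:.2e}")         # 9.00e+26
print(f"risk-reduction gain: {risk_reduction_gain:.2e}")  # 1.00e+33, vastly larger
```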
I have now posted, as a comment on LessWrong, my summary of some recent economic forecasts and whether they underestimate the impact of the coronavirus. You can help me by critiquing my analysis.
I suspect the reflection is going to be mostly used by our better and wiser selves for settling details/nuances within total (mostly hedonic) utilitarianism, rather than discovering (or selecting) some majorly different normative theory.
Is this a prediction, or is this what you want? If it's a prediction, I'd love to hear your reasons why you think this would happen.
My own prediction is that this won't happen. But I'd be happy to see some reasons why I am wrong.
I hold a few core ethical ideas that are extremely unpopular: the idea that we should treat the natural suffering of animals as a grave moral catastrophe, the idea that old age and involuntary death are the number one enemy of humanity, and the idea that we should treat so-called farm animals with a very high level of compassion.
Given the unpopularity of these ideas, you might be tempted to think that the reason they are unpopular is that they are exceptionally counterintuitive. But is that the case? Do you really need a modern education and philosophical t... (read more)
Right, I wasn't criticizing cause prioritization. I was criticizing the binary attitude people had towards anti-aging. Imagine if people dismissed AI safety research because, "It would be fruitless to ban AI research. We shouldn't even try." That's what it often sounds like to me when people fail to think seriously about anti-aging research. They aren't even considering the idea that there are other things we could do.
Now look again at your bulleted list of "big" indirect effects, and remember that you can only hasten them, not enable them. To me, this consideration makes the impact we can have on them seem no more than a rounding error compared to the impact we can have due to LEV (each year you bring LEV closer saves 36,500,000 lives of 1,000 QALYs each; this is a conservative estimate I made here).
This isn't clear to me. In Hilary Greaves and William MacAskill's paper on strong longtermism, they argue that unless what we do now impacts a critical lo... (read more)
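For reference, the arithmetic behind the 36,500,000 figure quoted above is straightforward (a sketch: the roughly 100,000 aging-related deaths per day is the commonly cited input, and the 1,000-QALYs-per-life valuation comes from the quoted estimate):

```python
# Sketch: the arithmetic behind "each year LEV comes sooner saves 36,500,000 lives".
aging_deaths_per_day = 100_000   # commonly cited order-of-magnitude estimate
days_per_year = 365
qalys_per_life = 1_000           # the valuation used in the quoted estimate

lives_per_year = aging_deaths_per_day * days_per_year
print(f"{lives_per_year:,}")                     # 36,500,000 lives per year of earlier LEV
print(f"{lives_per_year * qalys_per_life:.2e}")  # 3.65e+10 QALYs per year of earlier LEV
```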
Thanks for the bullet points and thoughtful inquiry!
I've taken this as an opportunity to lay down some of my thoughts on the matter; this turned out to be quite long. I can expand and tidy this into a full post if people are interested, though it sounds like it would overlap somewhat with what Matthew's been working on.
I am very interested in a full post, as right now I think this area is quite neglected and important groundwork can be completed.
My guess is that most people who think about the effects of anti-aging research don't think very... (read more)
There are more ways, yes, but I think they're individually much less likely than the ways in which they can get better, assuming they're somewhat guided by reflection and reason.
Again, I seem to have different views about the extent to which moral views are driven by reflection and reason. For example, is the recent trend towards Trumpian populism driven by reflection and reason? (If you think this is not a new trend, then I ask you to point to previous politicians who share the values of the current administration.)
I expect future generations, comp... (read more)
I'm a moral anti-realist, too. You can still be a moral anti-realist and believe that your own ethical views have improved in some sense.
Sure. There are a number of versions of moral anti-realism. It makes sense for some people to think that moral progress is a real thing. My own version of ethics says that morality doesn't run that deep and that personal preferences are pretty arbitrary (though I do agree with some reflection).
In the same way, I think the views of future generations can end up better than my views will ever be.
Again, that makes s... (read more)
I think newer generations will tend to grow up with better views than older ones (although older generations could have better views than younger ones at any moment, because they're more informed), since younger generations can inspect and question the views of their elders, alternative views, and the reasons for and against with less bias and attachment to them.
This view assumes that moral progress is a real thing, rather than just an illusion. I can personally understand this point of view if the younger generations shared the same terminal values, and me... (read more)
I think this could be due (in part) to biases accumulated by being in a field (and being alive) longer, not necessarily (just) brain aging.
I'm not convinced there is actually that much of a difference between long-term crystallization of habits and natural aging. I'm not qualified to say this with any sort of confidence. It's also worth being cautious about confidently predicting the effects of something like this in either direction.
Do Long-Lived Scientists Hold Back Their Disciplines? It's not clear reducing cognitive decline can make up for this or the effects of people becoming more set in their ways over time; you might need relatively more "blank slates".
In addition to what I wrote here, I'm also just skeptical that scientific progress decelerating in a few respects is actually that big of a deal. The biggest case where it would probably matter is if medical doctors themselves had incorrect theories, or engineers (such as AI developers) were using outdated ide... (read more)
Eliminating aging also has the potential for strong negative long-term effects.
Agreed. One way you can frame what I'm saying is that I'm putting forward a neutral thesis: anti-aging could have big effects. I'm not necessarily saying they would be good (though personally I think they would be).
Even if you didn't want aging to be cured, it still seems worth thinking about, because if a cure were inevitable, then preparing for a future where aging is cured is better than not preparing.
Another potentially major downside is the stagnation of r... (read more)
If I had to predict, I would say that yes, ~70% chance that most suffering (or other disvalue you might think about) will exist in artificial systems rather than natural ones. It's not actually clear whether this particular fact is relevant. Like I said, the effects of curing aging extend beyond the direct effects on biological life. Studying anti-aging can be just like studying electoral reform, or climate change in this sense.
I think to switch my position on crux 2 using only timeline arguments, you'd have to argue something like <10% chance of transformative AI in 50 years.
That makes sense. "Plausibly soonish" is pretty vague, so I pattern-matched it to something more like "by default, it will come within a few decades."
It's reasonable that, for people with different comparative advantages, the threshold for caring should be higher. If there were only a 2% chance of transformative AI in 50 years, and I were in charge of effective altruism resource allocation, I would still want some people (perhaps 20-30) to be looking into it.
In my utilitarian view, [democracy and utility maximizing procedures] are one and the same. An election is effectively just "a decision made by more than one person", thus the practical measure of democratic-ness is "expected utility of a voting procedure".
Doesn't this ignore the irrational tendencies of voters?
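To make the quoted proposal concrete, here is a minimal Monte Carlo sketch of scoring voting procedures by expected utility; the utility model and both rules are illustrative assumptions, and note that the model simply assumes voters vote their true preferences, which is exactly the irrationality worry raised above:

```python
import random

def expected_utility_of_rules(n_voters=500, n_candidates=5, n_trials=200, seed=0):
    """Score voting rules by the average social utility of the winners they pick."""
    rng = random.Random(seed)
    totals = {"random_winner": 0.0, "plurality": 0.0}
    for _ in range(n_trials):
        # Crude model: each voter assigns each candidate an i.i.d. uniform utility.
        utils = [[rng.random() for _ in range(n_candidates)] for _ in range(n_voters)]
        social = [sum(u[c] for u in utils) for c in range(n_candidates)]

        # Baseline rule: pick a winner uniformly at random.
        totals["random_winner"] += social[rng.randrange(n_candidates)]

        # Plurality: each voter votes for their top choice; most votes wins.
        votes = [0] * n_candidates
        for u in utils:
            votes[max(range(n_candidates), key=lambda c: u[c])] += 1
        totals["plurality"] += social[max(range(n_candidates), key=lambda c: votes[c])]
    return {rule: total / n_trials for rule, total in totals.items()}

print(expected_utility_of_rules())  # plurality should score well above the random baseline
```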
I like this way of thinking about AI risk, though I would emphasize that my disagreement comes a lot from my skepticism of crux 2 and in turn crux 3. If AI is far away, then it seems pretty difficult to understand how it will end up being used, and I think even when timelines are 20-30 years from now, this remains an issue [ETA: Note that also, during a period of rapid economic growth, much more intellectual progress might happen in a relatively small period of physical time, as computers could automate some parts of human intellectual labor. This implies ... (read more)
The argument you presented appears excellent to me, and I've now changed my mind on this particular point.