All of Matthew_Barnett's Comments + Replies

Preventing a US-China war as a policy priority

My assessment is that actually the opposite is true.

The argument you presented appears excellent to me, and I've now changed my mind on this particular point.

Preventing a US-China war as a policy priority

Thanks. I don’t agree with your interpretation of the survey data. I'll quote another sentence from the essay that makes my position on this clearer:

The majority of the population of Taiwan simply want to be left alone, as a sovereign nation—which they already are, in every practical sense.

The position "declare independence as soon as possible" is unpopular for an obvious reason that I explained in the post. Namely, if Taiwan made a formal declaration of independence, it would potentially trigger a Chinese invasion.

"Maintaining the status quo" is, for t... (read more)

1Michael Huang7d
No worries. I think we have different definitions of the status quo, and that is affecting our interpretation of the survey results. Your definition of the status quo is a form of independence: functional independence (or perhaps de facto independence). In that case, since all the survey results show that "Maintain status quo" is popular, independence is the most popular choice. My definition of the status quo is something in-between unification and independence, like a third way. It's the "none of the above" choice, disapproving of both unification and independence. If this definition is used, then all the survey results show that this position is the most popular choice. It's a shame that the survey question doesn't actually define what the status quo is. The status quo changes over time too, so it's hard to pin down. But perhaps that is what makes the status quo option so popular. It's a vague, undefined entity that can be interpreted whatever way you like. Anyway, for completeness, here's the full survey question from the data collection methodology [https://esc.nccu.edu.tw/upload/44/doc/6965/%E9%87%8D%E8%A6%81%E6%94%BF%E6%B2%BB%E6%85%8B%E5%BA%A6%E5%88%86%E4%BD%88%E8%B6%A8%E5%8B%A2%E5%9C%96%E8%B3%87%E6%96%99%E7%A0%94%E7%A9%B6%E6%96%B9%E6%B3%95%E8%AA%AA%E6%98%8E(methodology)202112.pdf]:
On Deference and Yudkowsky's AI Risk Estimates

I like that you admit that your examples are cherry-picked. But I'm actually curious what a non-cherry-picked track record would show. Can people point to Yudkowsky's successes?

While he's not single-handedly responsible, he led the movement to take AI risk seriously at a time when approximately no one was talking about it, which has now attracted the interest of top academics. This isn't a complete track record, but it's still a very important data point. It's a bit like if he were the first person to say that we should take nuclear war seriously, and then, five years later, people started building nuclear bombs and academics realized that nuclear war is very plausible.

8Ben Garfinkel11d
I definitely do agree with that! It's possible I should have emphasized the significance of it more in the post, rather than moving on after just a quick mention at the top. If it's of interest: I say a little more about how I think about this, in response to Gwern's comment below. (To avoid thread-duplicating, people might want to respond there [https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates?commentId=wgxz7MAsnxRuKe65N] rather than here if they have follow-on thoughts on this point.) My further comment is:
How much current animal suffering does longtermism let us ignore?

What I view as the Standard Model of Longtermism is something like the following:

  • At some point we will develop advanced AI capable of "running the show" for civilization on a high level
  • The values in our AI will determine, to a large extent, the shape of our future cosmic civilization
  • One possibility is that AI values will be alien. From a human perspective, this would cause either extinction or something equally bad.
  • To avoid that last possibility, we ought to figure out how to instill human-centered values in our machines.

This model doesn't predict tha... (read more)

2Jack Malde2mo
I'm just making an observation that longtermists tend to be total utilitarians, in which case they will want loads of beings in the future. They will want to use AI to help fulfill this purpose. Of course, maybe in the long reflection we will think more about population ethics and decide total utilitarianism isn't right, or AI will decide this for us, in which case we may not work towards a huge future. But I happen to think total utilitarianism will win out, so I'm sceptical of this.
How much current animal suffering does longtermism let us ignore?

I have an issue with your statement that longtermists neglect suffering because they just maximize total (symmetric) welfare. I think this statement isn't actually true, though I agree if you just mean that, pragmatically, most longtermists aren't suffering-focused.

Hilary Greaves and William MacAskill loosely define strong longtermism as, "the view that impact on the far future is the most important feature of our actions today." Longtermism is therefore completely agnostic about whether you're a suffering-focused altruist, or a traditional welfarist in line wit... (read more)

3Jack Malde2mo
I agree with all of that. I was objecting to the implication that longtermists will necessarily reduce suffering. Also (although I'm unsure about this), I think that the EA longtermist community will increase expected suffering in the future, as it looks like they will look to maximise the number of beings in the universe.
How much current animal suffering does longtermism let us ignore?

There is an estimate of 24.9 million people in slavery, of which 4.8 million are sexually exploited! Very likely these estimates are exaggerated, and the conditions are not as bad as one would think hearing those words, and even if they were, the conditions might not be as bad as battery cages, but my broader point is that the world really does seem like it is very broken and there are problems of huge scale even just restricting to human welfare, and you still have to prioritize, which means ignoring some truly massive problems.

I agree, there is already a ... (read more)

5Asa Cooper Stickland2mo
EDIT: I made this comment assuming the comment I'm replying to was making a critique of longtermism, but I'm no longer convinced this is the correct reading 😅 here's the response anyway: Well, it's not so much that longtermists ignore such suffering; it's that anyone who is choosing a priority (so any EA, regardless of their stance on longtermism) in our current broken system will end up ignoring (or at least not working on alleviating) many problems. For example, the problem of adults with cancer in the US is undoubtedly tragic, but it is well understood and reasonably well funded by the government and charitable organizations; I would argue it fails the 'neglectedness' part of the traditional EA neglectedness, tractability, importance system. Another example, people trapped in North Korea, I think would fail on tractability, given the lack of progress over the decades. I haven't thought about those two particularly deeply and could be totally wrong, but this is just the traditional EA framework for prioritizing among different problems, even if those problems are heartbreaking to have to set aside.
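For readers unfamiliar with the neglectedness, tractability, importance framework mentioned in the reply above, here is the standard 80,000 Hours-style factorization (a sketch added for reference; it is not spelled out in the original comment):

$$\frac{\text{good done}}{\text{extra resources}} = \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}} \times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}} \times \underbrace{\frac{\text{\% increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}$$

On this factorization, a cause like adult cancer in the US can score high on importance while scoring low on neglectedness, which is the kind of judgment the comment above is making.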
Critique of OpenPhil's macroeconomic policy advocacy

I want to understand the main claims of this post better. My understanding is that you have made the following chain of reasoning:

  1. OpenPhil funded think tanks that advocated looser macroeconomic policy since 2014.
  2. This had some non-trivial effect on actual macroeconomic policy in 2020-2022.
  3. The result of this policy was to contribute to high inflation.
  4. High inflation is bad for two reasons: (1) real wages decline, especially among the poor, (2) inflation causes populism, which may cause Democrats to lose the 2022 midterm elections.
  5. Therefore, OpenPhil should no
... (read more)

High inflation is bad for two reasons: (1) real wages decline, especially among the poor, (2) inflation causes populism, which may cause Democrats to lose the 2022 midterm elections.

It's quite nuanced. There are real wage declines on average, but some poor people might have seen small real wage increases, while others might have seen real wage losses. For instance, older people's wages seem stickier according to the wage tracker here. OpenPhil's grantee EmployAmerica has an interesting analysis of the various factors, and one could perhaps reasonably disag... (read more)

Here are some of my reasons for disliking high inflation, which I think are similar to the reasons of most economists:

Inflation makes long-term agreements harder, since they become less useful unless indexed for inflation.

Inflation imposes costs on holding wealth in safe, liquid forms such as bank accounts, or dollar bills. That leads people to hold more wealth in inflation-proof forms such as real estate, and less in bank accounts, reducing their ability to handle emergencies.

Inflation creates a wide variety of transaction costs: stores need to change the... (read more)

Critique of OpenPhil's macroeconomic policy advocacy

More voters have seen their real wages go down than up (mostly in the lower income brackets).

What is your source for this claim? By contrast, this article says,

Between roughly 56 and 57 percent of occupations, largely concentrated in the bottom half of the income distribution, are seeing real hourly wage increases.


And they show this chart,

Here's another article that cites economists saying the same thing.

The most recent source for this is Jason Furman on Twitter in March:

"Average hourly earnings have been declining for more than a year as inflation has outpaced nominal wage gains. This is larger than any 12-month pre-pandemic decline since 1980*. VERY serious composition issues affecting the exact trajectory so PLEASE read next tweets too. 

The first article you cite only goes up to October '21, and six months can make a difference. I also agree with the claim of the second article you cite: "Inflation is high, but wage gains for low-income workers... (read more)

How we failed

Here's a quote from Wei Dai, speaking on February 26th, 2020:

Here's another example, which has actually happened 3 times to me already:

  1. The truly ignorant don't wear masks.
  2. Many people wear masks or encourage others to wear masks in part to signal their knowledge and conscientiousness.
  3. "Experts" counter-signal with "masks don't do much", "we should be evidence-based" and "WHO says 'If you are healthy, you only need to wear a mask if you are taking care of a person with suspected 2019-nCoV infection.'"
  4. I respond by citing actual evidence in the form of a m
... (read more)
Reducing Nuclear Risk Through Improved US-China Relations

Thanks for the continued discussion.

If I'm understanding correctly the main point you're making is that I probably shouldn't have said this:

There is little room for improvement here...

I think I'm making two points. The first point was: yeah, I think there is substantial room for improvement here. But the second point is necessary too: analyzing the situation with Taiwan is crucial if we seek to effectively reduce nuclear risk.

I do not think it was wrong to focus on the trade war. It depends on your goals. If you wanted to promote quick, actionable and robust a... (read more)

1Ryan Beck3mo
Yeah, definitely on the same page then! I agree with what you said there, with the possible exception or caveat that I'm skeptical about improvements on the Taiwan issue. If you find or know of any persuasive abyss-staring arguments on this topic (or write them yourself), I'd appreciate it if you shared them with me, because I'd be happy to be wrong in my skepticism and would like to learn more about any promising options.
Reducing Nuclear Risk Through Improved US-China Relations

My reason for focusing on the trade war though is because trade deescalation would have very few downsides and would probably be a substantial positive all on its own before even considering the potential positive effects it could have on relations with China and possibly nuclear risk.

I agree. I think we're both on the same page about the merits of ending the trade war, as an issue by itself.

The optimal policy here is far from clear to me.

Right. From my perspective, this is what makes focusing on Taiwan precisely the right thing to do in a high-level analysis.... (read more)

2Ryan Beck3mo
To be clear, I'm not arguing that people shouldn't think about it or try to solve it. I'm definitely in favor of more discussion on that topic and I'd love to read some high-effort analysis from an EA perspective. If I'm understanding correctly, the main point you're making is that I probably shouldn't have said this, in which case that's a fair critique. I'm not well-informed enough to know the options here and their advantages and risks in great detail, so my perception that there's not much room for improvement could be way off base. I'd summarize my position as having the perception that the Taiwan issue is a hard question that I'm not equipped to solve, and I'm skeptical that there are significant improvements available there, so instead I focused on a topic that I view as low-hanging fruit. Though I was probably wrong to characterize the Taiwan issue as futile or unimprovable; instead I should have characterized it as a highly complex issue that I'm not equipped to do justice to and that I perceive as having substantial downsides to any shift in policy.
Reducing Nuclear Risk Through Improved US-China Relations

You mention ending the trade war as the main mechanism by which we could ease US-China tensions. I agree that this policy change seems especially tractable, but it does not appear to me to be an effective means of avoiding a global conflict. As Stefan Schubert pointed out, the tariffs appear to have a very modest effect on either the American or Chinese economy.

The elephant in the room, as you alluded to, is Taiwan. A Chinese invasion of Taiwan, and subsequent intervention by the United States, is plausibly the most likely trigger for World War 3 in the ne... (read more)

2Ryan Beck3mo
This is a good point, I completely agree that the trade war is of small importance relative to things like relations with Taiwan. My reason for focusing on the trade war though is because trade deescalation would have very few downsides and would probably be a substantial positive all on its own before even considering the potential positive effects it could have on relations with China and possibly nuclear risk. To me the same can't be said for the Taiwan issue. The optimal policy here is far from clear to me. Strategic ambiguity is our intentional policy, and I'm not sure clarifying our stance would be preferable to that. Committing to defend Taiwan could allow Taiwan to do more provocative things, which could lead to war. Declaring we will not defend Taiwan could empower China to invade. I agree it's a significant issue that should be carefully considered, but it's also an issue that I'm sure international relations experts have spilled huge amounts of ink over so I'm not sure if there are any clearly superior policy improvements available in this area.
My current thoughts on the risks from SETI

One question I have is whether this is possible and how difficult it is?

I think it would be very difficult without human assistance. I don't, for example, think that aliens could hijack the computer hardware we use to process potential signals (though it would perhaps be wise not to underestimate billion-year-old aliens).

We can imagine the following alternative strategy of attack. Suppose the aliens sent us the code to an AI with the note "This AI will solve all your problems: poverty, disease, world hunger etc.". We can't verify that the AI will actually... (read more)

1Jon P3mo
Yeah, I think that's a good point. I mean, I could see how you could send a civilisation the blueprints for atomic weapons and hope that they wipe themselves out or something; that would be very feasible. I guess I'm a bit more skeptical when it comes to AI. I mean, it's hard to get code to run and it has to be tailored to the hardware. And if you were going to teach them enough information to build advanced AIs, I think there'd be a lot of uncertainty about what they'd end up making; I mean, there'd be bugs in the code for sure. It's an interesting argument though and I can really see your perspective on it.
My current thoughts on the risks from SETI

I don't find the scenario plausible. I think the grabby aliens model (cited in the post) provides a strong reason to doubt that there will be many so-called "quiet" aliens that hide their existence. Moreover, I think malicious grabby (or loud) aliens would not wait for messages before striking, which the Dark Forest theory relies critically on. See also section 15 in the grabby aliens paper, under the heading "SETI Implications".

In general, I don't think there are significant risks associated with messaging aliens (a thesis that other EAs have argued for, along these lines).

[linkpost] Peter Singer: The Hinge of History

I think failing to act can itself be atrocious. For example, the failure of rich nations to intervene in the Rwandan genocide was an atrocity. Further, I expect Peter Singer to agree that this was an atrocity. Therefore, I do not think that deontological commitments are sufficient to prevent oneself from being party to atrocities.

3MichaelStJules5mo
You could have deontological commitments to prevent atrocities, too, but with an overriding commitment that you shouldn't actively commit an atrocity, even in order to prevent a greater one. Or, something like a harm-minimizing consequentialism with deontological constraints against actively committing atrocities. Of course, you still have to prioritize and can make mistakes, which means some atrocities may go ignored, but I think this takes away the intuitive repugnance and moral blameworthiness.
[linkpost] Peter Singer: The Hinge of History

My interpretation of Peter Singer's thesis is that we should be extremely cautious about acting on a philosophy that claims that an issue is extremely important, since we should be mindful that such philosophies have been used to justify atrocities in the past. But I have two big objections to his thesis.

First, it actually matters whether the philosophy we are talking about is a good one. Singer provides a comparison to communism and Nazism, both of which were used to justify repression and genocide during the 20th century. But are either of these philosop... (read more)

7MichaelStJules5mo
I think some moral views, e.g. some rights-based ones or ones with strong deontological constraints, would pretty necessarily disavow atrocities on principle, not just for fairly contingent reasons based on anticipated consequences like (act) utilitarians would. Some such views could also still rank issues. I basically agree with the rest.
Democratising Risk - or how EA deals with critics

I'm happy with more critiques of total utilitarianism here. :) 

For what it's worth, I think there are a lot of people unsatisfied with total utilitarianism within the EA community. In my anecdotal experience, many longtermists (including myself) are suffering-focused. This often takes the form of negative utilitarianism, but other variants of suffering-focused ethics exist.

I may have missed it, but I didn't see any part of the paper that explicitly addressed suffering-focused longtermists. (One part mentioned, "Preventing existential risk is not prima... (read more)

Against Negative Utilitarianism

Another strange implication is that enough worlds of utopia plus pinprick would be worse than a world of pure torture.

I view this implication as merely the consequence of two facts: (1) utilitarians generally endorse torture in the torture vs. dust specks thought experiment, and (2) negative preference utilitarians don't find value in creating new beings just to satisfy their preferences.

The first fact is shared by all non-lexical varieties of consequentialism, so it doesn't appear to be a unique critique of negative preference utilitarianism. 

The second ... (read more)

Against Negative Utilitarianism

Moving from our current world to utopia + pinprick would be a strong moral improvement under NPU. But you're right that if the universe was devoid of all preference-having beings, then creating a utopia with a pinprick would not be recommended.

4Omnizoid6mo
That seems deeply unintuitive. Another strange implication is that enough worlds of utopia plus pinprick would be worse than a world of pure torture. A final implication is that for a world of Buddhist monks who have rid themselves completely of desires and merely take in the joys of life without having any firm desires for future states of the world, it would be morally neutral to bring their well-being to zero.
Against Negative Utilitarianism

World destruction would violate a ton of people's preferences. Many people who live in the world want it to keep existing. Minimizing preference frustration would presumably give people what they want, rather than killing them (something they don't want).

1Omnizoid6mo
Sure, but it would say that creating utopia with a pinprick would be morally bad.
Against Negative Utilitarianism

I'm curious whether you think your arguments apply to negative preference utilitarianism (NPU): the view that we ought to minimize aggregate preference frustration. It shares many features with ordinary negative hedonistic utilitarianism (NHU), such as,

But NPU also has several desirable properties that are not shared with NHU:

  • Utopia, rather than world-destruction, is the globally optimal solution that maximizes utility.
  • It's compatible with the thesis that v
... (read more)
4Omnizoid6mo
I'm not sure I quite understand the theory. Wouldn't global destruction be better than utopia if it were painless, because there would be no unmet preferences? I also laid out some problems with preference utilitarianism in my other post arguing for utilitarianism.
Concerning the Recent 2019-Novel Coronavirus Outbreak

For a long time, I've believed in the importance of not being alarmist. My immediate reaction to almost anybody who warns me of impending doom is: "I doubt it". And sometimes, "Do you want to bet?"

So, writing this post was a very difficult thing for me to do. On an object level, I realized that the evidence coming out of Wuhan looked very concerning. The more I looked into it, the more I thought, "This really seems like something someone should be ringing the alarm bells about." But for a while, very few people were predicting anything big on respectable f... (read more)

Rowing, Steering, Anchoring, Equity, Mutiny

It was much less disruptive than revolutions like in France, Russia or China, which attempted to radically re-order their governments, economies and societies. In a sense I guess you could think of the US revolution as being a bit like a mutiny that then kept largely the same course as the previous captain anyway.

I agree with the weaker claim here that the US revolution didn't radically re-order "government, economy and society." But I think you might be exaggerating how conservative the US revolution was. 

The United States is widely considered to be ... (read more)

Discussion with Eliezer Yudkowsky on AGI interventions

The main way I could see an AGI taking over the world without being exceedingly superhuman would be if it hid its intentions well enough so that it could become trusted enough to be deployed widely and have control of lots of important infrastructure.

My understanding is that Eliezer's main argument is that the first superintelligence will have access to advanced molecular nanotechnology, an argument that he touches on in this dialogue. 

I could see breaking his thesis up into a few potential steps:

  1. At some point, an AGI will FOOM to radically superhuman
... (read more)

The first radically superhuman AGI will have the unique ability to deploy advanced molecular nanomachines, capable of constructing arbitrary weapons, devices, and nanobot swarms.

Why believe this is easy enough for AGI to achieve efficiently and likely?

3Brian_Tomasik8mo
Thanks. :) That's possible with a GPT-style AI too. For example, you could ask GPT-3 to write a procedure for how to get a cup of coffee, and GPT-3 will explain the steps for doing it. But yeah, it's plausible that there will be better AI designs than GPT-style ones for many tasks. As I mentioned to Daniel, I feel like if a country was in the process of FOOMing its AI, other countries would get worried and try to intervene before it was too late. That's true even if other countries aren't worried about AI alignment; they'd just be worried about becoming powerless. The world is (understandably) alarmed when Iran, North Korea, etc. work on developing even limited amounts of nuclear weapons, and many natsec people are worried about China's seemingly inevitable rise in power. It seems to me that the early stages of a FOOM would cause other actors to intervene, though maybe if the FOOM was gradual enough, other actors could always feel like it wasn't quite the right time to become confrontational about it. Maybe if the FOOM was done by the USA, then since the USA is already the strongest country in the world, other countries wouldn't want to fight over it. Alternatively, maybe if there was an international AI project in which all the major powers participated, there could be rapid AI progress with less risk of war. Another argument against FOOM by a single AGI could be that we'd expect people to be training multiple different AGIs with different values and loyalties, and they could help to keep an eye on one another in ways that humans couldn't. This might seem implausible, but it's how humans have constructed the edifice of civilization: groups of people monitoring other groups of people and coordinating to take certain actions to keep things under control. It seems almost like a miracle that civilization is possible; a priori I would have expected a collective system like civilization to be far too brittle to work. But maybe it works better for humans than for AGIs.
A proposal for a small inducement prize platform

Potential ways around this that come to mind:

Good ideas. I have a few more:

  • Have a feature that allows people to charge fees to people who submit work. This would potentially compensate the arbitrator who would have to review the work, and would discourage people from submitting bad work in the hopes that they can fool people into awarding them the bounty.
  • Instead of awarding the bounty to whoever gives a summary/investigation, award the bounty to the person who provides the best summary/investigation, at the end of some time period. That way, if someo
... (read more)
A proposal for a small inducement prize platform

What is the likely market size for this platform?

I'm not sure, but I just opened a Metaculus question about this, and we should begin getting forecasts within a few days. 

2So-Low Growth1y
Briefly: 1. I like the idea 2. Think it will work 3. Also like the idea of using Metaculus to forecast this
How should longtermists think about eating meat?

Eliezer Yudkowsky wrote a sequence on ethical injunctions in which he argued that things like this are wrong (from his own longtermist perspective).

How should longtermists think about eating meat?
And it feels terribly convenient for the longtermist to argue they are in the moral right while making no effort to counteract or at least not participate in what they recognize as moral wrongs.

This is only convenient for the longtermist if they do not have equivalently demanding obligations to the long term. Otherwise we could turn it around and say that it's "terribly convenient" for a short-termist to ignore the long-term future too.

Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics

Regarding the section on estimating the probability of AI extinction, I think a useful framing is to focus on disjunctive scenarios where AI ends up being used. If we imagine a highly detailed scenario where a single artificial intelligence goes rogue, then of course these types of things will seem unlikely.

However, my guess is that AI will gradually become more capable and integrated into the world economy, and there won't be a discrete point where we can say "now the AI was invented." Over the broad course of history, we have witnessed numerous instance

... (read more)
Matthew_Barnett's Shortform

A trip to Mars that brought back human passengers also has the chance of bringing back microbial Martian passengers. This could be an existential risk if microbes from Mars harm our biosphere in a severe and irreparable manner.

From Carl Sagan in 1973, "Precisely because Mars is an environment of great potential biological interest, it is possible that on Mars there are pathogens, organisms which, if transported to the terrestrial environment, might do enormous biological damage - a Martian plague, the twist in the plot of H. G. Wells' War of the ... (read more)

If you value future people, why do you consider near term effects?

I recommend the paper The Case for Strong Longtermism, as it covers and responds to many of these arguments in a precise philosophical framework.

Growth and the case against randomista development
It seems to me that there's a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else.

If this is true, is there a post that expands on this argument, or is it something left implicit?

I've since added a constraint into my innovation acceleration efforts, and now am basically focused on "asymmetric, wisdom-constrained innovation."

I think Bostrom has talked about something similar: namely, differential technological development (he talks about technology rather than... (read more)

2Halffull2y
No, I actually think the post is ignoring x-risk as a cause area to focus on now. It makes sense under certain assumptions and heuristics (e.g. if you think near-term x-risk is highly unlikely, or you're using absurdity heuristics). I think I was more giving my argument for how this post could be compatible with Bostrom.
Growth and the case against randomista development
Growth will have flowthrough effects on existential risk.

This makes sense as an assumption, but the post itself didn't argue for this thesis at all.

If the argument was that the best way to help the longterm future is to minimize existential risk, and the best way to minimize existential risk is by increasing economic growth, then you'd expect the post to primarily talk about how economic growth decreases existential risk. Instead, the post focuses on human welfare, which is important, but secondary to the argument you are making.

This is somethin
... (read more)
1Halffull2y
It seems to me that there's a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else. At one point I was focused on accelerating innovation, but have come to be more worried about increasing x-risk (I have a question somewhere else on the post that gets at this). I've since added a constraint into my innovation acceleration efforts, and now am basically focused on "asymmetric, wisdom-constrained innovation."
Growth and the case against randomista development

I'm confused what type of EA would primarily be interested in strategies for increasing economic growth. Perhaps someone can help me understand this argument better.

The reason presented for why we should care about economic growth seemed to be a longtermist one. That is, economic growth has large payoffs in the long run, and if we care about future lives equally to current lives, then we should invest in growth. However, Nick Bostrom argued in 2003 that a longtermist utilitarian should primarily care about minimizing existential risk, rather than inc... (read more)

1Halffull2y
Let's say you believe two things: 1. Growth will have flowthrough effects on existential risk. 2. You have a comparative advantage effecting growth over x-risk. You can agree with Bostrom that x-risk is important, and also think that you should be working on growth. This is something very close to my personal view on what I'm working on.
Matthew_Barnett's Shortform

I have now posted my summary of some recent economic forecasts, and whether they are underestimating the impact of the coronavirus, as a comment on LessWrong. You can help me by critiquing my analysis.

What are the key ongoing debates in EA?
I suspect the reflection is going to be mostly used by our better and wiser selves on settling details/nuances within total (mostly hedonic) utilitarianism rather than discover (or select) some majorly different normative theory.

Is this a prediction, or is this what you want? If it's a prediction, I'd love to hear your reasons why you think this would happen.

My own prediction is that this won't happen. But I'd be happy to see some reasons why I am wrong.

Matthew_Barnett's Shortform

I hold a few core ethical ideas that are extremely unpopular: the idea that we should treat the natural suffering of animals as a grave moral catastrophe, the idea that old age and involuntary death is the number one enemy of humanity, and the idea that we should treat so-called farm animals with a very high level of compassion.

Given the unpopularity of these ideas, you might be tempted to think that the reason they are unpopular is that they are exceptionally counterintuitive ones. But is that the case? Do you really need a modern education and philosophical t... (read more)

Effects of anti-aging research on the long-term future

Right, I wasn't criticizing cause prioritization. I was criticizing the binary attitude people had towards anti-aging. Imagine if people dismissed AI safety research because, "It would be fruitless to ban AI research. We shouldn't even try." That's what it often sounds like to me when people fail to think seriously about anti-aging research. They aren't even considering the idea that there are other things we could do.

Effects of anti-aging research on the long-term future
Now look again at your bulleted list of "big" indirect effects, and remember that you can only hasten them, not enable them. To me, this consideration makes the impact we can have on them seem no more than a rounding error compared to the impact we can have due to LEV (each year you bring LEV closer saves 36,500,000 lives of 1,000 QALYs each. This is a conservative estimate I made here.)

This isn't clear to me. In Hilary Greaves and William MacAskill's paper on strong longtermism, they argue that unless what we do now impacts a critical lo... (read more)
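As a side note on the quoted estimate above: the 36,500,000 figure appears to be the commonly cited count of roughly 100,000 age-related deaths per day, annualized. That derivation is my assumption rather than something stated in the quote:

$$100{,}000 \ \text{deaths/day} \times 365 \ \text{days/year} = 36{,}500{,}000 \ \text{deaths/year},$$

with each death weighted at roughly 1,000 QALYs in the quoted estimate.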

Effects of anti-aging research on the long-term future

Thanks for the bullet points and thoughtful inquiry!

I've taken this as an opportunity to lay down some of my thoughts on the matter; this turned out to be quite long. I can expand and tidy this into a full post if people are interested, though it sounds like it would overlap somewhat with what Matthew's been working on.

I am very interested in a full post, as right now I think this area is quite neglected and important groundwork can be completed.

My guess is that most people who think about the effects of anti-aging research don't think very... (read more)

6Will Bradshaw2y
I'm not sure it's all that crazy. EA is all about prioritisation. If something makes you believe that anti-ageing is 10% less promising as a cause area than you thought, that could lead you to cut your spending in that area by far more than 10% if it made other cause areas more promising. I've spoken to a number of EAs who think anti-ageing research is a pretty cool cause area, but not competitive with top causes like AI and biosecurity. As long as there's something much more promising you could be working on it doesn't necessarily matter much how valuable you think anti-ageing is. Now, some people will have sufficient comparative advantage that they should be working on ageing anyway: either directly or on the meta-level social-science questions surrounding it. But it's not clear to me exactly who those people are, at least for the direct side of things. Wetlab biologists and bioinformaticians could work on medical countermeasures for biosecurity. AI/ML people (who I expect to be very important to progress in anti-ageing) could work on AI safety (or biosecurity again). Social scientists could work on the social aspects of X-risk reduction, or on some other means of improving institutional decision-making. There's a lot competing with ageing for the attention of well-suited EAs. I'm not saying ageing will inevitably lose out to all those alternatives; it's very neglected and (IMO) quite promising, and some people will just find it more interesting to work on than the alternatives. But I do generally back the idea of ruthless prioritisation.
Effects of anti-aging research on the long-term future
There are more ways, yes, but I think they're individually much less likely than the ways in which they can get better, assuming they're somewhat guided by reflection and reason.

Again, I seem to have different views about the extent to which moral views are driven by reflection and reason. For example, is the recent trend towards Trumpian populism driven by reflection and reason? (If you think this is not a new trend, then I ask you to point to previous politicians who share the values of the current administration.)

I expect future generations, comp
... (read more)
1MichaelStJules2y
I don't really have a firm idea of the extent reflection and reason drives changes in or the formation of beliefs, I just think they have some effect. They might have disproportionate effects in a motivated minority of people who become very influential, but not necessarily primarily through advocacy. I think that's a good description of EA, actually. In particular, if EAs increase the development and adoption of plant-based and cultured animal products, people will become less speciesist because we're removing psychological barriers for them, and EAs are driven by reflection and reason, so these changes are in part indirectly driven by reflection and reason. Public intellectuals and experts in government can have influence, too. Could the relatively pro-trade and pro-migration views of economists, based in part on reflection and reason, have led to more trade and migration, and caused us to be less xenophobic? Minimally, I'll claim that, all else equal, if the reasons for one position are better than the reasons for another (and especially if there are good reasons for the first and none of the other), then the first position should gain more support in expectation. I don't think short-term trends can usually be explained by reflection and reason, and I don't think Trumpian populism is caused by reflection and reason, but I think the general trend throughout history is away from such tribalistic views, and I think that there are basically no good reasons for tribalism might play a part, although not necessarily a big one. That's a good point. However, is this only in social interactions (which, of course, can reinforce prejudice in those who would act on it in other ways)? What about when they vote? We're talking maybe 20 years of prejudice inhibition lost at most on average, so at worst about a third of adults at any moment, but also a faster growing proportion of people growing up without any given prejudice they'd need to inhibit in the first place vs many
Effects of anti-aging research on the long-term future
I'm a moral anti-realist, too. You can still be a moral anti-realist and believe that your own ethical views have improved in some sense

Sure. There are a number of versions of moral anti-realism. It makes sense for some people to think that moral progress is a real thing. My own version of ethics says that morality doesn't run that deep and that personal preferences are pretty arbitrary (though I do agree with some reflection).

In the same way, I think the views of future generations can end up better than my views will ever be.

Again, that makes s... (read more)

1MichaelStJules2y
There are more ways, yes, but I think they're individually much less likely than the ways in which they can get better, assuming they're somewhat guided by reflection and reason. This might still hold once you aggregate all the ways they can get worse and separate all the ways they can get better, but I'm not sure. I expect future generations, compared to people alive today, to be less religious, less speciesist, less prejudiced generally, more impartial, more consequentialist and more welfarist [https://en.wikipedia.org/wiki/Welfarism], because of my take on the relative persuasiveness of these views (and the removal of psychological obstacles to having these views), which I think partially explains the trends. No guarantee, of course, and there might be alternatives to these views that don't exist today but are even more persuasive, but maybe I should be persuaded by them, too. I don't expect them to be more suffering-focused (beyond what's implied by the expectations above), though. Actually, if current EA views become very influential on future views, I might expect those in the future to be less suffering-focused and to cause s-risks, which is concerning to me. I think the asymmetry is relatively more common among people today than it is among EAs, specifically.
Effects of anti-aging research on the long-term future
I think newer generations will tend to grow up with better views than older ones (although older generations could have better views than younger ones at any moment, because they're more informed), since younger generations can inspect and question the views of their elders, alternative views, and the reasons for and against with less bias and attachment to them.

This view assumes that moral progress is a real thing, rather than just an illusion. I could personally understand this point of view if the younger generations shared the same terminal values, and me... (read more)

1MichaelStJules2y
I'm a moral anti-realist, too. You can still be a moral anti-realist and believe that your own ethical views have improved in some sense, although I suppose you'll never believe that they're worse now than before, since you wouldn't hold them if that were the case. Some think of it as what you would endorse if you were less biased, had more information and reflected more. I think my views are better now because they're more informed, but it's a possibility that I could have been so biased in dealing with new information that my views are in fact worse now than before. In the same way, I think the views of future generations can end up better than my views will ever be. So I don't expect such views to be very common over the very long-term (unless there are more obstacles to having different views in the future), because I can't imagine there being good (non-arbitrary) reasons for those views (except the 2nd, and also the 3rd if future robots turn out to not be conscious) and there are good reasons against them. However, this could, in principle, turn out to be wrong, and an idealized version of myself might have to endorse these views or at least give them more weight. I think where idealized versions of myself and idealized versions of future generations will disagree is due to different weights given to opposing reasons, since there is no objective way to weight them. My own weights may be "biases" determined by my earlier experiences with ethics, other life experiences, genetic predisposition, etc., and maybe some weights could be more objective than others based on how they were produced, but without this history, no weights can be more objective than others. Finally, just in practice, I think my views are more aligned with those of younger generations and generations to come, so views more similar to my own will be relatively more prominent if we don't cure aging (soon), which is a reason against curing aging (soon), at least for me.
Effects of anti-aging research on the long-term future
I think this could be due (in part) to biases accumulated by being in a field (and being alive) longer, not necessarily (just) brain aging.

I'm not convinced there is actually that much of a difference between long-term crystallization of habits and natural aging. I'm not qualified to say this with any sort of confidence. It's also worth being cautious about confidently predicting the effects of something like this in either direction.

Effects of anti-aging research on the long-term future
Do Long-Lived Scientists Hold Back Their Disciplines? It's not clear reducing cognitive decline can make up for this or the effects of people becoming more set in their ways over time; you might need relatively more "blank slates".

In addition to what I wrote here, I'm also just skeptical that scientific progress decelerating in a few respects is actually that big of a deal. The biggest case where it would probably matter is if medical doctors themselves had incorrect theories, or engineers (such as AI developers) were using outdated ide... (read more)

1InquilineKea1y
On moral progress - I think it's highly plausible that future generations will not be okay with people dying due to natural causes in the same way that they're not okay with people dying from cancer or infectious diseases.
2MichaelStJules2y
I think newer generations will tend to grow up with better views than older ones (although older generations could have better views than younger ones at any moment, because they're more informed), since younger generations can inspect and question the views of their elders, alternative views, and the reasons for and against with less bias and attachment to them. Curing aging doesn't cure confirmation bias or belief perseverance/the backfire effect [https://en.wikipedia.org/wiki/Belief_perseverance].
Effects of anti-aging research on the long-term future
Eliminating aging also has the potential for strong negative long-term effects.

Agreed. One way you can frame what I'm saying is that I'm putting forward a neutral thesis: anti-aging could have big effects. I'm not necessarily saying they would be good (though personally I think they would be).

Even if you didn't want aging to be cured, it still seems worth thinking about, because if a cure were inevitable, then preparing for a future where aging is cured would be better than not preparing.

Another potentially major downside is the stagnation of r
... (read more)
2MichaelStJules2y
As I wrote here [https://forum.effectivealtruism.org/posts/jXDGYMEDFL2WhPXaB/do-long-lived-scientists-hold-back-their-disciplines#R93TyJLKsjvc2ycLG] , I think this could be due (in part) to biases accumulated by being in a field (and being alive) longer, not necessarily (just) brain aging. I'd guess that more neuroplasticity or neurogenesis is better than less, but I don't think it's the whole problem. You'd need people to lose strong connections, to "forget" more often. Also, people's brains up until their mid 20s are still developing and pruning connections.
Effects of anti-aging research on the long-term future

If I had to predict, I would say that yes, there's a ~70% chance that most suffering (or other disvalue you might think about) will exist in artificial systems rather than natural ones. It's not actually clear whether this particular fact is relevant. Like I said, the effects of curing aging extend beyond the direct effects on biological life. Studying anti-aging can be just like studying electoral reform or climate change in this sense.

My personal cruxes for working on AI safety
I think to switch my position on crux 2 using only timeline arguments, you'd have to argue something like <10% chance of transformative AI in 50 years.

That makes sense. "Plausibly soonish" is pretty vague so I pattern matched to something more similar to -- by default it will come within a few decades.

It's reasonable that for people with different comparative advantages, their threshold for caring should be higher. If there were only a 2% chance of transformative AI in 50 years, and I were in charge of effective altruism resource allocation, I would still want some people (perhaps 20-30) to be looking into it.

Thoughts on electoral reform
In my utilitarian view, [democracy and utility-maximizing procedures] are one and the same. An election is effectively just "a decision made by more than one person"; thus the practical measure of democratic-ness is "expected utility of a voting procedure".

Doesn't this ignore the irrational tendencies of voters?

1ClayShentrup2y
I discussed this in my post: The ignorance factor represents a disparity between the actual utility impact a candidate will have on a voter, and the assumed utility impact which forms the basis for her vote. Even with lots of ignorance, there's still a significant difference in performance from one voting method to another. In addition, I believe a lot of our ignorance comes from "tribal" thinking. Suppose we have two parties (tribes), and each party must pick one side of any issue (abortion, guns, health care, etc.). Voters will then tend to retroactively justify their beliefs about a given issue based on how it comports with their stated party affiliation. Note that this forced binary thinking is so powerful that we even have a party divide over the objective reality of climate change! With a system like approval voting, candidates can easily run outside of the party system and still be viable. Thus they can take any arbitrary position on any issue, giving voters the freedom to move freely through the issue axes. A new offshoot of the GOP could form that is generally socially conservative and pro gun rights, but totally committed to addressing climate change. With 3-5 viable parties able to constantly adjust to changing realities, this is expected to reduce the amount of voter ignorance considerably, by allowing voters to consider issues which were once taken as given as part and parcel of their party affiliation.
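As an illustration of the "expected utility of a voting procedure" framing discussed above, here is a minimal Monte Carlo sketch (my own toy model, not ClayShentrup's actual simulation) comparing plurality and approval voting when voters perceive candidate utilities with noise, standing in for the "ignorance factor":

```python
import random
import statistics

def simulate(n_voters=200, n_candidates=5, noise=0.5, n_trials=500):
    """Toy comparison of plurality vs. approval voting under voter ignorance.

    Each voter has a true utility for each candidate (uniform on [0, 1]).
    Voters act on a noisy perception of those utilities; `noise` is the
    standard deviation of the perception error. Returns the average social
    utility (sum of true utilities) of the winner under each method.
    """
    results = {"plurality": [], "approval": []}
    for _ in range(n_trials):
        true_u = [[random.random() for _ in range(n_candidates)]
                  for _ in range(n_voters)]
        perceived = [[u + random.gauss(0, noise) for u in row] for row in true_u]

        # Plurality: each voter votes for the single candidate they perceive as best.
        plur_votes = [0] * n_candidates
        for row in perceived:
            plur_votes[row.index(max(row))] += 1
        plur_winner = plur_votes.index(max(plur_votes))

        # Approval: each voter approves every candidate they perceive as
        # above their own average perceived utility.
        appr_votes = [0] * n_candidates
        for row in perceived:
            threshold = sum(row) / len(row)
            for c, u in enumerate(row):
                if u > threshold:
                    appr_votes[c] += 1
        appr_winner = appr_votes.index(max(appr_votes))

        # Score each winner by the voters' *true* utilities.
        social = [sum(row[c] for row in true_u) for c in range(n_candidates)]
        results["plurality"].append(social[plur_winner])
        results["approval"].append(social[appr_winner])

    return {method: statistics.mean(vals) for method, vals in results.items()}

print(simulate())
```

In toy runs like this, approval tends to pick higher-social-utility winners than plurality as the noise grows; this is roughly the kind of comparison that Bayesian-regret or Voter Satisfaction Efficiency analyses formalize more carefully.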
My personal cruxes for working on AI safety

I like this way of thinking about AI risk, though I would emphasize that my disagreement comes largely from my skepticism of crux 2 and, in turn, crux 3. If AI is far away, then it seems pretty difficult to understand how it will end up being used, and I think even when timelines are 20-30 years from now, this remains an issue [ETA: Note also that, during a period of rapid economic growth, much more intellectual progress might happen in a relatively small period of physical time, as computers could automate some parts of human intellectual labor. This implies ... (read more)

2Rohin Shah2y
I broadly agree with this, but I feel like this is mostly skepticism of crux 3 and not crux 2. I think to switch my position on crux 2 using only timeline arguments, you'd have to argue something like <10% chance of transformative AI in 50 years.