Topic Contributions


Preventing a US-China war as a policy priority

My assessment is that actually the opposite is true.

The argument you presented appears excellent to me, and I've now changed my mind on this particular point.

Preventing a US-China war as a policy priority

Thanks. I don’t agree with your interpretation of the survey data. I'll quote another sentence from the essay that makes my position on this clearer,

The majority of the population of Taiwan simply want to be left alone, as a sovereign nation—which they already are, in every practical sense.

The position "declare independence as soon as possible" is unpopular for an obvious reason, which I explained in the post: a formal declaration of independence could trigger a Chinese invasion.

"Maintaining the status quo" is, for the most part, code for maintaining functional independence, which is popular, because as you said, "It means peace and prosperity, and it has been surprisingly stable over the last 70 years." This is what I meant by saying the Taiwanese "want to be their own nation instead, indefinitely" in the sentence you quoted, because I was talking about what's actually practically true, not just what's true on paper. 

I'll note that if you add up the percentage of people who want to maintain the status quo indefinitely, and those who want to maintain the status quo but move towards independence, it sums to 52.4%. It goes up to 58.4% if you include people who want to declare independence as soon as possible.

I admit my wording sucked, but I think what I said basically matches the facts on the ground, if not the literal survey data you quoted, in the sense that there is almost no political will right now to reunify with China (at least until China meets some hypothetical conditions, which it probably won't any time soon).

On Deference and Yudkowsky's AI Risk Estimates

I like that you admit that your examples are cherry-picked. But I'm actually curious what a non-cherry-picked track record would show. Can people point to Yudkowsky's successes?

While he's not single-handedly responsible, he led the movement to take AI risk seriously at a time when approximately no one was talking about it, and the issue has now attracted the interest of top academics. This isn't a complete track record, but it's still a very important data point. It's a bit like if he were the first person to say that we should take nuclear war seriously, and then five years later people started building nuclear bombs and academics realized that nuclear war is very plausible.

How much current animal suffering does longtermism let us ignore?

What I view as the Standard Model of Longtermism is something like the following:

  • At some point we will develop advanced AI capable of "running the show" for civilization at a high level.
  • The values in our AI will determine, to a large extent, the shape of our future cosmic civilization.
  • One possibility is that AI values will be alien. From a human perspective, this would either cause extinction or something equally bad.
  • To avoid that last possibility, we ought to figure out how to instill human-centered values in our machines.

This model doesn't predict that longtermists will make the future much larger than it otherwise would be. It just predicts that they'll make it look a bit different than it otherwise would.

Of course, there are other existential risks that longtermists care about. Avoiding those will have the effect of making the future larger in expectation, but most longtermists seem to agree that non-AI x-risks are small compared to AI risk.

How much current animal suffering does longtermism let us ignore?

I take issue with your statement that longtermists neglect suffering because they just maximize total (symmetric) welfare. I don't think this statement is actually true, though I agree if you just mean that, pragmatically, most longtermists aren't suffering-focused.

Hilary Greaves and William MacAskill loosely define strong longtermism as "the view that impact on the far future is the most important feature of our actions today." Longtermism is therefore completely agnostic about whether you're a suffering-focused altruist or a traditional welfarist in line with Jeremy Bentham. It's entirely consistent to prefer to minimize suffering over the long-run future and be a longtermist. Put another way, there are no major axiological commitments involved in being a longtermist, other than the view that we should treat value in the far future similarly to the way we treat value in the near future.

Of course, in practice, longtermists are more likely to advocate a Benthamite utility function than a negative utilitarian one. But it's still completely consistent to be a negative utilitarian and a longtermist, and in fact I consider myself one.

How much current animal suffering does longtermism let us ignore?

There is an estimate of 24.9 million people in slavery, of which 4.8 million are sexually exploited! Very likely these estimates are exaggerated, and the conditions are not as bad as one would think hearing those words, and even if they were the conditions might not be as bad as battery cages, but my broader point is that the world really does seem like it is very broken and there are problems of huge scale even just restricting to human welfare, and you still have to prioritize, which means ignoring some truly massive problems.

I agree, there is already a lot of human suffering that longtermists de-prioritize. More concrete examples include,

  • The 0.57% of the US population that is imprisoned at any given time. (This might even be more analogous to battery cages than slavery.)
  • The 25.78 million people who live under the totalitarian North Korean regime.
  • The estimated 27.2% of the adult US population who live with more than one of these chronic health conditions: arthritis, cancer, chronic obstructive pulmonary disease, coronary heart disease, current asthma, diabetes, hepatitis, hypertension, stroke, and weak or failing kidneys.
  • The nearly 10% of the world population who live in extreme poverty, defined as a level of consumption equivalent to less than $2 of spending per day, adjusted for price differences between nations.
  • The 7 million Americans who are currently having their brain rot away, bit by bit, due to Alzheimer's and other forms of dementia. Not to mention their loved ones who are forced to witness this.
  • The 6% of the US population who experienced at least one major depressive episode in the last year.
  • The estimated half a million people who are homeless in the United States.
  • The significant fraction of people who have profound difficulties with learning and performing work, who disproportionately live in poverty and are isolated from friends and family.

Critique of OpenPhil's macroeconomic policy advocacy

I want to understand the main claims of this post better. My understanding is that you have made the following chain of reasoning:

  1. OpenPhil has funded think tanks advocating looser macroeconomic policy since 2014.
  2. This had some non-trivial effect on actual macroeconomic policy in 2020-2022.
  3. The result of this policy was to contribute to high inflation.
  4. High inflation is bad for two reasons: (1) real wages decline, especially among the poor, (2) inflation causes populism, which may cause Democrats to lose the 2022 midterm elections.
  5. Therefore, OpenPhil should not make similar grants in the future.

I'm with you on claims 1, 2, and 3. I'm not sure about 4 and 5. Let me focus on my confusions with claim 4.

In another comment, I pointed out that it wasn't clear to me that inflation hurts low-wage workers by a substantial margin. Maybe the sources I cited there were poor, but to my (untrained) eyes there doesn't seem to be a consensus on this issue.

The fact that prediction markets currently indicate that Republicans have an edge in the midterm elections is not surprising. FiveThirtyEight says, "One of the most ironclad rules in American politics is that the president’s party loses ground in midterm elections." The only modern exception to this rule was the 2002 midterm election, in which Republicans gained seats because of 9/11.

If we look at ElectionBettingOdds, it appears that the main shock that pushed the markets in favor of a Republican win was the election last year (see the Senate and House forecasts). It's harder to see Republicans gaining due to inflation in the data (though I agree they probably did). EDIT: OK, I think it's clearer to me now that the spike in the House forecast in May 2021 was probably due to inflation concerns.

Critique of OpenPhil's macroeconomic policy advocacy

More voters have seen their real wages go down than up (mostly in the lower income brackets).

What is your source for this claim? By contrast, this article says,

Between roughly 56 and 57 percent of occupations, largely concentrated in the bottom half of the income distribution, are seeing real hourly wage increases.

And they show this chart,

Here's another article that cites economists saying the same thing.

How we failed

Here's a quote from Wei Dai, speaking on February 26th 2020,

Here's another example, which has actually happened 3 times to me already:

  1. The truly ignorant don't wear masks.
  2. Many people wear masks or encourage others to wear masks in part to signal their knowledge and conscientiousness.
  3. "Experts" counter-signal with "masks don't do much", "we should be evidence-based" and "WHO says 'If you are healthy, you only need to wear a mask if you are taking care of a person with suspected 2019-nCoV infection.'"
  4. I respond by citing actual evidence in the form of a meta-analysis: medical procedure masks combined with hand hygiene achieved RR of .73 while hand hygiene alone had a (not statistically significant) RR of .86.

After over a month of dragging their feet, and a whole bunch of experts saying misleading things, the CDC finally recommended people wear masks on April 3rd 2020.

Reducing Nuclear Risk Through Improved US-China Relations

Thanks for the continued discussion.

If I'm understanding correctly, the main point you're making is that I probably shouldn't have said this:

There is little room for improvement here...

I think I was making two points. On the first: yes, I concede there is substantial room for improvement here. But the second point still stands: analyzing the situation with Taiwan is crucial if we seek to effectively reduce nuclear risk.

I do not think it was wrong to focus on the trade war; it depends on your goals. If you wanted to promote quick, actionable, and robust advice, it made sense. If you wanted to stare straight into the abyss and solve the problem directly, it made a little less sense. Sometimes the first thing is what we need. But, as I'm glad to hear, you seem to agree with me that we also sometimes need to do the second thing.
