Timothy Chan (timothytfchan.github.io/)

Comments

Agreed. Getting a larger share of the pie (without breaking rules during peacetime) might be 'unimaginative', but it's hardly naïve. It's straightforward and has a good track record of allowing groups to shape the world disproportionately.

Leopold Aschenbrenner makes some good points in favor of "Government > Private sector" in the latest Dwarkesh podcast.

Reposting a comment I made last week:

Some people make the argument that the difference in suffering between a worst-case scenario (an s-risk) and a business-as-usual scenario is likely much larger than the difference in suffering between a business-as-usual scenario and a future without humans. This suggests focusing on ways to reduce s-risks rather than on increasing extinction risk.
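
A rough way to formalise the comparison (the symbols below are my own shorthand, not anything from the argument as originally stated): write $S_{\text{worst}}$, $S_{\text{bau}}$, and $S_{\text{empty}}$ for the total expected suffering in a worst-case future, a business-as-usual future, and a future without humans. The claim is then

$$S_{\text{worst}} - S_{\text{bau}} \gg S_{\text{bau}} - S_{\text{empty}},$$

so reducing the probability of the worst case by some small $\varepsilon$ removes about $\varepsilon \, (S_{\text{worst}} - S_{\text{bau}})$ units of expected suffering, far more than the $\varepsilon \, (S_{\text{bau}} - S_{\text{empty}})$ removed by raising extinction risk by the same amount.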

A helpful comment from a while back: https://forum.effectivealtruism.org/posts/rRpDeniy9FBmAwMqr/arguments-for-why-preventing-human-extinction-is-wrong?commentId=fPcdCpAgsmTobjJRB

Personally, I suspect there's a lot of overlap between risk factors for extinction and risk factors for s-risks. In a world where extinction is a serious possibility, a lot of things have likely gone very wrong, and those same things could lead to even worse outcomes like s-risks or hyperexistential risks.

I think theoretically you could compare (1) worlds with s-risks and (2) worlds without humans, and find that (2) is preferable to (1), in a similar way to how no longer existing is better than going to hell. One problem is that many actions that make (2) more likely seem to make (1) more likely as well. Another issue is that efforts spent on increasing the probability of (2) could instead be much better spent on reducing the risk of (1).

Research related to the work the OP mentioned found that increases in carbon emissions also have effects that decrease (as well as increase) suffering in other ways, which complicates the analysis of whether emissions cause a net increase or decrease in suffering. https://reducing-suffering.org/climate-change-and-wild-animals/ https://reducing-suffering.org/effects-climate-change-terrestrial-net-primary-productivity/

Yes, a similar dynamic (siding with one faction to avoid persecution by another) might have existed in Germany in the 1920s/1930s (e.g. I imagine industrialists preferred the Nazis to the Communists). I agree it was not a major factor in the rise of Nazi Germany, which was itself one result of the political violence, and that there are differences.

I would add that it's shunning people for saying vile things with ill intent that seems necessary. This is what separates the case of Hanania from others. In most cases, punishing well-intentioned people is counterproductive: it drives them closer to those with ill intent, and it suggests to well-intentioned bystanders that they need to associate with the other sort of extremist to avoid being persecuted. I'm not an expert on history, but from my limited knowledge a similar dynamic might have existed in Germany in the 1920s/1930s, where people were forced to choose between the far-left and the far-right.

Given his past behavior, I think it's more likely than not that you're right about him. Even someone more skeptical should acknowledge that the views he expressed in the past and the views he now expresses likely stem from the same malevolent attitudes.

But regarding far-left politics being 'not racist', I think it's fair to say that far-left politics discriminates in favor of or against individuals on the basis of race. It's usually not the kind of malevolent racial discrimination of the far-right, which absolutely needs to be condemned and eliminated by society. The far-left appear primarily motivated by benevolence towards racial groups that are perceived to be, or in fact are, disadvantaged, but it is still racially discriminatory (and it sometimes turns into the hateful type of discrimination). If we want to treat individuals on their own merits, and not on the basis of race, that sort of discrimination must also be condemned.

I'm skeptical about the value of slowing down leading AI labs, primarily because it likely reduces the influence of EA values in shaping the deployment of AGI/ASI. Anthropic is the best example of a lab with people who share these values, but I'd imagine that EAs also have more overlap with the staff at OpenAI and DeepMind than with the actors who would catch up because of a slowdown. And for what it's worth, the labs were founded with the stated goal of benefiting humanity before it became far more apparent that current paradigms have a high chance of resulting in AGI with the potential to grant profit/power to their human operators and investors.

As others have noted, people and powerful groups outside this community and surrounding communities don't seem interested in consequentialist, impartial, altruistic priorities like creating a positive long-term future for humanity; they appear more self-interested. Personally, I'm more downside-focused, but I think it's relevant to most EAs that other parties wouldn't be as willing to dedicate large amounts of resources to creating large amounts of happiness for others, and that, because of this, a reduction in the influence of EA values would mean a considerable loss of expected future value.

EDIT (2024-05-19): When I wrote this I had in mind Anthropic > OpenAI > DeepMind but Anthropic > DeepMind > OpenAI seems more sensible now. Unclear where to insert various governments/militaries/politicians/CEOs into this ranking.

Thank you for bringing attention to fetal suffering, especially the possibility of suffering in fetuses at <24 weeks.

Others have already pointed out that the intervention of applying anaesthetics to fetuses has issues of political tractability, but I think there's also a dynamic that could make moral circle expansion efforts to include fetuses and/or other "less complex" entities backfire.

Most people haven't spent time thinking about whether simpler entities can suffer and haven't formed an opinion, so they seem particularly susceptible to first impressions. The suggestion that less developed fetuses can suffer would likely imply to them that early abortions are wrong. People who don't like this normative implication might decide (probably unjustifiably) that less developed fetuses, and by extension other "less complex" entities, cannot suffer, in order to absolve themselves of acting in ways that might increase fetal suffering: "early abortions are not wrong -> early fetuses cannot suffer -> anything of 'lower complexity' cannot suffer". On the other hand, first introducing the ideas abstractly and suggesting that we should care about simple entities "in general" sidesteps this, and could lead people to eventually care about fetal suffering in an admittedly indirect but less politically charged way.

So between two strategies, (1) advocating for lower-complexity entities in general and (2) advocating for less developed fetuses, those concerned with moral circle expansion to fetuses and/or other simpler entities should probably focus on the first.

(Personally, I'd prefer it if people accepted that they act in ways that might increase suffering, while simultaneously aiming to decrease suffering.)
