All of liu_he's Comments + Replies

Answer by liu_he · Apr 21, 2023

Definitely. The day humanity figures out the alignment problem is also the day the CCP gains unlimited intelligence and power. Humanity may avoid extinction, but the same innovation simultaneously drives the world towards stable global totalitarianism. In fact, the CCP, with its sharp focus on power, may even be the entity that seriously invests in and figures out this problem in the first place.

This post itself sounds very misinformed about CCP history over the past hundred years.

Yes, the CCP changes, but not its underlying logic of unlimited power, and all the dangers associated with it.

Yes, it adapts to the external environment in order to survive, but the domestic costs of doing so cannot be lightly overlooked: some of the worst famines, political purges, mass shootings of teenage students, mass imprisonments, and forced labour camps humanity has ever seen (and the list goes on).

There is the tendency among some China watchers, in their ... (read more)

Does this mean one should reduce the expected risk for the cross-strait? I don't think so. While there is no evidence of an imminent attack, it is abundantly clear that the Chinese government would like to have the option of a successful attack at will, so it can strike at the most politically convenient time. Anyone who examines cross-strait relations through the lens of imminence (i.e. is there going to be an attack this week, next month, or next year) is taking the wrong perspective. It could be five years, ten years, or 50 years. Or it could be a random s... (read more)

1
Jack Cunningham
1y
To my mind, the piece is a welcome response to the recent (imo) irresponsible hyping of cross-strait risk by influential US actors. To the extent that anyone's expectation of the risk of cross-strait violence was influenced by such voices, this piece should help recalibrate down. But of course the fundamental risk remains, even if there are reasons to doubt its imminence as represented by China hawks. You could do a Straussian reading of this piece such that it is in fact saying 'China won't bomb TW next week so let's calm down' in order to "avoid having the US government kick-starting a nuclear war by pre-emptive strikes or panic-induced miscalculations." To the extent that Tim Heath is respected and that WotR is widely read by US decision-makers, I think this reading makes some sense (although ofc there are very strong incentives for the US gov't not to start a war with China that have nothing to do with whether they're reading WotR or not). Your mileage may vary. Your broader point, though, that we should take a longer/less-temporally-bound/more structural view of the risk, is one that I agree with.

To my eyes, you and the post-writer don't really disagree but prefer different levels of descriptive precision. So instead of saying 'EA is X', you would prefer saying 'many people in EA are X'. After this sharpening, the statement still captures pretty much the same idea and sentiment about where EA is going as highlighted in the post.

And responding to the post by saying 'this is not precise enough' rather than 'this is the wrong trend' seems to miss the point here. Of course, using tables and simple before-afters is not a format for precise descrip... (read more)

2
Luke Freeman
2y
Largely yes. That's why I said I'm disappointed with this framing (not just in this post but in other contexts where this is happening).

Hi, I'm Leo, and I am running an EA-inspired charity evaluator in China, helping Chinese donors give to the most effective charities. We have a Chinese-language blog on effective giving, where our team posts effective-giving-related content. May I ask to join the channel?

Unfortunately, this post doesn't quite persuade me that small donors can be impactful compared to large donors. The gist of the post seems to be that, as long as there are professional EA fund managers, small donors may achieve a similar level of marginal impact. This seems clear enough. Since EA grant evaluators typically regrant unrestricted funding, they will treat any dollar, whether from large or small donors, the same. Everyone's allowed to save lives at $3,000 per life.

However, if the EA movement is asking the question 'if we needed X ... (read more)

I thought this was an informative talk. I especially enjoyed the exposition of the issue of unequal distribution of gains from AI. However, I am not quite convinced that a voluntary Windfall Clause for companies to sign up to would be effective. The examples you gave in the talk aren't quite cases where voluntary reparation by companies comes close to the level of contribution one would reasonably expect of them to address the damage and inequality the companies caused. I am curious, if the windfall issue is essentially one of oligopolistic regulation, ... (read more)

Thank you for the post; it is certainly very interesting to read. I learned a lot.