
liu_he

12 karma · Joined Oct 2019

Comments (8)

Answer by liu_he · Apr 21, 2023

Definitely. The day humanity figures out the alignment problem is also the day the CCP gains unlimited intelligence and power. Humanity may avoid extinction, but the same innovation simultaneously drives the world towards global, stable totalitarianism. In fact, the CCP, with its sharp focus on power, may even be the entity to seriously invest in and solve this problem in the first place.

This post itself sounds very misinformed about the CCP's history over the past hundred years.

Yes, the CCP changes, but not its underlying logic of unlimited power, with all the dangers associated with it.

Yes, it adapts to the external environment to survive, but the domestic costs of doing so cannot be lightly overlooked: they include some of the worst famines, political purges, mass shootings of teenage students, mass imprisonment, and forced labour camps humanity has ever seen, and the list goes on.

There is a tendency among some China watchers, in their eagerness to 'educate' the West about China, to adopt the official narrative and history of the CCP too quickly. In doing so, they form a dangerous alliance, often more out of ignorance than willingness. Only when one can break free of official CCP propaganda can one truly begin to see China as it is (the propaganda does sometimes seem terribly enticing: hundreds of millions of people literally lifted out of poverty by the Mother Party, rising on the global stage, developing modern technology, etc.). And I'm beginning to come to the view that the moral instincts of ignorant people reacting to phenomena in China are often more laudable than those of 'experts', who claim to know the subtleties but are in effect finding hopeless justifications for a morally bankrupt system. I'd recommend reading not Western China watchers but well-respected (and often suppressed) Chinese scholars such as Gao Hua, Qin Hui, and Shen Zhihua, to name a few.

Does this mean one should reduce the expected risk for the cross-strait situation? I don't think so. While there is no evidence of an imminent attack, it is abundantly clear that the Chinese government would like to have the option of a successful attack at will, to strike at the most politically convenient time. Anyone who examines cross-strait relations through the lens of imminence (i.e. is there going to be an attack this week, next month, or next year?) is asking the wrong question. It could be five years, ten years, or fifty years. Or it could be a random soldier firing shots next week and dragging the whole world into a world war. Effective guarding against nuclear risks does not come from an accurate risk timeline (impossible given the extreme opaqueness of information and the randomness of war); it comes from constant, careful, vigilant watchfulness. Of course, we should avoid having the US government kick-start a nuclear war through pre-emptive strikes or panic-induced miscalculation, but that is different from saying, 'China won't bomb TW next week, so let's calm down.' I don't think that is the right sentiment here.

To my eyes, you and the post's author don't really disagree; you prefer different levels of descriptive precision. So instead of saying 'EA is X', you would prefer saying 'many people in EA are X'. After that sharpening, the post still captures pretty much the same idea and sentiment about where EA is going.

And responding to the post by saying 'this is not precise enough' rather than 'this is the wrong trend' seems to miss the point. Of course, tables and simple before-and-afters are not a format for precise description. But the author still uses them, perhaps not out of carelessness but because the format is good at highlighting trends in an easily understandable way. To my eyes, the post is meant to highlight overall trends, not to describe the community precisely. Semantic precision is always prudent, but the main gist of the post survives the sharpening.

So if the response here is essentially 'yes, people in EA are moving in the direction DMMF says they are, just don't say EA is', I'd say the post still basically stands.

Hi, I'm Leo, and I run an EA-inspired charity evaluator in China, helping Chinese donors give to the most effective charities. We have a Chinese-language blog on effective giving, where our team posts effective-giving-related content. May I ask to join the channel?

Unfortunately, this post doesn't quite persuade me that small donors can be impactful compared to large donors. The gist seems to be that, as long as there are professional EA fund managers, small donors can achieve a similar level of marginal impact. That much seems clear. Since EA grant evaluators typically regrant unrestricted funding, they will treat any dollar, whether from large or small donors, as the same. Everyone is allowed to save lives at $3,000 per life.

However, if the EA movement is asking 'if we needed X dollars, who should we approach?', would small donors still be the answer? I think this is the sort of 'impact' people question, i.e. where we expect impact to predominantly come from. Within EA, small donors make up about 1/10 of Good Ventures + FTX. To be of comparable total impact, small donors would need to be 10x more effective per dollar, as the sketch below illustrates.
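A minimal sketch of that arithmetic (the absolute dollar amount is a made-up placeholder; only the ~1/10 funding ratio and the $3,000-per-life figure come from this comment):

```python
# Illustrative only: the $1B pool is hypothetical; the ~1/10 ratio
# and the $3,000-per-life figure are taken from the comment above.
COST_PER_LIFE = 3_000                       # assumed marginal cost to save one life

large_donor_funds = 1_000_000_000           # hypothetical large-donor pool
small_donor_funds = large_donor_funds / 10  # small donors at ~1/10 of that

# Marginal impact per dollar is identical for both groups...
lives_per_dollar = 1 / COST_PER_LIFE

# ...but total impact scales with the size of the pool.
print(f"Large donors: {large_donor_funds * lives_per_dollar:,.0f} lives")  # ~333,333
print(f"Small donors: {small_donor_funds * lives_per_dollar:,.0f} lives")  # ~33,333
```

Equal marginal impact, but a 10x gap in total impact; closing it would require each small-donor dollar to do ten times as much good.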

Of course, we also need to consider where the baseline is. Would 1/10 of large donors' impact be decent enough for small donors collectively? As a MOVEMENT that does the MOST good, should we see small donors who give to AMF as impactful because they save a life per $3,000, which is very good for the world, or as unimpactful because the number of people saved through such giving is expected to be much lower than what large donors achieve? These are probably the key considerations for how the impact of small donors should be viewed. Just discussing the marginal impact of small donors doesn't quite do it for me.

I thought this was an informative talk. I especially enjoyed the exposition of the unequal distribution of gains from AI. However, I am not quite convinced that a voluntary Windfall Clause for companies to sign up to would be effective. The examples you gave in the talk aren't quite cases where voluntary reparations by companies come close to the level of contribution one would reasonably expect to address the damage and inequality those companies caused. I am curious: if the windfall issue is essentially one of oligopolistic regulation, since there are only a small number of such companies, would it be more effective to simply tax the few oligopolies instead of relying on voluntary sign-ups? Perhaps what we need is not a voluntary legally binding contract but a legally binding contract, period, regardless of the companies' willingness?

Thank you for the post; it was certainly very interesting to read. I learned a lot.