
Anton Wille

6 karma · Joined Aug 2021

Comments (3)

Thank you for this article, I thoroughly enjoyed reading it and found it really illuminating!

The essentialization of Chinese people and the Chinese state seems to me particularly harmful, and it dominates a lot of the popular discussion and analysis of China. It doesn't help that there is a large language and cultural barrier to interacting with Chinese people, and that the Chinese state is quite opaque in how it organizes itself and acts. If you could recommend any additional literature or news sources from good and accessible China scholars writing in English, I'd be very grateful.


> “I’ve spent so much time learning about China that my grounding in methods and methodology is not in any way as solid as if I had studied anthropology and then taken on an interest in China. And it is … because the language is difficult, and because there is so much that we as China scholars need to know about China. So it’s actually a huge field. So it’s difficult to do both.”

This seems to be a sad but common issue in a lot of fields: those who should be natural allies don't really appreciate each other, because they compete for the same sources of funding and often arrive at different, even opposing, results in their research. Here in Germany, for example, the quantitative sociologists founded their own organization a while ago because they felt marginalized by the qualitatively oriented majority.

In the particular case of China studies, the case for collaboration seems quite obvious: if researchers with methodological expertise collaborate with those who have cultural and language expertise, both the sophistication and the quality of the research should increase.

Assuming that it costs around £6,000 to save a life, these 1-2 million come out to roughly 170-330 lives saved. EAs claim to hold charities to a very high standard when evaluating how money is spent; that standard shouldn't stop at the evaluators' own 'discretionary spending'.
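
(For the rough arithmetic behind that range, treating the 1-2 million as pounds, which is an assumption on my part: 1,000,000 / 6,000 ≈ 167 and 2,000,000 / 6,000 ≈ 333, hence roughly 170-330 lives.)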

Out of curiosity I fed it some questions from assignments I had during my undergrad. Predictably, it got questions asking for simple factual knowledge quite right. It also did quite well at delivering a skeleton for essays and reports. But it completely failed on more complex questions related to maths or algorithms: the answers still read really well, they were just completely wrong. My best guess is that we will all soon have AI assistants, which will certainly change both teaching and being taught, but I don't think we have to panic yet.