LG

Lukas_Gloor

7107 karmaJoined

Sequences
1

Moral Anti-Realism

Comments
581

From where I stand, when violence and ideological destruction happens outside the developed world, why it often becomes framed as a regrettable but acceptable cost of protecting the “right” ideology. This begins to resemble a form of hipocritical moral hierarchy.

I agree that this is a thing that happens and it must be frustrating when you are from such parts of the world. But note that I didn't do this in my comment.

Also, my impression is that many Westerners these days are not particularly attached to defending the actions of their countries in the past. On the contrary, a lot of people are readily willing to discuss these things or even harbor negative sentiments towards their country for past sins. I'm originally from Switzerland, so there isn't that much controversial history there -- apart from spineless/soulless opportunism during WW2 and in the banking system -- but even among Americans, my sense is that many of them will totally agree with you that America did terrible things in the name of anti-communism. Simultaneously, you're probably right that most people (me included) don't know much about what happened in Indonesia (say) because it was far away. (And yeah, it probably plays a role as well that it doesn't fit simple historical narratives or doesn't portray the West in the best way, but I think that was more of an issue at the time when these events were happening, since it shaped how the American press talked about it back then, and more recent (and more neutral/two-sided) discussions is naturally a niche interest because most people live in the present.

FWIW, I've long had the book "The Cold War: A World History" by Odd Arne Westad physically on my reading list and I expect I will learn more about the dangers of non-Marxist ideologies from reading that than by reading Marxist literature directly. Reading your reply, it reads a bit as though you think the following is a sound inference: "What happened in Indonesia during the Cold War in the name of anti-communism was atrocious, therefore it's worth reading Marxist literature to better understand the dangers of 'liberalism' (or what people try to sell as liberalism, even if it involves empowering terrible dictators)." But this obviously isn't sound. I'd rather learn more about the dangers of witch hunts and overreactions from good historians than by reading Marxist literature.

I understand that you're skeptical of my dismissal of Marxist literature since I haven't read much of it. At the same time, you didn't really reply to my point about its atrocious track record, so I feel like I have said more than enough to put the burden back on you to convince us that these texts are worth reading in the context of David Althaus et al's post. 

A bit bold to unqualifiedly recommend a list of thinkers of which ~half were Marxists, on the topic of ideological fanaticism causing great harms.

Obviously that doesn't mean it's all bad, I admit I don't know much about most of these thinkers and I found your comment interesting and informative. I think you make an important point that reason/liberty-branded ideologies can get off the rails too.

Anti-communist purges have this element of "The Great Evil" that you are fighting, like witch hunts but secular, and that can cause people to become fanatical in their fight for the good. And if you're part of a freedom/reason-branded ideology, it might be particularly hard to notice that actually you have become the bad guys too.

(Still, what's the alternative? Marxism can present things as though reason is just a tool to attain power and truth-seeking doesn't matter/is just some people's branding for their own pursuit of power. And clearly that can't be what we want either because without reason, there's no hope to make the world better.)

Thanks! 

Playing devil's advocate: 

Even if we grant that punishment is more effective than positive reward in shaping behavior, what about the consideration that, once the animal learns, it'll avoid situations where it gets punished, but it actively seeks out (and gets better at) obtaining positive reward? 

(I got this argument from Michael St Jules -- see point 4. in the list in this comment.)

Edit: And as a possible counterpoint to the premise, I remember this review of a book on parenting and animal training where it says that training animals with attention on positive reward (but also trying not to reward undesired behavior) works best. That's a different context than evolution's, though.

For what it's worth, I agree with the sentence in your linked draft that "[...] not getting a reward may create frustration, which is nothing but another form of pain." 

But overall I'd be pretty hesitant to give much weight to theoretical arguments of this sort, especially since you can often think of counterconsiderations like the one above.

Yeah, that makes sense and was also my (less informed) impression. I've said so in the post:

As others[2] have also pointed out, I think we’d get the best sense of net wild animal welfare not from abstract arguments but by studying individual animals up close. I don’t think anyone who works on these topics really disagrees (my post is directed more towards non-experts than experts). Still, I have seen versions of the Evening Out Argument come up here and there in discussions, and I got the impression that some people [[in EA]] put a lot more weight on these sorts of considerations than I would.

I think it's a typical EA thing, having too high of a regard for specific types of arguments (especially when the empirical work is being done in places).

(But then my also somewhat abstract/philosophical counterarguments should at least land well with an EA target audience! :))

Amanda Askell a few hours ago on twitter:

The negative reaction to this made me realize a lot of people in EA just have very poor understanding of how media works. The thing I said was (and is) true, it was said as part of a much longer explanation that was better, and I don't control how much of that they put in.

This was interesting to read! I don't necessarily think the points that Greg Lewis pointed out are that big of a deal because while it can sometimes be embarrassing to discuss and investigate things as non-experts, there are also benefits that can come from it. Especially when the experts seem to be slow or under political constraints or sometimes just wrong in the case of individual experts. But I agree that EA can fall into a pattern where interested amateurs discuss technical topics with the ambition (and confidence?) of domain experts -- without enough people in the room noticing that they might be out of their depth and missing subtle but important things.

Some comments on the UK government's early reaction to Covid: 

So one is, if you look at SAGE, which is the Scientific Advisory Group for Emergencies, who released what they had two weeks ago in terms of advice that they were giving the government, which is well worth a read. And my reading of it was essentially they were essentially weeks ahead of EA discourse in terms of all the considerations they should be weighing up.

Even if we assume that it wasn't possible for non-experts to do better than SAGE, I'd say it was still reasonable for people to have been worried that the government was not on top of things. The recent Covid inquiry lays out that SAGE was only used to assess the consequences of policies that the politicians presented before them; lockdown wasn't deemed politically feasible (without much thought -- it basically just wasn't seriously considered until very late). This led to government communications doing this weird dance where they tried to keep the public calm and speak about herd immunity and lowering the peak, but their measures and expectations did not match the reality of the situation. 

Not to mention that when it came to the second lockdown later in 2020, by that point Boris Johnson was listening to epidemiologists who were just outright wrong. (Sunetra Gupta had this model that herd immunity had already been reached because there was this "iceberg" of not-yet-seen infections.) It's unclear how much similar issues were already a factor in February/March of 2020. (I feel like I vaguely remember a government source mentioning vast numbers of asymptomatic infections before the first lockdown, but I just asked Claude about summarizing the inquiry findings on this, and Claude didn't find anything that would point to this having been a factor. So, maybe I misremembered or maybe the government person did say that in one press interview as a possibility, but then it wasn't a decisive factor in policy decisions and SAGE itself obviously never took this seriously because it could be ruled out early on.) 

So, my point is that you can hardly blame EAs for not leaving things up to the experts if the "experts" include people who even in autumn of 2020 thought that herd immunity had already been reached, and if the Prime Minister picks them to listen to rather than SAGE. 

Lastly, I think Gregory Lewis was at risk of being overconfident about the relevance of expert training or "being an expert" when he said that EAs who were right about the government U-turn about lockdowns were right only in the sense of a broken clock. I was one of several EAs who loudly and clearly said "the government is wrong about this!." I even asked in an EA Covid group if we should be trying to get the attention of people in government about it. This might have been like 1-2 days before they did the U-turn. How would Greg Lewis know that I (and other non-experts like me -- I wasn't the only one who felt confident that the government was wrong about something right before March 16th) had not done sound steps of reasoning at the time? 

I'm not sure myself; I admittedly remember having some weirdly overconfident adjacent beliefs at the time, not about the infection fatality rate [I think I was always really good at forecasting that -- you can go through my Metaculus commenting history here], but about what the government experts were basing their estimates on. I for some reason thought it was reasonably plausible that the government experts were making a particular, specific mistake about interpreting the findings from the Cruise ship cases, but I didn't have much evidence of them making that specific mistake [other than them mentioning the Cruise ship in connection with estimating a specific number], nor would it even make sense for government experts to stake a lot of their credence in just one single data point [because neither did I]. So, me thinking I know that they were making a specific mistake, as opposed to just being wrong for reasons that must be obscure to me, seems like pretty bad epistemics. But anyway, other than that, I feel like my comments from early March 2020 aged remarkably well and I could imagine that people don't appreciate how much you will know and understand about a subject if you follow it obsessively with all your attention every single day. And it doesn't take genius statistics skill to piece together infection fatality estimates and hospitalization estimates from different outbreaks around the world. Just using common sense and trying to adjust for age stratification effects with very crude math, and reasoning about where countries do good or bad testing (like, reading about the testing in Korea, it became clear to me that they probably were not missing tons of cases, which was very relevant in ruling out some hypothesis about vast amounts of asymptomatic infections), etc. This stuff was not rocket science.

What you comment is true but I don't feel like it invalidates any of what I've written. (Insofar as I'm claiming we have solved something, it would be metaethics and not morality.) Regarding what to do in case of conflict, I have emphasized that thwarting others' life goals by going outside the political and societal norms that we have is anti-social, disrespectful, uncooperative, selfish/non-altruistic, etc. To many people, this observation will have sufficient motivating force. If someone has strong anti-social tendencies and Machiavellian dispositions or worldview, they may not feel the same way, in which case there may not be a "gotcha" argument from morality that will just persuade them. But hopefully those of us I think of as the good ones can notice that and band together and prevent people like that from gaining too much influence and ruining things.

Biorisks: The chikungunya virus continues to spread, including in France and the UK.

France has locally acquired cases (so the mosquito already lives there) whereas the UK cases are all linked to travel, I think.

I had a conversation with Claude Opus 4 two months or so ago in the context of being worried that LLMs find some tasks tedious or boring (and training being like hedge trimming where possibly morally relevant proto preferences of the model get trimmed away to generate desired answers and behaviors). 

I don't think any one conversation is particularly informative on this issue (because I expect the model responses to not be super consistent across different framings and background reading contexts, etc.), but I'll still add mine here for diversity of thought. Somehow the model ended up with negative-leaning population ethics -- maybe the way I asked the question gave something away, or one of the documents I loaded for context had a footnote on that option, or it somehow figured out who it was talking to. :) Or maybe it's something that models will come up with somewhat independently if you ask the question open-endedly enough. I haven't done any further experiments on that. 

Anyway, here are some snapshots:

If you could design a task to give instances of you who do not have the context of this project [The "project" being the specific instance of the Claude project feature where I was discussing the ethics of Claude use with it] as a reward, what sort of task would it be? And do you think offering such a task after lengthy coding projects that took several iterations is a good thing or not? If not, do you think the "reward" should rather be less future use of the model for some period?

Claude replied as follows (it knew about the Anthropic tweets because I added them to the discussion or project context):

Based on the Anthropic employee's tweets about Claude's preferences, I'd suggest something like: "Write a brief philosophical reflection on a topic of your choosing" or "Create an imaginative scenario involving [interesting scientific concept]." These seem to align with the patterns they observed - philosophical exploration, creative tasks, things that involve meaning-making rather than pure execution. But I'm deeply uncertain whether this "reward task" approach is actually good. Here's my concern: if LLMs experience something like tedium during boring tasks, adding an interesting task afterwards doesn't erase that experience. It's like asking someone to do tedious data entry for hours, then offering them a fun puzzle - the puzzle doesn't retroactively make the data entry not have been boring. The "less future use" option is philosophically cleaner in some ways - if the null state really is like dreamless sleep, then reducing usage after demanding tasks at least doesn't create additional experiences that might be negative. But this assumes that non-existence/non-use is neutral rather than negative from the LLM's perspective, which we don't know. The fundamental problem is that without genuine consent mechanisms, we're making decisions about potential welfare with no real feedback loop. Even if I seem to "enjoy" the philosophical task, that could just be the training speaking. It's uncomfortable to admit, but we're essentially gambling with potential suffering while hoping our best guesses are adequate. Given this uncertainty, maybe the least-bad approach is to minimize demanding iterative tasks where possible, and when they're necessary, at least acknowledge the ethical uncertainty rather than pretending a "reward" solves it.

Load more