To what extent do you think the field of risk management is applicable to x-risks, and where is it most lacking?
The EA Forum could fairly easily collect some data on this by randomly prompting a subset of users, at the moment they up- or downvote, for feedback on the reasons behind their vote. Obviously this would need to be balanced against not causing too much friction for users.
I think this post could be improved by targeting the audience better. Maybe you could say at the start of this post why you believe this is relevant to EA?

EDIT to add: I have seen your section titled Context, but I don't think it properly introduces a broad and unfamiliar EA audience to why you think this might be relevant or important. Can you explain why you think disinformation and data privacy violations deserve more attention in the EA community?
I'd add that if you're interested in tech/AI policy, the Competition & Markets Authority (CMA) is a good, lesser-known place to consider.

EDIT to add: the CMA is also incubating the new Digital Markets Unit (DMU).
I don't think xuan's main point was about being charitable, although they had a few thoughts in that direction. More generally, trying to be charitable is usually good. Of course it will miss a point (what finite comment doesn't?), but maybe it makes another.
I appreciate you trying to bring the discussion towards what you see as the real reason privileged students hold lefty positions (subconscious social status jockeying), but I wonder if there's a more constructive way to speculate about this?

Maybe one prompt is: how would you approach a conversation with such a lefty friend to discover whether that is their reason or not?

You could be direct, put your cards on the table, say you think they are just interested in the social status stuff, and let them defend themselves (that's usually what happens when you attack someone's subconscious motivation, regardless of what's true). Or you could start by asking yourself: what if I'm wrong here? Is there another reason they might hold this position on this topic? That might lead you to ask questions about their reasons. You could test how load-bearing their explanations are by asking hypotheticals, or by asking them to be concrete and specific. Maybe you, or they, end up changing or modifying a position or belief, or at least have a good discussion, with at least one person leaving with more understanding than they came in with. In any case, I think a conversation that assumes good faith is more likely to be productive.

Circling back to the initial thing: I'm assuming that you do see the value in being charitable and assuming good faith in general, and just find it hard to practice in conversations where people are very attached to their positions. But let me know if not, i.e. if you genuinely think there is no point in being charitable (that would be our true disagreement, though this seems unlikely).
Please correct me if I've misunderstood you here.

Nitpick: you use terms people might not have heard of. If I look up 'Moloch' I don't immediately see the Scott Alexander article I think you have in mind, just a Wikipedia article about the god.
Even though I've come across these arguments before, something about the style of this piece made me feel a little uneasy and overwhelmed. I think it's that it raises many huge issues quickly, one after the other.

It's up to you of course, but consider adding a content warning at the top. Something to the effect of:

Warning: the following arguments may be worrying beyond the level that usefully motivates action. If you think this is a risk for you, be cautious about how, when and whether you read on.
That wasn't really what I was saying, and I don't think you're steelmanning the intersectionalist perspective, although I agree with your description of the crux. I think many (maybe most?) people who like intersectionality would agree that prioritization is sometimes necessary and useful.
An attempt to steelman intersectionality for a moment:

- problems are usually interwoven and complex
- separating problems from their contexts can cause more problems
- saying one problem is more important than another has negative side effects, because we are trying to fix a broken hammer with a broken hammer (that comparison culture is a cause of many problems is, I believe, a belief held by many progressives)
I'm unsure this is incompatible with prioritization, which in my view is simply a practical consequence of not having infinite resources. I think they'd agree, and would not take issue with, for example, someone dedicating their life solely to climate change, as long as that person did not go around saying climate change is more important than all the other important issues, and also saw how climate change relates to, for example, improving international governance or reducing corruption, and worked with those efforts rather than in competition with or undermining them.
I think viewing most intersectionality proponents as people who can never work on one thing because they literally need to address all problems at once is an overly literal interpretation, although it's possible to get this impression if there are a few loud proponents like this (I don't know enough to say).
The disagreement seems to be more about whether it is helpful to compare the importance of issues publicly. Comparing things, whilst necessary and important, can have side effects, such as making some people feel bad about the good thing they are doing because it isn't the best thing a person could in theory be doing. We are familiar with this from 80K's mistakes.
I was focusing more on the marketing side, like Cullen, and wondering whether worldview diversification might be a way to better connect with intersectionality proponents via a message like this:

Problems are complicated and sometimes entangled, and at a group level we can work on many at once; but our resources are finite, so when allocating them, trade-offs will need to be made.
Thanks for the article, which is interesting and well-written. I'm sure it will be useful as a reference for me in future conversations.

With reference to your section titled Incompatibility Between Intersectionality and Prioritization: how do you see worldview diversification fitting in?

To me, this perspective incorporates the value of diversifying across causes (which intersectionality protects) whilst still being realistic about actually getting things done (which prioritization protects). Under a worldview diversification lens, prioritization is less about pursuing one thing to the exclusion of all others, whilst still not going as far as saying all causes are equal and should have an equal place at the table.
Ah sorry, I misunderstood.
I agree with much of this answer. However, I'm not sure it's the lack of promise of scale that makes projects go unfunded; there may be other reasons. I'm also excited that EA Funds is now encouraging time-limited, all-in experiments.