Andreas Mogensen, a Senior Research Fellow at the Global Priorities Institute, has just published a draft of a paper on "Maximal Cluelessness". Abstract:

I argue that many of the priority rankings that have been proposed by effective altruists seem to be in tension with apparently reasonable assumptions about the rational pursuit of our aims in the face of uncertainty. The particular issue on which I focus arises from recognition of the overwhelming importance and inscrutability of the indirect effects of our actions, conjoined with the plausibility of a permissive decision principle governing cases of deep uncertainty, known as the maximality rule. I conclude that we lack a compelling decision theory that is consistent with a long-termist perspective and does not downplay the depth of our uncertainty while supporting orthodox effective altruist conclusions about cause prioritization.
Comments (36)

[anonymous] · 5y

I'm pretty sceptical of arguments for cluelessness. Some thoughts:

  • Knightian uncertainty seems to me never rational. There are strong arguments that credence functions should be sharp. Even if you can only bound your credences with very broad intervals, it seems like you would never be under Knightian uncertainty given your information: your credal state is always somewhere between 0 and 1, and surely your mean estimate will differ between different problems.
  • Similar arguments for complex cluelessness also seem to apply to my own decisions about what would be in my rational self-interest to do. Nevertheless, I will not be wandering blindly into the road outside my hotel room in 10 minutes.
  • I don't see how you could make a general argument for cluelessness with respect to all decisions made by the community. You could make an argument that the sign of the expected benefits of EA actions is much more uncertain than has been acknowledged. I don't see how this could ever generalise to an argument that all of our decisions are clueless, since the level of uncertainty will always be almost entirely dependent on the facts about the particular case. Why would uncertainty about the effects of AMF have any bearing on uncertainty about the effects of MIRI or the Clean Air Task Force?
  • Cluelessness seems to imply that altruists should be indifferent between all possible actions that they can take. Is this implication of the view embraced?
  • Related to the above, in the AMF vs Make-A-Wish Foundation example, I don't actually agree that we are as uncertain as suggested. E.g., you list studies citing different effects of life saving on fertility, saying "Unfortunately, the studies just noted are of different kinds (cross-country comparisons, panel studies, quasi-experiments, large-sample micro-studies), with different strengths and weaknesses, making it difficult to draw firm conclusions". This seems to be asking for the reaction "what are we to do in the face of all this methodological complexity?" But an economist would actually have an answer to this - cross-country comparisons with cross-sectional data are out of fashion, for example.
  • Overall, arguments about cluelessness seem to merely reassert that the world is complex and we should think carefully before acting. I don't see how it points to some deep permanent feature of our epistemic situation.
Similar arguments for complex cluelessness also seem to apply to my own decisions about what would be in my rational self-interest to do. Nevertheless, I will not be wandering blindly into the road outside my hotel room in 10 minutes.

I appreciate you making this point, as I think it's interesting and I hadn't come across it before. However, I don't currently find it that compelling, for the following reasons [these are sketches, not fully fleshed out arguments I expect to be able to defend in all respects]:

  • I think there is ample room for biting the bullet regarding rational self-interest, while avoiding counter-intuitive conclusions. To explain, I think that the common sense justification for not wandering blindly into the road simply is that I currently have a preference against being hit by a car. I don't think the intuition that it'd be crazy to wander blindly into the road is driven by any theory that appeals exclusively to long-term consequences on my well-being, nor do I think it needs such a philosophical fundament. I think a theory of self-interest that just appeals to consequences for my time-neutral lifetime wellbeing is counter-intuitive and faced with various problems anyway (see e.g. the first part of Reasons and Persons). If it was the case that I'm clueless about the long-term consequences of my actions on my wellbeing, I think that would merely be yet another problem for the rational theory of self-interest; but I was inclined to discard that theory anyway, and don't think that discarding it would undermine any of my common sense beliefs. So while I agree that there might be a problem analogous to cluelessness for philosophers who want to come up with a defensible theory of self-interest, I don't think we get a common-sense-based argument against cluelessness.
  • However, I think one may well be able to dodge the bullet, at least to some extent. I think it's simply not true that we are as clueless about our own future wellbeing as we are about the consequences of our actions for long-run impartial goodness, for the following reasons:
    • Roughly speaking, my own future predictable influence over my own future wellbeing is much greater than my own future influence over impartial goodness. Whatever happens to me, I'll know how well off I am, and I'll be able to react to it; something pretty drastic would need to happen to have a very large and lasting effect on my wellbeing. By contrast, I usually simply won't know how impartial goodness has changed as a result of my actions, and even if I did, it would often be beyond my power to do something about it. If the job I enthusiastically took 10 years ago is now bad for me, I can quit. If the person I rescued from drowning when they were a child is now a dictator wrecking Europe, that's too bad but I'm stuck with it.
    • The time horizon is much shorter, and there is limited opportunity for the indirect effects of my actions to affect me. Suppose I'll still be alive in 60 years. It will, e.g., still be true that my actions will have far-reaching effects on the identities of people that will be born in the next 60 years. However, the number of identities affected, and the indirect effects flowing from this, will be much more limited compared to time horizons that are orders of magnitude longer; more importantly, most of these indirect effects won't affect me in any systematic way. While there will be some effects on me depending on which people will be born in, say, Nepal in 40 years, I think the defence that these effects will "cancel out" in expectation works, and similarly for most other indirect effects on my wellbeing.
    • Maybe most importantly: I think that a large part of the force of the "new problem of cluelessness" (i.e., instances where the defence that "indirect effects cancel out in expectation" doesn't work) comes from the contingent fact that (according to most plausible axiologies) impartial goodness is freaking weird. I'm not sure how to make this precise, but it seems to me that an important part of the story is that impartial goodness, unlike my own wellbeing, hinges on heavy-tailed phenomena spread out over different scales - e.g., maybe I'm just barely able to guess the sign of the impact of AMF on population size, but assessing the impacts on impartial goodness would also require me to assess the impacts of population size on economic growth, technological progress, the trajectory of farmed animal populations, risks of human extinction, etc. That is, small indirect net effects of my actions on impartial goodness might blow up due to their effects on much larger known and unknown levers, giving rise to the familiar phenomenon of "crucial considerations." For all I know, in an idealized epistemic state I'd realize that the effects of my actions are dominated by their indirect effects on electron suffering (using this as a token example of "something really weird I haven't considered", not to suggest we ought to in fact take electron suffering seriously) - by contrast, I don't think there could be similar "crucial considerations" for my own well-being. It is not plausible that, say, actually, the effect of my walking into the road on my wellbeing will be dominated by the increased likelihood of seeing a red car; it seems that the "worst" kind of issues I'll encounter are things like "does drinking one can of Coke Zero per day increase or decrease my life expectancy?", which is a challenging but not hopeless problem; it's something I'm uncertain, but not clueless about.

Very interesting comment!

To explain, I think that the common sense justification for not wandering blindly into the road simply is that I currently have a preference against being hit by a car.

I don't think this defence works, because some of your current preferences are manifestly about future events. Insisting that all these preferences are ultimately about the most immediate causal antecedent (1) misdescribes our preferences and (2) lacks a sound theoretical justification. You may think that Parfit's arguments against S provide such a justification, but this isn't so. One can accept Parfit's criticism and reject the view that what is rational for an agent is to maximize their lifetime wellbeing, accepting instead a view on which it is rational for the agent to satisfy their present desires (which, incidentally, is not Parfit's view). This in no way rules out the possibility that some of these present desires are aimed at future events. So the possibility that you may be clueless about which course of action satisfies those future-oriented desires remains.

Thank you for raising this, I think I was too quick here in at least implicitly suggesting that this defence would work in all cases. I definitely agree with you that we have some desires that are about the future, and that it would misdescribe our desires to conceive all of them to be about present causal antecedents.

I think a more modest claim I might be able to defend would be something like:

The justification of everyday actions does not require an appeal to preferences with the property that, epistemically, we ought to be clueless about their content.

For example, consider the action of not wandering blindly into the road. I concede that some ways of justifying this action may involve preferences about whose contents we ought to be clueless - perhaps the preference to still be alive in 40 years is such a preference (though I don't think this is obvious, cf. "dodge the bullet" above). However, I claim there would also be preferences, sufficient for justification, that don't suffer from this cluelessness problem, even though they may be about the future - perhaps the preference to still be alive tomorrow, or to meet my friend tonight, or to give a lecture next week.

[anonymous] · 5y

On the biting the bullet answer, that doesn't seem plausible to me. The preferences we have are a product of the beliefs we have about what will make our lives better over the long run. My preference not to smoke is entirely a product of the fact that I believe that it will increase my risk of premature death. Per proponents of cluelessness, I could argue "maybe it will make me look cool to smoke, and that will increase my chances of getting a desirable partner" or something like that. In that sense the sign of the effect of smoking on my own interests is not certain. Nevertheless, I think it is irrational to smoke. I don't think a Parfitian understanding of identity would help here, because then my refusal to smoke would be altruistic - I would be helping out my future self.

The dodge the bullet answer is more plausible, and I may follow up with more later.

The preferences we have are a product of the beliefs we have about what will make our lives better over the long run. My preference not to smoke is entirely a product of the fact that I believe that it will increase my risk of premature death.

I think this is precisely what I'm inclined to dispute. I think I simply have a preference against premature death, and that this preference doesn't rest on any belief about my long-run wellbeing. I think my long-run wellbeing is way too weird (in the sense that I'm doing things like hyperbolic discounting anyway) and uncertain to ground such preferences.

Nevertheless, I think it is irrational to smoke.

Maybe this points to a crux here: I think on sufficiently demanding notions of rationality, I'd agree with you that considerations analogous to cluelessness threaten the claim that smoking is irrational. My impression is that perhaps the key difference between our views is that I'm less troubled by this.

I don't think a Parfitian understanding of identity would help here

I'm inclined to agree. Just to clarify though, I wasn't referring to Parfit's claims about identity, which if I remember correctly are in the second or third part of Reasons and Persons. I was referring to the first part, where he among other things discusses what he calls the "self-interest theory S" (or something like this).

I don't see how you could make a general argument for cluelessness with respect to all decisions made by the community.

I agree. More specifically, I think the argument for cluelessness is defeatable, and tentatively think that we know of defeaters in some cases. Concretely, I think that we are justified in believing in the positive expected value of (i) avoiding human extinction and (ii) acquiring resources for longtermist goals. (Though I do think that for neither of these is it obvious that the expected value is positive, and that considering either to be obvious would be a serious epistemic error.)

[...] I don't see how this could ever generalise to an argument that all of our decisions are clueless, since the level of uncertainty will always be almost entirely dependent on the facts about the particular case. Why would uncertainty about the effects of AMF have any bearing on uncertainty about the effects of MIRI or the Clean Air Task Force?

I think you overstate your case here. I agree in principle that "the level of uncertainty will always be almost entirely dependent on the facts about the particular case," and so that whether we are clueless about any particular decision is a contingent question. However, I think that inspecting the arguments for cluelessness about, say, the effects of donations to AMF does suggest that cluelessness will be pervasive, for reasons we are in principle able to isolate. To name just one example, many actions will have a small but in expectation non-zero, highly uncertain effect on the pace of technological growth; this in turn will have an in expectation non-zero, highly uncertain net effect on the risk of human extinction, which in turn ... - I believe this line of reasoning alone could be fleshed out into a decisive argument for cluelessness about a wide range of decisions.
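To make the structure of that argument concrete, here is a minimal Monte Carlo sketch (all numbers invented for illustration, not estimates of anything): a small, sign-uncertain effect on growth, chained through a sign-uncertain lever on extinction risk, yields a total effect whose sign is close to a coin flip.

```python
import random

# Toy model with invented numbers: a small, uncertain effect on technological
# growth, times an uncertain-sign effect of growth on extinction risk, times
# the large value at stake in the long-run future.
random.seed(0)
samples = []
for _ in range(10_000):
    effect_on_growth = random.gauss(0.001, 0.01)  # small, sign uncertain
    growth_to_xrisk = random.gauss(0.0, 1.0)      # sign genuinely unclear
    value_at_stake = 1e6                          # large long-run stakes
    samples.append(effect_on_growth * growth_to_xrisk * value_at_stake)

share_positive = sum(s > 0 for s in samples) / len(samples)
print(f"P(net effect > 0) ~ {share_positive:.2f}")  # hovers around 0.5
```

The point is only structural: multiplying a near-zero-mean estimate through larger levers leaves the sign of the total effect close to a coin flip.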

Hi Max,

Concretely, I think that we are justified in believing in the positive expected value of (i) avoiding human extinction and (ii) acquiring resources for longtermist goals.

I would be curious to know whether you still basically believe this, and whether you have meanwhile become convinced of the robustness of other actions.

(personal views only) In brief, yes, I still basically believe both of these things; and no, I don't think I know of any other type of action that I'd consider 'robustly positive', at least from a strictly consequentialist perspective.

To be clear, my belief regarding (i) and (ii) is closer to "there exist actions of these types that are robustly positive", as opposed to "any action that purports to be of one of these types is robustly positive". E.g., it's certainly possible to try to reduce the risk of human extinction but for that attempt to be ineffective or even counterproductive (i.e., to on net increase the risk of extinction, or to otherwise cause significant harms such that I'd consider the action impermissible); it's possible for resources that were acquired for impartial welfarist purposes to eventually be misused; etc.

I made some nuanced updates about "acquiring resources for longtermist goals", but they are mostly things like me having become more or less excited about particular examples/substrategies, me having somewhat richer views on some pitfalls of that strategy (though I don't think I became aware of qualitatively 'new' pitfalls), etc., as opposed to sweeping updates about that whole class of actions and whether they can be robustly positive.

Thanks! I think I have converged towards a similar view.

[anonymous] · 5y

On the latter, yes that is a good point - there are general features at play here, so I retract my previous comment. However, it still seems true that your rational credal state will always depend to a very significant extent on the particular facts.

I find the use of the long-termist point of view a bit weird as applied to the AMF example. AMF is not usually justified from a long-termist point of view, so it is not really surprising that its benefits seem less obvious when you consider it from that point of view.


AMF is not usually justified from a long-termist point of view, so it is not really surprising that its benefits seem less obvious when you consider it from that point of view.

I agree in principle. However, there are a few other reasons why I believe making this point is worthwhile:

  • GiveWell has in the past advanced an optimistic view about the long-term effects of economic development.
  • Anecdotally, I know many EAs who both endorse long-termism and donate to AMF. In fact, my guess is that a majority of long-termist EAs donate to organizations that have been selected for their short-term benefits. As I say in another comment, I'm not sure this is a mistake because 'symbolic' considerations may outweigh attempts to directly maximize the impact of one's donations. However, it at least suggests that a conversation about the long-termist benefits of organizations like AMF is relevant for many people.
  • More broadly, at the level of organizations and norms, various actors within EA seem to endorse the conjunction of longtermism and recommending donations to AMF over donations to the Make-A-Wish foundation. It's unclear whether this is some kind of political compromise, a marketing tool, or done because of a sincere belief that they are compatible.
  • The point might serve as guidance for developing the ethical and epistemological foundations of EA. To explain, we might simply be unwilling to give up our intuitive commitments and insist that a satisfying ethical and epistemological basis would make longtermism and "AMF over Make-A-Wish" compatible. This would then be one criterion to reject proposed ethical or epistemological theories.
Cluelessness seems to imply that altruists should be indifferent between all possible actions that they can take. Is this implication of the view embraced?

As I say in another comment, I think that a few effects - such as reducing the risk of human extinction - can be rescued from cluelessness. Therefore, I'm not committed to being indifferent between literally all actions.

I do, however, think that consequentialism provides a reason for only very few actions. In particular, I do not think there is a valid argument for donating to AMF instead of the Make-a-Wish Foundation based on consequentialism alone.

This is actually one example of where I believe cluelessness has practical import. Here is a related thing I wrote a few months ago in another discussion:

"Another not super well-formed claim:
- Donating 10% of one's income to GiveWell charities, prioritizing reducing chicken consumption over reducing beef consumption, and similar 'individual' actions by EAs that at first glance seem optimized for effectiveness are valuable almost entirely for their 'symbolic' and indirect benefits, such as signalling and maintaining community norms.
- Therefore, they are analogous to things like: environmentalists refusing to fly or reducing the waste produced by their household; activists participating in a protest; party members attending weekly meetings of their party; religious people donating money for missionary purposes or building temples.
- Rash criticism of such actions in other communities that appeals to their direct short-term consequences is generally unjustified, and based on a misunderstanding of the role of such actions both within EA and in other communities. If we wanted to assess the 'effectiveness' of these other movements, the crucial question to ask (ignoring higher-level questions such as cause prioritization) about, say, an environmentalist insisting on always switching off the lights when they leave a room, would not be how much CO2 emissions are avoided; instead, the relevant questions would be things like: How does promoting a norm of switching off lights affect that community's ability to attract followers and other resources? How does promoting a norm of switching off lights affect that community's actions in high-stakes situations, in particular when there is strategic interdependence -- for example, what does it imply about the psychology and ability to make credible commitments of a Green party leader negotiating a coalition government?
- It is not at all obvious that promoting norms that are ostensibly about maximizing the effectiveness of all individual 'altruistic' decisions is an optimal or even net positive choice for maximizing a community's total impact. (Both because of and independently of cluelessness.) I think there are relatively good reasons to believe that several EA norms of that kind actually have been impact-increasing innovations, but this is a claim about a messy empirical question, not a tautology."

Thanks, Max, this is interesting.

Donating 10% of one's income to GiveWell charities, prioritizing reducing chicken consumption over reducing beef consumption, and similar 'individual' actions by EAs that at first glance seem optimized for effectiveness are valuable almost entirely for their 'symbolic' and indirect benefits, such as signalling and maintaining community norms.

Suppose that it is true that the value of those actions comes almost entirely from their symbolic benefits. If so, then a further question is whether those symbolic benefits are dependent on the belief that that is not the case; i.e. the belief that the value of those actions, on the contrary, largely comes from their direct and non-symbolic effects. (Analogously to how indirect benefits of a religion on well-being or community cohesion may be dependent on the false belief that the religion's metaphysical claims are true.) It could be that making it widely known that the value of those actions comes almost entirely from their symbolic benefits would undermine those benefits (maybe even turn them to harms; e.g. because knowingly doing something with low direct benefits for symbolic reasons would be seen as hypocritical). Whether that's the case depends on the social context and doesn't seem straightforward to determine.

I agree this is a non-obvious question. There is a good reason why consequentialists at least since Sidgwick have asked to what extent the correct moral theory might require keeping its own principles secret.

Hi Max, I think the link is broken. Maybe it is this one?

I don't remember, I'm afraid. I don't recall having seen the article you link to, so I doubt it was that. Maybe it was this one.

Yes, though it seems to me that EAs largely think one shouldn't (cf. that Integrity is one of "the guiding principles of effective altruism" as understood by a number of organisations). (Not that you would suggest otherwise.)

A tangentially related comment. What symbolic benefits or harms our actions have will be dependent on our norms, and these norms will to at least some extent be malleable. Jason Brennan has argued that we should judge such symbolic norms by their consequences.

If you’ve read Markets without Limits or “Markets without Symbolic Limits,” you’ve seen one of the moves I end up making here. We imbue the right to vote with all sorts of symbolic value–we treat it as a metaphorical badge of equality and full membership. But we don’t have to do that. The rest of you could and should think of political power the way I do, that having the right to vote has no more inherent special status than a plumbing license. Further, I argue that we can judge semiotic/symbolic norms by their consequences. In this case, if it turns out that epistocracy produces more substantively just results than democracy, this would mean we’re obligated to change the semiotics we attach to the right to vote, not that we’re obligated to stick with democracy because the right to vote has special meaning. I push hard on the claim that it’s probably just a contingent social construction that we imbue the right to vote with symbolic value. At least, no one has successfully shown otherwise.

So, we shouldn't just take symbolic benefits into account when we prioritise what action to take, but we should also consider whether to change our symbolic norms, so that the symbolic benefits (which are a consequence of those norms) change. Brennan argues that if epistocracy produces greater direct benefits than democracy, then we should change our symbolic norms so that democracy doesn't yield greater symbolic benefits than epistocracy. Similarly, one could argue that if some effective altruist intervention produces greater direct benefits than some other effective altruist intervention (say diet change), then we should change our symbolic norms so that the latter doesn't yield greater symbolic benefits than the former.

[Edit: I realise now that the last paragraph in your above comment touches on these issues.]

Thanks for this! - My tentative view is that cluelessness is an important issue with practical implications, and so I'm particularly interested in thoughtful arguments for opposing views.

I'll post some reactions in separate comments to facilitate discussion.

Knightian uncertainty seems to me never rational. There are strong arguments that credence functions should be sharp. [...]

I agree that there are strong arguments that credence functions should be sharp. So I don't think the case for cluelessness is a slam dunk. (Granting that, roughly speaking, considering cluelessness to be an interesting problem commits one to a view using non-sharp credence functions. I'm not in fact sure if one is thus committed.) It just seems to me that the arguments for taking cluelessness seriously as a problem are stronger. Still, I'm curious what you think the best arguments for credence functions being sharp are, or where I can read about them.

[I know I'm late to the party but...]

I'm certainly not an expert here, and I think my thinking is somewhat unclear, and my explanation of it likely will be too. But I share the sense that Knightian uncertainty can't be rational. Or more specifically, I have a sense that in these sorts of discussions, a lot of the work is being done by imprecise terms that imply a sort of crisp, black-and-white distinction between something we could call "regular" uncertainty and something we could call "extreme"/"radical"/"unquantifiable" uncertainty, without this distinction being properly made explicit or defended.

For example, in Hilary Greaves's paper on cluelessness (note: similar thoughts from me would apply to Mogensen's paper, though explained differently), she discusses cases of "simple cluelessness" and then argues they're not really a problem, because in such cases the "unforeseeable effects" cancel out in expectation, even if not in reality. E.g.,

While there are countless possible causal stories about how helping an old lady across the road might lead to (for instance) the existence of an additional murderous dictator in the 22nd century, any such story will have a precise counterpart, precisely as plausible as the original, according to which refraining from helping the old lady turns out to have the consequence in question; and it is intuitively clear that one ought to have equal credences in such precise counterpart possible stories.

Greaves is arguing that we can therefore focus on whether the "foreseeable effects" are positive or negative in expectation, just as our intuitions would suggest.

I agree with the conclusion, but I think the way it's juxtaposed with "complex cluelessness" (which she does suggest may be a cause for concern) highlights the sort of unwarranted (and implicit) sharp distinctions between "types" of uncertainty which I think are being made.

The three key criteria Greaves proposes for a case to involve complex cluelessness are:

(CC1) We have some reasons to think that the unforeseeable consequences of A1 would systematically tend to be substantially better than those of A2;
(CC2) We have some reasons to think that the unforeseeable consequences of A2 would systematically tend to be substantially better than those of A1;
(CC3) It is unclear how to weigh up these reasons against one another.

I think all of that actually applies to the old lady case, just very speculatively. One reason to think CC1 is that the old lady and/or anyone witnessing your kind act and/or anyone who's told about it could see altruism, kindness, community spirit, etc. as more of the norm than they previously did, and be inspired to act similarly themselves. When they act similarly themselves, this further spreads that norm. We could tell a story about how that ripples out further and further and creates a huge amount of additional value over time.

Importantly, there isn't a "precise counterpart, precisely as plausible as the original", for this story. That'd have to be something like people seeing this act and therefore thinking unkindness, bullying, etc. are more the norm than they previously thought they were, which is clearly less plausible.

One reason to think CC2 for the old lady case could jump off from that story; maybe your action sparks ripples of kindness, altruism, etc., which leads to more people donating to GiveWell-type charities, which (perhaps) leads to increased population (via reduced mortality), which (perhaps) leads to increased x-risk (e.g., via climate change or more rapid technological development), which eventually causes huge amounts of disvalue.

I'd argue you again can't tell a precise counterpart story that's precisely as plausible as this, for reasons very similar to those covered in both Greaves's and Mogensen's papers - there are separate lines of evidence and argument for GiveWell-type charities leading to increased population vs them leading to decreased population, and for increased population increasing vs decreasing x-risk. (And again, it seems less plausible that witnessing your good deed would make people less likely to donate to GiveWell charities than more likely - or at least, a decrease would occur via different mechanisms than an increase, and therefore not be a "precise counterpart" story.)

I think both of these "stories" I've told are extremely unlikely, and for practical purposes aren't worth bearing in mind. But they do seem to me to meet the criteria in CC1 and CC2. And it doesn't seem to me there's a fundamental difference between their plausibility and worthiness-of-attention and that of the possibility for donations to AMF to increase vs decrease x-risk (or other claimed cases of [complex] cluelessness). I think it's just a difference of degree - perhaps a large difference in degree, but degree nonetheless. I can't see how we could draw some clear line somewhere to decide what uncertainties can just be dealt with in normal ways and which uncertainties make us count as clueless and thus unable to use regular expected value reasoning.

(And as for CC3, I'd say it's at least slightly "unclear" how to weigh up these reasons against one another.)

I think my thoughts here are essentially a criticism of the idea of a sharp, fundamental distinction between "risk" and "Knightian uncertainty"*, rather than of Greaves and Mogensen's papers as a whole. That is, if we did accept as a premise that distinction, I think that most of what Greaves and Mogensen say seems like it may well follow. (I also found their papers interesting, and acknowledge that I'm very much not an expert here and all of this could be unfounded, so for all these reasons this isn't necessarily a case of me viewing their papers negatively.)

*I do think that those can be useful concepts for heuristic-type, practical purposes. I think we should probably act differently when our credences are massively less well-founded than usual, and probably be suspicious of traditional expected value reasoning then. But I think that's because of various flaws with how humans think (e.g., overconfidence in inside-view predictions), not because different rules fundamentally should apply to fundamentally different types of uncertainty.

[anonymous] · 3y

re: your old lady example: as far as I know, the recent papers (e.g. here) provide the following example: (1) either you help the old lady on a Monday or on a Tuesday (you must and can do exactly one of the two options). In this case, your examples for CC1 and CC2 don't hold. One might argue that the previous example was maybe just a mistake, and I find it very hard to come up with CC1 and CC2 for (1) if (supposedly) you don't know anything about Mondays or Tuesdays.

Interesting. I wonder if the switch to that example was because they had a similar thought to mine, or read that comment.

But I think I can make a similar point with the Monday vs Tuesday example. I also predict I could make a similar point with respect to any example I'm given.

This is because I do know things about Mondays and Tuesdays in general, as well as about other variables. If the papers argue we're meant to artificially suppose we know literally nothing at all about a given variable, that seems weird or question-begging, and irrelevant to actual decision-making. (Note that I haven't read the recent paper you link to.)

I could probably come up with several stories for the Monday vs Tuesday example, but my first thought is to make it connect to my prior stories so it can reuse most of the reasoning from there, and to do that via social media. Above, I wrote:

One reason to think CC1 is that the old lady and/or anyone witnessing your kind act and/or anyone who's told about it could see altruism, kindness, community spirit, etc. as more of the norm than they previously did, and be inspired to act similarly themselves. When they act similarly themselves, this further spreads that norm. We could tell a story about how that ripples out further and further and creates a huge amount of additional value over time.

This says people tend to use social media more on Tuesday and Wednesday than on Monday. I think we therefore have some reason to believe that, if I help an old lady cross the road on Tuesday rather than Monday, it's slightly more likely that someone will post about that on social media, and/or use social media in a slightly more altruistic, kind, community-spirit-y way than they otherwise would've. (Because me doing this on Tuesday means they're slightly more likely to be on social media while this kind deed is relatively fresh in their minds.) This could then further spread those norms (compared to how much they'd be spread if we helped on Monday), and we could tell a story about how that ripples out further etc.

Above, I also wrote:

One reason to think CC2 for the old lady case could jump off from that story; maybe your action sparks ripples of kindness, altruism, etc., which leads to more people donating to GiveWell-type charities, which (perhaps) leads to increased population (via reduced mortality), which (perhaps) leads to increased x-risk (e.g., via climate change or more rapid technological development), which eventually causes huge amounts of disvalue.

I would now say exactly the same thing is true for the Monday vs Tuesday example, given my above argument for why the norms might be spread more if we help on Tuesday rather than Monday.

(We could also probably come up with stories related to amounts of traffic on Monday vs Tuesday - e.g., the old lady may be likelier to die if un-helped on one day, or more people may be delayed. Or related to people tending to be a little happier or sadder on Monday. Or related to what we ourselves predict we'll do with our time on Monday or Tuesday, which we probably would know about. Or many other things.)

As before:

I think both of these "stories" I've told are extremely unlikely, and for practical purposes aren't worth bearing in mind. But they do seem to me to meet the criteria in CC1 and CC2. 

[anonymous] · 3y

Sorry, I don't have the time to comment in depth. However, I think if one agrees with cluelessness, then this isn't really an objection; you might even extend their worries by saying that "almost everything has 'asymmetric uncertainty'". I would be interested in your extension of your last sentence: "They are extremely unlikely and thus not worth bearing in mind". Why is this true?

I would be interested in your extension of your last sentence: "They are extremely unlikely and thus not worth bearing in mind". Why is this true?

When I said "I think both of these "stories" I've told are extremely unlikely, and for practical purposes aren't worth bearing in mind", the bolded bit meant that I think a person will tend to better achieve their goals (including altruistic ones) if they don't devote explicit attention to such (extremely unlikely) "stories" when making decisions. The reason is essentially that one could generate huge numbers of such stories for basically every decision. If one tried to explicitly think through and weigh up all such stories in all such decision situations, one would probably become paralysed. 

So I think the expected value of making decisions before and without thinking through such stories is higher than the expected value of trying to think through such stories before making decisions.

In other words, the value of information one would be expected to get from spending extra time thinking through such stories is probably usually lower than the opportunity cost of gaining that information (e.g., what one could've done with that time otherwise).
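As a minimal sketch of that comparison (purely illustrative numbers, not estimates of anything):

```python
# Value-of-information comparison with invented numbers: deliberating over
# far-fetched stories pays off only if the chance they flip the decision,
# times the value of a correct flip, exceeds the cost of the time spent.
p_flip = 0.001              # chance extra deliberation changes the decision
gain_if_flip = 10.0         # value gained if the changed decision is better
deliberation_cost = 0.5     # value of what the time could do instead

expected_voi = p_flip * gain_if_flip
print(f"expected VOI = {expected_voi:.3f} vs cost = {deliberation_cost}")
# 0.010 < 0.5: on these made-up numbers, deciding without the extra
# deliberation has higher expected value.
```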

Disclaimer: Written on low sleep, and again reporting only independent impressions (i.e., what I'd believe before updating on the fact that various smart people don't share my views on this). I also shared related thoughts in this comment thread.

I agree that one way someone could respond to my points is indeed by saying that everything/almost everything involves complex cluelessness, rather than that complex cluelessness isn't a useful concept. 

But if Greaves introduces complex cluelessness by juxtaposing it with simple cluelessness, yet the examples of simple cluelessness actually meet their definition of complex cluelessness (which I think I've shown), I think this provides reason to pause and re-evaluate the claims. 

And then I think we might notice that Greaves suggests a sharp distinction between simple and complex cluelessness. And also that she (if I recall correctly) arguably suggests homogeneity within each type of cluelessness - i.e., suggesting all cases of simple cluelessness can be dealt with by just ignoring the possible flow-through effects that seem symmetrical, while we should search for a type of approach to handle all cases of complex cluelessness. (But this latter point is probably debatable.)

And we might also notice that the term "cluelessness" seems to suggest we know literally nothing about how to compare the outcomes. Whereas I've argued that in all cases we'll have some information relevant to that, and the various bits of information will vary in their importance and degree of uncertainty.

So altogether, it would just seem more natural to me to say:

  • we're always at least a little uncertain, and often extremely uncertain, and often somewhere in between
  • in theory, the "correct" way to reason is basically expected value theory, using all the scraps of evidence at our disposal, and keeping track of how high or low the resilience of our credences is (see the toy illustration after this list)
  • in practice, we should do something sort of like that, but with a lot of caution and heuristics (given that we're dealing with limited data, computational constraints, biases, etc.).
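For what it's worth, here is a toy illustration of the "resilience" idea in the second bullet (my own construction, not from the papers, using standard Beta-Bernoulli updating): two agents share the same credence of 0.4, but the same new observation moves them very differently.

```python
# Two agents with credence 0.4 in some proposition, backed by different
# amounts of prior evidence, update on one confirming observation.
def posterior_mean(a: float, b: float, successes: int, failures: int) -> float:
    """Mean of a Beta(a, b) prior after the given Bernoulli observations."""
    return (a + successes) / (a + b + successes + failures)

low_resilience = (2.0, 3.0)       # Beta(2, 3): mean 0.4, thin evidence
high_resilience = (200.0, 300.0)  # Beta(200, 300): mean 0.4, lots of evidence

for name, (a, b) in [("low resilience", low_resilience),
                     ("high resilience", high_resilience)]:
    print(name, round(posterior_mean(a, b, successes=1, failures=0), 3))
# low resilience 0.5    <- credence moves a lot on one observation
# high resilience 0.401 <- credence barely moves
```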

I do think there are many important questions to be investigated with regards to how best to make decisions under conditions of extreme uncertainty, and that this becomes especially relevant for people who want to have a positive impact on the long-term future. But it doesn't seem to me that the idea of complex cluelessness is necessary or useful in posing or investigating those questions. 

Also, when reading Greaves and Mogensen's papers, I was reminded of the ideas of cluster thinking (also here) and model combination. I could be drawing faulty analogies, but it seemed like those ideas could be ways to capture, in a form that can actually be readily worked with, the following idea (from Greaves; the same basic concept is also used in Mogensen):

in the situations we are considering, instead of having some single and completely precise (real-valued) credence function, agents are rationally required to have imprecise credences: that is, to be in a credal state that is represented by a many-membered set of probability functions (call this set the agent’s ‘representor’)

That is, we can consider each probability function in the agent's representor as one model, and then either qualitatively use Holden's idea of cluster thinking, or get a weighted combination of those models. Then we'd actually have an answer, rather than just indifference.
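As a minimal sketch of what I have in mind (my own construction with invented numbers, not anything from the papers): treat each probability function in the representor as one model, compute the expected value of an action under each, and combine the models with weights rather than declaring indifference.

```python
# Each entry stands for one probability function in the representor, reduced
# here to the expected value it assigns to some action, plus a model weight.
models = [
    {"ev": +3.0, "weight": 0.6},  # e.g. a model where AMF's long-run effect is positive
    {"ev": -1.0, "weight": 0.4},  # e.g. a model where it is negative
]

combined_ev = sum(m["weight"] * m["ev"] for m in models)
print(f"weighted-combination EV: {combined_ev:+.2f}")
# +1.40: an educated guess held with low resilience, rather than a refusal
# to compare the options at all.
```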

This seems like potentially "the best of both worlds"; i.e., a way to capture both of the following intuitively appealing ideas:

  • perhaps we shouldn't present singular, sharp credence functions over extremely hard-to-predict long-term effects
  • we can still make educated guesses like "avoiding extinction is probably good in expectation" and (perhaps) "giving to AMF is probably good in expectation".
    • (This second intuition can rest on ideas like "Yeah, ok, I agree that it's 'unclear' how to weigh up these arguments, but I weigh up arguments when it's unclear how to do so all the time. I'm still at least slightly more convinced by argument X, so I'm going to go with what it suggests, and just also remain extremely open to new evidence.")

Bet A: If H is true, you lose $10. Otherwise you win $15.

Bet B: If H is true, you win $15. Otherwise you lose $10.

First I’m going to offer you Bet A. Immediately after you decide whether to accept Bet A, I’m going to offer you Bet B.

Can't the sequence proposal be fixed by conditioning on the past and only considering future sequences of actions? Committing to rejecting both bets A and B is rationally impermissible if you will be offered both, since it's worse than accepting both; but after your decision on A, regardless of whether you accepted or rejected, it could be that both accepting and rejecting B are permissible at the same time. The fact that something was my past action shouldn't matter or prevent me from completing some particular sequence of actions that includes past actions; only my future prospects and future actions matter.

I think this makes sense for sharp probabilities, too: suppose you assign some sharp probability p ≤ 2/5 to H being true, and have already rejected A, even though it had positive expected value 15 - 25p > 0 (so this decision was irrational at the time). Then, since the expected value of B is 25p - 10 ≤ 0, it's permissible to reject B, and even required if the inequality is strict. You may be rationally required to complete a sequence of actions which was irrational before you started.
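Here is that arithmetic as a minimal sketch (payoffs taken from the bets quoted above):

```python
def ev_bet_a(p: float) -> float:
    """Expected value of Bet A given sharp credence p in H."""
    return -10 * p + 15 * (1 - p)  # = 15 - 25p

def ev_bet_b(p: float) -> float:
    """Expected value of Bet B given sharp credence p in H."""
    return 15 * p - 10 * (1 - p)   # = 25p - 10

for p in (0.2, 0.4, 0.5):
    print(f"p = {p}: EV(A) = {ev_bet_a(p):+.2f}, EV(B) = {ev_bet_b(p):+.2f}")
# For p <= 2/5, EV(A) > 0 (rejecting A was irrational) while EV(B) <= 0
# (rejecting B is now permissible), even though accepting both bets
# guarantees a net gain of $5 whatever H turns out to be.
```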

You can also apply Mogensen's maximality rule to sequences. Given some set of plausible probability distributions, if one sequence of actions X is better in expectation than another sequence Y under at least one distribution, and not worse under any other distribution, then X is strictly preferred to Y. If neither strict inequality holds between the two options and these are the only two options, then both are permissible. (We sacrifice the independence of irrelevant alternatives, since a third option could dominate one but not the other, only ruling out the dominated one.)
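A minimal sketch of that rule applied to sequences (my own toy implementation; the option names, representor, and numbers are illustrative):

```python
# An option is permissible iff no alternative is at least as good in
# expectation under every distribution in the representor and strictly
# better under at least one.
def expected_value(payoffs: dict, dist: dict) -> float:
    return sum(dist[state] * value for state, value in payoffs.items())

def dominates(x: dict, y: dict, representor: list) -> bool:
    """x dominates y iff x is never worse and sometimes strictly better."""
    pairs = [(expected_value(x, d), expected_value(y, d)) for d in representor]
    return all(ex >= ey for ex, ey in pairs) and any(ex > ey for ex, ey in pairs)

def permissible(options: dict, representor: list) -> list:
    return [name for name, x in options.items()
            if not any(dominates(y, x, representor)
                       for other, y in options.items() if other != name)]

# Sequences over Bets A and B, with a representor of two distributions on H:
options = {
    "accept both": {"H": 5.0, "not-H": 5.0},  # -10 + 15 or 15 - 10
    "reject both": {"H": 0.0, "not-H": 0.0},
}
representor = [{"H": 0.3, "not-H": 0.7}, {"H": 0.5, "not-H": 0.5}]
print(permissible(options, representor))  # ['accept both']
```

On these stipulations, "reject both" is dominated and so ruled out, matching the point above that committing to rejecting both bets is impermissible.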

How important do you think non-sharp credence functions are to arguments for cluelessness being important? If you generally reject Knightian uncertainty and quantify all possibilities with probabilities, how diminished is the case for problematic cluelessness?

(Or am I just misunderstanding the words here?)

My belief that cluelessness is important is fairly independent of any specific philosophical/technical account of cluelessness. In particular, I don't think me changing my mind on whether credence functions have to be sharp would significantly change my views on the importance of cluelessness.

In this comment I've explained in more detail what I think about the relationship between the basic idea and specific philosophical theories trying to describe it.

(FWIW, I don't feel like I have a well-informed view on whether credence functions have to be sharp. If anything, I have a weak intuition that it's a bit more likely than not that I'd conclude they have to be if I spent more time looking into the question.)

Mogensen writes (p. 20):

We might be especially interested in assessing acts that are directly aimed at improving the long-run future of Earth-originating civilization...These might include efforts to reduce the risk of near-term extinction for our species: for example, by spreading awareness about dangers posed by synthetic biology or artificial intelligence.
The problem is that we do not have good evidence of the efficacy of such interventions in achieving their ultimate aims. Nor is such evidence in the offing. The idea that the future state of human civilization could be deliberately shaped for the better arguably did not take hold before the work of Enlightenment thinkers like Condorcet (1822) and Godwin (1793). Unfolding over timescales that defy our ability to make observations, efforts to alter the long-run trajectory of Earth-originating civilization therefore resist evidence-based assessment, forcing us to fall back on intuitive conjectures whose track record in domains that are amenable to evidence-based assessment is demonstrably poor (Hurford 2013). This is not a case where it can be reasonably claimed that there is good evidence, readily available, to constrain our decision making.

These concerns are forceful, but don't seem to generalize to all intervention types aimed at improving the long-term future. If one believes that the readily available evidence is insufficient to constrain our decision making, one still can accumulate resources to be disbursed at a later time when good enough evidence emerges. Although we may at present be radically uncertain about the sign and the magnitude of most far-future interventions, the intervention of accumulating resources for future disbursal does not itself appear to be subject to such radical uncertainty.

Robin Hanson, Paul Christiano, and others have made similar points in the past.

Hanson (2014):

This post describes attempts to help the future as speculative and non-robust in contrast to helping people today. But it doesn’t at all address the very robust strategy of simply saving resources for use in the future. That may not be the best strategy, but surely one can’t complain about its robustness.

Christiano (2014):

There is some debate about this question today, of whether there are currently good opportunities to reduce existential risk. The general consensus appears to be that serious extinction risks are much more likely to exist in the future, and it is ambiguous whether we can do anything productive about them today.

However, there does appear to be a reasonable chance that such opportunities will exist in the future, with significant rather than tiny impacts. Even if we don’t do any work to identify them, the technological and social situation will change in unpredictable ways. Even foreseeable technological developments over the coming centuries present plausible extinction risks. If nothing else, there seems to be a good chance that the existence of machine intelligence will provide compelling opportunities to have a long-term impact unrelated to the usual conception of existential risk (this will be the topic of a future post).

If we believe this argument, then we can simply save money (and build other forms of capacity) until such an opportunity arises.

By accumulating resources for the future, we give increased power to whatever decision-makers in the future we bequeath these resources to. (Whether these decision-makers are us in 20 years, or our descendants in 200 years.)

In a clueless world, why do we think that increasing their power is good? What if those future decision makers make a bad decision, and the increased resources we've given them mean the impact is worse?

In other words, if we are clueless today, why will we be less clueless in the future? One might hope cluelessness decreases monotonically over time, as we learn more, but so does the probability of a large mistake.

Indefinite accumulation of resources probably also increases the chance of being targeted by resource-seeking groups with military & political power.

Haven't read the full paper, but I'm recording some brief thoughts on cluelessness here for my own records. In a clueless world, the value of having an active EA-style movement that is at least partly longtermist may come from:

  • Having a group of people watching the world carefully for potential opportunities to reliably improve the long-term future, so that they can alert the wider world when something comes up that might not be seen by people interested in world events for non-longtermist reasons
  • Having a group of people developing relevant skills (which seems a bit different than "saving resources") in case such an opportunity appears, so that action can be taken more swiftly
  • Offering people with a common interest in longtermism a reason to spend time with each other and hang together; perhaps our research isn't particularly useful in a clueless world, but even people skeptical about their ability to have an impact now might find value in other activities (whether that's "writing fiction about existential risks" or "spending research effort on short-term causes as a way of having more certain impact, in case we don't become more clueful within our own lifetimes")

I'm sure these ideas aren't original, and (as with anything I write), I'd be glad to see links to places they've been expressed in a better way.

Thanks, looking forward to reading this. Here's an archived version.

Cluelessness deserves more attention in EA, especially from the longtermist contingent.
