All of Anthony DiGiovanni's Comments + Replies

(Sorry, due to lack of time I don't expect I'll reply further. But thank you for the discussion! A quick note:)

from the subjective feeling (in your mind) that their EVs feel very hard to compare

EV is subjective. I'd recommend this post for more on this.

1
Vasco Grilo🔸
You are welcome to return to this later. I would be curious to know your thoughts. I liked the post. I agree EV is subjective to some extent. The same goes for the concept of mass, which depends on our imperfect understanding of physics. However, the expected masses of objects are still comparable, unless there is only an infinitesimal difference between their masses.

I don't know exactly what you mean by "feels very hard to compare". I'd appreciate more direct responses to the arguments in this post, namely those about how the comparison seems arbitrary.

3
Vasco Grilo🔸
It looks like you are inferring incomparability between the value of 2 futures (non-discrete overlap between their UEVs) from the subjective feeling (in your mind) that their EVs feel very hard to compare (given all the evidence you considered), as any comparisons involve decisive arbitrary assumptions. I mean "arbitrary" as used in common language. Comparisons among the expected cost-effectiveness of the vast majority of interventions seem arbitrary to me too due to effects on soil animals and microorganisms.

However, the same goes for comparisons among the expected mass of seemingly identical objects with a similar mass if I can only assess their mass using my hands, but this does not mean their mass is incomparable. To assess this, we have to empirically determine which fraction of the uncertainty in their mass is irreducible. 10,000 years ago, it would not have been possible to determine which of 2 rocks of around 1 kg was the heavier if their masses only differed by 10^-6 kg. Yet, this is possible today: some semi-micro balances have a resolution of 0.01 mg (10^-8 kg). So I would say the expected masses of the rocks were comparable 10,000 years ago. Do you agree? There could be some irreducible uncertainty in the masses of the rocks, but much less than suggested by the evidence available 10,000 years ago.
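To make the reducible-versus-irreducible distinction concrete, here is a toy simulation (the 0.05 kg hand-estimate noise is an assumed figure for illustration; the 10^-8 kg figure is the balance resolution mentioned above):

```python
import random

# Toy simulation of the rock example above: two rocks whose true masses differ
# by 1e-6 kg. The hand-estimate noise (0.05 kg) is an assumed figure; the
# 1e-8 kg noise is the semi-micro balance resolution mentioned above.
TRUE_A, TRUE_B = 1.000000, 1.000001  # kg

def correctly_ranked(noise_sd, trials=10_000):
    """Fraction of trials in which noisy measurements rank the rocks correctly."""
    hits = 0
    for _ in range(trials):
        m_a = random.gauss(TRUE_A, noise_sd)
        m_b = random.gauss(TRUE_B, noise_sd)
        hits += m_b > m_a
    return hits / trials

print(correctly_ranked(0.05))  # ~0.5: by hand, the ranking is a coin flip
print(correctly_ranked(1e-8))  # ~1.0: with a fine balance, the ranking is near-certain
```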

I see arbitrary choices as a reason for further research to decrease their uncertainty

First, it's already very big-if-true if all EA intervention candidates other than "do more research" are incomparable with inaction.

Second, "do more research" is itself an action whose sign seems intractably sensitive to things we're unaware of. I discuss this here.

2
Vasco Grilo🔸
To clarify, I think any actions people consider in practice are comparable, not only impact-focussed ones involving research. On the value of research, it again looks like you are inferring the value of many possible futures is incomparable essentially because it feels very hard to compare their EVs.

However, by actual value, you mean a set of possible values

No, I mean just one value.

 

why would weighted sums of actual masses representing expected masses not be comparable?

Sorry, by "expected" I meant imprecise expectation, since you gave intervals in your initial comment. Imprecise expectations are incomparable for the reasons given in the post — I worry we're talking past each other.
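For concreteness, a sketch of one standard formalization of imprecise expectations (not necessarily the exact one in the linked post): beliefs are represented by a set of probability distributions rather than a single one, and one option beats another only if it does so under every distribution in the set.

```latex
% Imprecise expectation of a quantity X under a credal set C:
\mathbb{E}_{\mathcal{C}}[X] = \{\, \mathbb{E}_P[X] : P \in \mathcal{C} \,\}
% Strict preference requires unanimous dominance across the credal set:
A \succ B \iff \mathbb{E}_P[V(A)] > \mathbb{E}_P[V(B)] \text{ for all } P \in \mathcal{C}
% If neither A nor B dominates under every P, they are incomparable.
```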

2
Vasco Grilo🔸
I see. You are using the term actual value as it is usually used. What do you think about the 2nd paragraph of my last comment? The framework seems quite reasonable in principle, but I believe you are greatly overestimating the degree of imprecision (irreducible uncertainty) in practice.

It looks like you are inferring the value of many possible futures is incomparable essentially because it feels very hard to compare their expected values (EVs), and therefore any choice of which one has the highest EV feels very arbitrary. In contrast, I see arbitrary choices as a reason for further research to decrease their uncertainty, and I expect this uncertainty is overwhelmingly reducible.

Without using any instruments, it would feel very arbitrary to pick which one of 2 seemingly identical objects with 1 and 1.001 kg is the heavier, but this does not mean their masses are incomparable. For most practical purposes, I can assume their masses are the same. I can also use a sufficiently powerful scale in case a small difference would matter. If their masses were sufficiently close, like if they differed by only 10^-100 kg, I agree they may be incomparable, but I do not see this being relevant in practice.

What do you mean by actual mass?

The mass that the object in fact has. :) Sorry, not sure I understand the confusion.

 

I think expected masses are comparable because possible masses are comparable.

I don't think this follows. I'm interested in your responses to the arguments I give for the framework in this post.

2
Vasco Grilo🔸
I think the term actual value is usually used to describe a possible and discrete value. However, by actual value, you mean a set of possible values, one for each of the distributions describing the mass of a single object? There has to be more than one distribution describing the mass for the expected mass not to be discrete. If that is what you mean by actual value, the actual masses of 2 objects are not necessarily comparable under your framework? If I understood correctly what you mean by actual value, and you still hold that the actual masses of 2 objects are always comparable, why would weighted sums of actual masses representing expected masses not be comparable?

I can see expected masses being incomparable in principle. It seems that gravitons are the least massive entities, and the upper bound for the mass of one is currently 1.07*10^-67 kg. So I assume we cannot currently distinguish between, for example, 10^-100 and 10^-99 kg. Yet, the expected masses of objects I can pick up are practically discrete, and therefore comparable, even if I feel exactly the same about the mass of the objects. I would argue the welfare of possible futures is comparable for the same reasons.

Would your framework suggest the mass of the objects is incomparable

Yes, for the expected mass.

I believe my best guess should be that the mass of one is smaller, equal, or larger than that of the other

Why? (The actual mass must be either smaller, equal, or larger, but I don't see why that should imply that the expected mass is.)

3
Vasco Grilo🔸
I did mean the expected mass. I have clarified this in my comment now. What do you mean by actual mass? Possible mass? The expected mass is the mean of the possible masses weighted by their probability. I think expected masses are comparable because possible masses are comparable.
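In symbols, the definition of expected mass being used here, for possible masses m_i with probabilities p_i:

```latex
% Expected mass as the probability-weighted mean of the possible masses:
\mathbb{E}[m] = \sum_i p_i \, m_i
```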

Quotes: Recent discussions of backfire risks in AI safety

Some thinkers in AI safety have recently pointed out various backfire effects that attempts to reduce AI x-risk can have. I think pretty much all of these effects were known before,[1] but it's helpful to have them front of mind. In particular, I'm skeptical that we can weigh these effects against the upsides precisely enough to say an AI x-risk intervention is positive or negative in expectation, without making an arbitrary call. (Even if our favorite intervention doesn't have these specific do... (read more)

So then, the difference between (a) and (b) is purely empirical, and MNB does not allow me to compare (a) and (b), right? This is what I'd find a bit arbitrary, at first glance.

Gotcha, thanks! Yeah, I think it's fair to be somewhat suspicious of giving special status to "normative views". I'm still sympathetic to doing so for the reasons I mention in the post (here). But it would be great to dig into this more.

What would the justification standards in wild animal welfare say about uncertainty-laden decisions that involve neither AI nor animals: e.g. as a government, deciding which policies to enact, or as a US citizen, deciding whom to vote for as President?

Yeah, I think this is a feeling that the folks working on bracketing are trying to capture: that in quotidian decision-making contexts, we generally use the factors we aren't clueless about (@Anthony DiGiovanni -- I think I recall a bracketing piece explicitly making a comparison to day-to-day decision making, bu

... (read more)
4
Eli Rose🔸
But suppose I want to know who of two candidates to vote for, and I'd like to incorporate impartial ethics into that decision. What do I do then? Hmm, I don't recall this; another Eli perhaps? : )

Would you say that what dictates my view on (a)vs(b) is my uncertainty between different epistemic principles

It seems pretty implausible to me that there are distinct normative principles that, combined with the principle of non-arbitrariness I mention in the "Problem 1" section, imply (b). Instead I suspect Vasco is reasoning about the implications of epistemic principles (applied to our evidence) in a way I'd find uncompelling even if I endorsed precise Bayesianism. So I think I'd answer "no" to your question. But I don't understand Vasco's view well eno... (read more)

2
Jim Buhler
Oh, so for the sake of argument, assume the implications he sees are compelling. You are unsure about whether your good epistemic principles E imply (a) or (b).[1]

So then, the difference between (a) and (b) is purely empirical, and MNB does not allow me to compare (a) and (b), right? This is what I'd find a bit arbitrary, at first glance. The isolated fact that the difference between (a) and (b) is technically empirical and not normative doesn't feel like a good reason to say that your "bracket in consequentialist bracketing" move is ok but not the "bracket in ex post neartermism" move (with my generous assumptions in favor of ex post neartermism).

1. ^ I don't mean to argue that this is a reasonable assumption. It's just a useful one for me to understand what moves MNB does and does not allow. If you find this assumption hard to make, imagine that you learn that we likely are in a simulation that is gonna shut down in 100 years and that the simulators aren't watching us (so we don't impact them).

But lots of the interventions in 2. seem to also be helpful for getting things to go better for current farmed and wild animals, e.g. because they are aimed at avoiding a takeover of society by forces which don't care at all about morals

Presumably misaligned AIs are much less likely than humans to want to keep factory farming around, no? (I'd agree the case of wild animals is more complicated, if you're very uncertain or clueless whether their lives are good or bad.)

5
Eli Rose🔸
That does seem right, thanks. I intended to include dictator-ish human takeover there (which seems to me to be at least as likely as misaligned AI takeover) as well, but didn't say that clearly. Edited to "relatively amoral forces" which still isn't great but maybe a little clearer.

Thanks Jo! Yeah, the perspective I defend in that post in a nutshell is:

  • The "reasons" given by different normative views are qualitatively different.
  • So, when choosing between A and B, we should look at whether each normative view gives us reason to prefer A over B (or B over A).
  • If consequentialist views say A and B are incomparable, these views don't give me a reason to prefer A over B (or B over A).
  • Therefore, if the other normative views in aggregate say A is preferable, I have more reason to choose A.

(Similarly, the decision theory of "bracketing" might ... (read more)

this happens to break at least the craziest Pascalian wagers, assuming plausible imprecise credences (see DiGiovanni 2024).

FWIW, since writing that post, I've come to think it's still pretty dang intuitively strange if taking the Pascalian wager is permissible on consequentialist grounds, even if not obligatory. Which is what maximality implies. I think you need something like bracketing in particular to avoid that conclusion, if you don't go with (IMO really ad hoc) bounded value functions or small-probability discounting.

(This section of the bracketing p... (read more)

This particular claim isn't empirical; it's about what follows from compelling epistemic principles

(As for empirical evidence that would change my mind about imprecision being so severe that we're clueless, see our earlier exchange. I guess we hit a crux there.)

Hi Vasco —

one will still be better than the other in expectation

My posts argue that this is fundamentally the wrong framework. We don't have precise "expectations".

1
Vasco Grilo🔸
Thanks, Anthony. Is there any empirical evidence that would change your mind on that?

Sounds great, please DM me! Thanks for the invite. :)

In the meantime, if it helps, for the purposes of this discussion I think the essential sections of the posts I linked are:

(The section I linked to from this other post is more of a quick overview of stuff mostly discussed in the sections above. But it might be harder to follow because it's in the context of a post about unawareness specifically, hence the "UEV" term etc. —... (read more)

And yet even a very flawed procedure will, on average across worlds, do better than chance

I respond to the "better than chance" claim in the post I linked to (in my reply to Richard). What do you think I'm missing there? (See also here.)

2
Vasco Grilo🔸
Hi Anthony, I think the arguments you provide only imply the expected changes in welfare from pursuing any 2 strategies are closer than one may have thought. However, as long as the information about each strategy is not exactly the same, one will still be better than the other in expectation. If the difference is sufficiently small (e.g. me leaving home 0.001 s later), one could say they have practically the same expected value. I agree there are many more strategies in this situation than people realise. Yet, I am not convinced literally all possible strategies are in that situation.
7
Bentham's Bulldog
It’s a somewhat long post.  Want to come on the podcast to discuss?

Maybe the people who endorse cluelessness are right and our actions can’t make the future reliably better (though not likely). But are you really willing to bet 10^42 expected life years on that proposition? Are you really willing to gamble all that expected value on a speculative philosophical proposition like moral cluelessness?

I'm sorry if I'm a bit of a broken record, but this argument doesn't engage with the strongest case for cluelessness. See my comment here.

3
Bentham's Bulldog
I don't agree with that.  Cluelessness seems to only arise if you have reason to think that on average your actions won't make things better.  And yet even a very flawed procedure will, on average across worlds, do better than chance.  This seems to deal with epistemic cluelessness fine. 

I'd be pretty interested to hear why those who disagree-voted disagree.

For my part, I simply didn't know the series existed until seeing this post, since this is the only post in the series on EAF.  :)

Moreover, I take it that there is very little credibility to the opposite view, that we should regard the inverse of the above claims as disproportionately likely by default. So if you give some (higher-order) credence to views or models implying cluelessness, and some to views on which we can often reasonably expect commonsensically good things to be long-term good, then it seems the positive expectations could trivially win out

I don't think this works, at least at the level of our empirical credences, for reasons I argue here. (I think the crux here is t... (read more)

Right, but the same point applies to other scope-restricted views, no? We need some non-arbitrary answer as to why we limit the scope to some set of consequences rather than a larger or smaller set. (I do think bracketing is a relatively promising direction for such a non-arbitrary answer, to be clear.)

Sorry this wasn't clear! I wasn't thinking about the choice between fully eliminating factory farming vs. the status quo. I had in mind marginal decreased demand for animal products leading to marginal decreased land use (in expectation), which I do think we have a fairly simple and well-evidenced mechanism for.

I also didn't mean to say the wild animal effects dominate, just that they're large enough to be competitive with the farmed animal effects. I agree the tradeoffs between e.g. cow or chicken suffering vs. wild insect suffering seem ambiguous. (And y... (read more)

I have mixed feelings about this. So, there are basically two reasons why bracketing isn't orthodox impartial consequentialism:

  1. My choice between A and B isn't exactly determined by whether I think A is "better" than B. See Jesse's discussion in this part of the appendix.
  2. Even if we could interpret bracketing as a betterness ranking, the notion of "betterness" here requires assigning a weight of zero to consequences that I don't think are precisely equally good under A vs. B.

I do think both of these are reasons to give less weight to bracketing in my decisio... (read more)

Rejecting premise 1, completeness, is essentially a nonstarter in the context of morality, where the whole project is premised on figuring out which worlds, actions, beliefs, rules, etc., are better than or equivalent to others. You can deny this in your heart of hearts - I won’t say that you literally cannot believe that two things are fundamentally incomparable - but I will say that the world never accommodates your sincerely held belief or conscientious objector petition when it confronts you with the choice to take option A, option B, or perhaps coin flip

... (read more)
9
Anthony DiGiovanni
I'd be pretty interested to hear why those who disagree-voted disagree.

I'd recommend specifically checking out here and here, for why we should expect unintended effects (of ambiguous sign) to dominate any intervention's impact on total cosmos-wide welfare by default. The whole cosmos is very, very weird. (Heck, ASI takeoff on Earth alone seems liable to be very weird.) I think given the arguments I've linked, anyone proposing that a particular intervention is an exception to this default should spell out much more clearly why they think that's the case.

4
JackM
I'll read those. Can I ask regarding this: What makes you think that? Are you embracing a non-consequentialist or non-impartial view to come to that conclusion? Or do you think it's justified under impartial consequentialism?

This will be obvious to Jesse, but for others:

Another important sense in which bracketing isn't the same thing as ignoring cluelessness is, we still need to account for unawareness. Before thinking about unawareness, we might have credences about some locations of value I' that tell us A >_{I'} B. But if the mechanisms governing our impact on I' are complex/unfamiliar enough, arguably our unawareness about I' is sufficiently severe that we should consider A and B incomparable on I'.

Thanks Ben — a few clarifications:

  • Bracketing doesn’t in general recommend focusing on the “first order consequences”, in the sense people usually use that term (e.g. the first step in some coarse-grained causal pathway). There can be locations of value I’ where we’d think A >_{I’} B if we only considered first order consequences, yet A [incomparable]_{I’} B all things considered. Conversely, there can be locations of value I’ that are only affected by higher-order consequences, yet A >_{I’} B.
  • Not sure exactly what you mean by “generally do better”, b
... (read more)

Hi Toby — sorry if this is an annoyingly specific (or not!) question, but do you have a sense of whether the following would meet the bar for "deep engagement"?:

  • One of the chapters contains a pretty short subsection that's quite load-bearing for the thesis of the chapter. (So in particular, the subsection doesn't give an extensive argument for that load-bearing claim.)
  • The essay replies at length to that subsection. Since the subsection doesn't contain an extensive argument, though, most of the essay's content is:
    • (1) a reply to anticipated defenses of the s
... (read more)
2
Toby Tremlett🔹
Hey Anthony. Very reasonable question! Replying at length to a sub-section from one of the essays would definitely constitute deep engagement. The second idea is less obviously related to the collection, but I did say that you could write on a 'theme from across the collection' so it would likely qualify. If you want to be extra sure I'm happy to look at a plan or something. The motivation with the quick take is mostly to underline that this isn't an essay competition about longtermism, it's an essay competition about a specific book about longtermism. You clearly get that, so I'm not concerned :)

explains how some popular approaches that might seem to differ are actually doing the same, but implicitly

Yep, I think this is a crucial point that I worry has still gotten buried a bit in my writings. This post is important background. Basically: You might say "I don't just rely on an inside view world model and EV max'ing under that model, I use outside views / heuristics / 'priors'." But it seems the justification for those other methods bottoms out in "I believe that following these methods will lead to good consequences under uncertainty in some sense... (read more)

Poll: Is this one of your cruxes for cluelessness?

There's a cluster of responses to arguments for cluelessness I've encountered, which I'm not yet sure I understand but maybe is important. Here's my attempted summary:[1]

Sure, maybe assigning each action a precise EV feels arbitrary. But that feeling merely reflects the psychological difficulty of generating principled numbers, for non-ideal agents like us. It's not a problem for the view that even non-ideal agents should, ultimately, evaluate actions as more or less rational based on precise EV.

I... (read more)

2
Sharmake
A non-trivial reason for this is that precise numbers expose ideological assumptions, and a whole lot of people do not like this. It's easy to lie with numbers, but it's even easier to lie without a number.
1
JohanEA
One of the only good scenarios that I can think of where this response to cluelessness makes sense is if a person subscribes to moral realism. And even then, the arguments for moral uncertainty seem too compelling to me. Do you know of any person that is not skeptical of cluelessness?

My take is that, if you really try to look at it from first principles, most will arrive at the conclusion that it is not possible to calculate the EV of any action for us humans. The rest is just cope because one is not willing to give up on one's EA identity and all of the sacrifices one has made. Sunk-cost fallacy is just too big of an obstacle.

Yes, if there is a moral theory that is objectively correct, then one should in principle be able to translate that moral theory into a framework that helps us calculate EV. But since we are not omniscient, that just seems impossible. In combination with the principle that we can't understand the downstream effects of our actions in the long term, I don't understand how somebody can be skeptical of cluelessness.

I know this comment does not directly address your initial prompt, but I thought I'd rather post it than not. Thank you for sharing!
2
Toby Tremlett🔹
I understand my vote to be consistent with us never in fact reaching 'principled' precise EV numbers though. 
Anthony DiGiovanni
80% disagree

(There’s a lot more I might want to say about this, and also don't take the precise 80% too seriously, but FWIW:)[1]

When we do cause prioritization, we’re judging whether one cause is better than another under our (extreme) uncertainty. To do that, we need to clarify what kind of uncertainty we have, and what it means to do “better” given that uncertainty. To do that, we need to reflect on questions like:

  • “Should we endorse classical Bayesian epistemology (even as an ‘ideal’)?” or
  • “How do we compare actions’ ‘expected’ consequences, when we can’t conceive of
... (read more)

taking the Rethink Priorities 7 - 15% numbers at face value, when the arguments for those AFAICT don't even have particular models behind them

I'm interested to hear what you think the relevant difference is between the epistemic grounding of (1) these figures vs. (2) people's P(doom)s, which are super common in LW discourse. I can imagine some differences, but the P(dooms) of alignment experts still seem very largely ass-pulled and yet also largely deferred-to.

Gotcha, so to be clear, you're saying: it would be better for the current post to have the relevant quotes from the references, but it would be even better to have summaries of the explanations?

(I tend to think this is a topic where summaries are especially likely to lose some important nuance, but not confident.)

2
Pablo
Yes, that’s what I’m saying. I defer to you, since I am not familiar with this topic. My above assessment was "on priors".

That's helpful to know, thanks! I currently don't have time for this, but (edit) might add quotes later.

Most of the added value comes from the synthesis

Could you please clarify what you mean by this?

3
Pablo
I was referring to the difference in value between a collection of references and a summary of the content of those references (as opposed to a mere collection of representative quotes).

Maybe you need some account of transworld identity (or counterparts) to match these lives across possible worlds

That's the concern, yeah. When I said "some nontrivially likely possible world containing an astronomical number of happy lives", I should have said these were happy experience-moments, which (1) by definition only exist in the given possible world, and (2) seem to be the things I ultimately morally care about, not transworld persons.[1] Likewise each of the experience-moments of the lives directly saved by the AMF donation only exist in a g... (read more)

Thanks for this post, Magnus! While I’m still uncompelled by your arguments in “Why give weight to a scope-adjusted view” for the reasons discussed here and here, I’ll set that aside and respond to the “Asymmetry in practical recommendations”.

Suppose that (i) the normative perspective from which we’re clueless (e.g., impartial consequentialism plus my framework here) says both A and B are permissible, and (ii) all other normative perspectives we give weight to say only A is permissible. In that case, I’d agree we should do A, no ma... (read more)

6
Magnus Vinding
To clarify, the general approach outlined here doesn't rest on the use of discount rates — that's just a simple and illustrative example of scope-restriction.

Unfortunately not that "succinct" :) but I argue here that cluelessness-ish arguments defeat the impartial altruistic case for any intervention, longtermist or not. Tl;dr: our estimates of the sign of our net long-term impact are arbitrary. (Building on Mogensen (2021).)

(It seems maybe defensible to argue something like: "We can at least non-arbitrarily estimate net near-term effects. Whereas we're clueless about the sign of any particular (non-'gerrymandered') long-term effect (or, there's something qualitatively worse about the reasons for our beliefs ab... (read more)

The "lower meat production" ⇒ "higher net primary productivity" ⇒ "higher wild animal suffering" connection seems robust to me. Or not that much less robust than the intended benefit, at least.

4
mal_graham🔸
This might not be the place for a discussion of this, but I personally don't feel that the "robustness" of the Tomasikian chain of reasoning you note here is similar to the "robustness" of the idea that factory farms contain a crazy amount of suffering.

In the first instance, the specific chain of arrows above seems quite speculative, since we really have no idea how land use would change in a world with no factory farming. Are we that confident net primary productivity will increase? I'm aware there are good arguments for it, but I'd be surprised if someone couldn't come up with good arguments against if they tried.

More importantly, I don't think that's a sufficient reasoning chain to demonstrate that wild animal effects dominate? You'd need to show that wild+farmed animal welfare on post-factory-farmed land uses is lower than wild+farmed animal welfare on current land uses, and that seems very sensitive to specific claims about moral weights, weights between types of suffering, empirical information about wild animal quality of life, what it means for a life to be net-negative, etc.

Or am I misunderstanding what you mean by robustness? I've just finished reading your unawareness sequence and mostly feel clueless about everything, including what it could mean for a reasoning chain to be robust.

Permissive epistemology doesn't imply precise credences / completeness / non-cluelessness

(Many thanks to Jesse Clifton and Sylvester Kollin for discussion.)

My arguments against precise Bayesianism and for cluelessness appeal heavily to the premise “we shouldn’t arbitrarily narrow down our beliefs”. This premise is very compelling to me (and I’d be surprised if it’s not compelling to most others upon reflection, at least if we leave “arbitrary” open to interpretation). I hope to get around to writing more about it eventually.

But suppose you d... (read more)

My understanding is that your proposed policy would be something like 'represent an interval of credences and only take "actions" if the action seems net good across your interval of credences'. … you'd take no actions and do the default. (Starving to death? It's unclear what the default should be which makes this heuristic more confusing to apply.)

Definitely not saying this! I don’t think that (w.r.t. consequentialism at least) there’s any privileged distinction between “actions” and “inaction”, nor do I think I’ve ever implied this. My claim is: For any ... (read more)

(ETA: The parent comment contains several important misunderstandings of my views, so I figured I should clarify here. Hence my long comments — sorry about that.)

Thanks for this, Ryan! I’ll reply to your main points here, and clear up some less central yet important points in another comment.

Here's what I think you're saying (sorry the numbering clashes with the numbering in your comment, couldn't figure out how to change this):

  1. The best representations of our actual degrees of belief given our evidence, intuitions, etc. — what you call the “terminally corr
... (read more)

^ I'm also curious to hear from those who disagree-voted my comment why they disagree. This would be very helpful for my understanding of what people's cruxes for (im)precision are.

7
Ryan Greenblatt
  1. I think philosophically, the right ultimate objective (if you were sufficiently enlightened etc.) is something like actual EV maximization with precise Bayesianism (with the right decision theory and possibly with "true terminal preference" deontological constraints, rather than just instrumental deontological constraints). There isn't any philosophical reason which absolutely forces you to do EV maximization, in the same way that nothing forces you not to have a terminal preference for flailing on the floor, but I think there are reasonably compelling arguments that something like EV maximization is basically right. The fact that something doesn't necessarily get money pumped doesn't mean it is a good decision procedure; it's easy for something to avoid necessarily getting money pumped.
  2. There is another question about whether it is a better strategy in practice to actually do precise Bayesianism, given that you agree with the prior bullet (as in, you agree that terminally you should do EV maximization with precise Bayesianism). I think this is a messy empirical question, but in the typical case, I do think it's useful to act on your best estimates (subject to instrumental deontological/integrity constraints, things like the unilateralist's curse, and handling decision theory reasonably).

My understanding is that your proposed policy would be something like 'represent an interval of credences and only take "actions" if the action seems net good across your interval of credences'. I think that following this policy in general would lead to lower expected value, so I don't do it. I do think that you should put weight on the unilateralist's curse and robustness, but I think the weight varies by domain and can be derived by properly incorporating model uncertainty into your estimates and being aware of downside risk. E.g., for actions which have high downside risk if they go wrong relative to the upside benefit, you'll end up being much less likely to take these actions due to vario
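A minimal sketch of the interval-of-credences policy described above, with hypothetical payoffs and credences. This illustrates the general "maximality" idea discussed throughout this thread, not Anthony's exact proposal (which, per his reply above, does not privilege inaction):

```python
# Sketch of a maximality-style rule: an action is ruled out only if some
# alternative beats it at EVERY credence in the interval. Payoffs and
# credence intervals below are hypothetical, purely for illustration.

def ev(payoffs, p):
    """Expected value of an action whose payoff depends on a binary event
    occurring with probability p."""
    if_event, if_not = payoffs
    return p * if_event + (1 - p) * if_not

def permissible(actions, credence_interval, grid=101):
    """Return the actions not dominated by any alternative across the
    whole credence interval (checked on a finite grid of credences)."""
    lo, hi = credence_interval
    ps = [lo + (hi - lo) * i / (grid - 1) for i in range(grid)]

    def dominates(b, a):
        # b beats a at every credence we checked in the interval
        return all(ev(b, p) > ev(a, p) for p in ps)

    return {name for name, a in actions.items()
            if not any(dominates(b, a) for b in actions.values() if b is not a)}

# (payoff if event occurs, payoff if it doesn't) -- hypothetical numbers
actions = {"A": (10.0, -5.0), "B": (2.0, 1.0)}
print(permissible(actions, (0.2, 0.8)))  # {'A', 'B'}: neither dominates, both permissible
print(permissible(actions, (0.7, 0.8)))  # {'A'}: A beats B at every credence in the interval
```

With the wide interval, A wins at high credences and B at low ones, so the rule deems them incomparable and both permissible; narrowing the interval restores a unique recommendation.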

I’m strongly in favor of allowing intuitive adjustments on top of quantitative modeling when estimating parameters.

We had a brief thread on this over on LW, but I'm still keen to hear why you endorse using precise probability distributions to represent these intuitive adjustments/estimates. I take many of titotal's critiques in this post to be symptoms of precise Bayesianism gone wrong (not to say titotal would agree with me on that).

ETA: Which, to be clear, is a question I have for EAs in general, not just you. :)

8
Anthony DiGiovanni
^ I'm also curious to hear from those who disagree-voted my comment why they disagree. This would be very helpful for my understanding of what people's cruxes for (im)precision are.

In theory, we could influence them, and in some sense merely wagging a finger right now has a theoretical influence on them. Yet it nevertheless seems to me quite defensible to practically disregard (or near-totally disregard, à la asymptotic discount) these effects given how remote they are

Sorry, I'm having a hard time understanding why you think this is defensible. One view you might be gesturing at is:

  1. If a given effect is not too remote, then we can model actions A and B's causal connections to that effect with relatively high precision — enough to just
... (read more)

I'm not sure about this, though. As I wrote in a previous comment:

The reasons to do various parochial things, or respect deontological constraints, aren't like this. They aren't grounded in something like "this thing out there in the world is horrible, and should be prevented wherever/whenever it is [or whoever causes it]".

The concern I've tried to convey in our discussion so far is: Insofar as our moral reasons for action are grounded in "this thing out there in the world is horrible, and should be prevented wherever/whenever it is [or whoever causes it]"... (read more)

(I unfortunately don't have time to engage with the rest of this comment, just want to clarify the following:)

Indeed, bracketing off "infinite ethics shenanigans" could be seen as an implicit acknowledgment of such a de-facto breakdown or boundary in the practical scope of impartiality.

Sorry this wasn't clear — I in fact don't think we're justified in ignoring infinite ethics. In the footnote you're quoting, I was simply erring on the side of being generous to the non-clueless view, to make things easier to follow. So my core objection doesn't reduce to "p... (read more)

2
Magnus Vinding
Right, I suspected that — hence the remark about infinite ethics considerations counting as an additional problem to what's addressed here. My point was that the non-clueless view addressed here (finite case) already implicitly entails scope limitations, so if one embraces that view, the question seems to be what the limitation (or discounting) in scope is, not whether there is one.

I've replied to this in a separate Quick Take. :) (Not sure if you'd disagree with any of what I write, but I found it helpful to clarify my position. Thanks for prompting this!)
