
part 2/2

Have you considered that for some people, the most agenty thing to do would be to change their decision-procedure so it becomes less "agenty"? [...] your idealized self-image/system-2/goal.

Yes, I have, for myself, and I declined, because I didn't see how it would help. Your analogy is indeed an analogy for what you believe, but it is not evidence. I asked you why you advise reducing one's agency, and the mere fact that it is theoretically possible for it to be a good idea doesn't demonstrate that it in fact is.

Note that in the analogy, if the robbers aren't stupid, they will kill one of your family members because taking the pill is a form of non-compliance, wait for the pill to wear off, and then ask the same question with one fewer hostage. If the hostage crisis is a good analogy for your internal self, what is to stop "system 1" from breaking its promises or being clever? That's basically how addiction works: take a pill a day or start withdrawal. Doing that? Good. Now take one more a day or start withdrawal. Replace "pills" with "time/effort not spent on altruism" and you're doomed.

With EA being too demanding, you don't even have to change your goals, it suffices to adjust your expectations to yourself.

The utility quota strikes again. Here, your problem is that EA can be "too demanding" - apparently there is some kind of better morality by which you can say that EA is being Wrong, but somehow you don't decide to use that morality for EA instead.

"Careful reflection" also isn't enough for humans to converge on an answer for themselves. If it was, tens of thousands of philosophers should have managed to map out morality, and we wouldn't need the likes of MIRI.

Are you equating "morality" with "figuring out an answer for one's goals that converges for all humans"? If yes, then I suspect that the reference of "morality" fails because goals probably don't converge (completely). Why is there so much disagreement in moral philosophy? To a large extent, people seem to be trying to answer different questions. In addition, some people are certainly being irrational at what they're trying to do, e.g. they fail to distinguish between things that they care about terminally and things they only care about instrumentally, or they might fail to even ask fundamental questions.

No, in this case I'm referring to true morality, whatever it might be. If your explanation was true - if the divergence in theories was because of goals failing to converge or people answering different questions - we would expect philosophers to actually answer their questions. However, what we see is that philosophers do not manage to answer their own questions: every moral theory has holes and unanswered questions, not just differences of opinion between the author and others.

If there were moral consensus, then obviously there would be a single morality, so the lack of a consensus carries some evidential weight, but not much. People are great at creating disagreements over nothing, and ethics is complex enough to be opaque, so we would expect moral disagreement both in worlds with a single coherent morality for humanity and in worlds without one.

I agree, see my 2nd footnote in my original post. The point where we disagree is whether you can infer from an existing disagreement about goals that at least one participant is necessarily being irrational/wrong about her goals. I'm saying that's not the case.

And my point is that the inference is moot: since neither person has psychological knowledge thirty years ahead of their time, both are necessarily wrong and irrational about their goals.

I'm sufficiently confident that I'm either misunderstanding you or that you're wrong about your morality [...]

I probably thought about my values more than most EAs and have gone through unusual lengths to lay out my arguments and reasons. If you want to try to find mistakes, inconsistencies or thought experiment that would make me change them, feel free to send me a PM here or on FB.

Why not do this publicly? Why not address the thought experiment I proposed?

In addition, people will disagree about the specifics of even such "straightforward" things as what "altruism" implies. Is it altruistic to give someone a sleeping pill against their will if they plan to engage in some activity you consider bad for them? Is it altruistic to turn rocks into happy people? People will disagree about what they would choose here, and it's entirely possible that they are not making any meaningful sort of mistake in the process of disagreeing.

Those things are less straightforward than string theory in the sense of Kolmogorov complexity. The fact that we can nevertheless compress those queries into sentences that are easier to explain to someone than algebra is testimony to how similar humans are.

OK, but even so, I would in such a case at least be right about the theoretical possibility of there being people to whom my advice applies correctly.

Yes, but you couldn't act on it without the benefit of hindsight. It is also theoretically possible that the moon is made out of cheese and that all information to the contrary has been spread by communist mice.

For what it's worth, I consider it dangerous that EA will be associated with a lot of "bad press" if people drop out due to it being too stressful.

This should be included in the productivity calculation, naturally - just like your own mental wellbeing should naturally be part of EA optimisation.

All my experience with pitching EA so far indicates that it's bad to be too demanding.

And all your experience has had the constant factor of being pitched by you, someone who believes that "optimising for EA" being tiring and draining is all part of the plan.

Yes, if "optimising for EA" drains you, you should do less of it, because you aren't optimising for EA (unless there's an overlap, which there probably is, in which case you should keep doing the things which optimise for EA).

As a general point, I object to your choice of words: I don't think my posts ever argued for people to stop trying.

You're telling people not to try to optimise their full lives to EA right now. If that is what they were trying before, then you are arguing for people to stop trying, QED.

On the topic of choice of words, though, in the original post you write "The same of course also applies to women." - this implies that the author of the quote did not intend his statement to apply to women, despite using a (or at the time perhaps the) grammatically correct way to refer to an unspecified person of any gender ("he"). Considering you use a gendered pronoun to refer to unspecified people of any gender as well ("she"), I'm confused why you would wrongly 'correct' someone like that.

Sorry for not replying sooner.

tl;dr: Effective altruism shouldn't be a job you do because it's The Right Thing To Do and come home from tired and drained; you should integrate it with your life and include your own wellbeing in the decision process.

I strongly disagree. Why would people be so deeply affected if they didn't truly care?

They do truly care, in the emotional sense. They just can't be modelled as utility-maximisers who value it greatly compared to their own mental well-being. You call the aberration 'irrationality', but that isn't an explanation. A model which does offer an explanation (simpler than an ad-hoc rule) is therefore strictly better. Given how predictable and intentional it is, I think it makes more sense to model it as the rational action of an agent which values the well-being of humanity less than the emotions generated by caring (about the well-being of humanity or something else).

Suppose we have an agent. It has a utility function over its 'emotional states', and these emotional states are a priori linked to the environment in certain ways. It has a strong utility penalty for changing these links, but it is able to. In that case, if we place the agent in an environment which causes misery and it becomes unlikely that the situation will change, the agent will sever the link between the environment and misery to prevent future misery. The link between the environment and the emotions is "caring for things in the environment", with all the expected behaviours, but in this model the agent does not terminally value the environment.

We should also consider that people sometimes do start emotionally caring again if a problem stops appearing hopeless. This could be modelled by a utility boost for switching back to the "proper" emotional link-ups (though smaller than the utility loss for becoming jaded, because otherwise you would just always shield yourself from nasty emotions and switch back for the positive ones afterwards), which means that there is a complete map of "proper" emotional link-ups in the utility function, albeit weighted less than the emotions themselves. The agent's true optimum would therefore mean having the "proper" emotional link-ups, and an environment identical to that of an agent which has the proper emotional link-ups as its utility function.
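A minimal toy sketch of that model, purely to make the shape of the argument concrete; every constant, threshold and function name below is an illustrative assumption of mine, not something from this discussion:

```python
# Toy model of the "jaded agent" described above. All numbers are
# illustrative assumptions, not claims about real psychology.

CARE_PENALTY = 10    # one-off utility cost of severing an emotional link ("becoming jaded")
RESTORE_BONUS = 3    # smaller one-off bonus for restoring the "proper" link later
MISERY_PER_STEP = 2  # emotional cost per step of caring about a hopeless environment
JOY_PER_STEP = 1     # emotional gain per step of caring about an improving environment

def value_of_caring(environment_improving: bool, horizon: int) -> float:
    """Expected emotional utility over `horizon` steps if the link stays intact."""
    per_step = JOY_PER_STEP if environment_improving else -MISERY_PER_STEP
    return per_step * horizon

def should_sever_link(environment_improving: bool, horizon: int) -> bool:
    """Sever the link (stop caring) iff flat emotions minus the penalty beat continued caring."""
    return -CARE_PENALTY > value_of_caring(environment_improving, horizon)

def should_restore_link(environment_improving: bool, horizon: int) -> bool:
    """Restore the proper link iff the bonus plus future emotional utility beats staying jaded."""
    return RESTORE_BONUS + value_of_caring(environment_improving, horizon) > 0

# A hopeless environment over a long horizon: the agent "stops caring".
print(should_sever_link(environment_improving=False, horizon=20))   # True
# The problem stops looking hopeless: the agent starts caring again.
print(should_restore_link(environment_improving=True, horizon=20))  # True
```

Because the restore bonus is smaller than the severing penalty, short unpleasant stretches aren't worth the round trip; only genuinely hopeless ones are, which is the asymmetry the paragraph above relies on.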

This matches the data quite nicely, methinks. Better than "irrationality", anyway.

Giving up is not a rational decision made by your system-2*, it's a coping mechanism triggered by your system-1 feeling miserable, which then creates changes/rationalizations in system-2 that could become permanent.

Agenty/rational behaviour isn't exclusive to system 2. How does system 1 decide when to trigger this coping mechanism? Or, to put it less question-beggingly: how does it parse the world into the existence or nonexistence of a trigger?

As I said before (and you expressed skepticism), humans are not designed to efficiently pursue a single goal.

That does not follow from the linked page. It states that our utility function (such as it is) is very complex, not that there isn't a way to make one value dominant. For example, humans can be convinced to efficiently pursue the singular goal of watching the flashing lights of a slot machine, getting heroin into their bloodstream, etc.

Was that the evidence you have for the claim that humans aren't designed to efficiently pursue a single goal? Or do you have more evidence?

A neuroscientist of the future, when the remaining mysteries of the human brain will be solved, will not be able to look at people's brains and read out a clear utility-function.

It is trivially true that a utility function-based agent exists (in a mathematical sense) which perfectly models someone's behaviour. It may not be the simplest, but it must exist.
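One way to see this, using notation that is entirely my own: write b(h) for the action the person actually takes after observation history h, and define a utility function that rewards exactly that action.

```latex
U(h, a) =
\begin{cases}
  1 & \text{if } a = b(h),\\
  0 & \text{otherwise.}
\end{cases}
```

An agent maximising U reproduces the observed behaviour exactly, so such a utility function always exists; it just offers no compression and no insight, which is the "may not be the simplest" caveat.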

Instead, what you have is a web of situational heuristics (system-1), combined with some explicitly or implicitly represented beliefs and goals (system-2),

"Non-technical" is one thing, an entirely different sorting from the common usage which AFAIK has no basis in cognitive science is quite another. How did you manage to come upon those definitions? Never mind that situational heuristics by construction have implicitly represented beliefs and goals (ducking if someone's fist moves towards your face: I believe that their fist will continue to move in the rough direction it is going and I do not want to be hit in the face).

There is often no clear way to get out a utility-function.

That's evidence against a simple UF, but only a little against complex UFs. And since Thou Art Godshatter, a complex UF is expected.

Of course, people can decide to do what they can to self-modify towards becoming more agenty, and some succeed quite well despite all the messy obstacles your brain throws at you. But if your ideal self-image and system-2 goals are too far removed from your system-1 intuitions and generally the way your mind works, then this will create a tension that leads to unhappiness and quite likely cognitive dissonance somewhere. If you keep going without changing anything, the outcome won't be good for either you or your goals.

How could you possibly know this? How do you know what "keeping going" is for those who are going to read this?

You mentioned in your earlier comment that lowering your expectations is exactly what evading cognitive dissonance is. Indeed! But look at the alternatives: If your expectations are impossible to fulfill for you, then you cannot reduce cognitive dissonance by improving your behavior. So either you lower your expectations (which preserves your EA-goal!), or you don't, in which case the only way to reduce the cognitive dissonance is by rationalizing and changing your goal.

This is very different from how I would describe it, to the point that I have trouble understanding you. Am I correct in interpreting this as you expecting people to use some kind of EA utility quota, where "expectations" are a moral standard you want yourself to reach? That's... well, I guess it explains why people have donation quotas, but it's very different from how I think about it by default.

If you're a utilitarian, it is also Wrong: either you're not optimising for the right utility before meeting the quota, or you're necessarily doing worse after passing the quota than if you hadn't passed it. The problem cases are people failing to optimise for the right utility function - one which places great instrumental value on their emotional health. A utility quota masks that problem by allowing people to patch up their emotional health during down time, but it is not a solution. For example, problem cases would still be damaging their emotional health while 'working', requiring a longer time to fix than if they took action to minimise the damage while working - which is not allowed under the utility quota model, because it's "slacking off during work time". Someone whose quota allows them to just barely be okay would be in a constant struggle between their misaligned "EA utility quota" and their free time, which tries to make them happy. Contrast that with someone with a more properly aligned EA utility quota, who also partially optimises work to be something they enjoy, and as a consequence can get system 1 involved in creative thinking during work and spend more time working, leading to better happiness and productivity. (Disclaimer: no large-scale test that I know of.)
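To put that contrast in symbols (this formalisation is entirely mine, not something either of us wrote before): the quota model maximises altruistic output only up to a fixed effort budget and treats wellbeing as something to repair afterwards, while the aligned model lets wellbeing feed back into output and optimises both jointly.

```latex
% My own illustrative formalisation; A = altruistic output, W = wellbeing, Q = effort quota.
\text{Quota model:}\qquad \max_{a}\; A(a) \quad \text{s.t. } \operatorname{effort}(a) \le Q,
\quad W \text{ repaired only off the clock.}

\text{Aligned model:}\qquad \max_{a}\; A\big(a,\, W(a)\big),
\quad \text{so work is partly chosen for the wellbeing it produces.}
```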

In my opinion, there is always cognitive dissonance in this entire paradigm of utility quotas. You're making yourself act like two agents with two different moralities who share the same body but get control at different times. There is cognitive dissonance between those two agents. Even if you try to always have one agent in charge, there's cognitive dissonance with the part you're denying.

By choosing strategies like "Avoiding daily dilemmas", you're not changing your goals, you're only changing the expectations you set for yourself in regard to these goals.

These "expectations", as you use them, are the goals you actually engage in. I agree you're not changing your true goals by changing your expectations, but you're doing something which is suboptimal by your own standards, which you don't see because you can't naturally empathise with everything in your future light-cone and system 2 is saying that it's all right.

(part 1/2)

Thanks for replying. (note: I'm making this up as I go along. I'm forgoing self-consistency for accuracy).

You bring up a very important point with the danger of things turning into merely "pretending to try". I see this problem, but at the same time I think many people are far closer to the other end of the spectrum.

Merely trying isn't the same as pretending to try. It isn't on the same axis as emotionally caring; it's about the (lack of) agency towards achieving a goal. Someone who is so emotionally affected by EA that they give up is definitely someone who 'merely tried' to affect the world, because you can't just give up if you care in an agentic sense.

What we want is for people to be emotionally healthy - not caring too much or too little, and with control over how affected they are - but with high agency. Telling people they don't need to be like highly agentic EA people affects both, and to me at least it isn't obvious whether you meant that people should still try their hardest to be highly agentic but merely not beat themselves up over falling short.

Your "two considerations", look like a two-tiered defence against EA pressures rather than convergence on a single right answer on how to consider your goals.

That's what they are. I think there's no other criterion that makes your goals the "right" ones other than that you would in fact choose these goals upon careful reflection.

Whose "right" are we talking about, here? If it's "right" according to effective altruism, that is obviously false: someone who discovers they like murdering is wrong by EA standards (as well as those of the general population). "Careful reflection" also isn't enough for humans to converge on an answer for themselves. If it was, tens of thousands of philosophers should have managed to map out morality, and we wouldn't need the likes of MIRI.

Why should (some) people who are partial EAs not be pushed to become full EAs? Or why should (some) full EAs not be pushed to become partial EAs? Do you expect people to just happen to have the morality which has highest utility[1] by this standard? I suppose there is the trivial solution where people should always have the morality they have, but in that case we can't judge people who like murdering.

I think there's absolutely nothing wrong with 1), if your goals are different from mine then that doesn't necessarily mean you're making a mistake about your goals. I personally focus on suffering and don't care about preventing death, all else being equal, but I don't (necessarily) consider you irrational for doing so.

People's goals can be changed and/or people can be wrong about their goals, depending on what you consider proper "goals". I'm sufficiently confident that I'm either misunderstanding you or that you're wrong about your morality that I can point out that the best way to achieve "minimise suffering, without caring about death" is to kill things as painlessly as possible (and by extension, to kill everything everywhere). I would expect people who believe they are suffering-minimisers to be objectively wrong.

I'm arguing within a framework of moral anti-realism. I basically don't understand what people mean by the term "good" that could do the philosophical work they expect it to do. A partial EA is someone who would refuse to self-modify to become more altruistic IF this conflicts with other goals (like personal happiness, specific commitments/relationships, etc). I don't think there's any meaningful and fruitful sense in which these people are doing something bad or making some sort of mistake; all you can say is that they're being less altruistic than someone with a 100%-EA goal, and they would reply: "Yes."

Just because there is no objective morality, that doesn't mean people can't be wrong about their own morality. We can observe that people can be convinced to become more altruistic, which contradicts your model: if they were true partial EAs, they would refuse, because by their own lights any change away from their current goals is worse. I don't expect warring ideological states to be made up of people who all happened to be born with the right moral priors at the right time to oppose one another; their environment is much more likely to play the deciding role in what they believe. And environments can be changed, for example by telling people that they're wrong and you're right.

Regarding your second confusion, about how "good" works in a framework of moral anti-realism: in that case, every agent has its own morality, where doing good is "good" and doing bad is bad. What's good according to the cat is bad according to the mouse. Humans are sort of like agents and we're all sort of similar, so our moralities tend to always be sort of the same. So much so that I can say many things are good according to humanity, and have it make a decent amount of sense. In common speech, we drop the "according to [x]". Also note that agents can judge each other just as they can judge objects. We can say that Effective Altruism is good and murder is bad, so we can say that an agent becoming more likely to do effective altruism is good and one becoming less likely to commit murder is good.

But the thing is, some people have tried and failed and feel miserable about it, or even the thought of trying makes them feel miserable, so that certainly cannot be ideal because these people aren't being productive at that point.

That isn't trivial. If 1 out of X miserable people manages to find a way to make things work eventually, they could be more productive than Y people who chose to give up on levelling up and be 'regular' EAs instead, with Y greater than X, and in that case we should advise people to keep trying even if they're depressed and miserable. But more importantly, it's a false choice: it should be possible for people to be less miserable and still continue trying, and you could give advice on how to do that, if you know it. Signing up for a CFAR workshop might help, or showing some sort of clear evidence that happiness increases productivity. Compared to LessWrong posts, this is very light on evidence.
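To spell out the arithmetic behind that first claim (the notation is mine): suppose a 'regular' EA produces output P, one person in X who keeps trying eventually produces more than Y·P with Y > X, and the other X−1 produce roughly nothing. Then the expected per-person output of "keep trying" still beats "give up":

```latex
\mathbb{E}[\text{output} \mid \text{keep trying}]
\;\ge\; \frac{Y P + (X-1)\cdot 0}{X}
\;=\; \frac{Y}{X}\,P
\;>\; P
\quad\text{since } Y > X .
```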

Human brains are not designed to optimize towards a single goal. It can drive you crazy. For some, it works, for others, it probably does not.

This looks like you're contradicting yourself, so I'm not sure if I understand you correctly. But if you mean the first two sentences, do you have a source for that, or could you otherwise explain why you believe it? It doesn't seem obvious to me, and if it's true I need to change my mind.

[1] This may include their personal happiness, EA productivity, right not to have their minds overwritten, etc.

I'm part of the target audience, I think, but this post isn't very helpful to me. Mistrust of arguments which tell me to calm down may be a part of it, but it seems like you're looking for reasons to excuse caring for other things than effective altruism, rather than weighing the evidence for what works better for getting EA results.

Your "two considerations",

  1. If you view EA as a possible goal to have, there is nothing contradictory about having other goals in addition.
  2. Even if EA becomes your only goal, it does not mean that you should necessarily spend the majority of your time thinking about it, or change your life in drastic ways. (More on this below.)

, look like a two-tiered defence against EA pressures rather than convergence on a single right answer on how to consider your goals. Maybe you mean that some people are 'partial EAs' and others are 'full EAs (who are far from highly productive EA work in willpower-space)', but it isn't very clear.

Now, on 'partial EAs': If you agree that effective altruism = good (if you don't, adjust your EA accordingly, IMO), then agency attached to something with different goals is bad compared to agency towards EA. Even if those goals can't be changed right now, they would still be worse, just like death is bad even if we can't change it (yet (except maybe with cryonics)). If you are a 'partial EA' who feels guilty about not being a 'full EA', this seems like an accurate weighing of the relative moral values, only wrong if the guilt makes you weaker rather than stronger. Your explanation doesn't look like a begrudging acceptance of the circumstances, it looks almost like saying 'partial EAs' and 'full EAs' are morally equivalent.

Concerning 'full EAs who are far from being very effective EAs in willpower-space', this triggers many alarm bells in my mind, warning of the risk of it turning into an excuse to merely try. You reduce effective effective altruists' productivity to a personality trait (and 'skills' which in context sound unlearnable), which doesn't match 80,000 Hours' conclusion that people can't estimate well how good they are at things or how much they'll enjoy things before they've tried.

Your statement on compartmentalisation (and Ben Kuhn's original post) both seem to assume that because denying yourself social contact whenever you could be making money instead is bad, compartmentalisation must therefore be good. But the reasoning for this compartmentalisation - it causes happiness, which causes productivity - isn't (necessarily) compartmentalised, so why compartmentalise at all? Your choice isn't just between a delicious candy bar and deworming someone; it's between a delicious candy bar which empowers you to work to deworm two people, and deworming one person. This choice isn't removed when you use the compartmentalisation heuristic, it's just hidden. You're "freeing your mind from the moral dilemma", but that is exactly what evading cognitive dissonance is.

I don't have a good answer. I still have an ugh field around making actual decisions and a whole bunch of stress, but this doesn't sound like it should convince anyone.

It isn't apparent to me that under your definition of privilege, [demographic] privilege is nearly as significant as many other unique experiences. And also, [demographic] privilege is often used as if everyone in the demographic has the same experience as the average. "White privilege" despite being born in a South African neighborhood where whites are ostracized, "Male privilege" despite being in a female-dominated field, "First World Privilege" despite being born into a situation devoid of growth opportunities, etc.

Eliezer Yudkowsky has challenged utilitarianism and some forms of moral realism in the Fun Theory sequence, the enigmatic (or merely misunderstood) Metaethics sequence and the fictionalised dilemma Three Worlds Collide.

I'm confused. AFAIK Yudkowsky's position is utilitarian, and none of the linked posts and sequences challenge utilitarianism. 3WC is an obvious example: only one specific branch - average preference utilitarianism - is argued to be wrong. The sequences are attempts to specify parts of the utility function and its behavior - even going so far as to argue for deontological laws as part of utilitarianism for corrupt humans - not refutations.

the enigmatic (or merely misunderstood) Metaethics sequence

This looks like the mind projection fallacy. If so, the obvious explanation is that you don't understand Yudkowsky's position properly.