All of dph121's Comments + Replies

part 2/2

Have you considered that for some people, the most agenty thing to do would be to change their decision-procedure so it becomes less "agenty"? [...] your idealized self-image/system-2/goal.

Yes, I have, for myself, and I declined, because I didn't see how it would help. Your analogy is indeed an analogy for what you believe, but it is not evidence. I asked you why you advise reducing one's agency, and the mere fact that it is theoretically possible for it to be a good idea doesn't demonstrate that it in fact is.

Note that in the analogy... (read more)

0
Lukas_Gloor
9y
There are problems with every approach. Talk about your commitment to others and they will remind you of it. I'm not saying this whole strategy always works, but I'm quite sure there are many people for whom it is the best idea to try.

Regarding the "utility quota", what I mean by "personal moral expectations": basically, this just makes the point that it is useless to beat yourself up over things you cannot change. And yet we often do this and feel sad about things we probably couldn't have done differently. (One interesting hypothesis for this reaction is described here.)

Note that if this were true, you would still need reasons why you expect there to be just one human morality. I know what EY wrote on the topic, and I find it question-begging and unconvincing. What EY is saying is that human utility-function_1s are complex and similar. What I'm interested in, and what I think you and EY should also be interested in, are utility-function_2s. But that's another discussion; I've been meaning to write up my views on the topic of metaethics and goal-uncertainty, but I expect it'll take me at least a few months until I get around to it.

This doesn't really prove my case by itself, but it's an awesome quote nevertheless, so I'm including it here (David Hume, Enquiry):

Lack of time, given that I've already written a lot of text on the topic. And because I'm considering publishing some of it at some point in the future, I'm wary of posting long excerpts of it online.

Isn't the whole point of string theory that it is pretty simple (in terms of Kolmogorov complexity, that is, not in whether I can understand it)? If anything, this would be testimony to how good humans are at natural speech as opposed to math. Although humans aren't that good at natural speech, because they often don't notice when they're being confused or talking past each other. But this is being too metaphorical.

I don't really understand your point here. Aren't you presupposing that there is one answer people will conve

Sorry for not replying sooner.

tl;dr: Effective Altruism shouldn't be a job you do because it's The Right Thing To Do, one you come home from tired and drained; you should integrate it with your life and include your own wellbeing in the decision process.

I strongly disagree. Why would people be so deeply affected if they didn't truly care?

They do truly care, in the emotional sense. They just can't be modeled as a utility-maximiser that values it greatly compared to their own mental well-being. You call the aberration 'irrationality', but that isn't... (read more)

0
Lukas_Gloor
9y
Just read this; it expresses well what I meant by "humans are not designed to pursue a single goal": http://lesswrong.com/lw/2p5/humans_are_not_automatically_strategic/
0
Lukas_Gloor
9y
It seems obvious to me that we're talking past each other, meaning we're in many cases trying to accomplish different things with our models/explanations. The fact that this doesn't seem obvious to you suggests to me that I'm either bad at explaining, or that you might be interpreting my comments uncharitably. I agree with your tl;dr, btw!

You're presupposing that the agent would not modify any of its emotional links if it had the means to do so. This assumption might apply in some cases, but it seems obviously wrong as a generalization. Therefore, your model is incomplete.

Reread what I wrote in the part "On goals". I'm making a distinction between "utility-function_1", which reflects all the decisions/actions an agent will make in all possible situations, and "utility-function_2", which reflects all the decisions/actions an agent would want to make in all possible situations. You're focusing on "utility-function_1", and what you're saying is entirely accurate – I like your model in regard to what it is trying to do. However, I find "utility-function_2s" much more interesting and relevant, which is why I'm focusing on them. Why don't you find them interesting?

Again, we have different understandings of rationality. The way I defined "goals" in the section "On goals", it is only system 2 that defines what is rational, and system 1 heuristics can be "rational" if they are calibrated in a way that produces outcomes that are good in regard to the system 2 goals, given the most probable environment the agent will encounter. This part seems to be standard usage, in fact. Side note: your theory of rationality is quite Panglossian; it is certainly possible to interpret all of human behavior as "rational" (as e.g. Gigerenzer does), but that would strike me as a weird/pointless thing to do.

This claim strikes me as really obvious, so I'm wondering whether you might be misunderstanding what I mean. Have you never noticed just how bad people are at consequentialism
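The utility-function_1 / utility-function_2 distinction above can be made concrete with a small toy model: one scoring function predicts what a hypothetical agent will actually choose, the other what it would endorse choosing on reflection, and the two can disagree over the same options. A minimal sketch, assuming illustrative names and numbers that are not from the thread:

```python
# Toy illustration (illustrative only): "utility-function_1" describes what an
# agent will in fact do, "utility-function_2" describes what it would want to do
# on reflection. The names and numbers below are assumptions for the sketch.

from dataclasses import dataclass


@dataclass
class ToyAgent:
    revealed_value: dict   # what the agent in fact tends to choose (system-1 heavy)
    endorsed_value: dict   # what the agent would choose on reflection (system-2)

    def will_choose(self, options):
        """Prediction of actual behaviour (utility-function_1)."""
        return max(options, key=lambda o: self.revealed_value.get(o, 0))

    def would_endorse(self, options):
        """Choice under idealized reflection (utility-function_2)."""
        return max(options, key=lambda o: self.endorsed_value.get(o, 0))


agent = ToyAgent(
    revealed_value={"donate": 1, "relax": 3},   # habit and comfort dominate in practice
    endorsed_value={"donate": 5, "relax": 2},   # on reflection, donating is preferred
)

options = ["donate", "relax"]
print(agent.will_choose(options))    # relax  -> utility-function_1
print(agent.would_endorse(options))  # donate -> utility-function_2
```

Running it prints "relax" for the first call and "donate" for the second; that gap between predicted behaviour and endorsed behaviour is the kind of thing the comment is pointing at.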
0
dph121
9y
part 2/2

Yes, I have, for myself, and I declined, because I didn't see how it would help. Your analogy is indeed an analogy for what you believe, but it is not evidence. I asked you why you advise reducing one's agency, and the mere fact that it is theoretically possible for it to be a good idea doesn't demonstrate that it in fact is.

Note that in the analogy, if the robbers aren't stupid, they will kill one of your family members because taking the pill is a form of non-compliance, wait for the pill to wear off, and then ask the same question minus one. If the hostage crisis is a good analogy for your internal self, what is to stop "system 1" from breaking its promises or being clever? That's basically how addiction works: take a pill a day or start withdrawal. Doing that? Good. Now take one more a day or start withdrawal. Replace "pills" with "time/effort not spent on altruism" and you're doomed.

The utility quota strikes again. Here, your problem is that EA can be "too demanding" - apparently there is some kind of better morality by which you can say that EA is being Wrong, but somehow you don't decide to use that morality for EA instead.

No, in this case I'm referring to true morality, whatever it might be.

If your explanation were true - if the divergence in theories were because of goals failing to converge or people answering different questions - we would expect philosophers to actually answer their questions. However, what we see is that philosophers do not manage to answer their own questions: every moral theory has holes and unanswered questions, not just differences of opinion between the author and others.

If there were moral consensus, then obviously there would be a single morality, so the lack of a consensus carries some evidential weight, but not much. People are great at creating disagreements over nothing, and ethics is complex enough to be opaque, so we would expect moral disagreement in both worlds with a single coherent morality for humanity

Thanks for replying. (note: I'm making this up as I go along. I'm forgoing self-consistency for accuracy).

You bring up a very important point with the danger of things turning into merely "pretending to try". I see this problem, but at the same time I think many people are far closer to the other end of the spectrum.

Merely trying isn't the same as pretending to try. It isn't on the same axis as emotionally caring; it's the (lack of) agency towards achieving a goal. Someone who is so emotionally affected by EA that they give up is definitely s... (read more)

1
Lukas_Gloor
9y
I strongly disagree. Why would people be so deeply affected if they didn't truly care?

The way I see it, when you give up EA because it's causing you too much stress, what happens constitutes a failure of goal-preservation, which is irrational, but after you've given up, you've become a different sort of agent. Just because you don't care/try anymore does not mean that caring/trying in the earlier stages was somehow fake. Giving up is not a rational decision made by your system-2*; it's a coping mechanism triggered by your system-1 feeling miserable, which then creates changes/rationalizations in system-2 that could become permanent.

As I said before (and you expressed skepticism), humans are not designed to efficiently pursue a single goal. A neuroscientist of the future, once the remaining mysteries of the human brain are solved, will not be able to look at people's brains and read out a clear utility-function. Instead, what you have is a web of situational heuristics (system-1), combined with some explicitly or implicitly represented beliefs and goals (system-2), which can well be contradictory. There is often no clear way to get out a utility-function.

Of course, people can decide to do what they can to self-modify towards becoming more agenty, and some succeed quite well despite all the messy obstacles your brain throws at you. But if your ideal self-image and system-2 goals are too far removed from your system-1 intuitions and generally the way your mind works, then this will create a tension that leads to unhappiness and quite likely cognitive dissonance somewhere. If you keep going without changing anything, the outcome won't be good for either you or your goals.

You mentioned in your earlier comment that lowering your expectations is exactly what evading cognitive dissonance is. Indeed! But look at the alternatives: if your expectations are impossible for you to fulfill, then you cannot reduce cognitive dissonance by improving your behavior. So e

I'm part of the target audience, I think, but this post isn't very helpful to me. Mistrust of arguments which tell me to calm down may be a part of it, but it seems like you're looking for reasons to excuse caring about things other than effective altruism, rather than weighing the evidence for what works better for getting EA results.

Your "two considerations",

  1. If you view EA as a possible goal to have, there is nothing contradictory about having other goals in addition.
  2. Even if EA becomes your only goal, it does not mean that you should nece
... (read more)
2
Lukas_Gloor
9y
Thanks for this feedback!

You bring up a very important point with the danger of things turning into merely "pretending to try". I see this problem, but at the same time I think many people are far closer to the other end of the spectrum. I suspect that many people don't really get involved in EA in the first place because they're on some level afraid that things will grow over their head. And I know of cases where people gave up EA at least partially because of these problems. This to me is enough evidence that there are people who are putting too much pressure on themselves and would benefit from doing it less.

Of course, there is a possibility that a post like this one does more harm because it provides others with "ammunition" to rationalize more, but I doubt this would make much of a difference – it's unfortunately easy to rationalize in general and you don't need that much "ammunition" for it.

That's what they are. I think there's no other criterion that makes your goals the "right" ones other than that you would in fact choose these goals upon careful reflection.

Yes, that's what I meant. And I agree it's unclear, because it's confusing that I'm talking only about 2) in all of what follows; I'll try to make this clearer. So to clarify, most of my post addresses 2), "full EAs (who are far from highly productive in willpower-space)", and 1) is another option that I mention and then don't explore more because the consequences are straightforward.

I think there's absolutely nothing wrong with 1); if your goals are different from mine, that doesn't necessarily mean you're making a mistake about your goals. I personally focus on suffering and don't care about preventing death, all else being equal, but I don't (necessarily) consider you irrational for doing so.

I'm arguing within a framework of moral anti-realism. I basically don't understand what people mean by the term "good" that could do the philosophical work they expect it to do. A partial EA is some

It isn't apparent to me that under your definition of privilege, [demographic] privilege is nearly as significant as many other unique experiences. Also, [demographic] privilege is often used as if everyone in the demographic has the same experience as the average: "White privilege" despite being born in a South African neighborhood where whites are ostracized; "Male privilege" despite working in a female-dominated field; "First World Privilege" despite being born into a situation devoid of growth opportunities; etc.

Eliezer Yudkowsky has challenged utilitarianism and some forms of moral realism in the Fun Theory sequence, the enigmatic (or merely misunderstood) Metaethics sequence and the fictionalised dilemma Three Worlds Collide.

I'm confused. AFAIK Yudkowsky's position is utilitarian, and none of the linked posts and sequences challenge utilitarianism. 3WC is an obvious example where only one specific branch - average preference utilitarianism - is argued to be wrong. The sequences are attempts to specify parts of the utility function and its behavior - even ... (read more)

2
RyanCarey
9y
I've added the word 'hedonistic' and fixed a duplicate link. Maybe he's an atypical utilitarian, depending on our definitions. He's a consequentialist, and I think he endorses following a utility function, but he certainly opposes simple hedonistic utilitarianism, or the maximisation of any simple good. Yes, I found Eliezer's Metaethics sequence difficult, but so did lots of people. Eliezer agrees: