FCCC

# Posts


This gives equal weight to all voters, which is bad because opinions have differing levels of information backing them up. An "unpopular" decision might be supported by everyone who knows what they're talking about; a "popular" decision might be considered to be bad by every informed person.

If I disagree with the Fund's decisions, I can send an email listing the reasons why. If my reasons aren't any good, the Fund can see that, and ignore me. If I have good reasons, the Fund should (hopefully) be swayed.

I generally dislike when people state conclusions without any intention to give a supporting argument, and multiple-choice surveys have this exact problem. If people fill in the free text box, this is essentially the same as sending an email.

EA's abstract moral epistemology

As I see them, EA's essential beliefs are:

• Some possible timelines are much better than others
• What "feels" like the best action often won't result in anything close to the best possible timeline
• In such situations, it's better to disregard our feelings and go with the actions that get us closer to the best timeline.

This doesn't commit you to a particular moral philosophy. You can rank timelines by whatever aspects you want: your moral rule could tell you to consider only your own actions, and to disregard their effects on other people's behaviour. I could consider such a person to be an effective altruist, even though they'd be a non-consequentialist. While I think it's fair to say that, after the above beliefs, consequentialism is fairly core to EA, I think the whole EA community could switch away from consequentialism without having to rebrand itself.

The critique targets effective altruists’ tendency to focus on single actions and their proximate consequences and, more specifically, to focus on simple interventions that reduce suffering in the short term.

But she also says EA has a "god’s eye moral epistemology". This seems contradictory. Even if we suppose that most EAs focus on proximate consequences, that's not a fundamental failing of the philosophy; it's a failed application of it. If many fail to accurately implement the philosophy, that doesn't imply the philosophy is bad[1]: there's a difference between a "criterion of right" and a "decision procedure". Many EAs are longtermists who essentially use entire timelines as the unit of moral analysis. This is clearly not focused on "proximate consequences". That's more the domain of non-consequentialists (e.g. "Are my actions directly harming anyone?").

The article's an incoherent mess, even ignoring the Communist nonsense at the end.

1. This is in contrast with policies being bad because no one can implement them with the desired consequences. ↩︎

Can my self-worth compare to my instrumental value?

It happens in philosophy sometimes too: "Saving your wife over 10 strangers is morally required because..." Can't we just say that we aren't moral angels? It's not hypocritical to say the best thing to do is save the 10 strangers, and then not do it (unless you also claim to be morally perfect). Same thing here. You can treat yourself well even if it's not the best moral thing to do. You can value non-moral things.

Can my self-worth compare to my instrumental value?

I think you're conflating moral value with value in general. People value their pets, but this has nothing to do with the pet's instrumental moral value.

So a relevant question is "Are you allowed to trade off moral value for non-moral value?" To me, morality ranks (probability distributions of) timelines by moral preference. Morally better is morally better, but nothing is required of you. There's no "demandingness". I don't buy into the notions of "morally permissible" or "morally required": These lines in the sand seem like sociological observations (e.g. whether people are morally repulsed by certain actions in the current time and place) rather than normative truths.

I do think having more focus on moral value is beneficial, not just because it's moral, but because it endures. If you help a lot of people, that's something you'll value until you die. Whereas if I put a bunch of my time into playing chess, maybe I'll consider that to be a waste of time at some point in the future. There are other things, like enjoying relationships with your family, that also aren't anywhere close to the most moral thing you could be doing, but that you'll probably continue to value.

You're allowed to value things that aren't about serving the world.

Timeline Utilitarianism

Hey Bob, good post. I've had the same thought (i.e. that the unit of moral analysis is timelines, or probability distributions of timelines) with different formalism.

The trolley problem gives you a choice between two timelines (T₁ and T₂). Each timeline can be represented as the set containing all statements that are true within that timeline. This representation can neatly state whether something is true within a given timeline or not: “You pull the lever” ∈ T₁, and “You pull the lever” ∉ T₂. Timelines contain statements that are combined as well as statements that are atomized. For example, since “You pull the lever”, “The five live”, and “The one dies” are all elements of T₁, you can string these into a larger statement that is also in T₁: “You pull the lever, and the five live, and the one dies”. Therefore, each timeline contains a very large statement that uniquely identifies it within any finite set of timelines. However, timelines won’t be our unit of analysis because the statements they contain have no subjective empirical uncertainty.

This uncertainty can be incorporated by using a probability distribution of timelines, which we’ll call a forecast (F). Though there is no uncertainty in the trolley problem, we could still represent it as a choice between two forecasts: F₁ guarantees T₁ (the pull-the-lever timeline) and F₂ guarantees T₂ (the no-action timeline). Since each timeline contains a statement that uniquely identifies it, each forecast can, like timelines, be represented as a set of statements. Each statement within a forecast is an empirical prediction. For example, F₁ would contain “The five live with a credence of 1”. So, the trolley problem reveals that you either morally prefer F₁ (denoted F₁ ≻ F₂), prefer F₂ (denoted F₂ ≻ F₁), or you believe that both forecasts are morally equivalent (denoted F₁ ∼ F₂).
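A minimal sketch of this formalism in Python (the names `credence`, `forecast_pull`, and `forecast_no_action` are my own, not from Bob's post): timelines as frozen sets of statements, forecasts as probability distributions over timelines.

```python
from fractions import Fraction

# A timeline is the set of all statements true within it.
pull = frozenset({"You pull the lever", "The five live", "The one dies"})
no_action = frozenset({"You don't pull the lever", "The five die", "The one lives"})

# A forecast is a probability distribution over timelines. With no
# empirical uncertainty, each trolley-problem forecast guarantees
# a single timeline.
forecast_pull = {pull: Fraction(1)}
forecast_no_action = {no_action: Fraction(1)}

def credence(forecast, statement):
    """P(statement) under a forecast: the total probability of the
    timelines that contain the statement."""
    return sum(p for timeline, p in forecast.items() if statement in timeline)

print(credence(forecast_pull, "The five live"))       # credence of 1
print(credence(forecast_no_action, "The five live"))  # credence of 0
```

A moral preference ≻ would then be a ranking over such forecast dictionaries, which the formalism leaves up to your moral rule.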

Deliberate Consumption of Emotional Content to Increase Altruistic Motivation

I watched those videos you linked. I don't judge you for feeling that way.

Did you convert anyone to veganism? If people did get converted, maybe there were even more effective ways to do so. Or maybe anger was the most effective way; I don't know. But if not, your own subjective experience was worse (by feeling contempt), other people felt worse, and fewer animals were helped. Anger might be justified but, assuming there was some better way to convert people, you'd be unintentionally prioritizing emotions ahead of helping the animals.

Another thing to keep in mind: When we train particular physical actions, we get better at repeating that action. Athletes sometimes repeat complex, trained actions before they have any time to consciously decide to act. I assume the same thing happens with our emotions: If we feel a particular way repeatedly, we're more likely to feel that way in future, maybe even when it's not warranted.

We can be motivated to do something good for the world in lots of different ways. Helping people by solving problems gives my life meaning and I enjoy doing it. No negative emotions needed.

The case of the missing cause prioritisation research

“writing down stylized models of the world and solving for the optimal thing for EAs to do in them”

I think this is one of the most important things we can be doing. Maybe even the most important, since it covers such a wide area and so much government policy is so far from optimal.

you just solve for the policy ... that maximizes your objective function, whatever that may be.

I don't think that's right. I've written about what it means for a system to do "the optimal thing" and the answer cannot be that a single policy maximizes your objective function:

Societies need many distinct systems: a transport system, a school system, etc. These systems cannot be justified if they are amoral, so they must serve morality. Each system cannot, however, achieve the best moral outcome on its own: If your transport system doesn’t cure cancer, it probably isn’t doing everything you want; if it does cure cancer, it isn’t just a “transport” system...

Unless by "policy" you mean "the entirety of what government does", then yes. But given that you're going to consider one area at a time, and you're "only including all the levers between which you’re considering", you could reach a local optimum rather than a truly ideal end state. The way I like to think about it is "How would a system for prisons (for example) be in the best possible future?" This is not necessarily the system that does the greatest good at the margin when constrained to the domain you're considering (though it often is). Rather than think about a system maximizing your objective function, it's better to think of systems as satisfying goals that are aligned with your objective function.
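A toy illustration of that local-optimum worry (the objective and lever values here are invented for the example): when policy levers interact, optimizing one area at a time can get stuck somewhere a joint search would not.

```python
import itertools

def objective(x, y):
    # Invented objective with interacting levers: the best joint
    # setting is (2, 2), but starting from (0, 0), neither lever
    # improves things on its own.
    return {(0, 0): 1, (2, 2): 10}.get((x, y), 0)

levers = [0, 1, 2]

# One-area-at-a-time: improve x holding y fixed, then y holding x fixed.
x, y = 0, 0
x = max(levers, key=lambda v: objective(v, y))  # stays at 0
y = max(levers, key=lambda v: objective(x, v))  # stays at 0
print((x, y), objective(x, y))  # stuck at the local optimum (0, 0) -> 1

# A joint search over both levers finds the truly ideal end state.
best = max(itertools.product(levers, levers), key=lambda p: objective(*p))
print(best, objective(*best))  # (2, 2) -> 10
```

Real policy areas are of course not this cleanly separable, which is the point: the margin-by-margin optimum and the best-possible-future optimum can differ.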

Use resilience, instead of imprecision, to communicate uncertainty

And bits describe proportional changes in the number of possibilities, not absolute changes...

And similarly, the 3.3 bits that take you from 100 possibilities to 10 are the same amount of information as the 3.3 bits that take you from 10 possibilities to 1. In each case you're reducing the number of possibilities by a factor of 10.

Ahhh. Thanks for clearing that up for me. Looking at the entropy formula, that makes sense and I get the same answer as you for each digit (3.3). If I understand, I incorrectly conflated "information" with "value of information".
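Checking the arithmetic from the quoted reply (standard information theory, not from the original thread): narrowing 100 equally likely possibilities to 10, and 10 to 1, each conveys log₂(10) ≈ 3.32 bits.

```python
import math

def bits(before, after):
    """Information gained by narrowing `before` equally likely
    possibilities down to `after`."""
    return math.log2(before / after)

print(round(bits(100, 10), 2))  # 3.32 bits: the first digit of a two-digit percentage
print(round(bits(10, 1), 2))    # 3.32 bits: the second digit
print(round(bits(100, 1), 2))   # 6.64 bits: both digits together
```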

Use resilience, instead of imprecision, to communicate uncertainty

I think this is better parsed as diminishing marginal returns to information.

How does this account for the leftmost digit giving the most information, rather than the rightmost digit (or indeed any digit between them)?

per-thousandths does not have double the information of per-cents, but 50% more

Let's say I give you $1 + d, where d is either $0, $0.1, $0.2, ..., or $0.9. (Note that $1 is analogous to 1%, and d is equivalent to adding a decimal place, i.e. per-thousandths vs per-cents.) The average value of d, given a uniform distribution, is $0.45. Thus, against $1, d adds almost half the original value, i.e. $0.45/$1 (45%). But what if I instead gave you $99 + d? $0.45 is less than 1% of the value of $99.

The leftmost digit is more valuable because it corresponds to a greater place value (so the magnitude of the value difference between places is going to be dependent on the numeric base you use). I don't know information theory, so I'm not sure how to calculate the value of the first two digits compared to the third, but I don't think per-thousandths has 50% more information than per-cents.

[This comment is no longer endorsed by its author]
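For completeness, the arithmetic behind the disagreement (standard entropy accounting, my own working rather than either commenter's): every extra decimal digit carries the same log₂(10) bits, so per-thousandths does have 50% more information than per-cents, while a digit's expected effect on the number's *value* depends on its place and on the magnitude it's added to.

```python
import math

# Information: per-thousandths (1000 equally likely outcomes) vs
# per-cents (100 equally likely outcomes).
bits_percents = math.log2(100)          # ~6.64 bits
bits_perthousandths = math.log2(1000)   # ~9.97 bits
print(bits_perthousandths / bits_percents)  # ~1.5: 50% more information

# Value: the expected size of an extra digit d, uniform over
# $0.0-$0.9, is $0.45 -- almost half of $1, but well under 1% of $99.
expected_d = sum(range(10)) / 100  # average digit value in dollars
print(expected_d)        # 0.45
print(expected_d / 99)   # ~0.0045
```

This separates the two notions the thread conflates: information content is base-and-place symmetric, while value impact is not.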