MichaelStJules

Organizer for Effective Altruism Waterloo.

Earned to give for 2 years in deep learning at a startup, donating to animal charities, and am now trying to get into effective animal advocacy research. Curious about s-risks.

Antispeciesist, antifrustrationist, prioritarian, consequentialist. Also, I like math and ethics.

MichaelStJules's Comments

A cause can be too neglected

I think this suggests the cause prioritization factors should ideally take into account the size of the marginal investment we're prepared to make, so Neglectedness should be

"% increase in resources / extra investment of size X"

instead of

"% increase in resources / extra person or $",

since the latter assumes a small investment. At the margin, a small investment in a neglected cause has little impact because of setup costs (so Solvability/Tractability is low), but a large investment might get us past the setup costs and into better returns (so Solvability/Tractability is higher).

As you suggest, if you spread resources too thinly across neglected causes, you don't get far past their setup costs, and Solvability/Tractability remains lower for each than if you'd chosen a smaller number to invest in.

A cause can be too neglected

I would guess that for baitfish, fish stocking, and rodents fed to pet snakes, there's a lot of existing expertise in animal welfare and animal interventions (e.g. corporate outreach/campaigns) that's transferable, so the setup costs wouldn't be too high. Did you find this not to be the case?

A cause can be too neglected

If we're being careful, should these considerations just be fully captured by Tractability/Solvability? Essentially, the marginal % increase in resources only solves a small % of the problem, based on the definitions here.
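For reference, the factorization I have in mind is the standard 80,000 Hours-style decomposition of impact per extra dollar (written out here for clarity; the labels are the usual ones, not quoted from the post):

```latex
\frac{\text{good done}}{\text{extra \$}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Scale}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Solvability/Tractability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra \$}}}_{\text{Neglectedness}}
```

On this decomposition, a setup cost shows up as a low middle factor at current margins (a given % increase in resources solves only a small % of the problem), rather than as a change to Neglectedness.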

Is Existential Risk a Useless Category? Could the Concept Be Dangerous?

Besides the risks of harm by omission and of focusing on the wrong things, which I agree with others here are a legitimate place for debate in cause prioritization, there are also risks of contributing to active harm, which is a slightly different concern (not fundamentally different for a consequentialist, but it might have greater reputational costs for EA). I think this passage is illustrative:

For example, consider the following scenario from Olle Häggström (2016); quoting him at length:
"Recall … Bostrom’s conclusion about how reducing the probability of existential catastrophe by even a minuscule amount can be more important than saving the lives of a million people. While it is hard to find any flaw in his reasoning leading up to the conclusion [note: the present author objects], and while if the discussion remains sufficiently abstract I am inclined to accept it as correct, I feel extremely uneasy about the prospect that it might become recognized among politicians and decision-makers as a guide to policy worth taking literally. It is simply too reminiscent of the old saying “If you want to make an omelet, you must be willing to break a few eggs,” which has typically been used to explain that a bit of genocide or so might be a good thing, if it can contribute to the goal of creating a future utopia. Imagine a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders."
Häggström offers several reasons why this scenario might not occur. For example, he suggests that “the annihilation of Germany would be bad for international political stability and increase existential risk from global nuclear war by more than one in a million.” But he adds that we should wonder “whether we can trust that our world leaders understand [such] points.” Ultimately, Häggström abandons total utilitarianism and embraces an absolutist deontological constraint according to which “there are things that you simply cannot do, no matter how much future value is at stake!” But not everyone would follow this lead, especially when assessing the situation from the point of view of the universe; one might claim that, paraphrasing Bostrom, as tragic as this event would be to the people immediately affected, in the big picture of things—from the perspective of humankind as a whole—it wouldn’t significantly affect the total amount of human suffering or happiness or determine the long-term fate of our species, except to ensure that we continue to exist (thereby making it possible to colonize the universe, simulate vast numbers of people on exoplanetary computers, and so on).

I think you don't need Bostromian stakes or utilitarianism for these types of scenarios, though. Consider torture, collateral civilian casualties in war, or the bombings of Hiroshima and Nagasaki. Maybe you could argue that in many such cases more civilians will be saved, so the trade seems more comparable (actual lives for actual lives, not actual lives for extra lives, where "extra" means extra in number, not in identity, on a wide person-affecting view), but act consequentialism seems susceptible to making similar trades generally.

I think one partial solution is just not to promote act consequentialism publicly without prefacing it with important caveats. Another is to correct naive act consequentialist analyses in high-stakes scenarios as they come up (as Phil is doing here, but also in response to individual comments).

Effective Animal Advocacy Nonprofit Roles Spot-Check

You could just ask orgs which roles were filled within the last X days/months, since they should know, so it wouldn't require ongoing monitoring. But it might still be substantial work for you and them (cumulatively) to get this info, depending on how many orgs you need to contact.

Is Existential Risk a Useless Category? Could the Concept Be Dangerous?

The author still cares about x-risks, just not in the Bostromian way. Here's the first sentence from the abstract:

This paper offers a number of reasons for why the Bostromian notion of existential risk is useless.

Weird that you made a throwaway just to leave a sarcastic and misguided comment.

Is Existential Risk a Useless Category? Could the Concept Be Dangerous?
For whatever it’s worth, my own tentative guess would actually be that saving a life in the developing world contributes more to growth in the long run than saving a life in the developed world. Fertility in the former is much higher, and in the long run I expect growth and technological development to be increasing in global population size (at least over the ranges we can expect to see).

Does this take into account more immediate existential risks, and how and to what degree people in the developing and developed worlds affect them?

Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism
It seems as though much of the discussion assumes a hedonistic theory of well-being (or at least uses a hedonistic theory as a synecdoche for theories of well-being taken as a whole?) But, as the authors themselves acknowledge, some theories of well-being are not purely hedonistic.
It is also a bit misleading to say that "many effective altruists are not utilitarians and care intrinsically about things besides welfare, such as rights, freedom, equality, personal virtue and more." On some theories, these things are components of welfare.

It's discussed a bit here:

The two main rivals of hedonism are desire theories and objective-list theories. According to desire theories only the satisfaction of desires or preferences matters for an individual’s wellbeing, as opposed to the individual’s conscious experiences. Objective list theories propose a list of items that constitute wellbeing. This list can include conscious experiences or preference-satisfaction, but it rarely stops there; other common items that ethicists might put on their objective list include art, knowledge, love, friendship and more.
Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism
Do non-utilitarian moral theories have readily available solutions to infinite ethics either?

I think it isn't a problem in the first place for non-consequentialist theories. The problem comes from trying to compare infinite sets of individuals with utilities when identities (including locations in spacetime) aren't taken to matter at all, but you could let identities matter in certain ways and possibly get around it that way. I think it's generally a problem for consequentialist theories, utilitarian or not.

I'd also recommend the very repugnant conclusion as an important objection (at least to classical or symmetric utilitarianism).

It's worth considering that avoiding it (Weak Quality Addition) is one of several intuitive conditions in an important impossibility theorem (one of many similar theorems, including the earlier one cited in the post you cite), which could be a response to the objection.

EDIT: Or maybe the impossibility theorems and paradoxes should be taken as objections to consequentialism generally: if there's no satisfactory way to compare outcomes in general, we shouldn't rely purely on comparing outcomes to guide actions.

Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism
the idea that suffering is the dominant component of the expected utility of the future is both consistent with standard utilitarian positions, and also captures the key point that most EA NU thinkers are making.

I don't think it quite captures the key point. The key point is working to prevent suffering, which "symmetric" utilitarians often do. It's possible the future is positive in expectation and yet best for a symmetric utilitarian to work on suffering, and it's possible the future is negative in expectation and yet best for them to work on pleasure or some other good.

Symmetric utilitarians might sometimes try to improve a situation by creating lots of happy individuals rather than addressing any of the suffering, and someone with suffering-focused views (including NU) might find this pointless and lacking in compassion for those who suffer.
