MichaelStJules

Animal welfare research intern at Charity Entrepreneurship and organizer for Effective Altruism Waterloo.

For 2 years, I earned to give for animal charities by working in deep learning at a startup; now I'm trying to get into effective animal advocacy research. Curious about s-risks.

Antispeciesist, antifrustrationist, prioritarian, consequentialist. Also, I like math and ethics.

MichaelStJules's Comments

Postponing research can sometimes be the optimal decision

Another consideration: if you expect this research technology to be developed but hadn't taken that into account when estimating your impact, you may be underestimating the likelihood that someone else would have done the same research anyway, since easier research is more likely to be done by others. If so, you could be overestimating your counterfactual impact.

How to Measure Capacity for Welfare and Moral Status

Have you considered a (semi-)blind approach? Collect data on each of the species/taxa of interest into a table, but hide the species (except possibly humans, as the reference?) and make moral weight judgements based on that (the judges can do this without any formal or precise weighting of features if they prefer). You could also separate the people who do the research and prepare the table from those who make the judgements, to reduce the identifiability of the species/taxa from the data, although this risk won't fully go away.
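A minimal sketch of the blinding step in Python, with made-up species and feature columns (everything here is illustrative, not real data):

```python
import random

def blind_table(records, seed=0):
    """Return a blinded copy of the table (species names replaced by
    anonymous IDs) plus a hidden key for unblinding afterwards."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)  # so row order doesn't hint at the species
    blinded, key = [], {}
    for i, rec in enumerate(shuffled):
        anon_id = f"taxon_{i}"
        key[anon_id] = rec["species"]
        row = {k: v for k, v in rec.items() if k != "species"}
        row["id"] = anon_id
        blinded.append(row)
    return blinded, key

# Made-up illustrative rows; judges would see only `blinded`,
# while `key` is held back until all judgements are recorded.
records = [
    {"species": "chicken", "neuron_count": 2.2e8, "nociceptors": True},
    {"species": "carp", "neuron_count": 1.0e7, "nociceptors": True},
]
blinded, key = blind_table(records)
```

Separating the table-preparers from the judges then just means handing over `blinded` and withholding `key`.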

MichaelStJules's Shortform

Here's a way to capture lexical threshold utilitarianism with a separable theory and while avoiding Pascalian fanaticism, with a negative threshold $t^- < 0$ and a positive threshold $t^+ > 0$:

$$f\Big(\sum_i u_i\Big) + \sum_i \mathbb{1}[u_i \geq t^+] - \sum_i \mathbb{1}[u_i \leq t^-],$$

where $u_i$ is the welfare of individual $i$.

  • The first term is just standard utilitarianism, but squashed with a function $f$ into an interval of length at most 1.
  • The second/middle sum is the number of individuals (or experiences or person-moments) with welfare at least $t^+$, which we add to the first term. Any change in the number past this threshold dominates the first term.
  • The third/last sum is the number of individuals with welfare at most $t^-$, which we subtract from the rest. Any change in the number past this threshold dominates the first term.

Either of the second or third term can be omitted.

We could require for all , although this isn't necessary.

More thresholds could be used, as in this comment: we would apply $f$ to the whole expression above, and then add new terms like the second and/or the third, with more extreme thresholds $t_2^+ > t^+$ and $t_2^- < t^-$, and repeat as necessary.
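The threshold construction above can be sketched in Python. I assume a logistic squashing function and threshold values of +1 and -1 here; any function squashing into an interval of length at most 1, and any thresholds, would do:

```python
import math

def swf(welfares, t_plus=1.0, t_minus=-1.0):
    """Lexical threshold SWF sketch: squashed total welfare plus counts
    of individuals past each lexical threshold (threshold values assumed)."""
    squash = lambda x: 1.0 / (1.0 + math.exp(-x))  # range (0, 1)
    base = squash(sum(welfares))
    n_high = sum(1 for u in welfares if u >= t_plus)
    n_low = sum(1 for u in welfares if u <= t_minus)
    # The integer count terms dominate: the squashed term can never
    # vary by as much as 1, while the counts change in whole units.
    return base + n_high - n_low

# One individual below the negative threshold outweighs a higher
# (squashed) total welfare.
assert sum([-1.5, 0.9, 0.9, 0.9]) > sum([0.0] * 4)
assert swf([-1.5, 0.9, 0.9, 0.9]) < swf([0.0] * 4)
```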

MichaelStJules's Shortform

This nesting approach with $f$ above also allows us to "fix" maximin/leximin under conditions of uncertainty to avoid Pascalian fanaticism, given a finite discretization of welfare levels or finite number of lexical thresholds. Let the welfare levels be $w_1 < w_2 < \dots < w_n$, and define:

$$s_j = \sum_i \mathbb{1}[u_i \leq w_j],$$

i.e. $s_j$ is the number of individuals with welfare level at most $w_j$, where $u_i$ is the welfare of individual $i$, and $\mathbb{1}[u_i \leq w_j]$ is 1 if $u_i \leq w_j$ and 0 otherwise. Alternatively, we could use the strict inequality $u_i < w_j$.

In situations without uncertainty, this requires us to first choose among options that minimize the number of individuals with welfare at most $w_1$, because $s_1$ takes priority over $s_j$, for all $j > 1$, and then, having done that, choose among those that minimize the number of individuals with welfare at most $w_2$, since $s_2$ takes priority over $s_j$, for all $j > 2$, and then choose among those that minimize the number of individuals with welfare at most $w_3$, and so on, until $w_n$.
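A sketch of the nested construction in Python, again assuming a logistic squashing function and made-up welfare levels; the assertion checks that the count at the lowest level takes priority over counts at higher levels:

```python
import math

def nested_swf(welfares, levels):
    """Nested-squashing sketch: counts of individuals at lower welfare
    levels take lexical priority over counts at higher ones.
    `levels` is the finite list of welfare levels, sorted ascending."""
    squash = lambda x: 1.0 / (1.0 + math.exp(-x))  # range (0, 1)
    value = 0.0
    for w in reversed(levels):  # highest level innermost, lowest outermost
        s_j = sum(1 for u in welfares if u <= w)  # individuals at or below w
        value = -s_j + squash(value)
    return value

# Fewer individuals at the lowest level wins, regardless of the
# counts at higher levels (made-up levels and welfares).
assert nested_swf([1, 1, 1], [0, 1, 2]) > nested_swf([0, 2, 2], [0, 1, 2])
```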

This particular social welfare function assigns negative value to new existences when there are no impacts on others, which leximin/maximin need not do in general, although it typically does in practice, anyway.

This approach does not require welfare to be cardinal, i.e. adding and dividing welfare levels need not be defined. It also dodges representation theorems like this one (or the stronger one in Lemma 1 here, see the discussion here), because continuity is not satisfied (and welfare need not have any topological structure at all, let alone be real-valued). Yet, it still satisfies anonymity/symmetry/impartiality, monotonicity/Pareto, and separability/independence. Separability means that whether one outcome is better or worse than another does not depend on individuals unaffected by the choice between the two.

3 suggestions about jargon in EA

Did "information hazard" originate in EA? Plenty of results on Google for "dangerous information" and "dangerous knowledge", which I think mean almost the same thing, although I suppose "information hazard" refers to the risk itself, while "dangerous information" and "dangerous knowledge" refer to the information/knowledge and might suggest likely harm rather than just risk.

Comparisons of Capacity for Welfare and Moral Status Across Species

For anyone who might doubt that clock speed should have a multiplying effect (assuming linear/additive aggregation), if it didn't, then I think how good it would be to help another human being would depend on how fast they are moving relative to you, and whether they are in an area of greater or lower gravitational "force" than you, due to special and general relativity. That is, if they are in relative motion or under stronger gravitational effects, time passes more slowly for them from your point of view, i.e. their clock speed is lower, but they also live longer. Relative motion goes both ways: time passes more slowly for you from their point of view. If you don't adjust for clock speed by multiplying, there are two hypothetical identical humans in different frames of reference (relative motion or gravitational potential or acceleration; one frame of reference can be your own) with identical experiences and lives from their own points of view that should receive different moral weights from your point of view. That seems pretty absurd to me.
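To make the invariance concrete, a small numerical sketch with a hypothetical person and units where c = 1 (the rate and lifespan numbers are assumptions for illustration): dividing their clock speed by the Lorentz factor while multiplying their observed lifespan by the same factor leaves the product unchanged in every frame.

```python
def lorentz_gamma(v, c=1.0):
    """Time-dilation factor for relative speed v, in units where c = 1."""
    return 1.0 / (1.0 - (v / c) ** 2) ** 0.5

subjective_rate = 1.0   # experience per unit of proper time (assumed)
proper_lifespan = 80.0  # years of proper time (assumed)

for v in [0.0, 0.5, 0.9]:
    g = lorentz_gamma(v)
    observed_rate = subjective_rate / g      # their clock runs slower for us
    observed_lifespan = proper_lifespan * g  # but they live longer for us
    # Weighting welfare by clock speed makes the total frame-invariant.
    assert abs(observed_rate * observed_lifespan - 80.0) < 1e-9
```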

Comparisons of Capacity for Welfare and Moral Status Across Species

If an objective list theory is true, couldn't it be the case that there are kinds of goods unavailable to us that are available to some other nonhuman animals? Or that they are available to us, but most of us don't appreciate them so they aren't recognized as goods? How could we find out? Are objective list theories therefore doomed to anthropocentrism and speciesism? How do objective list theories argue that something is or isn't one of these goods?

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

Thanks for the answer. Makes sense!

I'm also still pretty confused about why the S&P 500 is so high now.

Some possible insight: the NASDAQ is doing even better, at its all-time high and wasn't hit as hard initially, and the equal-weight S&P 500 is doing worse than the regular S&P 500 (which weights based on market cap), so this tells me that disproportionately large companies (and tech companies) are still growing pretty fast. Some of these companies may even have benefitted in some ways, like Amazon (online shopping and streaming) and Netflix (streaming).

20% of the S&P 500 is Microsoft, Apple, Amazon, Facebook and Google. Of these, only Google is still down from its pre-crash February peak; the rest are up 5-15%, except Amazon (4% of the S&P 500), which is up 40%!
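To illustrate the mechanics with entirely made-up numbers (not real market data): a cap-weighted index can rise while the equal-weighted version of the same names falls, whenever the largest constituents outperform.

```python
def cap_weighted_return(caps, returns):
    """Index return with weights proportional to market cap."""
    total = sum(caps)
    return sum(c / total * r for c, r in zip(caps, returns))

def equal_weighted_return(returns):
    """Index return with every constituent weighted the same."""
    return sum(returns) / len(returns)

# Hypothetical caps ($B) and returns: two mega-caps rally
# while three smaller names fall.
caps = [1500, 1400, 100, 100, 100]
rets = [0.15, 0.40, -0.20, -0.25, -0.30]

cw = cap_weighted_return(caps, rets)
ew = equal_weighted_return(rets)
assert cw > 0 > ew  # cap-weighted index up, equal-weighted down
```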

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

Are you using or do you plan to use your forecasting skills for investing?
