All of Rook's Comments + Replies

Why Hasn't Effective Altruism Grown Since 2015?

Scott Alexander has a very interesting response to this post on reddit: see here.

Where are you donating in 2020 and why?

I wrote something about campaign contributions in federal US elections earlier this year. I could be wrong, but based on my (non-expert) survey of the campaign finance literature, it doesn't seem like donating to political campaigns has a very substantial impact on election outcomes (most of the time). The main takeaway is that spending and success are correlated, but the former doesn't cause the latter. Spending is simply a useful heuristic for the size/traction/etc. of a campaign.

Should we think more about EA dating?

This is very similar to the comment I was going to make.

I admit that it has crossed my mind that even a moderate EA lifestyle is unusually demanding, especially in the long term, and therefore could make finding a long-term partner more difficult. However, I do resonate with that last bit – encouraging inter-EA dating also seems culty and insular to me, and I’d like to think that most of us could integrate EA (as a project and set of values) into our lives in a way that allows us to have other interests, values, friends, and so on (i.e., our live... (read more)

I responded to Marisa with this comment, which pushes back on the notion that inter-EA dating is a particularly culty and insular phenomenon. Upshots:

* Some public accusations of cultishness should be taken seriously, but EA should respond to them by doing what we do best: looking into the scientific research, specifically about cults, when evaluating such allegations ourselves. This is a more sensible approach than hand-wringing about hypothetical accusations of cultishness that haven't been levelled yet. To do so only plays into the hands of moral panics over cults in public discourse, which don't themselves typically lessen the harms of cults, real or perceived.
* Dozens if not hundreds of people in EA have dated, formed relationships, gotten married, or started families in ways that have benefited them personally and also their capacity to do good. The same is true, in its own way, of the tens of millions of people who marry and start families within their own religions, cultures, or ethnic groups, including in more diverse and pluralistic societies. While EA ought to be worried about ways in which it could be cult-like, the common human tendency to spend our lives with those who share our way of life doesn't appear to be high on that list.
* One could argue that that's a problematic tendency within societies at large and that EA should aspire to more. Given my perception that those in EA who've formed flourishing relationships within the community have done so organically as individuals, there doesn't seem to me to be a reason to encourage intra-community dating. Yet to discourage it based on a concern that it may appear cult-like would be to impel community members toward a kind of romantic asceticism for nobody's benefit.
I do think you could compromise, but I worry that some EAs won't want to. If you take Peter Singer's drowning child thought experiment seriously you may not want to placate your non-EA girlfriend by going on that holiday abroad. Taking that thought experiment seriously for many people really will entail a high degree of demandingness without much room for compromise.
How bad is coronavirus really?
Answer by Rook · May 09, 2020

There are two different angles on this question: one is whether the level of response within EA has been appropriate; the other is whether the level of response outside of EA (i.e., by society at large) has been appropriate.

I really don't know about the first one. People outside of EA radically underestimate the scale of ongoing moral catastrophes, but once you take those into account, it's not clear to me how to compare -- as one example -- the suffering produced by factory farming to the suffering produced by a bad response to coronavirus in devel... (read more)

I’m curious about people’s evaluations of (2)— how long would that go on? How bad would it really be compared to the losses from shutdown?
The Alienation Objection to Consequentialism

Glad the alienation objection is getting some airtime in EA. I wanted to add two very brief notes in defense of consequentialism:

1) The alienation objection seems generalizable beyond consequentialism to any moral theory which (as you put it) inhibits you from participating in a normative ideal. I am not too familiar with other moral traditions, but I can see how following certain deontological or contractualist theories too far could also result in a kind of alienation. (Virtue ethics may be the safest here!)

2) The normative ideals that deal... (read more)

Should recent events make us more or less concerned about biorisk?

This was basically going to be my response -- but to expand on it in a slightly different direction: although maybe we shouldn't be more concerned about biorisk, young EAs who are interested in biorisk should update in favor of pursuing a career in it/getting involved with it. My two reasons for this are:

1) There will likely be more opportunities in biorisk (in particular around pandemic preparedness) in the near-future.

2) EAs will still be unusually invested, compared to non-EAs, in lower-probability, higher-consequence problems (like GCBRs).

(1)... (read more)

How to estimate the EV of general intellectual progress

Some low-effort thoughts (I am not an economist so I might be embarrassing myself!):

  • My first inclination is something like "find the average output of the field per unit time, then find the average growth rate of the field, and then calculate the 'extra' output you'd get with a higher growth rate." In other words: (1) what is the field currently doing of value? (2) how much more value would the field produce if it did whatever it's currently doing faster?
    • It would be interesting to see someone do a quantitative analysis of the history of progress in som
... (read more)
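The back-of-the-envelope estimate above can be sketched numerically. This is just an illustrative sketch; the function name and all figures are made-up placeholders, not real data about any field:

```python
# Hypothetical sketch of the estimate above: cumulative field output at the
# current growth rate vs. a boosted rate. All numbers are illustrative.

def extra_output(base_output, current_rate, boosted_rate, years):
    """Return the 'extra' cumulative output produced by the higher growth rate."""
    def cumulative(rate):
        # Output compounds each year from the base level.
        return sum(base_output * (1 + rate) ** t for t in range(years))
    return cumulative(boosted_rate) - cumulative(current_rate)

# A field producing 100 units of value per year, growing at 2% vs. 3%, over a decade:
gain = extra_output(base_output=100, current_rate=0.02, boosted_rate=0.03, years=10)
print(round(gain, 1))  # roughly 51.4 extra units over the decade
```

The hard part, of course, is step (1): putting a number on what the field's output is actually worth, which this sketch simply takes as given.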
Ozzie Gooen · 3y
Thanks! This is interesting; I will spend some time thinking about it.

1. Please don't worry much about embarrassing yourself! It's definitely a challenge with forums like this, but it would be pretty unreasonable for anyone to expect that post/comment authors have degrees in all the possibly relevant topics.
2. Low-effort thoughts can be pretty great; they may be some of the highest value-per-difficulty work.
I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA

I dug up a few other places 80,000 Hours mentions law careers, but I couldn't find any article where they discuss US commercial law for earning-to-give. The other mentions I found include:

In their profile on US AI Policy, one of their recommended graduate programs is a "prestigious law JD from Yale or Harvard, or possibly another top 6 law school."

In this article for people with existing experience in a particular field, they write “If you have experience as a lawyer in the U.S. that’s great because it’s among the best w... (read more)

TL;DR, I think EAs should probably use the following heuristics if they are interested in some career for which law school is a plausible path:

  1. If you can get into a T3 law school (Harvard, Yale, Stanford), have a fairly strong prior that it's worth going.
  2. If you can get into a T6 law school (Columbia, Chicago, NYU), probably take it.
  3. If you can get into a T14 law school, seriously consider it. But employment statistics at the bottom of the T14 are very different from those at the top.
  4. Be wary of things outside the T14.

In general, definitely carefully r

... (read more)
I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA

You mentioned in the answer to another question that you made the transition from being heavily involved with social justice in undergrad to being more involved with EA in law school. This makes me kind of curious -- what's your EA "origin story"? (How did you find out about effective altruism, how did you first become involved, etc.)

My EA origins story is pretty boring! I was a research assistant for a Philosophy professor who included a unit on EA in her Environmental Ethics course. That was my first exposure to the ideas of EA (although obviously I had exposure to Peter Singer previously). As a result, I added Doing Good Better to my reading list, and I read it in December 2016 (halfway through my first year of law school). I was pretty immediately convinced of its core ideas.

I then joined the Harvard Law School EA group, which was a really cool group at the time. In fact, it's some

... (read more)
In praise of unhistoric heroism

I love this post! It’s beautifully written, and one of the best things I’ve read on the forum in a while. So take my subsequent criticism of it with that in mind! I apologize in advance if I’m totally missing the point.

I feel like EAs (and most ambitious people generally) are pretty confused about how to reconcile status/impact with self-worth (I’m including myself in this group). If confronted, many of us would say that status/impact should really be orthogonal to how we feel about ourselves, but we can’t quite bring t... (read more)

[Link] Aiming for Moral Mediocrity | Eric Schwitzgebel

This is fair. I was trying to salvage his argument without running into the problems mentioned in the above comment, but if he means "aim" objectively, then it's tautologically true that people aim to be morally average, and if he means "aim" subjectively, then it contradicts the claim that most people subjectively aim to be slightly above average (which is what he seems to say in the B+ section).

The options are: (1) his central claim is uninteresting, (2) his central claim is wrong, or (3) I'm misunderstanding his central claim. I normally would feel like I should play it safe and default to (3), but it's probably (2).

[Link] Aiming for Moral Mediocrity | Eric Schwitzgebel

This was a good comment and very clarifying. I agree with most of what you say about the evidence – Schwitzgebel seems to be misinterpreting the evidence (and I think I was also initially).

Just to be extra charitable to Schwitzgebel, however, I think we can assume his central claim is basically intelligible (even if it’s not supported by the evidence), and he’s just using some words in an inconsistent way. Some of the confusion in your comment may be caused by this inconsistency.

In most of his piece, by "aiming to be mediocre... (read more)

This skirts close to a tautology: people's average moral behavior just equals people's average moral behavior, and the output that people's moral processes actually produce is simply the observed distribution of moral behavior. The "aiming" part of Schwitzgebel's hypothesis (that people aim for moral mediocrity) is what gives it empirical content, and that empirical content gets harder to pick out when "aim" is interpreted in the objective sense.
[Link] Aiming for Moral Mediocrity | Eric Schwitzgebel

Not to be pedantic, but

  • "People behave morally mediocre" and "People regard themselves as morally mediocre" are two different types of claims. I take Schwitzgebel as claiming the former, and I think he agrees with you that people regard themselves as slightly above average (e.g. section 6 titled "Aiming for a B+").
  • He also agrees with you that the evidence is unsatisfactory in many ways (see section 4, titled "The Gap Between the Evidence Above and the Thesis That Most People Aim for Moral Mediocrity"). Granted, he doe
... (read more)
The first two sentences of his article "Aiming For Moral Mediocrity" are:

I take the fact that people systematically evaluate themselves as being significantly (morally) better than average as strong evidence against the claim that people are aiming to be morally mediocre. If people systematically believed themselves to be better than average and were aiming for mediocrity, then they could (and would) save themselves effort and reduce their moral behaviour until they no longer thought themselves to be above average.

Note that the evidence Schwitzgebel cites for his empirical thesis doesn't show that "People behave morally mediocre" any more than it shows that people aim to be morally mediocre: it shows that people's behaviour goes up or down when you tell them that a reference class is behaving much better or worse, but not that most people's behaviour is anywhere near the mediocre reference point. For example, in Cialdini et al. (2006), 5% of people took wood from a forest when told that "the vast majority of people did not" and 7.92% did when told that "many past visitors" had (which was not a significant difference, as it happened). Unfortunately, the reference points "vast majority" and "many" are vague, but this doesn't suggest that most people are behaving anywhere near the mediocre reference point.

I recognise that Schwitzgebel acknowledges this "gap" between his evidence and his thesis in section 4, but I think he fails to appreciate the extent of the gap (near total), or that the evidence he cites can actually be seen as evidence against his thesis if we infer from these results that most people don't seem to be acting in line with the mediocre reference point.

In the "aiming for a B+" section you cite, he actually seems to shift quite a bit to be more in line with my claim. Here he suggests that "B+ probably isn’t low enough to be mediocre, exactly. B+ is good. It’s
How Fungible Are Interests?
This is not to say that she couldn't and that she might use this as an excuse to avoid doing what she thinks is necessary to excuse doing what is convenient, but to say that we should have compassion for those who may find they agree with EA but find they cannot immediately make the changes they would like to due to life conditions, and we should not judge them as less good EAs even if they are less able to contribute to EA missions than if they were a different person in a different world that doesn't exist.

This is great, and I'd like to ad... (read more)