Rook

Comments

Should we think more about EA dating?

This is very similar to the comment I was going to make.

I admit that it has crossed my mind that even a moderate EA lifestyle is unusually demanding, especially in the long term, and therefore could make finding a long-term partner more difficult. However, I do resonate with that last bit – encouraging inter-EA dating also seems culty and insular to me, and I’d like to think that most of us could integrate EA (as a project and set of values) into our lives in a way that allows us to have other interests, values, friends, and so on (i.e., our lives don’t have to entirely revolve around our EA-esque commitments!). I don’t see why an EA and a non-EA who were romantically compatible couldn’t find comfortable ways to compromise on lifestyle questions – after all, plenty of frugal people find love, and plenty of vegan people find love; who’s to say a frugal vegan couldn’t find love?

Will protests lead to thousands of coronavirus deaths?

This was a very clear and valuable comment for me. Strongly upvoted.

However, you could argue, given the current political momentum of the BLM protests, there's a unique reason to support those now over protests that support other causes. BLM protests today may be able to encourage criminal justice reforms that (1) could last for a very long time (2) wouldn't be possible in the future (or would be significantly more difficult), when there's less political momentum behind criminal justice reform.

How bad is coronavirus really?
Answer by Rook, May 09, 2020

There are two different angles on this question. One is whether the level of response within EA has been appropriate; the second is whether the level of response outside of EA (i.e., by society at large) has been appropriate.

I really don't know about the first one. People outside of EA radically underestimate the scale of ongoing moral catastrophes, but once you take those into account, it's not clear to me how to compare -- as one example -- the suffering produced by factory farming to the suffering produced by a bad response to coronavirus in developed countries (replace "suffering" with "negative effects" or something else if "suffering" isn't the locus of your moral concern). My guess is many of the best EA causes should still be the primary focus of EAs, as non-EAs are counterfactually unlikely to be motivated by them. I do think, however, that at the very beginning of the coronavirus timeline (January to early March), the massive EA focus on coronavirus was by and large appropriate, given how nonchalant most of society seemed to be about it.

Now for the second one -- has the response of society been appropriate? I'm also under-informed here, but my very unoriginal answer is that the response to the coronavirus has been appropriate if you consider it proportional, not to the deadliness of the disease, but to (1) the infectivity of the disease and (2) the corresponding inability of the healthcare system to handle a large number of infections. You wrote:

I read the news, too, but there’s something about the level of response to coronavirus given the very moderate deadliness— especially within EA— that just does not add up to me.

And it seems like you're probably not accounting for (1) and (2). It does not seem like a particularly deadly disease (when compared to other, more dangerous pathogens), but it is very easily spread, which is where the worry comes from.

The Alienation Objection to Consequentialism

Glad the alienation objection is getting some airtime in EA. I wanted to add two very brief notes in defense of consequentialism:

1) The alienation objection seems generalizable beyond consequentialism to any moral theory which (as you put it) inhibits you from participating in a normative ideal. I am not too familiar with other moral traditions, but I can see how following certain deontological or contractualist theories too far could also result in a kind of alienation. (Virtue ethics may be the safest here!)

2) The normative ideals that deal with interpersonal relationships are, as you mentioned, not the only normative ideals on offer. And while the ones that deal with interpersonal relationships may deserve a special weight, it’s still not clear how to weigh them relative to other normative ideals. Some of these other normative ideals may actually be bolstered by updating more in favor of following some kind of consequentialism. For example, consider the below quote from Alienation, Consequentialism, and the Demands of Morality by Peter Railton, which deeply resonated with me when I first read it:

Individuals who will not or cannot allow questions to arise about what they are doing from a broader perspective are in an important way cut off from their society and the larger world. They may not be troubled by this in any very direct way, but even so they may fail to experience that powerful sense of purpose and meaning that comes from seeing oneself as part of something larger and more enduring than oneself or one's intimate circle. The search for such a sense of purpose and meaning seems to me ubiquitous — surely much of the impulse to religion, to ethnic or regional identification (most strikingly, in the ‘rediscovery’ of such identities), or to institutional loyalty stems from this desire to see ourselves as part of a more general, lasting and worthwhile scheme of things. This presumably is part of what is meant by saying that secularization has led to a sense of meaninglessness, or that the decline of traditional communities and societies has meant an increase in anomie.

Should recent events make us more or less concerned about biorisk?

This was basically going to be my response -- but to expand on it in a slightly different direction: although maybe we shouldn't be more concerned about biorisk, young EAs who are interested in biorisk should update in favor of pursuing a career in, or otherwise getting involved with, the area. My two reasons for this are:

1) There will likely be more opportunities in biorisk (in particular around pandemic preparedness) in the near-future.

2) EAs will still be unusually invested, relative to non-EAs, in lower-probability, higher-risk problems (like GCBRs).

(1) means talented EAs will have more access to potentially high-impact career options in this area, and (2) means EAs may have a higher counterfactual impact than non-EAs by getting involved.

How to estimate the EV of general intellectual progress

Some low-effort thoughts (I am not an economist so I might be embarrassing myself!):

  • My first inclination is something like "find the average output of the field per unit time, then find the average growth rate of a field, and then calculate the 'extra' output you'd get with a higher growth rate." In other words: (1) what is the field currently doing of value? (2) how much more value would that field produce if it did whatever it's currently doing faster? (There's a rough numerical sketch of what I mean below this list.)
    • It would be interesting to see someone do a quantitative analysis of the history of progress in some particular field. However, because so much intellectual progress has happened in the last ~300 years by so few people (relatively speaking), my guess is we might not have enough data in many cases.
  • The more something like the "great man theory" applies to a field (i.e. the more stochastic progress is), the more of a problem you have with this model. The first thing I thought of was philosophy: the median philosopher probably has an output close to 0, but the top 0.01% of philosophers contribute extraordinary value. You probably couldn't build a very helpful systematic model for philosophical discoveries. Maybe you could instead ask a question like "what's the output we'd get from solving (or making significant headway towards solving) philosophical problem X, and how do we increase the chance that someone solves X?"
  • With regard to that latter question (also your second set-up), I wonder how reliably we could apply heuristics for determining the EV of particular contributions (i.e. how much value do we usually get from papers in field Y with ~X citations?).
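
To make the first bullet concrete, here's a minimal sketch of the kind of back-of-envelope calculation I have in mind. It assumes (unrealistically) that a field's yearly output grows exponentially at a constant rate and that output can be summarized in generic "units of value"; all of the specific numbers are hypothetical.

```python
# Toy back-of-envelope model: the value of speeding a field up is the gap
# between its accelerated and baseline cumulative output. All numbers here
# are made up and purely illustrative.

def total_output(initial_output: float, growth_rate: float, years: int) -> float:
    """Cumulative output of a field whose yearly output grows exponentially."""
    return sum(initial_output * (1 + growth_rate) ** t for t in range(years))

# Hypothetical field: 100 "units of value" this year, growing at 2% per year.
baseline = total_output(initial_output=100, growth_rate=0.02, years=30)

# Suppose an intervention (funding, field-building, etc.) nudges growth to 2.5%.
accelerated = total_output(initial_output=100, growth_rate=0.025, years=30)

print(f"Extra output over 30 years: {accelerated - baseline:.1f} units")
```

Obviously all the difficulty is in estimating the inputs (what counts as a "unit of value," what the counterfactual growth rate is), not in the arithmetic.
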
I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA

I dug up a few other places where 80,000 Hours mentions law careers, but I couldn't find any article where they discuss US commercial law for earning-to-give. The other mentions I found include:

In their profile on US AI Policy, one of their recommended graduate programs is a "prestigious law JD from Yale or Harvard, or possibly another top 6 law school."

In this article for people with existing experience in a particular field, they write “If you have experience as a lawyer in the U.S. that’s great because it’s among the best ways to get positions in government & policy, which is one of our top priority areas.”

It's also mentioned in this article that Congress has a lot of HLS graduates.

I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA

You mentioned in the answer to another question that you made the transition from being heavily involved with social justice in undergrad to being more involved with EA in law school. This makes me kind of curious -- what's your EA "origin story"? (How did you find out about effective altruism, how did you first become involved, etc.)

In praise of unhistoric heroism

I love this post! It’s beautifully written, and one of the best things I’ve read on the forum in a while. So take my subsequent criticism of it with that in mind! I apologize in advance if I’m totally missing the point.

I feel like EAs (and most ambitious people generally) are pretty confused about how to reconcile status/impact with self-worth (I’m including myself in this group). If confronted, many of us would say that status/impact should really be orthogonal to how we feel about ourselves, but we can’t quite make that feel emotionally true. We helplessly draw invidious comparisons between ourselves and successful people like “Carl” (using the name as a label here, not saying we really do this when we look at Carl Shulman), even though we consciously would admit that the feeling doesn’t make much sense.

I’ve read a number of relevant discussions, and I still don’t think anyone has satisfactorily dealt with this problem. But I’ll say that, for now, I think we should separate questions about the moral integrity of our actions (how we should define the goodness/badness of our actions) and those about how we should think about ourselves as people (whether we’re good/bad people). They’re related, but there might not be an easy mapping from one to the other. For instance, I think it’s very conceivable that a “Dorothea” may be a better person than a “Carl”, but a “Carl” does more good than a “Dorothea.” And, perhaps, while we should strive to do as much good as possible, our self-worth should track the kind of people we are much more closely than how much good we do.

[Link] Aiming for Moral Mediocrity | Eric Schwitzgebel

This is fair. I was trying to salvage his argument without running into the problems mentioned in the above comment, but if he means "aim" objectively, then it's tautologically true that people aim to be morally average, and if he means "aim" subjectively, then it contradicts the claim that most people subjectively aim to be slightly above average (which is what he seems to say in the B+ section).

The options are: (1) his central claim is uninteresting (2) his central claim is wrong (3) I'm misunderstanding his central claim. And I normally would feel like I should play it safe and default to (3), but it's probably (2).
