Ben_West

Division Manager: Forum and Events at CEA. Non-EA interests include puns, chess, YouTube, and applying science to things it isn't usually applied to.

TikTok: @benthamite

Comments

Ranking animal foods based on suffering and GHG emissions

This is awesome! I like the model, and the UI is intuitive and clean. Two requests/suggestions:

  1. Could you say "eggs from caged hens" or something instead of just "caged hen"? And similarly "chicken meat" instead of "broiler"? Or something like that – I think many people aren't familiar with those more technical terms.
  2. Would you be able to get a simpler domain name? I'd like to direct people to this, and I think the current name will be hard to remember.
Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

I think it means that there is something which we value linearly, but that thing might be a complicated function of happiness, preference satisfaction, etc.

As a toy example, say that $f$ is some bounded sigmoid function, and my utility function is to maximize $f(\text{total happiness})$; it's always going to be the case that $f(x) > f(y)$ whenever $x > y$, so I am in some sense scope sensitive, but I don't think I'm open to Pascal's mugging. (Correct me if this is wrong though.)
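Here's a minimal numerical sketch of that toy example (the logistic function and the specific probability/cost numbers are my own illustrative choices, not anything canonical):

```python
import math

def f(x):
    """A bounded sigmoid: strictly increasing, but capped below 1."""
    return 1 / (1 + math.exp(-x))

# Scope sensitivity: more of the good thing is always better...
assert f(10) > f(5) > f(1)

# ...but boundedness caps what a mugger can offer: with claimed
# probability p, the expected utility gain is at most p * (1 - f(now)),
# no matter how astronomical the promised payoff.
p = 1e-9                         # mugger's claimed probability
now = 0.0                        # current level of the good thing
max_gain = p * (1 - f(now))      # <= p, regardless of payoff size
cost = f(now) - f(now - 0.01)    # small but certain cost of paying up

print(f"max expected gain from the mugging: {max_gain:.1e}")  # ~5.0e-10
print(f"certain utility cost of paying:     {cost:.1e}")      # ~2.5e-03
```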

Things CEA is not doing

(These are personal comments, I'm not sure to what extent they are endorsed by others at CEA.)

Thanks for writing this up, Ozzie! For what it's worth, I'm not sure that you and Max disagree too much, though I don't want to speak for him.

Here's my attempt at a crux: suppose CEA takes on some new thing, and as a result Max manages me less well, making my work worse, but he does that new thing better (or at all) because he is spending time on it.

My view is that the marginal value of a Max hour is inverse U-shaped for both of these, and the maxima are fairly far out. (E.g. Max meeting with his direct reports once every two weeks would be substantially worse than meeting once a week.) As CEA develops, the peak of his management curve will shift left while the curve for new projects remains constant, and at some point it will become more valuable for him to think about a new thing than to speak with me about an old thing.

Please enjoy my attached paint image illustrating this:
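For those who prefer code to paint, here's a rough sketch of the same picture (the curve shapes, peaks, and current time allocation are all made-up numbers for illustration):

```python
import math

def marginal_value(h, peak):
    """Inverse-U marginal value of the h-th weekly hour, peaking at `peak`."""
    return math.exp(-((h - peak) ** 2) / 50)

mgmt_hours, project_hours = 20, 2  # assumed current allocation of Max's time

# Today: the management curve peaks far out, so the next management hour wins.
today_mgmt = marginal_value(mgmt_hours, peak=25)
today_proj = marginal_value(project_hours, peak=10)

# As CEA develops, the management peak shifts left while the new-project
# curve stays constant, and the comparison flips.
future_mgmt = marginal_value(mgmt_hours, peak=5)
future_proj = today_proj

print(f"today:  management {today_mgmt:.2f} vs new project {today_proj:.2f}")
print(f"future: management {future_mgmt:.2f} vs new project {future_proj:.2f}")
```

Today the next management hour wins (0.61 vs 0.28); in the future it loses (0.01 vs 0.28).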

I can think of two objections:

1. Management: Max is currently spending too much time managing me. Processes are well-developed and don't need his oversight (or I'm too stubborn to listen to him anyway or something) so there's no point in him spending so much time. (I.e. my "CEA in the future" picture is actually how CEA looks today for management.)
2. New projects: there is super low hanging fruit, and even doing a half-assed version of some new project would be way more valuable than making our existing projects better. (I.e. my "CEA in the future" picture is actually how CEA looks today for new projects.)

I'm curious whether either of those seems right/useful to you?

Training Bottlenecks in EA (professional skills)

Thanks for writing this up, Michelle! I would be excited for you to write more things like this in the future. Regarding this:

The more similar to mine someone’s situation is, the more likely they’ll be able to recommend resources tailored to me

A common observation[1] is that firms retain older employees but rarely hire them. One explanation for this is that organization-specific knowledge (what acronyms mean, how you make a project plan, etc.) is valuable, but general-purpose skills aren't as valuable, so there's no point in recruiting someone who has 30 years of experience from your competitor. (Or, alternatively: too few people actually learn valuable general-purpose skills for this to show up in the data.)

This seems roughly correct to me, anecdotally.

To the extent that this is accurate in EA, it might imply that EA-specific communication norms or other EA-specific things are the most valuable to train.

An additional hobbyhorse of mine is that certification might be more valuable than training. Having a mentor who can teach you things is nice, but it might actually be more valuable for these skilled and trusted mentors to evaluate people's existing abilities and then credibly certify them.


  1. See e.g. Are older workers overpaid? A literature review: "Theories emphasizing specific human capital are able to explain why firms employ older workers but hardly ever hire them." ↩︎

Why EA meta, and the top 3 charity ideas in the space

Thanks for sharing this! All three of these seem valuable.

A couple questions about the EA training one:

  1. You give the examples of operations skills, communication skills, and burnout prevention. These all seem valuable, but not differentially valuable to EA. Are you thinking that this would be training for EA-specific things like cause prioritization, or that they would do non-EA-specific things but in an EA way? If the latter, could you elaborate on why an EA-specific training organization would be better than people just going to Toastmasters or one of the million other existing professional development firms?
  2. Sometimes when people say they wish there were more EAs with a certain skill, I think they actually mean they wish there were more EAs who had credibly demonstrated that skill. When I think of EA-specific training (e.g. cause prioritization), I have a hard time imagining a 3-week course[1] which substantially improves someone's skills, but it seems a little more plausible that people could work on a month-long "capstone project" which is evaluated by someone whose endorsement of their work would be meaningful. (So the benefit of attending would be a certification to put on their resume, rather than a new skill.) Have you considered "EA certification" as opposed to training?

  1. I think there are weeks-long courses like "learn how to comply with this regulation" which are helpful, but those already exist outside EA. ↩︎

Incompatibility of moral realism and time discounting

Thanks for posting this!

You might be interested in this from On the Overwhelming Importance of Shaping the Far Future:

The Separated Worlds: There are only two planets with life. These planets are outside of each other’s light cones. On each planet, people live good lives. Relative to each of these planets’ reference frames, the planets exist at the same time. But relative to the reference frame of some comet traveling at a great speed (relative to the reference frame of the planets), one planet is created and destroyed before the other is created. If we treat space and time asymmetrically, we would have to claim that, relative to the reference frame of the planets, this outcome was not as good as it is relative to the reference frame of the comet. But this is very hard to believe. The value of this possible world should not be relative to any reference frame.

Also, it's worth pointing out that "regular claims about the world (like 'Elsa is taller than Anna')" are also not "real" in the sense you are using the term. I'm not super familiar with the subject, but I wouldn't be surprised if many moral realists are okay with describing moral claims as "only" as real as claims about length.

Careers Questions Open Thread

My experience with bioinformatics is almost exclusively on the industry side, and more the informatics than the bio. With that caveat, a few thoughts:

should I prioritize developing skills that will make me more employable and E2G (e.g. develop and apply sexy, ad hoc methods to rich-person illnesses in a more mainstream bioinformatics-y role)

My experience is that the highest-earning positions are not "sexy" (in the way I think you are using the term). I recall one conference I attended in which the speaker was describing some advanced predictive algorithm, and a doctor in the back raised their hand and said, "This is all nice, but I can't even generate a list of my diabetic patients, so could you start with that, please?"

This might also address your question "how easy is it to, say, break into industry data science for anthropology graduates with experience in computational stats methods development?" – I think it depends very much on what you mean by "data science". A lot of the most successful bioinformatics companies' products are quite mundane by academic standards: alerting clinicians to well-known drug-drug interactions, identifying patients based on well-validated reference ranges for lab tests, etc. My impression is that getting a position at one of these places is roughly similar to getting any other programming job. If you are looking for something more academic, though, the requirements are different.
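To give a flavor of how mundane this logic can be, here's a toy sketch (the reference ranges and interaction list are invented for illustration, not clinical guidance):

```python
# Invented examples for illustration only; not clinical guidance.
REFERENCE_RANGES = {
    "hba1c_pct": (4.0, 5.6),
    "ldl_mg_dl": (0.0, 100.0),
}
KNOWN_INTERACTIONS = {frozenset({"warfarin", "aspirin"})}

def flag_out_of_range(labs):
    """Return lab results that fall outside their reference range."""
    return {
        test: value
        for test, value in labs.items()
        if test in REFERENCE_RANGES
        and not REFERENCE_RANGES[test][0] <= value <= REFERENCE_RANGES[test][1]
    }

def flag_interactions(medications):
    """Return known drug-drug interaction pairs in a medication list."""
    meds = set(medications)
    return [pair for pair in KNOWN_INTERACTIONS if pair <= meds]

labs = {"hba1c_pct": 7.1, "ldl_mg_dl": 88.0}
meds = ["warfarin", "aspirin", "metformin"]
print(flag_out_of_range(labs))   # {'hba1c_pct': 7.1}
print(flag_interactions(meds))   # [frozenset({'warfarin', 'aspirin'})]
```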

focus more on greater blights afflicting larger numbers of human and non-human animals (say, to understand differential responses to tropical diseases, or maybe variation in the human aging process, or pivot to food science and work on cultured meat or something, as well as work on more interpretable methods)

A problem I suspect you will run into is that methods development requires (often quite large) data sets. I get the sense from your brief bio that you aren't interested in doing any wet lab work, meaning that if you were to work on, say, cultured meat, you would need a data set from some collaborator.

If I were you, I might try to resolve this first. I know GFI has an academic network you can join, and you could message people there about available data sets.

Also, you might be interested in OpenPhil's early career GCBR funding. Even if you don't need funding, they might be able to connect you with useful collaborators.

What do you think about the "Initiative to Accelerate Charitable Giving", a new US legislative proposal? Yay or nay?

I find it hard to come up with an argument in support of this proposal, but as one clarification: the proposal is that donors could choose to create a DAF with no time limit, but where the donor receives only capital gains tax benefits at the time of donation, and income tax benefits at the time of disbursement. Many large donors get most of their income through capital gains, so they may not be too bothered by this, and small donors might receive some benefit by being able to save up their donations for several years and then receive income tax benefits all at once when they disburse. (This would be helpful if they normally don't donate enough per year to get over the standard deduction but could get over it after saving up donations for several years.)
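To make the standard-deduction point concrete, here's a toy calculation (the deduction amount and marginal rate are simplifying assumptions on my part, not tax advice):

```python
# Toy model of "bunching" donations; numbers are illustrative assumptions.
STANDARD_DEDUCTION = 12_550  # assumed single-filer standard deduction
MARGINAL_RATE = 0.24         # assumed flat marginal income tax rate

def tax_benefit(donation):
    """Income tax saved by itemizing a donation, vs. the standard deduction."""
    return max(0, donation - STANDARD_DEDUCTION) * MARGINAL_RATE

annual = 5_000

# Donating $5k every year: itemizing never beats the standard deduction.
yearly = sum(tax_benefit(annual) for _ in range(5))

# Saving up for five years and disbursing $25k at once clears the threshold.
bunched = tax_benefit(annual * 5)

print(f"donate yearly: ${yearly:,.0f} in tax savings")   # $0
print(f"bunch 5 years: ${bunched:,.0f} in tax savings")  # $2,988
```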

My guess is that this is mostly harmful for people with low six-figure incomes who want to donate a substantial portion of their incomes and wait more than 15 years to disburse.

Guerrilla Foundation Response to EA Forum Discussion

Thanks for continuing to engage! I have been looking forward to seeing your response article, and this was interesting to read.

I suspect that many readers of this Forum would agree with most of your points, particularly the first one. Ironically, it sometimes feels like the two most common criticisms of EA are that it focuses too much on measurable data (e.g. critiquing randomista-related areas of EA) and that it focuses too little on measurable data (e.g. critiquing AI safety). This seems like a sign that we could better explain ourselves.

One area of genuine difference might be impact investing: plenty of EAs believe you should invest instead of donating now, but impact investing seems relatively rare (OpenPhil's investment in Impossible Foods being one prominent counterexample). I'm curious whether you have read Founders Pledge's report on impact investing? In particular: you mentioned divestment from publicly traded companies, which FP considers an especially difficult way to have an impact (Principle 4, pages 17-27). I would be curious to hear if you disagree with any of their claims, or with the examples they analyzed, like Acumen Fund.

80k hrs #88 - Response to criticism

Thanks for posting this! I thought it was interesting, and I would support more people writing up responses to 80K podcasts.

Minor: you have a typo in your link to transparency.tube
