MichaelDickens

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences

Quantitative Models for Cause Selection

Comments

Agrippa's Shortform

Eliezer said something similar, and he seems similarly upset about it: https://twitter.com/ESYudkowsky/status/1446562238848847877

(FWIW I am also upset about it, I just don't know that I have anything constructive to say)

MichaelDickens's Shortform

Looking at the Decade in Review, I feel like voters systematically overrate cool but ultimately unimportant posts, and systematically underrate complicated technical posts that have a reasonable probability of changing people's actual prioritization decisions.

Example: "Effective Altruism is a Question (not an ideology)", the #2 voted post, is a very cool concept and I really like it, but ultimately I don't see how it would change anyone's important life decisions, so I think it's overrated in the decade review.

"Differences in the Intensity of Valenced Experience across Species", the #35 voted post (with 1/3 as many votes as #2), has a significant probability of changing how people prioritize helping different species, which is very important, so I think it's underrated.

(I do think the winning post, "Growth and the case against randomista development", is fairly rated because if true, it suggests that all global-poverty-focused EAs should be behaving very differently.)

This pattern of voting probably happens because people tend to upvote things they like, and a post that's mildly helpful for lots of people is easier to like than a post that's very helpful for a smaller number of people.

(For the record, I enjoy reading the cool conceptual posts much more than the complicated technical posts.)

What are some high-EV but failed EA projects?

I will give an example of one of my own failed projects: I spent a couple of months writing Should Global Poverty Donors Give Now or Later? It's an important question, and my approach was at least sort of correct, but it had some flaws that made it pretty much useless.

Why Helping the Flynn Campaign is especially useful right now

How quickly can campaigns spend money? Can they reasonably make use of new donations in less than 8 days?

New substack on utilitarian ethics: Good Thoughts

Sounds plausible. Some data: The PhilPapers survey found that 31% of philosophers accept or lean toward consequentialism, vs. 32% deontology and 37% virtue ethics. The ratios are about the same if instead of looking at all philosophers, you look at just applied ethicists or normative ethicists.

I don't know of any surveys on normative views of philosophy-adjacent people, but I expect that (e.g.) economists lean much more consequentialist than philosophers. Not sure what other fields one would consider adjacent to philosophy. Maybe quant finance?

How to optimize your taxes as a donor in the US: donate appreciated securities, make a donor-advised fund, and bunch your donations

You could do something very similar by having one person short a liquid security with low borrowing costs (like SPY maybe) and have the other person buy it.

The buyer will tend to make more money than the short seller, so you could find a pair of securities with similar expected return (e.g., SPY and EFA) and have each person buy one and short the other.

You could also buy one security and short another without there being a second person. But I don't think this is an efficient use of capital—it's better to just buy something with good expected return.
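
To make the paired arrangement concrete, here is a minimal simulation sketch. It is not from the original comment: the return distribution, the $10,000 notional, and the use of SPY/EFA as labels are all illustrative assumptions. It just shows that the two people's positions exactly offset each other, and that neither side has an expected edge when the two securities have the same expected return.

```python
# Illustrative sketch (not from the original comment): simulate the paired
# long/short arrangement, using made-up return assumptions.
# Person A is long SPY and short EFA; person B is long EFA and short SPY.
import random

def simulate(n_trials=100_000, notional=10_000):
    # Hypothetical annual return distributions with the same mean, matching the
    # suggestion to pick securities with similar expected return.
    mean, sd = 0.07, 0.18
    a_total, b_total = 0.0, 0.0
    for _ in range(n_trials):
        spy = random.gauss(mean, sd)    # SPY return (assumed)
        efa = random.gauss(mean, sd)    # EFA return (assumed)
        a_pnl = notional * (spy - efa)  # long SPY, short EFA
        b_pnl = notional * (efa - spy)  # long EFA, short SPY
        assert abs(a_pnl + b_pnl) < 1e-9  # the two positions exactly offset
        a_total += a_pnl
        b_total += b_pnl
    return a_total / n_trials, b_total / n_trials

if __name__ == "__main__":
    a_avg, b_avg = simulate()
    # Both averages hover around zero: neither side has an expected edge,
    # but in any single year one person's gain equals the other's loss.
    print(f"Average P&L: A = {a_avg:.2f}, B = {b_avg:.2f}")
```

In any single realization one person's gain is exactly the other's loss, which is the property the arrangement relies on; equal expected returns just make the expected transfer roughly zero.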

Kyle Lucchese's Shortform

Is it possible to do the most good while retaining current systems (especially economic)? What in these systems needs to be transformed?

This question is already pretty heavily researched by economists. There are some known answers (immigration liberalization would be very good) and some unknowns (how much is the right amount of fiscal stimulus in recessions?). For the most part, I don't think there's much low-hanging fruit in terms of questions that matter a lot but haven't been addressed yet. The Global Priorities Institute does some economics research; IMO that's the best source of EA-relevant and neglected questions of this type.

FTX/CEA - show us your numbers!

As a positive example, 80,000 Hours does relatively extensive impact evaluations. The most obvious limitation is that they have to guess whether any career changes are actually improvements, but I don't see how to fix that—determining the EV of even a single person's career is an extremely hard problem. IIRC they've done some quasi-experiments but I couldn't find them from quickly skimming their impact evaluations.

FTX/CEA - show us your numbers!

A related thought: If an org is willing to delay spending (say) $500M/year due to reputational/epistemic concerns, then it should easily be willing to pay $50M to hire top PR experts to figure out the reputational effects of spending at different rates.

(I think delays in spending by big orgs are mostly due to uncertainty about where to donate, not about PR. But off the cuff, I suspect that EA orgs spend less than the optimal amount on strategic PR (as opposed to "un-strategic PR", e.g., doing whatever the CEO's gut says is best for PR).)
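
As a rough illustration of the reasoning, here is a back-of-envelope value-of-information sketch. Only the $500M/year and $50M figures come from the comment above; every other number is an assumption chosen for illustration, and the conclusion is sensitive to those guesses.

```python
# Back-of-envelope value-of-information sketch. Only budget_per_year and
# research_cost come from the comment; everything else is an assumed
# illustration, and the conclusion is sensitive to these guesses.
budget_per_year = 500e6   # annual spending being delayed (from the comment)
research_cost = 50e6      # cost of hiring top PR experts (from the comment)

years_affected = 3        # assumed: the spend-vs-delay decision shapes several years of spending
p_changes_decision = 0.3  # assumed chance the research actually changes the decision
impact_gap = 0.2          # assumed impact difference between the better and worse option,
                          # as a fraction of the budget at stake

expected_benefit = p_changes_decision * impact_gap * budget_per_year * years_affected
print(f"Expected benefit ~${expected_benefit / 1e6:.0f}M vs. research cost ${research_cost / 1e6:.0f}M")
# ~$90M of expected benefit vs. $50M of cost under these assumptions
```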
