MichaelDickens

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences

Quantitative Models for Cause Selection


Comments

A Preliminary Model of Mission-Correlated Investing

It's the same as the standard notion in that you're hedging something. It's different in that the thing you're hedging isn't a security. If you wanted to, you could talk about it in terms of the beta between the hedge and the mission target.
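For illustration, here's a minimal sketch (in Python, not from the original post) of that beta calculation, assuming you have paired observations of hedge returns and some quantitative proxy for the mission target; the numbers are made up:

```python
import numpy as np

# Hypothetical paired observations: periodic returns of the hedge asset and
# contemporaneous changes in a proxy index for the mission target.
hedge_returns = np.array([0.02, -0.01, 0.03, 0.00, -0.02])
mission_target = np.array([0.01, -0.02, 0.04, 0.01, -0.03])

# Beta of the hedge with respect to the mission target:
# cov(hedge, target) / var(target), the same formula used for a security vs. a benchmark.
beta = np.cov(hedge_returns, mission_target)[0, 1] / np.var(mission_target, ddof=1)
print(f"beta of hedge vs. mission target: {beta:.2f}")
```

The only difference from a standard hedging setup is what you plug in as the "benchmark": a mission-relevant quantity rather than a security's return.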

The Forum should consider anonymizing names

I use the LessWrong anti-kibitzer to hide names. All you have to do to make it work on the EA Forum is change the URL from lesswrong.com to forum.effectivealtruism.org.

Announcing the launch of Open Phil's new website

To piggyback on this, "with the resources available to us" is tautologically true. The mission statement would have identical meaning if it were simply "Our mission is to help others as much as we can."

Taking a step back, I don't really like the concept of mission statements in general. I think they almost always communicate close to zero information, and organizations shouldn't have them.

On Deference and Yudkowsky's AI Risk Estimates

I read this post kind of quickly, so apologies if I'm misunderstanding. It seems to me that this post's claim is basically:

  1. Eliezer wrote some arguments about what he believes about AI safety.
  2. People updated toward Eliezer's beliefs.
  3. Therefore, people defer too much to Eliezer.

I think this is dismissing a different (and much more likely IMO) possibility, which is that Eliezer's arguments were good, and people updated based on the strength of the arguments.

(Even if his recent posts didn't contain novel arguments, the arguments still could have been novel to many readers.)

What are EA's biggest legible achievements in x-risk?

> That being said, I don't think an Iran-Israel exchange would constitute an existential risk, unless it then triggered a global nuclear war.

I wouldn't sell yourself short. IMO, any nuclear exchange would dramatically increase the probability of a global nuclear war, even if the probability is still small by non-xrisk standards.

Steering AI to care for animals, and soon

> Some people have told me (probably as a joke) that the best way to improve wild animal welfare is to invent AGI and let the AGI figure it out.

I believe this, not as a joke. But I do agree with you that this requires solving the broader alignment problem and also ensuring that the AGI cares about all sentient beings.

The Strange Shortage of Moral Optimizers

> Viewed in this light, the absence of any competing explicitly beneficentrist movements is striking. EA seems to be the only game in town for those who are practically concerned to promote the general good in a serious, scope-sensitive, goal-directed kind of way.

Before EA, I think there were at least two such movements:

  1. a particular subset of the animal welfare movement that cared about effectiveness, e.g., focusing on factory farming over other animal welfare issues explicitly because it's the biggest source of harm
  2. AI safety

Both are now broadly considered to be part of the EA movement.

How to determine distribution parameters from quantiles

Thank you for this! I had been trying to solve this exact problem recently, and I wasn't sure if I was doing it right. And this spreadsheet is much more convenient than the way I was doing it.
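(For anyone curious about the underlying math, here's a rough sketch of the kind of calculation involved, assuming a lognormal distribution and two known quantiles; the spreadsheet itself may work differently, and the percentile values below are just for illustration.)

```python
from math import exp, log

from scipy.stats import lognorm, norm

# Hypothetical inputs: suppose the 10th percentile is 2 and the 90th percentile is 50.
p1, q1 = 0.10, 2.0
p2, q2 = 0.90, 50.0

# For a lognormal, ln(q) = mu + sigma * z(p), so two quantiles give two linear
# equations in mu and sigma.
z1, z2 = norm.ppf(p1), norm.ppf(p2)
sigma = (log(q2) - log(q1)) / (z2 - z1)
mu = log(q1) - sigma * z1

# Sanity check: the fitted distribution should reproduce the input quantiles.
fitted = lognorm(s=sigma, scale=exp(mu))
print(fitted.ppf([p1, p2]))  # approximately [2.0, 50.0]
```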

How to determine distribution parameters from quantiles

The hyperlink on the word "this" (in both instances) is broken. I don't see how to get to the calculator.
