

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.




I disagree; I believe people's fear of AGI is mostly a response to reasonable arguments that AGI poses a very real risk to humanity.

I believe it's because:

  1. you are really reaching by taking Altman's basically normal statement about "neighbors" and using it to infer that he has a psychological condition
  2. speculating that people you don't like must be mentally ill is kind of rude and not good epistemic practice (I think it's justified sometimes but there's a high bar)
  3. your comment doesn't have anything to do with the original post, except that it's about Sam Altman

(I think Sam Altman is deeply untrustworthy and should not be allowed anywhere near AGI development, but I don't think the quote in your post is evidence of this)

I think you're significantly misinterpreting what Geoffrey is trying to say, and I don't like the chilling effect created when people feel they must avoid any analogy that could offend someone who misinterprets them.

The value of a statistical life is determined by governments, right? Governments of rich countries value their own citizens more than they value the citizens of poor countries, which makes sense from their perspective, but it's not morally correct, so you shouldn't accept their VSLs.

This might not be exactly what OP meant, but I think of "Bayesian" as distinguishing between the types of evidence Eliezer talked about in "Scientific Evidence, Legal Evidence, Rational Evidence". There's a perspective that "blog posts aren't evidence" or "personal beliefs aren't evidence". This is clearly false in an obvious sense (people often update their beliefs based on blog posts or other people's beliefs), but it's true in another sense: in some contexts, people only accept "formal" evidence as evidence.

I would roughly define Bayesianism as the philosophy that anything that can change people's beliefs counts as evidence.

In some sense, this sort of Bayesianism is a trivial philosophy because everyone already behaves as if it's true, but I think it's useful as an explicit reminder.
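The update rule itself doesn't care where the evidence comes from; a minimal sketch of Bayes' rule, with hypothetical numbers (the 0.5 prior and the 0.8/0.2 likelihoods are made up for illustration):

```python
# Minimal sketch: Bayes' rule treats any likelihood-shifting observation as
# evidence, whether it comes from a controlled experiment or a blog post.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from prior P(H) and the two likelihoods."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Hypothetical numbers: an observation you'd expect 80% of the time if H is
# true but only 20% of the time if H is false moves a 50% prior to 80%.
posterior = bayes_update(0.5, 0.8, 0.2)  # → 0.8
```

The only question Bayesianism asks of a piece of evidence is whether it is more likely under the hypothesis than under its negation.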

Can you explain? I see why the implied vols for puts and calls should be identical, but empirically, they are not: right now calls at $450 have an implied vol of 215% and puts at $450 have an implied vol of 158%. Are you saying that the implied vol from one side isn't the proper implied vol, or something?
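For context, the reason put and call implied vols "should" be identical is put-call parity: if market prices satisfy parity, backing the vol out of either side gives the same number. A minimal sketch under Black-Scholes assumptions (European options, no dividends; the spot, strike, rate, expiry, and vol below are hypothetical, not the quotes above):

```python
# Back out implied volatility by bisection on the Black-Scholes price.
# If the put price is parity-consistent with the call price, both sides
# recover the same implied vol.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S, K, r, T, sigma):
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def bs_put(S, K, r, T, sigma):
    # Put-call parity: P = C - S + K * exp(-rT)
    return bs_call(S, K, r, T, sigma) - S + K * exp(-r * T)

def implied_vol(price, S, K, r, T, pricer, lo=1e-4, hi=10.0):
    # Bisection works because the BS price is increasing in sigma
    for _ in range(100):
        mid = (lo + hi) / 2
        if pricer(S, K, r, T, mid) < price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical inputs: price a call at sigma = 150%, construct the
# parity-consistent put price, and recover the vol from each side.
S, K, r, T, sigma = 300.0, 450.0, 0.04, 1.0, 1.5
call = bs_call(S, K, r, T, sigma)
put = call - S + K * exp(-r * T)  # parity-consistent put price
iv_call = implied_vol(call, S, K, r, T, bs_call)
iv_put = implied_vol(put, S, K, r, T, bs_put)
```

When observed market prices violate parity (e.g. because of borrow costs, early-exercise value in American options, or stale quotes), the two sides can imply different vols, as in the quoted 215% vs. 158%.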

I assume the argument is that neurotic people suffer more when they don't get resources, so resources should go to more neurotic people first?

I think that's correct in an abstract sense but wrong in practice for at least two reasons:

  1. Utilitarianism says you should work on the biggest problems first. Right now the biggest problems are (roughly) global poverty, farm animal welfare, and x-risk.
  2. A policy of helping neurotic people encourages people to act more neurotic and even to make themselves more neurotic, which is net negative and therefore bad by utilitarian lights. Properly implemented utilitarianism needs to consider incentives.

FWIW this might not be true of the average reader but I felt like I understood all the implicit assumptions Ben was making and I think it's fine that he didn't add more caveats/hedging. His argument improved my model of the world.

I primarily prioritize animal welfare in my personal donations since I think that, on the margin, it is greatly neglected compared to other EA priorities and achieves orders of magnitude more suffering reduction than global health and poverty (GHP) charities.

Could you say more about your thoughts on animal welfare vs. x-risk? I agree that animal welfare is relatively neglected, but it also seems to me that x-risk needs a lot more funding and marginal dollars are still really valuable. (I don't have a strong opinion about which to prioritize but those two considerations seem relevant.)

I'm not particularly knowledgeable about this but my take is:

  1. Yes enlightenment is real, for some understanding of what "enlightenment" means.
  2. As I understand, enlightenment doesn't free you from all suffering. Enlightenment is better described as "ego death", where you stop identifying with your experiences. There is a sense in which you still suffer but you don't identify with your suffering.
  3. Enlightenment is extremely hard to achieve (it requires spending >10% of your waking life meditating for many years) and doesn't appear to make you particularly better at anything. Like if I could become enlightened and then successfully work 80 hours a week because I stop caring about things like motivation and tiredness, that would be great, but I don't think that's possible.