

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.


This might not be exactly what OP meant but I think of "Bayesian" as distinguishing between the types of evidence Eliezer talked about in Scientific Evidence, Legal Evidence, Rational Evidence. There's a perspective that "blog posts aren't evidence" or "personal beliefs aren't evidence". This is clearly false in an obvious sense (people often update their beliefs based on blog posts or other people's beliefs) but it's true in another sense—in some contexts, people only accept "formal" evidence as evidence.

I would roughly define Bayesianism as the philosophy that anything that can change people's beliefs counts as evidence.

In some sense, this sort of Bayesianism is a trivial philosophy because everyone already behaves as if it's true, but I think it's useful as an explicit reminder.

Can you explain? I see why the implied vols for puts and calls should be identical, but empirically, they are not—right now calls at $450 have an implied vol of 215% and puts at $450 have an implied vol of 158%. Are you saying that the implied vol from one side isn't the proper implied vol, or something?
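(For concreteness: implied vol is backed out from a quoted price by inverting an option-pricing model numerically. A minimal sketch under the standard Black-Scholes model with no dividends — since the model price is monotone increasing in volatility, bisection suffices; the strike and vol figures above are from live quotes, the numbers in the code are illustrative:)

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_price(S, K, sigma, T, r=0.0, call=True):
    """Black-Scholes price of a European call or put (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    if call:
        return S * N(d1) - K * exp(-r * T) * N(d2)
    return K * exp(-r * T) * N(-d2) - S * N(-d1)

def implied_vol(price, S, K, T, r=0.0, call=True):
    """Invert Black-Scholes for sigma by bisection (price is monotone in sigma)."""
    lo, hi = 1e-6, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if bs_price(S, K, mid, T, r, call) < price:
            lo = mid
        else:
            hi = mid
    return mid
```

If the two sides' quoted prices violate put-call parity (same strike, expiry, and discounting), inverting each side separately will produce different implied vols like the 215%/158% gap above.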

I assume the argument is that neurotic people suffer more when they don't get resources, so resources should go to more neurotic people first?

I think that's correct in an abstract sense but wrong in practice for at least two reasons:

  1. Utilitarianism says you should work on the biggest problems first. Right now the biggest problems are (roughly) global poverty, farm animal welfare, and x-risk.
  2. A policy of helping neurotic people encourages people to act more neurotic and even to make themselves more neurotic, which is net negative, and therefore bad according to utilitarianism. Properly implemented utilitarianism needs to consider incentives.

FWIW, this might not be true of the average reader, but I felt like I understood all the implicit assumptions Ben was making, and I think it's fine that he didn't add more caveats/hedging. His argument improved my model of the world.

I primarily prioritize animal welfare in my personal donations since I think that on the margin, it is greatly neglected compared to other EA priorities and leads to orders of magnitude more suffering reduction compared to GHP charities.

Could you say more about your thoughts on animal welfare vs. x-risk? I agree that animal welfare is relatively neglected, but it also seems to me that x-risk needs a lot more funding and marginal dollars are still really valuable. (I don't have a strong opinion about which to prioritize but those two considerations seem relevant.)

I'm not particularly knowledgeable about this but my take is:

  1. Yes, enlightenment is real, for some understanding of what "enlightenment" means.
  2. As I understand, enlightenment doesn't free you from all suffering. Enlightenment is better described as "ego death", where you stop identifying with your experiences. There is a sense in which you still suffer but you don't identify with your suffering.
  3. Enlightenment is extremely hard to achieve (it requires spending >10% of your waking life meditating for many years) and doesn't appear to make you particularly better at anything. Like if I could become enlightened and then successfully work 80 hours a week because I stop caring about things like motivation and tiredness, that would be great, but I don't think that's possible.
  • Example 1 is referencing a post that's sitting at a score of –6. It was not a well-received post.
  • Example 2 is a very popular post denouncing Richard Hanania.

I would not interpret that as the community being complacent.

I had an idea for a different way to evaluate meta-options. A meta-option behaves like a call option where the price equals the current value of the equity and the strike price equals the cash salary you'd be able to get instead.[1]

If I compare an equity package worth $100K per year versus a counterfactual cash salary of $100K and assume a volatility of 70% (my research suggests that small companies have a volatility around 70–100%), the call option for the equity that vests in the first year is worth $29K, and the call option for the equity that vests in the fourth year is worth $56K (which is equivalent to a 12% annual return). So on average, a meta-option on a 4-year equity package is worth somewhere in the ballpark of an 18% annual return.
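(Roughly, those numbers come from plugging the equity value and counterfactual salary into the standard Black-Scholes call formula. Sketch below; the 5% risk-free rate is an assumption on my part, chosen because it approximately reproduces the $29K and $56K figures:)

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S, K, sigma, T, r):
    """Black-Scholes value of a European call (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# "Price" = $100K of equity vesting that year; strike = $100K counterfactual salary.
# sigma = 70% volatility; r = 5% risk-free rate (assumed).
year1 = bs_call(100_000, 100_000, sigma=0.70, T=1, r=0.05)  # ≈ $29K
year4 = bs_call(100_000, 100_000, sigma=0.70, T=4, r=0.05)  # ≈ $56K
```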

(But if the equity has a lower face value than the counterfactual cash salary, it pretty quickly becomes not worth it.)

[1] This is kind of wrong because with a normal stock option you don't have to pay the strike until you exercise, but with an employee meta-option, you have to give up your counterfactual salary as soon as you start working, and you don't vest for the first year so you have to give up a full year of cash salary no matter what. If you have monthly vesting, the fact that you have to pay at the beginning of the month instead of the end doesn't matter much.

(edited to make the numbers make more sense)

I disagree-voted to indicate that I did not donate my mana because of this post. (I use Manifold sometimes, but I have only a trivial amount of mana.)

I feel your pain. I hope the amount of upvotes and hearts you're getting helps you feel better, but I know brains don't always work that way (mine doesn't).
