All of MichaelDickens's Comments + Replies

MichaelDickens's Shortform

Looking at the Decade in Review, I feel like voters systematically overrate cool but ultimately unimportant posts, and systematically underrate complicated technical posts that have a reasonable probability of changing people's actual prioritization decisions.

Example: "Effective Altruism is a Question (not an ideology)", the #2 voted post, is a very cool concept and I really like it, but ultimately I don't see how it would change anyone's important life decisions, so I think it's overrated in the decade review.

"Differences in the Intensity of Valenced Ex... (read more)

What are some high-EV but failed EA projects?

I will give an example of one of my own failed projects: I spent a couple months writing Should Global Poverty Donors Give Now or Later? It's an important question, and my approach was at least sort of correct, but it had some flaws that made it pretty much useless.

Why Helping the Flynn Campaign is especially useful right now

How quickly can campaigns spend money? Can they reasonably make use of new donations within less than 8 days?

These days campaigns can use late money thanks to digital ad opportunities

I think donations in the next 2-3 days would be very useful (probably even more useful than door-knocking and phone-banking if one had to pick) for TV ads, but after that the benefits diminish somewhat steeply over the remaining days.

New substack on utilitarian ethics: Good Thoughts

Sounds plausible. Some data: The PhilPapers survey found that 31% of philosophers accept or lean toward consequentialism, vs. 32% deontology and 37% virtue ethics. The ratios are about the same if, instead of looking at all philosophers, you look at just applied ethicists or normative ethicists.

I don't know of any surveys on normative views of philosophy-adjacent people, but I expect that (e.g.) economists lean much more consequentialist than philosophers. Not sure what other fields one would consider adjacent to philosophy. Maybe quant finance?

How to optimize your taxes as a donor in the US: donate appreciated securities, make a donor-advised fund, and bunch your donations

You could do something very similar by having one person short a liquid security with low borrowing costs (like SPY, maybe) and having the other person buy it.

The buyer will tend to make more money than the shorter, so you could find a pair of securities with similar expected return (e.g., SPY and EFA) and have each person buy one and short the other.

You could also buy one security and short another without there being a second person. But I don't think this is an efficient use of capital—it's better to just buy something with good expected return.
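To make the mechanics concrete, here is a minimal sketch of the payoff symmetry described above, with made-up prices and returns (not market data) and borrow costs ignored:

```python
# Minimal sketch of the two-person long/short setup described above.
# All prices and returns are hypothetical illustrations, not market data.

def pair_pnl(entry_price: float, exit_price: float, shares: float):
    """P&L for a buyer and a shorter of the same security (ignoring borrow costs)."""
    move = (exit_price - entry_price) * shares
    return move, -move  # the buyer gains exactly what the shorter loses

buyer, shorter = pair_pnl(entry_price=400.0, exit_price=440.0, shares=10)
assert buyer + shorter == 0  # the combined position is flat

# Two-security variant: each person buys one index and shorts the other,
# so each holds a roughly market-neutral position on their own.
spy_ret, efa_ret = 0.08, 0.06   # hypothetical annual returns
person_a = spy_ret - efa_ret    # long SPY, short EFA
person_b = efa_ret - spy_ret    # long EFA, short SPY
print(person_a, person_b)       # ~0.02 / ~-0.02: symmetric before costs
```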

Kyle Lucchese's Shortform

Is it possible to do the most good while retaining current systems (especially economic)? What in these systems needs to be transformed?

This question is already pretty heavily researched by economists. There are some known answers (immigration liberalization would be very good) and some unknowns (how much is the right amount of fiscal stimulus in recessions?). For the most part, I don't think there's much low-hanging fruit in terms of questions that matter a lot but haven't been addressed yet. The Global Priorities Institute does some economics research; IMO that's the best source of EA-relevant and neglected questions of this type.

1 · Kyle Lucchese · 23d
Thanks, Michael. Regarding the economists - yes, I think that is true. I do, however, believe that we have other angles/perspectives/specializations that are less considered but might be valuable to consult. Essentially - subject matter experts are, understandably, highly influential in shaping these conversations, but their voices may be disproportionately valued. My next note asks questions in this vein.
FTX/CEA - show us your numbers!

As a positive example, 80,000 Hours does relatively extensive impact evaluations. The most obvious limitation is that they have to guess whether any career changes are actually improvements, but I don't see how to fix that—determining the EV of even a single person's career is an extremely hard problem. IIRC they've done some quasi-experiments but I couldn't find them from quickly skimming their impact evaluations.

FTX/CEA - show us your numbers!

A related thought: If an org is willing to delay spending (say) $500M/year due to reputational/epistemic concerns, then it should easily be willing to pay $50M to hire top PR experts to figure out the reputational effects of spending at different rates.

(I think delays in spending by big orgs are mostly due to uncertainty about where to donate, not about PR. But off the cuff, I suspect that EA orgs spend less than the optimal amount on strategic PR (as opposed to "un-strategic PR", e.g., doing whatever the CEO's gut says is best for PR).)
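As a rough illustration of why a figure like $50M can be worth paying, here is a back-of-envelope value-of-information sketch; every input is a hypothetical assumption, not a claim about any actual org:

```python
# Back-of-envelope value-of-information sketch for the argument above.
# Every number is a hypothetical assumption, not a claim about a real org.

annual_spend  = 500e6   # $/yr an org delays spending out of reputational caution
p_flip        = 0.3     # assumed chance the PR research changes the decision
value_if_flip = 0.10    # assumed fraction of annual spend gained if it does
horizon_years = 5       # assumed years the improved decision stays relevant

ev_of_research = p_flip * value_if_flip * annual_spend * horizon_years
print(f"EV of the PR research: ${ev_of_research / 1e6:.0f}M")  # $75M here
# Under these (made-up) inputs, a $50M research budget clears the bar;
# the interesting work is stress-testing p_flip and value_if_flip.
```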

4 · david_reinstein · 1mo
thanks, fixing now ... I've made that mistake before in the forum
Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"?

FWIW my intuition is that if you have a name for a thing, it means the opposite of that is the default. If there's a special term for "longtermist", that means people are not longtermists by default (which I think is basically true—most people are not longtermists, and longtermism is kind of a weird position (although I do happen to agree with it)). Sort of like how EAs are called EAs, but there's no word for people who aren't EAs, because being not-EA is the default.

2 · Jack Malde · 1mo
Yeah I think that’s true if you only have the term “longtermist”. If you have both “longtermist” and “non-longtermist” I’m not so sure.
A Complete Quantitative Model for Cause Selection

Thanks for the heads up, it should be working again now.

"Long-Termism" vs. "Existential Risk"

FWIW I would not be offended if someone said Scott's writing is better than mine. Scott's writing is better than almost everyone's.

Your comment inspired me to work harder to make my writings more Scott-like.

"Long-Termism" vs. "Existential Risk"

Yeah, the two things are orthogonal as far as I can see. The person-affecting view is perfectly consistent with either a zero or a nonzero pure time preference.

1 · Michael_Wiebe · 1mo
Okay, so you could hold the person-affecting view and be indifferent to creating new lives, but also have zero pure time preference in that you don't value future lives any less because they're in the future. So this is really getting at creating new lives vs how to treat them given that they already exist.
"Long-Termism" vs. "Existential Risk"

I don't know of any EAs or philosophers with a nonzero pure time preference, but it's pretty common to believe that creating new lives is morally neutral. Someone who believes this might plausibly be a short-termist. I have a few friends who are short-termist for that reason.

1 · Michael_Wiebe · 1mo
Hmm, is it consistent to have zero pure time preference and be indifferent to creating new lives?
What general financial advising advice would you give to EAs?

In addition to what Brendon said, I'd say that finance best practices for EAs are mostly the same as best practices for anyone else. I like the Bogleheads wiki as a good resource for beginners.

IMO you can get most of the benefits of investing just by following best practices. If you want to take it further, you can follow some of the tips in the articles Brendon linked, or read my post Asset Allocation and Leverage for Altruists with Constraints, which gives my best guess as to how EAs should invest differently than most people.

Liars

The most prominent example I've seen recently is Frank Abagnale, the real-life protagonist of the supposedly-nonfiction movie Catch Me If You Can. He basically fabricated his entire life story, and (AFAICT) he makes a living off appearances where he tells it. He still regularly gets paid to do this, even though it's pretty well documented that he's lying about almost everything.

A Comparison of Donor-Advised Fund Providers

Thanks for pointing this out! I updated the post.

Why 80 000 hours should recommend more people become drug lords

I haven't drug-lorded personally, but I've watched Breaking Bad, and my understanding of the general process is

customers buy drugs in cash -> street dealers kick up to managers -> managers kick up to drug lords

so the drug lords end up accumulating piles of cash. Cash is hard to convert into crypto, so I think it would be better if CEA could receive cash directly.

Maybe a drug lord mega-donor could donate a storage unit to CEA, and that storage unit happens to be filled with cash? That's probably better than a direct cash donation, because the drug lord would have to report the cash donation on their taxes.

Why 80 000 hours should recommend more people become drug lords

EA-aligned drug lord can solve this problem by donating colossal wonga to charity.

How capable are charities of accepting large cash donations? If this is an issue, maybe CEA could serve as an intermediary to redistribute drug-lord cash to other charities; I know they've done similar things, e.g., helping new EA charities that aren't yet officially registered.

2 · alexrjl · 2mo
Many are well set up for crypto donations, which should mean this is fine - I gather crypto is the preferred wonga in this field.
Is misinformation a serious problem, and is it tractable?

This isn't a particularly deep or informed take, but my perspective on it is that the "misinformation problem" is similar to what Scott called the cowpox of doubt:

What annoys me about the people who harp on moon-hoaxing and homeopathy – without any interest in the rest of medicine or space history – is that it seems like an attempt to Other irrationality.

It’s saying “Look, over here! It’s irrational people, believing things that we can instantly dismiss as dumb. Things we feel no temptation, not one bit, to believe. It must be that they are defective and

... (read more)
2 · MaxRa · 2mo
You mean people hate on others who fall for misinformation? I haven't noticed that so far. My impression of the misinformation discourse is ~ "Yeah, this shit is scary, today it might still be mostly easy to avoid, but we'll soon drown in an ocean of AI-generated misinformation!" Which also doesn't seem right. I think I expect this to be in large part a technical problem that will mostly get solved because it is and probably will be such a prominent issue in the coming years, affecting many of the most profitable tech firms.
A Forum post can be short

Yeah, I feel the same way; I wonder if there's a good fix for that. Given the current setup, long effortposts are usually only of interest to a small % of people, so they don't get as many upvotes.

2 · MaxRa · 2mo
But as long as a large fraction of this small % of people sees the post, this is not a big problem, no? I imagine that this is for example true for EAs interested in improving institutions and the landscape analysis of institutional improvements.
A Forum post can be short

I know it's a joke, but if you want to build status, short posts are much better than long posts.

Which is more impressive: the millionth 200-page dissertation published this year, or John Nash's 10-page dissertation?

Which is more impressive: the latest complicated math paper, or Conway & Soifer's two-word paper?

A Forum post can be short

I like when writing advice is self-demonstrating.

1 · Simon Skade · 2mo
Jup, would have been even funnier if the post content was just ".", but perhaps this wouldn't have helped that much convincing people that short posts are ok. xD
A Comparison of Donor-Advised Fund Providers

That's a complicated question, but in short, if you believe that there will be better donation opportunities in the future, you might use a DAF.

Some thoughts on recent Effective Altruism funding announcements

This question seems like it should be a private message? I don't see how it's relevant to the post you're replying to.

-1 · Charles He · 3mo
The EA forum is complex and probably unique. I think there are several important features:
  • It's performative, as EA has various audiences to whom maintaining tone and norms is important. It's also part work forum, like a company intranet. Every funder and collaborator can see everything you ever write, so breaking norms, such as being negative or confrontational, is costly (while certain actions may be risky or have no personal reward).
  • The forum is a way to communicate and try to find the truth about important causes or decisions. However, it does this in a funky way—you can confront ideas with extreme aggression (and get authority for doing so), yet you might not even be able to indirectly suggest that there are issues with someone's relevant credentials or ability (even when they use these explicitly or you suspect they have arrogated themselves).
  • The forum has strikingly different reactions based on insider or outsider status: content from newcomers and many critics is treated well, even when it's pretty bad or they make direct personal attacks. At the same time, people who occupy meta positions or places of authority regularly encounter hostility. This is probably a feature, not a defect. However, it's possible someone could straddle the space between these roles, shielding themselves with the norms of one while using the other to advance their goals.
  • There are more prosaic issues. Like other forums, it isn't always representative and can acquire constituencies with their own views. Issues or grievances (that are very real or neglected) can be hard to explain or confront, and can exist for long periods of time without challenge or solution.
There are some other features that are relevant, but this is too long already. If you thought people were exploiting these features in some way, I guess you could write a shortform post or something directly denouncing people personally. But that seems hard and
What psychological traits predict interest in effective altruism?

Is it possible that those are confounded by age? That is, young people are more likely to favor expansive altruism (which the surveys say is true) and also incidentally have less education and lower income.

1 · Agrippa · 3mo
That would be my assumption, but OP says:
> Note that the significant correlations with education level and income held even after controlling for age.
2 · Lucius Caviola · 3mo
We considered this too. But the significant correlations with education level and income held even after controlling for age. (We mention this below one of the tables.)
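For readers unfamiliar with what "controlling for age" involves, here is a minimal sketch of a partial-correlation check on simulated data; the coefficients, sample size, and variable names are invented for illustration, and the real survey data isn't reproduced here:

```python
# Minimal sketch of "the correlation held after controlling for age",
# run on simulated data (all coefficients and the sample are invented).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
age       = rng.normal(40, 12, n)
education = 0.05 * age + rng.normal(0, 1, n)          # assumed age-linked
altruism  = -0.02 * age - 0.3 * education + rng.normal(0, 1, n)

def residualize(y, x):
    """Residuals of y after regressing out x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation: correlate the parts of each variable age can't explain.
r = np.corrcoef(residualize(altruism, age), residualize(education, age))[0, 1]
print(f"education-altruism correlation, controlling for age: {r:.2f}")
```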
Yonatan Cale's Shortform

Applicants to ACX grants were almost by definition not working on problems with well-established solutions (in EA or otherwise); e.g., nobody was applying for an ACX grant to distribute bednets. That made the grants more difficult to evaluate than many popular EA causes, and also made it hard to rely on previous work.

7 · Yonatan Cale · 3mo
1. Totally agree.
2. The concern I'm raising is something like "our articles only help for [something like] well-established solutions". Or in other words, there is no situation where [someone is able to vet an org, and this was only true because of reading the article].
The other example I have in mind is trying to help people in Israel find an impactful job, especially in tech. We can offer them 100 pages of theory on how to vet companies, but almost no concrete companies to recommend.
As an independent researcher, what are the biggest bottlenecks (if any) to your motivation, productivity, or impact?

This is something that I think EA Spain gets very right.

What are they doing right, do you think?

6 · NunoSempere · 3mo
They invested early into hiring a competent leader full-time. We have a high-quality slack with nice conversations. And we have Jaime Sevilla, who is a few years older than me and thus further along the road into researchdom. Otherwise, I don't really know!
The Bioethicists are (Mostly) Alright

The most common type was some variant of "utilitarianism endorses doing this thing that clearly decreases utility, therefore utilitarianism is wrong." Hard to remember specifics because this was 6 to 10 years ago. I just remember being struck by how these supposed experts had such basic misunderstandings.

2 · Linch · 4mo
Taking what you said at face value, what's going on here, institutionally? Philosophy is a nontrivially competitive field, and Stanford professorships aren't easy to get.
The Bioethicists are (Mostly) Alright

FWIW, I didn't major in ethics but I did take a few ethics classes, and I found that every professor I saw had basic, obvious misunderstandings of utilitarianism.

3 · Thomas Kwa · 4mo
Could you give some examples?
Pedant, a type checker for Cost Effectiveness Analysis

This is a very cool project!

Have you looked into Idris? It has at least some of the capabilities that you'd want in a CEA language.

I still haven't looked much at Pedant, but I'm inclined to favor a DSL on top of a pre-existing language rather than a new language that requires its own compiler, largely because the former will be much easier to maintain and should be more portable—you're offloading all the work of writing the compiler to someone else. A custom language will indeed have a much simpler compiler, but the problem is you have to write and maintai... (read more)
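To illustrate the embedded-DSL approach I'm gesturing at, here is a minimal sketch of dimensional checking hosted in an existing language (Python), with an invented toy Quantity type and made-up units; Pedant's actual design may well differ:

```python
# Minimal sketch of an embedded units-checking DSL in a host language
# (Python), as an alternative to a custom compiler; units are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    units: dict  # exponents per unit, e.g. {"USD": 1, "net": -1} = dollars per net

    def __add__(self, other):
        if self.units != other.units:
            raise TypeError(f"unit mismatch: {self.units} vs {other.units}")
        return Quantity(self.value + other.value, self.units)

    def __mul__(self, other):
        units = dict(self.units)
        for u, p in other.units.items():
            units[u] = units.get(u, 0) + p
            if units[u] == 0:
                del units[u]
        return Quantity(self.value * other.value, units)

cost_per_net = Quantity(5.0, {"USD": 1, "net": -1})
nets_bought  = Quantity(1000.0, {"net": 1})
total_cost   = cost_per_net * nets_bought   # units cancel to {"USD": 1}
print(total_cost)
# total_cost + nets_bought                  # would raise TypeError: unit mismatch
```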

3 · Hazelfire · 5mo
Hello Michael! Yes, I've heard of Idris (I don't know it, but I'm a fan; I'm looking into Coq for this project). I'm also already a massive fan of your work on CEAs; I believe I emailed you about it a while back.

I'm not sure I agree with you about the DSL implementation issue. You seem to be mainly citing development difficulties, whereas I would think that going that route may put a stop to some interesting features. It would definitely restrict the number of applications. For instance, I'm fully considering Pedant to be simply a serialization format for Causal [https://www.causal.app/], which would be difficult to do if it was embedded within an existing language. Making a language server that checks for dimensional errors would be very difficult to do in a non-custom language.

It may just be possible in a language like Coq or Idris, but I think Coq and Idris are not particularly user-friendly, in the sense of being something someone with no programming background could just "pick up". I may be interested in writing your CEAs in Pedant in the future, because I find them very impressive!
Convergence thesis between longtermism and neartermism

No comment on the specific arguments given, but I like the way this post is structured: a list of weak arguments, grouped into categories, each short enough that they're easy to read quickly.

A Red-Team Against the Impact of Small Donations

When I talked to an AI safety grad student about this, he said that Top 4 CS programs are not funding constrained, but top 10-20 are somewhat.

I've never been a grad student, but I suspect that CS grad students are constrained in ways that EA donors could fairly easily fix. They might not be grant-funding-constrained, but they're probably make-enough-to-feel-financially-secure-constrained or grantwriting-time-constrained, and you could convert AI grad students into AI safety grad students by lifting these constraints for them.

We’re Rethink Priorities. Ask us anything!

To the extent that you think good operations can emerge out of replicable processes rather than singularly talented ops managers, do you think it would be useful to write a longer article about how RP does operations? (Or perhaps you've already written this and I missed it)

2 · abrahamrowe · 6mo
This potentially sounds useful, and I can definitely write about it at some point (though no promises on when just due to time constraints right now).
FTX EA Fellowships

Isn't housing more relevant than groceries? A typical household spends about 3x as much on housing as on all other consumption goods combined (IIRC). And that site says housing in Nassau is a lot cheaper than in SF.

6 · Pablo · 6mo
FTX is providing housing, so housing costs aren't decision-relevant for potential applicants.
A Model of Patient Spending and Movement Building

Hey, in hindsight I realize that the paper + summarization don't make clear that this does depend on model assumptions/empirical points

FWIW this was clear to me, I was using "conclusions" to mean "conclusions, given the model assumptions", not "conclusions, which the authors definitely think are true".

2 · NunoSempere · 6mo
Right, thanks. It seemed better to be too paranoid than not paranoid enough.
Should Earners-to-Give Work at Startups Instead of Big Companies?

Yes this is something worth considering. I did look at how much alpha the Cambridge Associates startup data had on top of US publicly-traded tech stocks vs. the US total market, and there wasn't much difference. EA money is in much more specific investments than just the tech sector, but that makes it harder to test the correlation.

Should Earners-to-Give Work at Startups Instead of Big Companies?

I would not consider Stripe a startup for the purposes of this post.

Should Earners-to-Give Work at Startups Instead of Big Companies?

At a glance, I don't see startup salaries on levels.fyi. In my experience, most startups offer worse face-value compensation than large tech companies, but a significant minority offer competitive compensation. I was able to get a (slightly) higher offer from a startup than from Google.

2 · Linch · 6mo
For the record, this was true for me as well.
Should Earners-to-Give Work at Startups Instead of Big Companies?

I believe I accounted for this by factoring in the persistence of VC firm returns.

2 · Linch · 6mo
I think Samuel's second paragraph is a more intuitive albeit less precise explanation of meta-options for people with less of a finance background:
A Model of Patient Spending and Movement Building

Thanks for this! I think the setup is excellent, especially the diagram that makes it very clear what's going on. It seems basically comprehensive to me—not fully comprehensive, but it covers the most important stuff.

My approach when reading the full paper was:

  • What are the interesting conclusions?
  • What model assumptions produce those conclusions?
  • Are any assumptions worth changing, and how might that change the conclusion?

The main conclusions, as I see it:

  1. Labor grows to a constant size, while capital keeps growing.
  2. Fraction of labor dedicated to earn
... (read more)
2 · NunoSempere · 6mo
Re: Labor grows to a constant size. Hey, in hindsight I realize that the paper + summarization don't make clear that this does depend on model assumptions/empirical points, sorry. I've edited the post to make this clearer (here [https://web.archive.org/web/20211110172506/https://forum.effectivealtruism.org/posts/FXPaccMDPaEZNyyre/a-model-of-patient-spending-and-movement-building#Main_results] is the previous version without the edits, in case it's of interest).

tl;dr: This comes from model assumptions which seem reasonable, but empirical investigations + historical case studies, or alternatively sci-fi scenarios, could flip the conclusion.

In particular, let L′ = −r·L + f(a·L, b·K), i.e. roughly L(t) = L(t−1)·(1−r) + f(a·L, b·K), so each year you lose r% of people, but you also do some movement building, for which you spend a·L labor and b·K capital. Then for some functions f which determine movement building, this already implies that the movement has a maximum size. So for instance, if you have f(a·L, b·K) = log(1 / (1/(a·L) + 1/(b·K))), then with infinite capital this reduces to f(a·L, b·K) = log(1 / (1/(a·L) + 1/∞)) = log(a·L).

But then even if you allocate all labor to movement building (so that a = 1, or something), you'd have something like L′ = −r·L + log(L), and this eventually converges to the point where log(L) = r·L no matter where you start. Now, above I've omitted some constants, and our function isn't quite the same, but that's essentially what's going on (see ρ_R < 0, λ < 1 in equation 6 on page 4).

I.e., if you lose movement participants as a percentage but have a recruitment function that eventually has "brutal" diminishing returns (sub-linear returns to labor, and throwing money at movement building doesn't solve it), you get a similar result (the movement converges to a constant). But you could also imagine a scenario where the returns are less brutal—e.g., you're always able to recruit an additional participant by throwing money at the problem, or every movement builder can sort of eternally
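As a quick numerical sanity check of Nuno's illustrative functional form (not the paper's exact equation 6), simulating L(t) = L(t−1)·(1−r) + log(a·L(t−1)) with made-up r and a shows convergence to the fixed point where log(L) = r·L, regardless of starting size:

```python
# Numerical check of the illustrative recurrence above:
# L(t) = L(t-1) * (1 - r) + log(a * L(t-1)), with invented r and a.
import math

def simulate(L0: float, r: float = 0.05, a: float = 1.0, steps: int = 2000) -> float:
    L = L0
    for _ in range(steps):
        L = L * (1 - r) + math.log(a * L)
    return L

# Very different starting sizes converge to the same fixed point
# (where log(L) = r * L, roughly L ~ 90 for r = 0.05):
for L0 in (10.0, 100.0, 10_000.0):
    print(f"L0 = {L0:>8}: converges to ~{simulate(L0):.1f}")
```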
3 · trammell · 6mo
Thanks! A lot of good points here. Re 1: if I'm understanding you right, this would just lower the interest rate from r to r - capital 'depreciation rate'. So it wouldn't change any of the qualitative conclusions, except that it would make it more plausible that the EA movement (or any particular movement) is, for modeling purposes, "impatient". But cool, that's an important point. And particularly relevant these days; my understanding is that a lot of Will's(/etc) excitement around finding megaprojects ASAP is driven by the sense that if we don't, some of the money will wander off. Re 2: another good point. In this case I just think it would make the big qualitative conclusion hold even more strongly--no need to earn to give because money is even easier to come by, relative to labor, than the model suggests. But maybe it would be worth working through it after adding an explicit "wealth recruitment" function, to make sure there are no surprises. Re 3: I agree, but I suspect--perhaps pessimistically--that the asymptotics of this model (if it's roughly accurate at all) bite a long time before EA wealth is a large enough fraction of global capital to push down the interest rate! Indeed, I don't think it's crazy to think they're already biting. Presumably the thing to do if you actually got to that point would be to start allocating more resources to R&D, to raise labor productivity and thus the return to capital. There are many ways I'd want to make the model more realistic before worrying about the constraints you run into when you start owning continents (a scenario for which there would presumably be plenty of time to prepare...!); but as noted, one of the extensions I'm hoping gets done before too long is to make (at least certain kinds of) R&D endogenous. So hopefully that would be at least somewhat relevant.
Linch's Shortform

Michael D's advice is currently not followed by the super-HNWs (for reasons that are not super-legible to me, though admittedly I haven't looked too deeply).

I don't really know, but my guess is it's mostly because of two things:

  1. Most people are not strategic and don't do cost-benefit analyses on big decisions. HNW people are often better at this than most, but still not great.
  2. On the whole, investment advisors are surprisingly incompetent. That raises the question of why this is. I'm not sure, but I think it's mainly principal-agent problems—they're n
... (read more)
FTX EA Fellowships

Can you say more about the motivation behind building an EA community in the Bahamas?

FTX moved there due primarily to the friendly regulatory environment: on crypto specifically, the Bahamas is basically the first country in the world to put out a comprehensive framework for crypto regulation, while most countries have been working on this for years and will probably still be working on it for years to come. More generally, the government seems excited about encouraging tech/innovation and cutting back on red tape.

It's a fairly small country, and I think if a lot of EAs move there, EA could end up being a somewhat influential force in the co... (read more)

Future Funding/Talent/Capacity Constraints Matter, Too

Are you thinking they're related in the sense of "if money is less valuable in the future, then we should include that in the discount rate"? I was thinking of the pure discount rate—the way you discount future utility, e.g., due to the probability of extinction.
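To make the distinction concrete, here is a minimal sketch of how a pure time preference and an extinction-risk discount enter as separate factors; both parameter values are arbitrary assumptions:

```python
# Sketch separating the two notions of discounting discussed above;
# both parameter values are arbitrary assumptions.
delta = 0.0    # pure time preference: discounting future utility itself
p_ext = 0.002  # assumed constant annual extinction probability

def weight(t: float) -> float:
    """Weight on utility t years from now: survival odds times pure discounting."""
    return (1 - p_ext) ** t / (1 + delta) ** t

print(f"{weight(100):.2f}")  # ~0.82: with delta = 0, only extinction risk bites
# "Money being less valuable in the future" would be a separate adjustment
# to the value of spending, not part of this utility weight.
```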

Mission Hedgers Want to Hedge Quantity, Not Price

Run climate model and buy real estate or assets based on real estate for places that will be much nicer to live after 3 degrees of warming

This strikes me as the best idea, as long as you're wealthy enough to buy a bunch of real estate as a hedge.

(FWIW, it's kind of irrelevant, because I don't actually think EAs should mission hedge climate change; that was just an example. I'm still 50/50 on whether mission hedging is even worth it, and if it is, climate change would not be on my list of the top 5 causes worth hedging.)

The psychology of population ethics

Most disagreements between professional philosophers on population ethics come down to disagreements about intuition:

  • Alice supports the total view because she has an intuition that the Repugnant Conclusion is not actually repugnant
  • Bob adopts a person-affecting view and rejects the independence of irrelevant alternatives (IIA) because his intuition is that IIA doesn't matter
  • Carol rejects transitivity of preferences because her intuition is that that's the least important premise

But none of them ultimately have any justification beyond their intuition. So I think it's totally fair and relevant to survey non-philosophers' intuitions.

4 · MichaelPlant · 9mo
Well, all disagreements in philosophy ultimately come down to intuitions, not just those in population ethics! The question I was pressing is what, if anything, the authors think we should infer from data about intuitions. One might think you should update toward people's intuitions, but that's not obvious to me, not least when (1) in aggregate, people's answers are inconsistent and (2) this isn't something they've thought about.
Mission Hedgers Want to Hedge Quantity, Not Price

Also, climate change would be more related to cumulative oil production, rather than annual.

True. I tested the correlation between S&P 500 price and cumulative oil production and got r=0.81 (p < 1e-29).
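For anyone who wants to reproduce this kind of test, here is a sketch of the computation; the arrays below are synthetic placeholders standing in for the real annual S&P 500 and oil-production series, which would need to be loaded from an actual data source:

```python
# Sketch of the correlation test mentioned above. The real series (annual
# S&P 500 price levels and annual oil production) would be loaded from a
# data source; synthetic placeholders stand in so the snippet runs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = 60
annual_oil  = 20 + 0.3 * np.arange(years) + rng.normal(0, 1, years)
sp500_price = 100 * np.exp(0.07 * np.arange(years)) * rng.lognormal(0, 0.1, years)

cumulative_oil = np.cumsum(annual_oil)   # cumulative, not annual, drives warming
r, p = stats.pearsonr(sp500_price, cumulative_oil)
print(f"r = {r:.2f}, p = {p:.1e}")       # both series trend up, so r is high
```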

Investing in companies with large food storage would be a particularly good hedge against abrupt food catastrophes.

That's a neat idea. It behaves more like insurance—most of the time it doesn't do much, but when it matters, it will give you a lot of money.
