Shortform Content [Beta]


Who should pay the cost of Googling studies on the EA Forum?

  1. Many EA Forum posts have minimal engagement with relevant academic literature

  2. If you see a Forum post that doesn't engage with literature you think is relevant, you could make a claim without looking up a citation based on your memory, but there's a reasonable chance you'll be wrong.

  3. Many people say they'd rather see an imperfect post or comment than not have it at all.

  4. But people tend to remember an original claim, even if it's later debunked.

  5. Maybe the best option is to phrase my com

...
4 | Stefan_Schubert | 21h: 3. was discussed here []. My impression of that discussion is that many forum readers thought it's important to familiarise oneself with the literature before commenting. As I say in my comment, that's certainly my view. I agree that too many EA Forum posts fail to engage appropriately with relevant literature.

I've always thought there's a lower bar for commenting than for a top-level post, but maybe both should be reasonably high (you should be able to provide some evidence for your claim in a comment, and show some actual engagement with relevant literature in a post, for example).

I find the unilateralist’s curse a particularly valuable concept to think about. However, I now worry that “unilateralist” is an easy label to tack on, and that whether a particular action counts as unilateralist is susceptible to small changes in framing.

Consider the following hypothetical situations:

  1. Company policy vs. team discretion
    1. Alice is a researcher in a team of scientists at a large biomedical company. While working on the development of an HIV vaccine, the team accidentally created an air-transmissible variant of HIV. The scientist
...
3 | jpaddison | 3d: I really like this (I think you could make it top-level if you wanted). I think these are cases of multiple levels of cooperation. If you're part of an organization that wants to be uncooperative (and you can't leave cooperatively), then you're going to be uncooperative with one of them.

Good point. Now that you bring this up, I vaguely remember a Reddit AMA where an evolutionary biologist made the (obvious in hindsight, but never occurred to me at the time) claim that, with multilevel selection, altruism on one level often means defecting at a higher (or lower) level. Which probably unconsciously inspired this post!

As for making it top level, I originally wanted to include a bunch of thoughts on the unilateralist's curse as a post, but then I realized that I'm a one-trick pony in this domain... hard to think of novel/useful thi...

Philosophers and economists seem to disagree about the marginalist/arbitrage argument that a social discount rate should equal (or at least be majorly influenced by) the marginal social opportunity cost of capital. I wonder if there's any discussion of this topic in the context of negative interest rates. For example, would defenders of that argument accept that, as those opportunity costs decline, so should the SDR?

Yes, governments lower the SDR as the interest rate changes. See, for example, the US Council of Economic Advisers' recommendation on this from three years ago:

While the "risk-free" interest rate is roughly zero these days, the interest rate to use when discounting payoffs from a public project is the rate of return on investments whose risk profile is similar to that of the public project in question. This is still positive for basically a...
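As a rough numerical illustration of why this matters (a toy sketch with made-up figures, not drawn from the CEA recommendation itself): the present value of a fixed future social benefit is highly sensitive to the discount rate, especially over long horizons.

```python
def present_value(payoff: float, rate: float, years: int) -> float:
    """Discounted present value of a single payoff arriving `years` from now."""
    return payoff / (1 + rate) ** years

# A $1m social benefit arriving in 50 years, under three illustrative rates.
for rate in (0.07, 0.03, 0.0):
    print(f"r = {rate:.0%}: PV = ${present_value(1_000_000, rate, 50):,.0f}")
```

At a 50-year horizon, moving from a 7% rate to a 0% rate changes the present value by a factor of roughly 30, which is why the choice of SDR tends to dominate long-horizon cost-benefit analyses.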

[Some of my tentative and uncertain views on AI governance, and different ways of having impact in that area. Excerpts, not in order, from things I wrote in a recent email discussion, so not a coherent text.]

1. In scenarios where OpenAI, DeepMind etc. become key actors because they develop TAI capabilities, our theory of impact will rely on a combination of affecting (a) 'structure' and (b) 'content'. By (a) I roughly mean what the relevant decision-making mechanisms look like irrespective of the specific goals and resources of the actor...

5 | aarongertler | 5d: I found this really fascinating to read. Is there any chance that you might turn it into a "coherent text" at some point? I especially liked the question on possible downsides of working with key actors; orgs in a position to do this are often accused of collaborating in the perpetuation of bad systems (or something like that), but rarely with much evidence to back up those claims. I think your take on the issue would be enlightening.

Thanks for sharing your reaction! There is some chance that I'll write up these and maybe other thoughts on AI strategy/governance over the coming months, but it depends a lot on my other commitments. My current guess is that it's maybe only 15% likely that I'll think this is the best use of my time within the next 6 months.

I don't know if there is a designated place to leave comments about the EA Forum, so for the time being I'm posting them here. I think the current homepage has a number of problems:

  • The 'Community Favorites' section keeps listing the same posts over and over again. I don't see the point of having a prominent list of favorite posts on the homepage that changes so little. I suggest expanding the list considerably so that regular visitors can still expect to see novel posts every time they visit the homepage.
  • [Note: in light of Oli's...
...
Showing 3 of 14 replies
After more thought, we’ve decided that we will change the name to “Forum Favorites”

Great, thank you!

2 | Pablo_Stafforini | 6d: Thanks for the reply. I think it's totally fine for you to deprioritize this suggestion—not very important.
4 | aarongertler | 7d: Regarding the categories: We’ve been thinking for a while about whether they should remain on the Forum. We hoped early on that they would improve the reading experience for people who were primarily interested in research rather than community topics (or vice versa), but we’re unsure of the extent to which this has happened. For now, these are internal conversations, but I wouldn't be surprised if we made a decision on this soon after an upcoming feature (tagging posts) becomes available to users (no date on this yet). It’s possible that using tags like “research”, “events”, or “community culture” will make the broader categories we currently have obsolete, in which case the distinction could disappear; it's also possible that we'll find ways to make use of broader category pages in ways that aren't covered by tags.

What's the right narrative about global poverty and progress? Link dump of a recent debate.

The two opposing views are:

(a) "New optimism:" [1] This is broadly the view that, over the last couple of hundred years, the world has been getting significantly better, and that's great. [2] In particular, extreme poverty has declined dramatically, and most other welfare-relevant indicators have improved a lot. Often, these effects are largely attributed to economic growth.

  • Proponents in this debate were originally Bill Gates, Steven Pinker, and M
...
Showing 3 of 5 replies
0 | lucy.ea8 | 1mo: When downvoting, please explain why.
2 | aarongertler | 7d: I just now saw this post, but I would guess that some readers wanted more justification for the use of the term "secondary", which implies that you're assigning value both to improvements in knowledge and to the tapping of fossil fuels, and saying that the negative value of the latter outweighs the value of the former. I'd guess that readers were curious how you weighed these things against each other. I'll also note that Pinker makes no claim that the world is perfect or has no problems, and that claiming that "reason" or "humanism" has made the world better does not entail that they've solved all the world's problems, or even that the world is improving in all important ways. You seem to be making different claims than Pinker does about the meaning of those terms, but you don't explain how you define them differently. (I could be wrong about this, of course; that's just what I picked up from a quick reading of the comment.)

Thanks, Aaron, for your response. I am assigning positive value to both improvements in knowledge and increased energy use (via the tapping of fossil-fuel energy). I am not weighing them against each other. I am saying that without the increased energy from fossil fuels, we would still be agricultural societies, with the repeated rise and fall of empires. The Indus Valley civilization, the ancient Greeks, and the Maya all repeatedly crashed. At the peak of those civilizations, I am sure art, culture and knowledge flourished. Eventually humans outran their resources and cr

...

So, I saw Vox's article on how air filters create huge educational gains; I'm particularly surprised that indoor air quality (actually, indoor environmental conditions) is kinda neglected everywhere (except, maybe, in dangerous jobs). But then I saw this (convincing) critique of the underlying paper.

It seems to me that this is a suitable case for a blind RCT: you could install fake air filters in order to control for placebo effects, etc. But then I googled a little bit... and I haven't found significant studies using blind RCTs in social sci...

There are some pretty good reasons to keep your identity small.

But I see people using that as an excuse to not identify as... anything. As in, they avoid affiliating themselves with any social movements, sports teams, schools, nation-states, professions, etc.

It can be annoying and confusing when you ask someone "are you an EA?" or "are you a Christian?" or "are you British?" and they won't give you a straight answer. It's partly annoying because I'm very rationally trying to make some shortcut assumptions about them

...
Showing 3 of 4 replies
4 | Stefan_Schubert | 7d: In some cases, I think people feel that they have a nuanced position that isn't captured by broad labels. I think that reasoning can go too far, however: if that argument is pushed far enough, no one will count as a socialist, postmodernist, effective altruist, etc. And as you imply, these kinds of broad categories are useful, even while in some respects imperfect.

Yep, makes sense to me! It's difficult for me to identify with a particular denomination of Christianity because I grew up at a non-denominational church, and since then I've attended churches of 3 different denominations. So I definitely get the struggle to identify yourself when none of the usual labels quite fit! But I don't have to be a complete mystery - at least I can still say I'm "Christian" or "Protestant".

2 | Khorton | 7d: Yeah that's probably true - I guess it goes both ways.

I'm 60% sure that LessWrong people use the term "Moloch" in almost exactly the same way as social justice people use the term "kyriarchy" (or "capitalist cis-hetero patriarchy").

I might program my browser to replace "Moloch" with "kyriarchy". Might make Christian Twitter confusing though.

Basic Research vs Applied Research

1. If we are at the Hinge of History, it is less reasonable to focus on long-term knowledge building via basic research, and vice versa.

2. If we have identified the most promising causes well, then targeted applied research is promising.

This is a summary of the argument for the procreation asymmetry here and in the comments, especially this comment, which also looks further at the case of bringing someone into existence with a good life. This is essentially Johann Frick's argument, reframed. The starting claim is that your ethical reasons are in some sense conditional on the existence of individuals, and the asymmetry between existence and nonexistence can lead to the procreation asymmetry.

1. Choosing to not bring someone into existence, all else equal, is a "stable solution"...

I think that some causes may have increasing marginal utility. Specifically, I think this may be true of some types of research that are expected to generate insights about their own domain.

Testing another idea for a cancer treatment probably has decreasing marginal utility (because the low-hanging fruit is picked first), but basic research in genetics may have increasing marginal utility (because even if others work on the best approaches, you could still improve their productivity by giving them further insights).

This is not true if t...
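As a toy contrast (my own made-up numbers, just to make the shape of the claim concrete): suppose the k-th treatment idea is worth less because the most promising ideas are tried first, while each basic-research insight multiplies the value of everyone's later work.

```python
def treatment_marginal(k: int) -> float:
    # Decreasing returns: the best ideas get tried first (illustrative 100/k).
    return 100 / k

def insight_marginal(k: int, base: float = 100.0, g: float = 0.1) -> float:
    # Increasing returns: each insight raises the productivity of all later
    # work by a factor (1 + g); this is the marginal value of the k-th insight.
    return base * (1 + g) ** (k - 1) * g

treat = [treatment_marginal(k) for k in range(1, 11)]
insight = [insight_marginal(k) for k in range(1, 11)]
print("treatment marginal values:", [round(v, 1) for v in treat])
print("insight marginal values:  ", [round(v, 1) for v in insight])
```

Under these assumptions the treatment series is strictly decreasing and the insight series strictly increasing, which is the whole claimed difference; whether real basic research behaves like the second series is exactly the open question.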

Morgan Kelly, The Standard Errors of Persistence

A large literature on persistence finds that many modern outcomes strongly reflect characteristics of the same places in the distant past. However, alongside unusually high t statistics, these regressions display severe spatial autocorrelation in residuals, and the purpose of this paper is to examine whether these two properties might be connected. We start by running artificial regressions where both variables are spatial noise and find that, even for modest ranges of spatial correlation between points, t st...
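Kelly's point can be illustrated with a one-dimensional analogue (my own sketch, using random walks as stand-ins for spatially autocorrelated noise): regressing one unrelated but highly autocorrelated series on another routinely produces very large t statistics.

```python
import math
import random

def slope_t_stat(x, y):
    """t statistic for the slope in a simple OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return b / math.sqrt(rss / (n - 2) / sxx)

def mean_abs_t(make_series, trials=200, n=100, seed=0):
    """Average |t| over regressions of independent noise pairs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += abs(slope_t_stat(make_series(rng, n), make_series(rng, n)))
    return total / trials

def iid(rng, n):  # uncorrelated noise
    return [rng.gauss(0, 1) for _ in range(n)]

def walk(rng, n):  # highly autocorrelated noise: a random walk
    xs, x = [], 0.0
    for _ in range(n):
        x += rng.gauss(0, 1)
        xs.append(x)
    return xs

print("mean |t|, iid noise:           ", round(mean_abs_t(iid), 2))
print("mean |t|, autocorrelated noise:", round(mean_abs_t(walk), 2))
```

With iid noise the mean |t| stays below conventional significance thresholds; with random-walk noise it is several times larger, despite there being no true relationship in either case — the time-series cousin of the spatial problem the abstract describes.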

Discounting the future consequences of welfare producing actions:

  • there's almost unanimous agreement among moral philosophers that future welfare itself should not be discounted.
  • however, many systems in the world are chaotic, and it's quite uncontroversial that in consequentialist theories the value of an action should depend on the expected utility it produces.
  • is it possible that the rational conclusion is to exponentially discount future welfare as a way of accounting for the exponential sensitivity to initial conditions exhibited by the long ter
...
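For intuition on the chaos point, here is a toy illustration (my own, not from the post) using the logistic map: two trajectories that start 1e-10 apart diverge roughly exponentially, so forecasts of welfare-relevant consequences lose precision at a corresponding exponential rate.

```python
def logistic_trajectory(x0: float, steps: int, r: float = 4.0):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 100)
b = logistic_trajectory(0.2 + 1e-10, 100)
gaps = [abs(x - y) for x, y in zip(a, b)]
# The initial 1e-10 gap grows to order 1 within a few dozen steps.
print("gap at step 0:  ", gaps[0])
print("max gap reached:", round(max(gaps), 3))
```

Whether this licenses exponentially discounting welfare itself is exactly the question the bullet points raise; the sketch only shows that *predictability* of consequences can decay exponentially.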

The Double Up Drive, an EA donation-matching campaign (highly recommended), has, in one group of charities that it's matching donations to:

  • International Refugee Assistance Project
  • Massachusetts Bail Fund
  • StrongMinds

StrongMinds is quite prominent in EA as the mental health charity; most recently, Founders Pledge recommends it in their report on mental health.

The International Refugee Assistance Project (IRAP) works in immigration reform, and is a recipient of grants from Open Philanthropy as well as being recommended for individual donors by an Open Phil member o...

Open Phil has made multiple grants to the Brooklyn Community Bail Fund, which seems to do similar work to the MA Bail Fund (and was included in Dan Smith's 2017 match). I don't know why MA is still here and Brooklyn isn't, but it may have something to do with room for more funding or a switch in one of the orgs' priorities.

You've probably seen this, but Michael Plant included StrongMinds in his mental health writeup on the Forum.

The EA movement is disproportionately composed of highly logical, analytically minded individuals, often with explicitly quantitative backgrounds. The intuitive-seeming folk explanation for this phenomenon is that EA, with its focus on rigor and quantification, appeals to people with a certain mindset, and that the relative lack of diversity of thinking styles in the movement is a function of personality type.

I want to reframe this in a way that I think makes a little more sense: the case for an EA perspective is really only made in an analytic, quant...

Showing 3 of 4 replies
1 | Matt_Lerner | 23d: Thanks for your thoughts. I wasn't thinking about the submerged part of the EA iceberg (e.g. GWWC membership), and I do feel somewhat less confident in my initial thoughts. Still, I wonder if you'd countenance a broader version of my initial point: that there is a way of thinking that is not itself explicitly quantitative, but that is nonetheless very common among quantitative types. I'm tempted to call this 'rationality,' but it's not obvious to me that this thinking style is as all-encompassing as what LW-ers, for example, mean when they talk about rationality. The examples you give of commonsensical versions of expected value and probability are what I'm thinking about here; perhaps the intuitive, informal versions of these concepts are soft prerequisites.

This thinking style is not restricted to the formally trained, but it is more common among them (because it's trained into them). So in my (revised) telling, the thinking style is a prerequisite, and explicitly quantitative types are overrepresented in EA simply because they're more likely to have been exposed to these concepts in either a formal or informal setting.

The reason I think this might be important is that I occasionally have conversations in which these concepts—in the informal sense—seem unfamiliar. "Do what has the best chance of working out" is, in my experience, a surprisingly rare way of conducting everyday business in the world, and some people seem to find it strange and new to think in that fashion. The possible takeaway is that some basic informal groundwork might need to be done to maximize the efficacy of different EA messages.
2 | aarongertler | 22d: I basically agree that having intuitions similar to those I outlined seems very important, and perhaps necessary for getting involved with EA. (I think you can be "interested" without those things, because EA seems shiny and impressive if you read certain things about it, but not having a sense for how you should act based on EA ideas will limit how involved you actually get.) Your explanation about exposure to related concepts almost certainly explains some of the variance you've spotted. I spend a lot of my EA-centric conversations trying to frame things to people in a non-quantitative way (at least if they aren't especially quantitative themselves).

I'm a huge fan of people doing "basic groundwork" to maximize the efficacy of EA messages. I'd be likely to fund such work if it existed and I thought the quality was reasonably high. However, I'm not aware of many active projects in this domain; [] and normal marketing by GiveWell et al. are all that come to mind, plus things like big charitable matches that raise awareness of EA charities as a side effect.

Oh, and then there's this contest, which I'm very excited about and would gladly sponsor more test subjects for if possible. Thanks for reminding me that I should write to Eric Schwitzgebel about this.

Question to look into later: How has the EA community affected the charities it has donated to over the past decade?

Some charities that seem like they'd be able to provide especially good feedback on this:

  • AMF (major recipient of EA funding, very thoughtful leader in Rob Mather)
  • GiveDirectly (major recipient, thoughtful leadership)
  • Evidence Action (got lots of EA funding for some projects but not others, wound up shutting down a project, not sure how much the shutdown was their initiative vs. external pressure from GiveWell)
  • Any other GiveWell top charity (especially if they dropped off the Top Charities list at some point, or were added after being considered but rejected)
...

Economic benefits of mediocre local human-preference modeling.

Epistemic status: Half-baked, probably dumb.

Note: writing is mediocre because it's half-baked.

Some vague brainstorming of economic benefits from mediocre human-preference models.

Many AI Safety proposals include understanding human preferences as one of their subcomponents [1]. While this is not obviously good [2], human modeling seems at least plausibly relevant and good.

Short-term economic benefits often spur additional funding and research interest [citation not given]. So a possible quest...

I do a lot of cross-posting because of my role at CEA. I've noticed that this racks up a lot of karma that feels "undeserved" because of the automatic strong upvotes that get applied to posts. From now on (if I remember; feel free to remind me!), I'll be downgrading the automatic strong upvotes to weak upvotes. I'm not canceling the votes entirely because I'm guessing that at least a few people skip over posts that have zero karma.

This could be a bad idea for reasons I haven't thought of yet, and I'd welcome any feedback.

2 | Stefan_Schubert | 25d: Could the option to strongly upvote one's own comments (and posts, in case you remove the automatic strong upvotes on posts) be disabled, as discussed here []? Thanks.
5 | jpaddison | 1mo: Just to clarify, you know your self-votes don't get you karma?

...nope. That's good to know, thanks! Given that, I don't think I'll bother to un-strong-upvote myself.

Why don't we see more advice about (or mentions of) donating through a last will, like Effective Legacy? Is it too obvious? Or absurd?

All the other cases I've seen of someone discussing charity & wills were about the dilemma "give now vs. (invest and) give post mortem". But we can expect that even GWWC pledgers save something for retirement or emergencies; so why not leave a part of it to the most effective charities, too? Besides, this may attract non-pledgers equally: even if you're not willing to sacrifice a portion of your consumption for the sake of t...

5 | aaronhamlin | 1mo: I agree with you that this is an important area. I wrote a whole essay on the technical aspects of planned giving. [] I have some more related essays here: []
2 | Ramiro | 1mo: Thanks. Your post strengthened my conviction that EAs should think about the subject - of course, the optimal strategy may vary a lot according to one's age, wealth, country, personal plans, etc. But I still wonder: a) would similar arguments convince non-EA people? b) why don't EAs (even pledgers) do something like that (i.e., take their deaths into account)? Or if they do it "discreetly", why don't they talk about it? (I know most people don't think too much about what is gonna happen if they die, but EAs are kinda different.) (I greatly admire your work, btw)

I'm aware of many people in EA who have done some amount of legacy planning. Ideally, the number would be "100%", but this sort of thing does take time which might not be worthwhile for many people in the community given their levels of health and wealth.

I used this Charity Science page to put together a will, which I've left in the care of my spouse (though my parents are also signatories).
