Who should pay the cost of Googling studies on the EA Forum?
Many EA Forum posts have minimal engagement with relevant academic literature
If you see a Forum post that doesn't engage with literature you think is relevant, you could make a claim from memory without looking up a citation, but there's a reasonable chance you'll be wrong.
Many people say they'd rather see an imperfect post or comment than not have it at all.
But people tend to remember an original claim, even if it's later debunked.
Maybe the best option is to phrase my comment...
I've always thought there's a lower bar for commenting than for writing a top-level post, but maybe both should be reasonably high (for example, you should be able to provide some evidence for your claim in a comment, and show some actual engagement with relevant literature in a post).
I find the unilateralist’s curse a particularly valuable concept to think about. However, I now worry that “unilateralist” is an easy label to tack on, and whether a particular action is unilateralist or not is susceptible to small changes in framing.
Consider the following hypothetical situations:
Good point. Now that you bring this up, I vaguely remember a Reddit AMA where an evolutionary biologist made the (obvious in hindsight, but it never occurred to me at the time) claim that with multilevel selection, altruism on one level often means defecting on a higher (or lower) level. Which probably unconsciously inspired this post!
As for making it top level, I originally wanted to include a bunch of thoughts on the unilateralist's curse as a post, but then I realized that I'm a one-trick pony in this domain...hard to think of novel/useful thi...
Philosophers and economists seem to disagree about the marginalist/arbitrage argument that a social discount rate should equal (or at least be majorly influenced by) the marginal social opportunity cost of capital. I wonder if there's any discussion of this topic in the context of negative interest rates. For example, would defenders of that argument accept that, as those opportunity costs decline, so should the SDR?
Yes, governments lower the SDR as the interest rate changes. See for example the US Council of Economic Advisers's recommendation on this three years ago: https://obamawhitehouse.archives.gov/sites/default/files/page/files/201701_cea_discounting_issue_brief.pdf
While the "risk-free" interest rate is roughly zero these days, the interest rate to use when discounting payoffs from a public project is the rate of return on investments whose risk profile is similar to that of the public project in question. This is still positive for basically a...
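As a toy illustration of why the choice of discount rate matters so much for long-horizon public projects (my own sketch, not from the thread; the payoff and rates are arbitrary examples):

```python
def present_value(payoff, rate, years):
    """Discount a future payoff to present value at a constant annual rate."""
    return payoff / (1 + rate) ** years

# A $1M social benefit arriving in 50 years, under a few illustrative rates:
for r in (0.0, 0.03, 0.07):
    print(f"rate {r:.0%}: PV = ${present_value(1_000_000, r, 50):,.0f}")
```

At a 7% rate, a benefit 50 years out shrinks by roughly a factor of 30, which is why even modest disagreements about the SDR can flip a project's cost-benefit verdict.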
[Some of my tentative and uncertain views on AI governance, and different ways of having impact in that area. Excerpts, not in order, from things I wrote in a recent email discussion, so not a coherent text.]
1. In scenarios where OpenAI, DeepMind etc. become key actors because they develop TAI capabilities, our theory of impact will rely on a combination of affecting (a) 'structure' and (b) 'content'. By (a) I roughly mean what the relevant decision-making mechanisms look like, irrespective of the specific goals and resources of the actor...
Thanks for sharing your reaction! There is some chance that I'll write up these and maybe other thoughts on AI strategy/governance over the coming months, but it depends a lot on my other commitments. My current guess is that it's maybe only 15% likely that I'll think this is the best use of my time within the next 6 months.
I don't know if there is a designated place to leave comments about the EA Forum, so for the time being I'm posting them here. I think the current homepage has a number of problems:
After more thought, we’ve decided that we will change the name to “Forum Favorites”
Great, thank you!
What's the right narrative about global poverty and progress? Link dump of a recent debate.
The two opposing views are:
(a) "New optimism:"  This is broadly the view that, over the last couple of hundred years, the world has been getting significantly better, and that's great.  In particular, extreme poverty has declined dramatically, and most other welfare-relevant indicators have improved a lot. Often, these effects are largely attributed to economic growth.
Thanks Aaron for your response.
I am assigning positive value to both improvements in knowledge and increased energy use (via the tapping of fossil fuel energy); I am not weighing one against the other. I am saying that without the increased energy from fossil fuels, we would still be agricultural societies, with the repeated rise and fall of empires. The Indus Valley civilization, the ancient Greeks, and the Mayans all repeatedly crashed. At the peak of those civilizations, I am sure art, culture, and knowledge flourished. Eventually humans outran their resources and crashed.
So, I saw Vox's article on how air filters create huge educational gains; I'm particularly surprised that indoor air quality (actually, indoor environmental conditions) is kinda neglected everywhere (except, maybe, in dangerous jobs). But then I saw this (convincing) critique of the underlying paper.
It seems to me that this is a suitable case for a blind RCT: you could install fake air filters in order to control for placebo effects, etc. But then I googled a little bit... and I haven't found significant studies using blind RCTs in social sci...
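To make the case for sham filters concrete, here's a minimal simulation (entirely my own toy model; the effect sizes and function name are arbitrary): if the filters truly do nothing but visibly being "treated" boosts outcomes, an unblinded comparison recovers the placebo effect, while a sham-controlled one doesn't.

```python
import random

def estimated_effect(n, true_effect, placebo_effect, blinded, seed=0):
    """Treated-minus-control mean outcome in a simulated noisy trial.

    Unblinded: only the treated arm experiences the placebo boost.
    Blinded (sham filters): both arms believe they were treated, so
    the placebo boost applies to both and cancels out.
    """
    rng = random.Random(seed)
    treated = [true_effect + placebo_effect + rng.gauss(0, 1) for _ in range(n)]
    control = [(placebo_effect if blinded else 0.0) + rng.gauss(0, 1)
               for _ in range(n)]
    return sum(treated) / n - sum(control) / n

print("unblinded:", estimated_effect(10_000, 0.0, 0.5, blinded=False))  # biased, ~0.5
print("blinded:  ", estimated_effect(10_000, 0.0, 0.5, blinded=True))   # near 0
```

The true effect here is zero, yet the unblinded design reports a sizable "benefit" — exactly the confound a sham-filter control would remove.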
There are some pretty good reasons to keep your identity small. http://www.paulgraham.com/identity.html
But I see people using that as an excuse to not identify as... anything. As in, they avoid affiliating themselves with any social movements, sports teams, schools, nation-states, professions, etc.
It can be annoying and confusing when you ask someone "are you an EA?" or "are you a Christian?" or "are you British?" and they won't give you a straight answer. It's partly annoying because I'm very rationally trying to make some shortcut assumptions about them.
Yep, makes sense to me! It's difficult for me to identify with a particular denomination of Christianity because I grew up at a non-denominational church and since then I've attended 3 different denominations. So I definitely get the struggle to identify yourself when none of the usual labels quite fit! But I don't have to be a complete mystery - at least I can still say I'm "Christian" or "Protestant"
I'm 60% sure that LessWrong people use the term "Moloch" in almost exactly the same way as social justice people use the term "kyriarchy" (or "capitalist cis-hetero patriarchy").
I might program my browser to replace "Moloch" with "kyriarchy". Might make Christian Twitter confusing though.
Basic Research vs Applied Research
1. If we are at the Hinge of History, it is less reasonable to focus on long-term knowledge building via basic research, and vice versa.
2. If we have identified the most promising causes well, then targeted applied research is promising.
This is a summary of the argument for the procreation asymmetry here and in the comments, especially this comment, which also looks further at the case of bringing someone into existence with a good life. This is essentially Johann Frick's argument, reframed. The starting claim is that your ethical reasons are in some sense conditional on the existence of individuals, and the asymmetry between existence and nonexistence can lead to the procreation asymmetry.
1. Choosing to not bring someone into existence, all else equal, is a "stable solution"...
I think that some causes may have increasing marginal utility. Specifically, I think this may be true of some types of research that are expected to generate insights about their own domain.
Testing another idea for a cancer treatment is probably of decreasing marginal utility (because the low hanging fruits are being picked up), but basic research in genetics may be of increasing marginal utility (because even if others may work on the best approaches, you could still improve their productivity by giving them further insights).
This is not true if t...
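The contrast above can be sketched with a toy model (my own illustration; the functional forms are arbitrary stand-ins, not claims about actual research productivity):

```python
def marginal(values):
    """Marginal gain from each additional unit of investment."""
    return [b - a for a, b in zip(values, values[1:])]

# Applied research (stylized): diminishing returns as low-hanging fruit is picked.
applied = [round(n ** 0.5, 2) for n in range(1, 7)]

# Basic research (stylized): each insight raises everyone else's productivity,
# so total output can grow faster than linearly in investment.
basic = [round(1.3 ** n, 2) for n in range(1, 7)]

print(marginal(applied))  # shrinking increments
print(marginal(basic))    # growing increments
```

The point is only directional: whether a cause sits on the concave or convex curve changes whether "others are already working on it" is an argument for or against entering.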
Morgan Kelly, The Standard Errors of Persistence
A large literature on persistence finds that many modern outcomes strongly reflect characteristics of the same places in the distant past. However, alongside unusually high t statistics, these regressions display severe spatial autocorrelation in residuals, and the purpose of this paper is to examine whether these two properties might be connected. We start by running artificial regressions where both variables are spatial noise and find that, even for modest ranges of spatial correlation between points, t st...
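Kelly's artificial-regression exercise can be reproduced in miniature: regress two independent, spatially correlated noise series on each other, and naive OLS t statistics reject far more often than the nominal ~5%. A rough sketch (my own simulation, using simple AR(1) correlation along a line as a stand-in for 2-D spatial noise):

```python
import math
import random

def ar1_noise(n, rho, rng):
    """Correlated noise along a line: each point mixes its neighbor's value."""
    x = [rng.gauss(0, 1)]
    for _ in range(n - 1):
        x.append(rho * x[-1] + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1))
    return x

def t_stat(x, y):
    """Naive OLS t statistic for the slope of y on x, assuming iid errors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    beta = sxy / sxx
    resid = [b - my - beta * (a - mx) for a, b in zip(x, y)]
    s2 = sum(e ** 2 for e in resid) / (n - 2)
    return beta / math.sqrt(s2 / sxx)

rng = random.Random(1)
for rho in (0.0, 0.9):
    ts = [abs(t_stat(ar1_noise(200, rho, rng), ar1_noise(200, rho, rng)))
          for _ in range(200)]
    print(f"rho={rho}: share of |t| > 2 (nominal ~5%):",
          sum(t > 2 for t in ts) / len(ts))
```

The naive standard errors assume independent observations, so under strong autocorrelation the effective sample size is far smaller than n, and "significant" relationships appear between pure noise — the mechanism the paper argues drives many persistence results.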
Discounting the future consequences of welfare-producing actions:
The Double Up Drive, an EA donation-matching campaign (highly recommended), includes the following in one group of charities it's matching donations to:
StrongMinds is quite prominent in EA as the mental health charity; most recently, Founders Pledge recommends it in their report on mental health.
The International Refugee Assistance Project (IRAP) works in immigration reform, and is a recipient of grants from OpenPhilanthropy as well as recommended for individual donors by an OpenPhil member o...
Open Phil has made multiple grants to the Brooklyn Community Bail Fund, which seems to do similar work to the MA Bail Fund (and was included in Dan Smith's 2017 match). I don't know why MA is still here and Brooklyn isn't, but it may have something to do with room for more funding or a switch in one of the orgs' priorities.
You've probably seen this, but Michael Plant included StrongMinds in his mental health writeup on the Forum.
The EA movement is disproportionately composed of highly logical, analytically minded individuals, often with explicitly quantitative backgrounds. The intuitive-seeming folk explanation for this phenomenon is that EA, with its focus on rigor and quantification, appeals to people with a certain mindset, and that the relative lack of diversity of thinking styles in the movement is a function of personality type.
I want to reframe this in a way that I think makes a little more sense: the case for an EA perspective is really only made in an analytic, quant...
Oh, and then there's this contest, which I'm very excited about and would gladly sponsor more test subjects for if possible. Thanks for reminding me that I should write to Eric Schwitzgebel about this.
Question to look into later: How has the EA community affected the charities it has donated to over the past decade?
Some charities that seem like they'd be able to provide especially good feedback on this:
Economic benefits of mediocre local human preferences modeling.
Epistemic status: Half-baked, probably dumb.
Note: writing is mediocre because it's half-baked.
Some vague brainstorming of economic benefits from mediocre human preferences models.
Many AI Safety proposals include understanding human preferences as one of their subcomponents. While this is not obviously good, human modeling seems at least plausibly relevant and good.
Short-term economic benefits often spur additional funding and research interest [citation not given]. So a possible quest...
I do a lot of cross-posting because of my role at CEA. I've noticed that this racks up a lot of karma that feels "undeserved" because of the automatic strong upvotes that get applied to posts. From now on (if I remember; feel free to remind me!), I'll be downgrading the automatic strong upvotes to weak upvotes. I'm not canceling the votes entirely because I'm guessing that at least a few people skip over posts that have zero karma.
This could be a bad idea for reasons I haven't thought of yet, and I'd welcome any feedback.
...nope. That's good to know, thanks! Given that, I don't think I'll bother to un-strong-upvote myself.
Why don't we see more advice about (or mentions of) donating through a last will, like Effective Legacy? Is it too obvious? Or absurd?
Every other case I've seen of someone discussing charity & wills was about the dilemma "give now vs. (invest and) give post mortem". But we can expect that even GWWC pledgers save something for retirement or emergencies; so why not leave a part of it to the most effective charities, too? Besides, this may attract non-pledgers equally: even if you're not willing to sacrifice a portion of your consumption for the sake of t...
I'm aware of many people in EA who have done some amount of legacy planning. Ideally, the number would be "100%", but this sort of thing does take time which might not be worthwhile for many people in the community given their levels of health and wealth.
I used this Charity Science page to put together a will, which I've left in the care of my spouse (though my parents are also signatories).