Obligatory link to Scott Alexander's "Ambijectivity" regarding the contentiousness of defining great art.
In the last paragraph, did you mean to write "the uncertainty surrounding the expected value of each policy option is high"?
While true, I think most proposed EA policy projects are much too small in scope to be able to move the needle on trust, and so need to take the currently-existing level of trust as a given.
I agree that the word 'populism' is very prone to misunderstandings, but I think the term 'technocracy' is acceptably precise. While precision is important, I think we should balance this against the benefits of using more common words, which make it easier for the reader to make connections with other arguments in favour of or against a concept.
I should clarify: I think the misunderstandings are symptoms of a deeper problem, which is that the concept of "technocracy" is too many different things rolled into one word. This isn't about jargon vs. non-ja...
I also think that EAs haven't sufficiently considered populism as a tool to deal with moral uncertainty.
I agree that there hasn't been much systematic study of this question (at least not that I'm aware of), and maybe there should be. That being said, I'm deeply skeptical that it's a good idea, and I think most other EAs who've considered it are too, which is why you don't hear it proposed very often.
Some reasons for this include:
I am not convinced that there is much thinking amongst EAs about experts misusing technocracy by focusing on their own interests.
In at least one particular case (AI safety), a somewhat deliberate decision was made to deemphasize this concern, because of a belief not only that it's not the most important concern, but that focus on it is actively harmful to concerns that are more important.
For example, Eliezer (who pioneered the argument for worrying about accident risk from advanced AI) contends that the founding of OpenAI was an instance of this. In his tel...
I don't think there has been much thinking about whether equally distributed political power should or should not be an end in itself.
On the current margin, that's not really the question; the question is whether it's an end-in-itself whose weight in the consequentialist calculus should be high enough to overcome other considerations. I don't feel any qualms about adopting "no" as a working assumption to that question. I do think I value this to some extent, and I think it's right and good for that to affect my views on rich-country policies where the stak...
it seems fairly clear to me that more populism is preferable under higher uncertainty, and more technocracy is preferable when plausible policy options have a greater range of expected values.
I'm sorry, I don't understand what the difference is between those things.
I think someone should research policy changes in democratic countries which counterfactually led to the world getting a lot better or worse (under a range of different moral theories, and under public opinion), and the extent to which these changes were technocratic or populist. This would be useful to establish the track records of technocracy and populism, giving us a better reason to generally lean one way or the other.
This is exactly the kind of thing that I think won't work, because reality is underpowered.
I forgot to link this earlier, but it turns ...
First of all, thanks for this post. The previous post on this topic (full disclosure: I haven't yet managed to read the paper in detail) poisoned the discourse pretty badly by being largely concerned with meta-debate and by throwing out associations between the authors' dispreferred policy views and various unsavory-sounding concepts. I was worried that this meant nobody would try to address these questions in a constructive manner, and I'm glad someone has.
I also agree that there's been a bit of unreflectiveness in the adoption of a technocratic-by-defaul...
This (often framed as being about the hard problem of consciousness) has long been a topic of argument in the rationalsphere. What I've observed is that some people have a strong intuition that they have a particular continuous subjective experience that constitutes what they think of as being "them", and other people don't. I don't think this is because the people in the former group haven't thought about it. As far as I can tell, very little progress has been made by either camp of converting the other to their preferred viewpoint, because the intuitions remain even after the arguments have been made.
I think SpaceX's regular non-Mars-colonization activities are in fact taken seriously by relevant governments, and the Mars colonization stuff seems like it probably won't happen and also wouldn't be that big a deal if it did (in terms of, like, national security; it would definitely affect who gets into the history books). So it doesn't seem to me like governments are necessarily acting irrationally there.
Same with cryptocurrency; its implications for investor protection, tax evasion, capital controls evasion, and facilitating illicit transactions are ind...
As far as I'm aware, the first person to explicitly address the question "why are literary utopias consistently places you wouldn't actually want to live?" was George Orwell, in "Why Socialists Don't Believe in Fun". I consider this important prior art for anyone looking at this question.
EAsphere readers may also be familiar with the Fun Theory Sequence, which Orwell was an important influence on.
On a related note, I get the impression that utopianism was not as outright intellectually discredited and unfashionable when Orwell wrote as it is today (e.g., t...
I'll post Catherine's reply and then raise a couple of issues:
...Thanks for your question. You’re right that we model GiveDirectly as the least cost-effective top charity on our list, and we prioritize directing funds to other top charities (e.g. through the Maximum Impact Fund). GiveDirectly is the benchmark against which we compare the cost-effectiveness of other opportunities we might fund.
As we write in the post above, standout charities were defined as those that “support programs that may be extremely cost-effective and are evidence-backed”.
I think it would be good for CEA to provide a clear explanation, that it (not LW) stands behind as an organization, of exactly what real value it views as being on the line here, and why it thinks it was worthwhile to risk that value.
Since you're (among other things) listing reference classes that people might put claims about transformative AI into, I'll note that millenarianism is a common one among skeptics. I.e., "lots of [mostly religious] groups throughout history have claimed that society is soon going to be swept away by an as-yet-unseen force and replaced with something new, and they were all deluded, so you are probably deluded too".
Reading this thread, I sort of get the impression that the crux here is between people who want EA to be more institutional (for which purpose the current name is kind of a problem) and people who want it to be more grassroots (for which purpose the current name works pretty okay).
There are other issues with the current name, like the thing where it opens us up to accusations of hypocrisy every time we fail to outperform anyone on anything, but I'm not sure that that's really what's driving the disagreement here. Partly, this is because people have tried t...
Do you think the Biden campaign had room for more funding, i.e., that your donation made a Biden victory more likely on the margin (by enough to be worth it)? I am pretty skeptical of this; I suspect they already had more money than they were able to spend effectively. (I don't have a source for this other than Maciej Cegłowski, who has relevant experience but whom I don't agree with on everything; on the other hand, I can't recall ever hearing anyone make the case that U.S. presidential general-election campaigns do have room for more funding, and I'd be ...
Alcor claims on their brochure that membership dues "may be" tax-deductible. It's not clear to me how they concluded that. Somebody should probably ask them.
The second point there seems like the one that's actually relevant. It strikes me as unlikely that doing this with blockchain is less work than with conventional payment systems even if the developers have done blockchain things before, and conventional payment systems are even faster and more fungible with other assets than Ethereum. I'm reading the second point there as suggesting something like, you're hoping that funding for this will come in substantial part from people who are blockchain enthusiasts rather than EAs, and who therefore wouldn't be interested if it used conventional payment infrastructure?
(I agree that the "relics" idea is, at best, solving a different problem.)
The post seems relatively optimistic. I'm worried that this may be motivated reasoning, and/or political reasoning (e.g., that people won't listen to anyone who isn't telling them that we can solve the crisis without doing anything too costly). Mind you, I'm not any kind of expert; I'm just suspicious-by-default given that most other analysis I've seen seems less optimistic (note that there are probably all kinds of horrible selection biases in what I'm reading and I have no idea what they are). Also, the author isn'...
I'm more likely to do this if there's a specific set of data I'm supposed to collect, so that I can write it down before I forget.
Yeah, I should have known I'd get called out for not citing any sources. I'm honestly not sure I'd particularly believe most studies on this no matter what side they came out on; too many ways they could fail to generalize. I am pretty sure I've seen LW and SSC posts get cited as more authoritative than their epistemic-status disclaimers suggested, and that's most of why I believe this; generalizability isn't a concern here since we're talking about basically the same context. Ironically, though, I can't remember which posts. I'll keep looking for examples.
"Breakthroughs" feel like the wrong thing to hope for from posts written by non-experts. A lot of the LW posts that the community now seems to consider most valuable weren't "breakthroughs". They were more like explaining a thing, such that each individual fact in the explanation was already known, but the synthesis of them into a single coherent explanation that made sense either hadn't previously been done, or had been done only within the context of an academic field buried in inferential distance. Put another way, it seems...
It seems clear to me that epistemic-status disclaimers don't work for the purpose of mitigating the negative externalities of people saying wrong things, especially wrong things in domains where people naturally tend towards overconfidence (I have in mind anything that has political implications, broadly construed). This follows straightforwardly from the phenomenon of source amnesia, and anecdotally, there doesn't seem to be much correlation between how much, say, Scott Alexander (whom I'm using here because his blog is widely read) hedges in the disclaimer of any given post and how widely that post winds up being cited later on.
This post caused me to apply to a six-month internal rotation program at Google as a security engineer. I start next Tuesday.
I would like to see efforts at calibration training for people running EA projects. This would be useful for helping to push those projects in a more strategic direction, by having people lay out predictions regarding outcomes at the outset, kind of like what Open Phil does with respect to their grants.
Can you give an example of a time when you believe that the EA community got the wrong answer to an important question as a result of not following your advice here, and how we could have gotten the right answer by following it?
Sure. To be clear, I think most of what I'm concerned about applies to prioritization decisions made in highly-uncertain scenarios. So far, I think the EA community has had very few opportunities to look back and conclusively assess whether highly-uncertain things it prioritized turned out to be worthwhile. (Ben makes a similar point at https://www.lesswrong.com/posts/Kb9HeG2jHy2GehHDY/effective-altruism-is-self-recommending.)
That said, there are cases where I believe mistakes are being made. For example, I think mass deworming in areas where almost ...
Apologies if this is a silly question, but could you give examples of specific, concrete problems that you think this analysis is relevant to?
Does your recommendation account for the staff-time costs of doing anything other than whatever an org's current setup is? Orgs like CEA have stated that this is why they don't do financial-optimization things like this.
Since you mentioned CEA, I'll use them as an example. CEA internally values staff time at $75 an hour. Assuming CEA moves the $4,005,000 in cash (as of January 31, 2019) in EA Funds to StoneCastle's zero-risk 2.4% banking option, the expected yearly gain is $96,120. Making the conservative assumption of 10 hours of setup time to fill out a short application and get internal approval and 10 minutes a month using an online interface to transfer money between StoneCastle and a checking account, this adds up to $900 a year in operational expenses, wh...
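To make the arithmetic explicit, here's a quick sketch using the figures above (the dollar amounts and rates are from the comment; the setup and ongoing time estimates are, as noted, conservative assumptions rather than measurements):

```python
# Back-of-the-envelope comparison: interest gained vs. staff-time cost.
# Figures are from the comment above; time estimates are assumptions.

funds_balance = 4_005_000   # EA Funds cash as of January 31, 2019 ($)
interest_rate = 0.024       # StoneCastle's zero-risk banking option
staff_rate = 75             # CEA's internal value of staff time ($/hour)

setup_hours = 10            # assumed one-time application/approval time
monthly_minutes = 10        # assumed ongoing transfer admin per month

expected_gain = funds_balance * interest_rate               # $96,120/year
first_year_hours = setup_hours + 12 * monthly_minutes / 60  # 12 hours
operational_cost = first_year_hours * staff_rate            # $900/year

print(f"Expected yearly gain:   ${expected_gain:,.0f}")
print(f"First-year staff cost:  ${operational_cost:,.0f}")
print(f"Net first-year benefit: ${expected_gain - operational_cost:,.0f}")
```

Even with these rough assumptions, the expected gain exceeds the staff-time cost by roughly two orders of magnitude.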
I don't think there was necessarily anything wrong with it, I'd just encourage future organizers to consider more explicitly what the goal is and how to achieve it.
No one on the team knew the donor, though he had donated to EA causes in the past and was acquainted with relevant people at CEA. We offered him VIP tickets and then he put $2,000 in the pay-what-you-want box in our online ticketing system. I think it was primarily thought of as defraying conference costs, and indeed we came in less than $2,000 under budget.
The organizers included Matt Reardon (OP and lead organizer) from Harvard Law School, Jen Eason and Vanessa Ruales from Harvard College, Juan Gil from MIT, Rebecca Baron from Tufts, and myself (no insti...
I don't think nobody delved into the Cool Earth numbers because they assumed a bunch of smart people had already done it. I think nobody delved into the Cool Earth numbers because it wasn't worth their time, because climate change charities generally aren't competitive with the standard EA donation opportunities, so the question is only relevant if you've decided for non-EA reasons that you're going to focus on climate change. (Indeed, if I understand correctly the Founders Pledge report was written primarily for non-EA donors who'd decided this.)
...I think nobody delved into the Cool Earth numbers because it wasn't worth their time, because climate change charities generally aren't competitive with the standard EA donation opportunities
This claim seems like exactly what people felt was too hubristic: how could anyone be so confident, on the basis of a quick survey of such a complex area, that climate didn't match up to other donation opportunities?
I don't think I would call this hubris. We all knew that the Cool Earth recommendation was low-confidence. But what else were we going to do? To paraphrase Scott Alexander from another recent community controversy, our probability distribution was wide but centered around Cool Earth.
I do think that that nuance occasionally got lost when doing outreach to people not already very informed about EA, but that's a different problem. We haven't solved it, but I feel like that's because it's hard, not because nobody's thought about i...
One thing to emphasize more than that writeup did is that, in EA terms, donating to such a lightly researched intervention (a few months' work) is very likely dominated by donating to better research the area, finding higher-expected-value options, and influencing others.
On the other hand, the point estimates in that report favored other charities like AMF over Cool Earth anyway, a conclusion strengthened by the OP critique (not that it excludes something else orders of magnitude better being found like unusual energy research, very effective political lobby...
We all knew that the Cool Earth recommendation was low-confidence.
I just glanced at the part of Doing Good Better that discusses Cool Earth. It doesn't seem that low-confidence to me, and does seem a bit self-congratulatory/hubristic.
We haven't solved it, but I feel like that's because it's hard, not because nobody's thought about it.
I think I've observed information cascades in Effective Altruism relating to both global poverty and AI risk. The thinking seems to go something like: EA is a great community full of smart peopl...
I suspect that it was widely recognized for quite some time that GWWC's analysis of Cool Earth was outdated enough not to be trustworthy. People donated to Cool Earth anyway because it was the only climate-change charity that we had any particular reason to believe was better than others. This, of course, has changed with the Founders Pledge report, and as such I predict that EA interest in Cool Earth will fade with time.
I looked a little to try to figure out why the criticisms of Cool Earth don't also apply to the Coalition for Rainforest Nation...
The Rainforest Coalition have advocated for measurement of deforestation prevention at the national level, which means that you don't get intra-national displacement. Since all rainforest countries have signed up for REDD (and it is enshrined in the Paris Agreement), you also don't get international displacement.
Project-based approaches to deforestation, as carried out by Cool Earth, were rejected for years by the UN as verified CO2 reductions because of the problems outlined in the post, amongst others.
Also, the cases for contraception and female education as climate-change interventions seem much, much more speculative than the case for rainforest conservation, so much so that their respective cost-effectiveness numbers probably ought not to be directly compared.
GiveWell doesn't directly use literal DALYs in their current cost-effectiveness estimates. They have a research page on them; the linked blog posts were originally published a long time ago, but were updated relatively recently, so they presumably still stand by them. See also this more recent post.
GiveWell's cost-effectiveness spreadsheet includes a tab on moral weights. You can make a copy of it, change the numbers to represent your preferred views on population ethics, and see what this does to the results.
I think the big problem with the narrow focus is that newbie EAs, especially if they're students, tend to get saturated with the message that the way to do good with your life is to go to 80,000 Hours and follow their career advice. Indeed, CEA's official advice for local group leaders says to heavily emphasize this. And they get this message relatively early in the sales funnel, long before they've gone through anything that would filter out the majority who aren't good candidates for 80,000 Hours's top priority paths. So it ought...
So it ought not to surprise anyone that a huge fraction of them come away demoralized.
I want to quickly point out that we don’t have enough evidence to conclude that ‘a huge fraction’ are demoralized. We have several reports and some intuitive reasons to expect that some are. We also have plenty of reports of people saying 80,000 Hours made them more motivated and ambitious, and helped them find more personally meaningful and satisfying careers. It’s hard to know what the overall effect is on motivation.
I'm not convinced it's the impact-maximizing approach either. Some people who could potentially win the career "lottery" and have a truly extraordinary impact might reasonably be put off early on by advice that doesn't seem to care adequately about what happens to them in the case where they don't win.
I suspect that it is a bad idea to publicly advocate this (though using it is fine). I'm not worried so much about moral licensing; rather, I think the amount of money being moved in this way is so tiny, relative to the amount of attention required in order to move it, that in a genuinely impact-focused discussion of possible ways to do good it would not even come up. I fear that bringing it up in association with EA gives a misleading impression of what the EA approach to prioritization looks like.
Is the nomination form supposed to have contact information? I just nominated a potential speaker who I'm connected to, but realized that you may have no way to get in touch with me.
So assuming you don't win, are you allowed to post your essay on your own blog? Or would this undermine CEA's ability to cannibalize bits of it?
I've often thought that there should be separate "phatic" and "substantive" comment sections.