There's an accompanying column in The Guardian:
Running with MacAskill’s line of reasoning, we asked participants in this week’s Guardian Essential poll to think through whether future time horizons would be positive or negative for humanity (although we confined our frame to a relatively conservative ten millennia).
If I were debating you on the topic, it would be wrong to say that you think it's a Pascal's mugging. But I read your post as being a commentary on the broader public debate over AI risk research, trying to shift it away from "tiny probability of gigantic benefit" in the way that you (and others) have tried to shift perceptions of EA as a whole or the focus of 80k. And in that broader debate, Bostrom gets cited repeatedly as the respectable, mainstream academic who puts the subject on a solid intellectual footing.
(This is in contrast to MIRI, w...
The New Yorker writer got it straight out of this paper of Bostrom's (paragraph starting "Even if we use the most conservative of these estimates"). I've seen a couple of people report that Bostrom made a similar argument at EA Global.
I get what you're saying, but, e.g., in the recent profile of Nick Bostrom in the New Yorker:
...No matter how improbable extinction may be, Bostrom argues, its consequences are near-infinitely bad; thus, even the tiniest step toward reducing the chance that it will happen is near-infinitely valuable. At times, he uses arithmetical sketches to illustrate this point. Imagining one of his utopian scenarios—trillions of digital minds thriving across the cosmos—he reasons that, if there is even a one-per-cent chance of this happening, the expected value of redu
Thanks Peter! I'll make the top-level post later today.
How did you do that so quickly?
(I might have given the impression that I did this all during a weekend. This isn't quite right -- I spent 2-3 evenings, about 8 hours in total, going from the raw csv files to a nice, compact .js function. Then I wrote the plotter on the weekend.)
I did this bit in Excel. If the money amounts were in column A, I inserted three columns to the right: B for the currency (assumed USD unless otherwise specified), C for the minimum of the range given, D for the maximum. In colu...
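For anyone who'd rather script that step than do it by hand, here's a rough Python sketch of the same idea (the amount formats and the default-to-USD rule are my assumptions about the data, not necessarily exactly what's in imdata.csv):

```python
import re

def parse_amount(raw):
    """Split a free-text money amount into (currency, min, max).

    Assumes entries look like "500", "$1,000 - 2,000", "AUD 300", etc.
    Currency defaults to USD when none is given; single values get
    min == max. This mirrors the Excel columns B/C/D described above.
    """
    if not raw or not raw.strip():
        return ("USD", None, None)
    text = raw.replace(",", "")
    currency_match = re.search(r"[A-Z]{3}", text)             # e.g. AUD, GBP
    currency = currency_match.group(0) if currency_match else "USD"
    numbers = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", text)]
    if not numbers:
        return (currency, None, None)
    return (currency, min(numbers), max(numbers))

print(parse_amount("$1,000 - 2,000"))   # ('USD', 1000.0, 2000.0)
print(parse_amount("AUD 300"))          # ('AUD', 300.0, 300.0)
```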
The first 17 entries in imdata.csv have some mixed-up columns, starting (at latest) from
Have you volunteered or worked for any of the following organisations? [Machine Intelligence Research Institute]
until (at least)
Over 2013, which charities did you donate to? [Against Malaria Foundation].
Some of this I can work out (volunteering at "6-10 friends" should obviously be in the friends column), but the blank cells under the AMF donations have me puzzled.
Thanks for this, and thanks for putting the full data on github. I'll have a sift through it tonight and see how far I get towards processing it all (perhaps I'll decide it's too messy and I'll just be grateful for the results in the report!).
I have one specific comment so far: on page 12 of the PDF you have rationality as the third-highest-ranking cause. This was surprisingly high to me. The table in imdata.csv has it as "Improving rationality or science", which is grouping together two very different things. (I am strongly in favour of improving science, such as with open data, a culture of sharing lab secrets and code, etc.; I'm pretty indifferent to CFAR-style rationality.)
A hero means roughly what you'd expect - someone who takes personal responsibility for solving world problems. Kind of like an effective altruist.
What I understand about rationality 'heroes' is limited to what I've gleaned from Miranda's post, but to me it seems like earning to give fits much more naturally into a sidekick category than into a hero category.
Why doesn't it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn't this lower your confidence in the theory?
I think my disagreement is mostly on (1) -- I expect that a correct moral theory would be horrendously complicated. I certainly can't reduce my moral theory to some simple set of principles: there are many realistic circumstances where my principles clash (individual rights versus greater good, say, or plenty of legal battles where it's not clear what a moral decision would be), and I don't kno...
Maybe I've misinterpreted 'repugnant' here? I thought it basically meant "bad", but Google tells me that a second definition is "in conflict or incompatible with", and now that I know this, I'm guessing that it's the latter definition that you are using for 'repugnant'. But I'm finding it difficult to make sense of it all (it carries a really strong negative connotation for me, and I'm not sure if it's supposed to in this context -- there might be nuances that I'm missing), so I'll try to describe my position using other words.
If my m...
If you find this implication repugnant, {you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don't}.
I reject the implication inside the curly brackets that I added. I don't care what would happen to my moral theory if creating these large populations becomes possible; in the unlikely event that I'm still around when it becomes relevant, I'm happy to leave it to future-me to patch up my moral theory in a way that future-me deems appropriate.
As an analogy
I...
Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion 'not actually important', but the drowning child example 'important'?
Because children die of preventable diseases, but no-one creates arbitrarily large populations of people with just-better-than-nothing well-being.
OK -- I mean the hybrid theory -- but I see two possibilities (I don't think it's worth my time reading up on this subject enough to make sure what I mean matches exactly the terminology of the paper(s) you refer to):
In my hybridisation, I've already sacrificed some intuitive principles (improving total welfare versus respecting individual rights, say), by weighing up competing intuitions.
Whatever counter-intuitive implications my mish-mash, sometimes fuzzily defined hybrid theory has, they have been pushed into the realm of "what philosophers can
Hopefully this is my last comment in this thread, since I don't think there's much more I have to say after this.
I don't really mind if people are working on these problems, but it's a looooong way from effective altruism.
Taking into account life-forms outside our observable universe for our moral theories is just absurd. Modelling our actions as affecting an infinite number of our descendants feels a lot more reasonable to me. (I don't know if it's useful to do this, but it doesn't seem obviously stupid.)
Many-worlds is even further away from effec
Letting the child drown in the hope that
a) there's an infinite number of life-forms outside our observable universe, and
b) that the correct moral theory does not simply require counting utilities (or whatever) in some local region
strikes me as far more problematic. More generally, letting the child drown is a reductio of whatever moral system led to that conclusion.
The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists
This topic came up on the 80k blog a while ago and I found it utterly ridiculous then and I find it utterly ridiculous now. The possibility of an infinite amount of happiness outside our light-cone (!) does not pose problems for altruists except insofar as they write philosophy textbooks and have to spend a paragraph explaining that, if mathematically necessary, we only count up utilities in some suitably loca...
The envelope icon next to "Messages" in the top-right (just below the banner) becomes an open envelope when you have a reply. (I think it turns a brighter shade of blue as well? I can't remember.) The icon returns to being a closed envelope after you click on it and presumably see what messages/replies you have.
My values align fairly closely with GiveWell's. If they continue to ask for donations then probably about 20% of my giving next year will go to them (as in the past two years). Apart from that:
GiveWell's preferred split across their recommended charities AMF/SCI/GD/EvAc (Evidence Action, which includes Deworm the World) is 67/13/13/7. Since most of the reasoning behind that split is how much money each charity could reasonably use, and I agree with GiveWell that bednets are really cost-effective, I won't be deviating much from GiveWell's recommendation....
Relying on hoped-for compounding long-term benefits to make donation decisions is at least not a matter of complete consensus (I certainly don't rely on them).
My understanding of your position is:
Human welfare benefits compound, though we don't know how much or for how long (and I am dubious, along with one of the commenters, about a compounding model for this).
Animal welfare benefits might compound if they're caused by human value changes.
In the case of ACE's recommendations, we have three charities which aim to structurally change human society. So we have short-term b...
My internal definition is "take a job (or build a business) so that you donate more than you otherwise would have" [1]. It's too minimalist a definition to work in every case (it'd be unreasonable to call someone on a $1mn salary who donates $1000 "earning to give", even if they wouldn't donate anything on $500k), but if you're the sort of person who considers "how much will I donate to charity" as an input into your choice of job, then I think the definition will work most of the time.
There probably needs to be a threshold ...
It seems like this comes down to a distinction between effective altruism, meaning altruism which is effective, and EA referring to a narrower group of organizations and ideas.
I'm happy to go with your former definition here (I'm dubious about putting the label 'altruism' onto something that's profit-seeking, but "high-impact good things" are to be encouraged regardless). My objection is that I haven't seen anyone make a case that these long-term ideas are cost-effective. e.g.,
...My best guess is that these activities have a significantly lar
Moderate long-run EA doesn't look to me to be close to having fully formed ideas, and so it seems a strange way to introduce people to EA more generally.
you’ll want to make investments in technology
I don't understand this. Is there an appropriate research fund to donate to? Or are we talking about profit-driven capital spending? Or just going into applied science research as part of an otherwise unremarkable career?
and economic growth
Who knows how to make economies grow?
...This will mean better global institutions, smarter leaders, more so
The bottom part of your diagram has lots of boxes in it. Further up, "poverty alleviation is most important" is one box. If there was as much detail in the latter as there is in the former, you could draw an arrow from "poverty alleviation" to a lot of other boxes: economic empowerment, reducing mortality rates, reducing morbidity rates, preventing unwanted births, lobbying for lifting of trade restrictions, open borders (which certainly doesn't exclusively belong below your existential risk bottleneck), education, etc. There could b...
Yes, I agree with that, and it's worth someone making that point. But I think in general it is too common a theme in EA discussion to compare some possible altruistic endeavour (here kidney donation) to perfectly optimal behaviour, and then criticise the endeavour as being sub-optimal -- Ryan even words it as "causing net harm"!
In reality we're all sub-optimal, each in our own many ways. If pointing out that kidney donation is sub-optimal (assuming all the arguments really do hold!) nudges some possible kidney donors to actually donate more of ...
How long would it take to create $2k of value? That's generally 1-2 weeks of work. So if kidney donation makes you lose more than 1-2 weeks of life, and those weeks constitute funds that you would donate, or voluntary contributions that you would make, then it's a net negative activity for an effective altruist.
This can't be the right comparison to make if the 1-2 weeks of life are lost decades from now. The (foregone) altruistic opportunities in 2060 are likely to cost much more than $2000 per 15 DALYs averted.
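As a toy version of that arithmetic (the 4% annual growth in the cost of averting a DALY is a made-up figure purely for illustration):

```python
# All numbers here are illustrative assumptions, not estimates.
cost_per_daly_2015 = 2000 / 15              # ~$133, from the $2k / 15 DALYs figure
annual_cost_growth = 0.04                   # assumed rate at which cheap opportunities dry up
years = 2060 - 2015
cost_of_15_dalys_2060 = 15 * cost_per_daly_2015 * (1 + annual_cost_growth) ** years
print(round(cost_of_15_dalys_2060))         # ~11,700: far more than $2,000
```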
I think the basic shape of your argument ...
That's an unfair comparison.
But it might be a relevant comparison for many people. i.e., I expect that there are people who would be willing to forego some income to donate a kidney (and they may not need to do this, depending on the availability of paid medical leave), but who wouldn't donate all of that income if they kept both kidneys.
I don't understand what you're pointing us to in that link. The main part of the text tells us that ties are usually broken in swing states by drawing lots (so if you did a full accounting of probabilities and expectation values, you'd include some factors of 1/2, which I think all wash out anyway), and that the probability of a tie in a swing state is around 1 in 10^5.
The second half of the post is Randall doing his usual entertaining thing of describing a ridiculously extreme event. (No-one who argues that a marginal vote is valuable for expectation-va...
My main response is that this is worrying about very little -- it doesn't take much time to choose who to vote for once or twice every few years.
But in particular,
2) The risk you incur in going to the place where you vote (a non-trivial likelihood of dying due to unusual traffic that day).
is an overstated concern at least for the US (relative risk around 1.2 of dying on the road on election day compared to non-election days) and Australia (relative risk around 1.03 +/- error analysis I haven't done).
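For what it's worth, here is the rough conversion from relative risk to absolute risk that I have in mind (the road-deaths and population figures are round numbers I haven't checked carefully):

```python
# Very rough US figures, assumed rather than looked up precisely.
us_road_deaths_per_year = 35_000
us_population = 320e6
baseline_daily_risk = us_road_deaths_per_year / (us_population * 365)
extra_risk_on_election_day = baseline_daily_risk * (1.2 - 1)    # relative risk ~1.2
print(f"{extra_risk_on_election_day:.1e}")   # ~6e-8, i.e. roughly 0.06 micromorts
```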
As I said to Peter in our long thread, "Eh whatevs". :P
I don't think I can make anything more than a very weak defence of avoiding DAFs in this situation (the defence would go: "They seem kinda weird from a signalling perspective"). I'm terrible at finance stuff, and a DAF seems like a finance-y thing, and so I avoid them.
Probability that they'll need my money soon:
GAVI: ~0%
AMF: ~50%
SCI: ~100%
You might say "well there's a 50-percentage-point difference at each of those two steps" and think I'm being inconsistent in donating to AMF and not GAVI. But if I try some expectation-value-type calculation, I'll be multiplying the impact of AMF's work by 50% and getting something comparable to SCI, but getting something close to zero for GAVI.
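To spell out the shape of that calculation (the impact numbers below are placeholders chosen only to illustrate the point, not real estimates):

```python
# Hypothetical impact per dollar *if the money is actually needed soon*.
impact_if_needed = {"GAVI": 3.0, "AMF": 2.0, "SCI": 1.0}   # made-up numbers
p_needed_soon    = {"GAVI": 0.0, "AMF": 0.5, "SCI": 1.0}   # my rough probabilities above

for charity, impact in impact_if_needed.items():
    print(charity, impact * p_needed_soon[charity])
# GAVI 0.0  -- close to zero, however good immunisations are
# AMF  1.0  -- comparable to SCI despite the 50% discount
# SCI  1.0
```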
Presumably they've already factored in the relative strength of bednets.
I don't think this is relevant to GiveWell's decision not to recommend AMF.... Immunisations are super-cost-effective, but GiveWell don't make a recommendation in this area because GAVI or UNICEF or whoever already have committed funding for this.
I've got two choices if I want to donate all my donation money this year:
Donate to AMF, which is likely higher impact, but maybe my money won't be spent for a couple of years.
Donate somewhere else, likely lower impact.
I think an AMF ...
The healthcare thing was just an example (though, despite the FAQ on this topic that Owen brought up below, I would still feel dishonest withdrawing from a pledge for this reason). It's the lock-in thing that I just don't feel comfortable with.
I ramped up my donations after discovering GiveWell, and at the time it looked like it cost ~$500 to save a life. Now they reckon it's roughly ten times that amount. The overwhelming moral case for donating today feels around ten times weaker to me than it did in 2009. If the cost per life saved(-equivalent) rise...
I'm not a GWWC member, because I don't want to lock myself in to a pledge. (I've been comfortably over 10% for a few years, and expect that to continue, but I could imagine, e.g., needing expensive medical care in the future and cutting out my donations to pay for that.) For that reason I wouldn't take the pledge in either its current or its proposed form.
My take on this is that it's okay to make a pledge in good faith if you intend to fulfil it and will make an effort to do so even if this becomes inconvenient.
That doesn't mean committing yourself come what may. If we thought we had to carry through on our promises no matter what, nobody would make promises, and the world would be a sorrier place for that. Similarly people getting married usually intend in good faith to stay with the marriage for the rest of their life, and to make an effort to make that work, but I think the process works better by allowi...
The point of that word being there is to reduce the strength of the claim: you're focused on being effective, you're trying hard to be effective, but to say that you are effective is different.
I don't really want to reduce the strength of my claim though[1] -- if I have to be pedantic, I'll talk about being effective in probabilistic expectation-value terms. If donating to our best guesses of the most cost-effective charities we can find today doesn't qualify as "effective", then I don't think there's much use in the word, either to describe ...
Thanks for mentioning that you run EA Melbourne -- I think this difference in perspective is what's driving our -ism/-ist disagreement that I talk about in my earlier comment. I've never been to an EA meetup group (I moved away from Brisbane earlier in February, missing out by about half a year on the new group that's just starting there...), and I'd wondered what EA "looked like" in these contexts. If a lot of it is just meeting up every few weeks for a chat about EA-ish topics, then I agree that "effective altruist" is a dubious ter...
Pretty passively.... Like I'll send some money GiveWell's way later this year to help find effective giving opportunities, but it doesn't feel inside of me as though I'm aspiring to something here. The GiveWell staff might aspire to find those better giving opportunities; I merely help them a bit and hope that they succeed.
I also think that describing ourselves primarily as having a never-ending aspiration is selling us short if we're actually achieving stuff.
I disagree with a bit of the intro and part one.
You can easily say that Effective Altruism answers a question. The question is, "What should I do with my life?" and the answer is, "As much good as possible (or at least a decent step in that direction)." Only if you take that answer as a starting premise can you then say that EA asks the question, "How do I do the most good?"
Conversely, you can just as easily say that feminism doesn't ask whether men and women should be equal (that they should be is the starting premise), it ...
I didn't make an introduction comment in the last post, so I suppose I should do one here. I'm David Barry -- one of the migrated posts from the old blog is authored by the user David_Barry, but I signed up with my usual Internet handle before thinking about the account that had already been made for me. I live in Perth, where I moved for work earlier this year, having previously lived in Brisbane.
I always used to think I'd become a physicist one day, but what was supposed to be a PhD went badly for too long and I escaped with a Master's. I've now been worki...
Speaking from my geographically distant perspective: I definitely saw it as a leader-led shift rather than coming from the rank-and-file. There was always a minority of rank-and-file coming from Less Wrong who saw AI risk as supremely important, but my impression was that this position was disproportionately common in the (then) Centre for Effective Altruism, and there was occasional chatter on Facebook (circa 2014?) that some people there s...