All of pappubahry's Comments + Replies

Saying 'AI safety research is a Pascal's Mugging' isn't a strong response

If I were debating you on the topic, it would be wrong to say that you think it's a Pascal's mugging. But I read your post as being a commentary on the broader public debate over AI risk research, trying to shift it away from "tiny probability of gigantic benefit" in the way that you (and others) have tried to shift perceptions of EA as a whole or the focus of 80k. And in that broader debate, Bostrom gets cited repeatedly as the respectable, mainstream academic who puts the subject on a solid intellectual footing.

(This is in contrast to MIRI, w... (read more)

Saying 'AI safety research is a Pascal's Mugging' isn't a strong response

The New Yorker writer got it straight out of this paper of Bostrom's (paragraph starting "Even if we use the most conservative of these estimates"). I've seen a couple of people report that Bostrom made a similar argument at EA Global.

3 · Robert_Wiblin · 6y
Look, no doubt the argument has been made by people in the past, including Bostrom, who wrote it up for consideration as a counterargument. I do think the 'astronomical waste' argument should be considered, and it's far from obvious that 'this is a Pascal's Mugging' is enough to overcome its strength. But it's also not the main, only, or best reason on which many people who work on these problems would ground their choice to do so. So if you dismiss this argument, then before you dismiss the work, move on and look at what you think is the strongest argument, not the weakest.
Saying 'AI safety research is a Pascal's Mugging' isn't a strong response

I get what you're saying, but, e.g., in the recent profile of Nick Bostrom in the New Yorker:

No matter how improbable extinction may be, Bostrom argues, its consequences are near-infinitely bad; thus, even the tiniest step toward reducing the chance that it will happen is near-­infinitely valuable. At times, he uses arithmetical sketches to illustrate this point. Imagining one of his utopian scenarios—trillions of digital minds thriving across the cosmos—he reasons that, if there is even a one-per-cent chance of this happening, the expected value of redu

... (read more)
3 · Robert_Wiblin · 6y
I've also seen Eliezer (the person who came up with the term Pascal's mugging) give talks where he explicitly disavows this argument.
3 · Robert_Wiblin · 6y
Three things:

i) I bet Bostrom thinks the odds of a collective AI safety effort achieving its goal are better than 1%, which would itself be enough to avoid the Pascal's Mugging situation.

ii) This is a fallback position from which you can defend the work if someone thinks it almost certainly won't work. I don't think we should do that; instead we should argue that we can likely solve the problem. But I see the temptation.

iii) I don't think it's clear you should always reject a Pascal's Mugging (or if you should, it may only be because there are more promising options for enormous returns than giving it to the mugger).
[link] GiveWell's 2015 recommendations are out!

I confess I'm a bit surprised no one else has linked this yet

Judging by GiveWell's Twitter and Facebook feeds, the post is mis-dated -- it only went live about 8 hours ago (at time of writing my comment), rather than 2 or 3 days ago.

2 · Ben_Kuhn · 7y
I know. I'm surprised it took 8 hours :)
Suggestions thread for questions for the 2015 EA Survey

I think this is referring to a common probability question, e.g., example 3 here.

The 2014 Survey of Effective Altruists: Results and Analysis

Thanks Peter! I'll make the top-level post later today.

How did you do that so quickly?

(I might have given the impression that I did this all during a weekend. This isn't quite right -- I spent 2-3 evenings, about 8 hours in total, going from the raw csv files to a nice, compact .js function. Then I wrote the plotter on the weekend.)

I did this bit in Excel. If the money amounts were in column A, I insert three columns to the right: B for the currency (assumed USD unless otherwise specified), C for the min of the range given, D for the max. In colu... (read more)
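The column-splitting step described above could also be sketched in code rather than Excel. This is a hypothetical illustration only -- the function name, the input formats, and the currency handling are my assumptions, not the survey's actual data or the commenter's actual spreadsheet formulas:

```python
import re

# Hypothetical sketch: given a free-text donation amount, extract an
# assumed currency (USD unless otherwise specified) plus the min and max
# of any range (a single amount gives min == max).
def split_amount(raw):
    raw = raw.strip()
    # Look for an explicit currency symbol or code; default to USD.
    m = re.search(r"(USD|GBP|EUR|AUD|\$|£|€)", raw, re.IGNORECASE)
    symbol_map = {"$": "USD", "£": "GBP", "€": "EUR"}
    currency = "USD"
    if m:
        token = m.group(1).upper()
        currency = symbol_map.get(token, token)
    # Pull out all numbers (commas allowed), then take min/max.
    nums = [float(n.replace(",", ""))
            for n in re.findall(r"\d[\d,]*\.?\d*", raw)]
    if not nums:
        return currency, None, None
    return currency, min(nums), max(nums)

print(split_amount("$1,000"))        # ('USD', 1000.0, 1000.0)
print(split_amount("500-1000 GBP"))  # ('GBP', 500.0, 1000.0)
```

The same min/max columns the comment describes then fall straight out of the returned tuple.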

0 · Peter Wildeford · 7y
Well I admire your dedication to do it yourself and not use my conversions. :)

Aww man, you should learn. It's tremendously useful. Not to mention a requirement for any programming job.
The 2014 Survey of Effective Altruists: Results and Analysis

I've made a bar chart plotter thing with the survey data: link.

0 · Peter Wildeford · 7y
Also, do you have the GitHub code for your plotter? Would love to see.
0 · Peter Wildeford · 7y
Woah, this is an impressive data viz accomplishment! You should make it a top-level post -- it's cooler than a comment. :)

Also, ... How did you do that so quickly? We had to pay $60 to get it done manually via virtual assistants.
The 2014 Survey of Effective Altruists: Results and Analysis

The first 17 entries in imdata.csv have some mixed-up columns, starting (at latest) from

Have you volunteered or worked for any of the following organisations? [Machine Intelligence Research Institute]

until (at least)

Over 2013, which charities did you donate to? [Against Malaria Foundation].

Some of this I can work out (volunteering at "6-10 friends" should obviously be in the friends column), but the blank cells under the AMF donations have me puzzled.

1 · Peter Wildeford · 7y
Yes, it looks like the first 17 entries are corrupted for some reason. I'll look into it.
The 2014 Survey of Effective Altruists: Results and Analysis

Thanks for this, and thanks for putting the full data on github. I'll have a sift through it tonight and see how far I get towards processing it all (perhaps I'll decide it's too messy and I'll just be grateful for the results in the report!).

I have one specific comment so far: on page 12 of the PDF you have rationality as the third-highest-ranking cause. This was surprisingly high to me. The table in imdata.csv has it as "Improving rationality or science", which is grouping together two very different things. (I am strongly in favour of improving science, such as with open data, a culture of sharing lab secrets and code, etc.; I'm pretty indifferent to CFAR-style rationality.)

1 · Peter Wildeford · 7y
Good point. Yes, this is my bad, I forgot that part. Definitely a mistake. Definitely will break that apart next time.
I am Samwise [link]

A hero means roughly what you'd expect - someone who takes personal responsibility for solving world problems. Kind of like an effective altruist.

What I understand about rationality 'heroes' is limited to what I've gleaned from Miranda's post, but to me it seems like earning to give fits much more naturally into a sidekick category than into a hero category.

Problems and Solutions in Infinite Ethics

Why doesn't it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn't this lower your confidence in the theory?

I think my disagreement is mostly on (1) -- I expect that a correct moral theory would be horrendously complicated. I certainly can't reduce my moral theory to some simple set of principles: there are many realistic circumstances where my principles clash (individual rights versus greater good, say, or plenty of legal battles where it's not clear what a moral decision would be), and I don't kno... (read more)

Problems and Solutions in Infinite Ethics

Maybe I've misinterpreted 'repugnant' here? I thought it basically meant "bad", but Google tells me that a second definition is "in conflict or incompatible with", and now that I know this, I'm guessing that it's the latter definition that you are using for 'repugnant'. But I'm finding it difficult to make sense of it all (it carries a really strong negative connotation for me, and I'm not sure if it's supposed to in this context -- there might be nuances that I'm missing), so I'll try to describe my position using other words.

If my m... (read more)

1 · Pablo · 7y
Thank you for the clarification. I think I understand your position now. Why doesn't it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn't this lower your confidence in the theory? After all, our justification for believing a moral theory seems to turn on (1) the theory's simplicity and (2) the degree to which it fits our intuitions. When you learn that your theory has counterintuitive implications, this causes you to either restrict the scope of the theory, and thus make it more complex, or recognize that it doesn't fit the data as well as you thought before. In either case, it seems you should update by believing the theory to a lesser degree.
Problems and Solutions in Infinite Ethics

If you find this implication repugnant, {you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don't}.

I reject the implication inside the curly brackets that I added. I don't care what would happen to my moral theory if creating these large populations becomes possible; in the unlikely event that I'm still around when it becomes relevant, I'm happy to leave it to future-me to patch up my moral theory in a way that future-me deems appropriate.

As an analogy

I... (read more)

0 · Pablo · 7y
I find your position unclear. On the one hand, you suggest that thought experiments involving situations that aren't actual don't constitute a problem for a theory (first quote above). On the other hand, you imply that they do constitute a problem, which is addressed by restricting the scope of the theory so that it doesn't apply to such situations (second quote above). Could you clarify?
Problems and Solutions in Infinite Ethics

Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion 'not actually important', but the drowning child example 'important'?

Because children die of preventable diseases, but no-one creates arbitrarily large populations of people with just-better-than-nothing well-being.

1 · Pablo · 7y
I'm sorry, but I don't understand this reply. Suppose you can in fact create arbitrarily large populations of people with lives barely worth living. Some moral theories would then imply that this is what you should do. If you find this implication repugnant, you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don't. As an analogy, consider Kant's theory, which implies that a man who is hiding a Jewish family should tell the truth when Nazi officials question him about it. It would be strange to defend Kant's theory by alleging that, in fact, no actual person ever found himself in that situation. What matters is that the situation is possible, not whether the situation is actual. But maybe I'm misunderstanding what you meant by "not actually important"?
Problems and Solutions in Infinite Ethics

OK -- I mean the hybrid theory -- but I see two possibilities (I don't think it's worth my time reading up on this subject enough to make sure what I mean matches exactly the terminology of the paper(s) you refer to):

  • In my hybridisation, I've already sacrificed some intuitive principles (improving total welfare versus respecting individual rights, say), by weighing up competing intuitions.

  • Whatever counter-intuitive implications my mish-mash, sometimes fuzzily defined hybrid theory has, they have been pushed into the realm of "what philosophers can

... (read more)
1 · AGB · 7y
Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion 'not actually important', but the drowning child example 'important'? Both are hypotheticals, both are trying to highlight contradictions in our intuitions about the world, both require you to either (a) put up with the fact that your theory is self-contradictory or (b) accept something that most people would consider unusual/counter-intuitive.
Problems and Solutions in Infinite Ethics

If that procedure was followed consistently, it would disprove all moral theories.

I consider this a reason to not strictly adhere to any single moral theory.

8 · Pablo · 7y
This statement is ambiguous. It either means that you adhere to a hybrid theory made up of parts of different moral theories, or that you don't adhere to a moral theory at all. If you adhere to a hybrid moral theory, this theory is itself subject to the impossibility theorems, so it, too, will have counterintuitive implications. If you adhere to no theory at all, then nothing is right or wrong; a fortiori, not rescuing the child isn't wrong, and a theory's implying that not rescuing the child isn't wrong cannot therefore be a reason for rejecting this theory.
Problems and Solutions in Infinite Ethics

Hopefully this is my last comment in this thread, since I don't think there's much more I have to say after this.

  1. I don't really mind if people are working on these problems, but it's a looooong way from effective altruism.

  2. Taking into account life-forms outside our observable universe for our moral theories is just absurd. Modelling our actions as affecting an infinite number of our descendants feels a lot more reasonable to me. (I don't know if it's useful to do this, but it doesn't seem obviously stupid.)

  3. Many-worlds is even further away from effec

... (read more)
4 · Lila · 7y
I think the relevance of this post is that it tentatively endorses some type of time-discounting (and also space-discounting?) in utilitarianism. This could be relevant to considerations of the far future, which many EAs think is very important. Though presumably we could make the asymptotic part of the function as far away as we like, so we shouldn't run into any asymptotic issues?
Problems and Solutions in Infinite Ethics

Letting the child drown in the hope that

a) there's an infinite number of life-forms outside our observable universe, and

b) that the correct moral theory does not simply require counting utilities (or whatever) in some local region

strikes me as far more problematic. More generally, letting the child drown is a reductio of whatever moral system led to that conclusion.

4 · Pablo · 7y
Population ethics (including infinite ethics) is replete with impossibility theorems showing that no moral theory can satisfy all of our considered intuitions. (See this paper [http://www.repugnant-conclusion.com/population-ethics.pdf] for an overview.) So you cannot simply point to a counterintuitive implication and claim that it disproves the theory from which it follows. If that procedure was followed consistently, it would disprove all moral theories.
Problems and Solutions in Infinite Ethics

That is precisely the argument that I maintain is only a problem for people who want to write philosophy textbooks, and even then one that should only take a paragraph to tidy up. It is not an issue for altruists otherwise -- everyone saves the drowning child.

Problems and Solutions in Infinite Ethics

The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists

This topic came up on the 80k blog a while ago and I found it utterly ridiculous then and I find it utterly ridiculous now. The possibility of an infinite amount of happiness outside our light-cone (!) does not pose problems for altruists except insofar as they write philosophy textbooks and have to spend a paragraph explaining that, if mathematically necessary, we only count up utilities in some suitably loca... (read more)

1 · Gregory_Lewis · 7y
I sympathize with this. It seems likely that the accessible population of our actions is finite, so I'm not sure one need necessarily be worried about what happens in the infinite case. I'm unworried if my impact on earth across its future is significantly positive, yet the answer to whether I've made the (possibly infinite) universe better is undefined.

However, one frustration for this tactic is that infinitarian concerns can 'slip in' whenever afforded a non-zero credence. So although, given our best physics, it is overwhelmingly likely that the morally relevant domain of our actions will be constrained by a lightcone only finitely extended in the later-than direction (because of heat death, proton decay, etc.), we should assign some non-zero credence to our best physics being mistaken: perhaps life-permitting conditions could continue indefinitely, or we could wring out life asymptotically faster than the second law, etc. These 'infinite outcomes' swamp the expected value calculation, and so infinitarian worries loom large.
2 · Ben_West · 7y
Thanks for the feedback. A couple of thoughts:

1. I actually agree with you that most people shouldn't be worried about this (hence my disclaimer that this is not for a general audience). But that doesn't mean no one should care about it.

2. Whether we are concerned about an infinite amount of time or an infinite amount of space doesn't really seem relevant to me at a mathematical level, hence why I grouped them together.

3. As per (1), it might not be a good use of your time to worry about this. But if it is, I would encourage you to read the paper of Nick Bostrom's that I linked above, since I think "just look in a local region" is too flippant. E.g. there may be an infinite number of Everett branches we should care about, even if we restrict our attention to earth.
2 · AGB · 7y
"No-one responds to the drowning child by saying, 'well there might be an infinite number of sentient life-forms out there, so it doesn't matter if the child drowns or I damage my suit'. It is just not a consideration."

"It is not an issue for altruists otherwise -- everyone saves the drowning child."

I don't understand what you are saying here. Are you claiming that because 'everyone' does X, or because 'no-one' fails to do X, X must be morally correct? (I put those in quotation marks because I presume you don't literally mean what you wrote; rather, you mean the vast majority of people would/would not do X.) That strikes me as... problematic.
1 · Pablo · 7y
The text immediately following the passage you quoted reads: This implies that the quantity of happiness in the universe stays the same after you save the drowning child. So if your reason for saving the child is to make the world a better place, you should be troubled by this implication.
Open Thread 6

The envelope icon next to "Messages" in the top-right (just below the banner) becomes an open envelope when you have a reply. (I think it turns a brighter shade of blue as well? I can't remember.) The icon returns to being a closed envelope after you click on it and presumably see what messages/replies you have.

0 · Larks · 7y
Thanks very much! Very helpful.
0 · RyanCarey · 7y
Yes, the envelope goes to a light, bright blue open envelope.
Where are you giving and why?

My values align fairly closely with GiveWell's. If they continue to ask for donations then probably about 20% of my giving next year will go to them (as in the past two years). Apart from that:

GiveWell's preferred split across their recommended charities AMF/SCI/GD/EvAc (Evidence Action, which includes Deworm the World) is 67/13/13/7. Since most of the reasoning behind that split is how much money each charity could reasonably use, and I agree with GiveWell that bednets are really cost-effective, I won't be deviating much from GiveWell's recommendation.... (read more)

The new Animal Charity Evaluators recommendations are out

Relying on hoped-for compounding long-term benefits to make donation decisions is at least not a complete consensus (I certainly don't).

My understanding of your position is:

  • Human welfare benefits compound, though we don't know how much or for how long (and I am dubious, along with one of the commenters, about a compounding model for this).

  • Animal welfare benefits might compound if they're caused by human value changes.

In the case of ACE's recommendations, we have three charities which aim to structurally change human society. So we have short-term b... (read more)

Open Thread 6

My internal definition is "take a job (or build a business) so that you donate more than you otherwise would have" [1]. It's too minimalist a definition to work in every case (it'd be unreasonable to call someone on a $1mn salary who donates $1000 "earning to give", even if they wouldn't donate anything on $500k), but if you're the sort of person who considers "how much will I donate to charity" as an input into their choice of job, then I think the definition will work most of the time.

There probably needs to be a threshold ... (read more)

0 · Tom_Ash · 7y
I think that'd have to be "in part so that you donate more than you otherwise would have". And it doesn't capture people who'd have taken their jobs anyway, like several EtG-ers in finance. But this is nitpicking - it's a pretty good definition.
Why long-run focused effective altruism is more common sense

It seems like this comes down to a distinction between effective altruism, meaning altruism which is effective, and EA referring to a narrower group of organizations and ideas.

I'm happy to go with your former definition here (I'm dubious about putting the label 'altruism' onto something that's profit-seeking, but "high-impact good things" are to be encouraged regardless). My objection is that I haven't seen anyone make a case that these long-term ideas are cost-effective. e.g.,

My best guess is that these activities have a significantly lar

... (read more)
Why long-run focused effective altruism is more common sense

Moderate long-run EA doesn't look close to having fully formed ideas to me, and therefore it seems to me a strange way to introduce people to EA more generally.

you’ll want to make investments in technology

I don't understand this. Is there an appropriate research fund to donate to? Or are we talking about profit-driven capital spending? Or just going into applied science research as part of an otherwise unremarkable career?

and economic growth

Who knows how to make economies grow?

This will mean better global institutions, smarter leaders, more so

... (read more)
3 · Benjamin_Todd · 7y
To clarify, I was defining the different forms of EA more along the lines of 'how they evaluate impact', rather than which specific projects they think are best. Short-run focused EA focuses on evaluating short-run effects. Long-run focused EA also tries to take account of long-run effects. Extreme long-run EA combines a focus on long-run effects with other unintuitive positions such as a focus on specific x-risks. Moderate long-run EA doesn't.

The point of moderate long-run EA is that it's much less clear which interventions are best by these standards. I wasn't trying to say that moderate long-run EA should focus on promoting economic growth and building better institutions, just that these are valuable outcomes, and it's pretty unclear that we should prefer malaria nets (which were mainly selected on the basis of short-run immediate impact) to other efforts to do good that are widely pursued by smart altruists outside of the EA community.

A moderate long-run EA could even think that malaria nets are the best thing (at least for money, if not human capital), but they'll be more uncertain and give greater emphasis to the flow-through effects. Yes, moderate long-run EA is more uncertain and doesn't have "fully formed" answers -- but that's the situation we're actually in.
6 · Tom_Ash · 8y
I'd also find it helpful to know the answers to these questions. In particular, to compare like with like, it would be interesting to know how advocates of long-run focused interventions would recommend spending a thousand dollars, rather than funding, say, bednet distribution. This is a key action-relevant question for me and others.

I've asked quite a few people, but haven't yet heard an answer that I've personally been impressed by. I also haven't been given many specific charities or interventions, which leaves the argument in the realm of intellectually interesting theory rather than concrete practicality. Of course this isn't to say that there aren't any, which is why I ask! (I have made an effort to ask quite a few far-future focused people though.)

(I know some people advocate saving your money until a good opportunity comes up. Paul has an interesting discussion of this here [http://rationalaltruist.com/2014/05/04/we-can-probably-influence-the-far-future/].)
3 · Paul_Christiano · 8y
EAs haven't been as substantially involved in science funding, but it's a pretty common target for philanthropy. And many people invest in technology, or pursue careers in technology, in the interests of making the world better. My best guess is that these activities have a significantly larger medium-term humanitarian impact than aid. I think this is a common view amongst intellectuals in the US. We probably all agree that it's not a clear case either way.

The story with social science, political advocacy, etc., is broadly similar to the story with technology, though I think it's less likely to be as good as poverty alleviation (or at least the case is more speculative). Note that e.g. spending money to influence elections is a pretty common activity; it seems weird to be so skeptical. And while open borders is very speculative, immigration liberalization isn't. I think the prevailing wisdom is that immigration liberalization is good for welfare, and there are many other technocratic policies in the same boat, where you'd expect money to be helpful.

It seems like this comes down to a distinction between effective altruism, meaning altruism which is effective, and EA referring to a narrower group of organizations and ideas. I am more interested in the former, which may account for my different view on this point. The point of the introduction also depends on who you are talking to and why (I mostly talk with people whose main impact on the world is via their choice of research area, rather than charitable donations; maybe that means I'm not the target audience here).
4 · Alexander_Berger · 8y
I agree, and I'd add that what I see as one of the key ideas of effective altruism, that people should give substantially more than is typical, is harder to get off the ground in this framework. Singer's pond example, for all its flaws, makes the case for giving a lot quite salient, in a way that I don't think general considerations about maximizing the impact of your philanthropy in the long term are going to.
Open thread 5

The bottom part of your diagram has lots of boxes in it. Further up, "poverty alleviation is most important" is one box. If there was as much detail in the latter as there is in the former, you could draw an arrow from "poverty alleviation" to a lot of other boxes: economic empowerment, reducing mortality rates, reducing morbidity rates, preventing unwanted births, lobbying for lifting of trade restrictions, open borders (which certainly doesn't exclusively belong below your existential risk bottleneck), education, etc. There could b... (read more)

Open thread 5

I haven't seen a downvote here that I've agreed with, and for the moment I'd prefer an only-upvote system. I don't know where I'd draw the line on where downvoting is acceptable to me (or what guidelines I'd use); I just know I haven't drawn that line yet.

2 · jayd · 8y
Having some downvoting is good, and part of the raison d'etre of this forum as opposed to the Facebook group. I agree that people downvote slightly too often, but that's a matter of changing the norms.
Kidney donation is a reasonable choice for effective altruists and more should consider it

Yes, I agree with that, and it's worth someone making that point. But I think in general it is too common a theme in EA discussion to compare some possible altruistic endeavour (here kidney donation) to perfectly optimal behaviour, and then criticise the endeavour as being sub-optimal -- Ryan even words it as "causing net harm"!

In reality we're all sub-optimal, each in our own many ways. If pointing out that kidney donation is sub-optimal (assuming all the arguments really do hold!) nudges some possible kidney donors to actually donate more of ... (read more)

Kidney donation is a reasonable choice for effective altruists and more should consider it

How long would it take to create $2k of value? That's generally 1-2 weeks of work. So if kidney donation makes you lose more than 1-2 weeks of life, and those weeks constitute funds that you would donate, or voluntary contributions that you would make, then it's a net negative activity for an effective altruist.

This can't be the right comparison to make if the 1-2 weeks of life is lost decades from now. The (foregone) altruistic opportunities in 2060 are likely to cost much more than $2000 per 15 DALYs averted.

I think the basic shape of your argument ... (read more)

Kidney donation is a reasonable choice for effective altruists and more should consider it

That's an unfair comparison.

But it might be a relevant comparison for many people. i.e., I expect that there are people who would be willing to forego some income to donate a kidney (and they may not need to do this, depending on the availability of paid medical leave), but who wouldn't donate all of that income if they kept both kidneys.

5 · ruthie · 8y
I think Ben's criticism is fair, in that a perfectly rational altruist wouldn't make it. That is, if you are willing to give up three weeks of income to donate a kidney, you should be willing to work for three weeks and donate all of your income, not just whatever percentage you donate normally. This is not to say that it's an unreasonable decision in all cases-- taking three weeks off of work to donate a kidney has all sorts of other consequences (you probably get to do a lot of reading while you're stuck in bed), but from a first order altruistic standpoint, at the income level I mentioned it still wouldn't make sense.
Open Thread 4

I don't understand what you're pointing us to in that link. The main part of the text tells us that ties are usually broken in swing states by drawing lots (so if you did a full accounting of probabilities and expectation values, you'd include some factors of 1/2, which I think all wash out anyway), and that the probability of a tie in a swing state is around 1 in 10^5.

The second half of the post is Randall doing his usual entertaining thing of describing a ridiculously extreme event. (No-one who argues that a marginal vote is valuable for expectation-va... (read more)
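For concreteness, the expectation-value accounting the comment alludes to might be sketched as follows. All numbers here are illustrative placeholders (including the value placed on swinging an election), and whether the lot-drawing factors of 1/2 wash out is, as noted above, debatable:

```python
# Rough, illustrative sketch of the expected value of one swing-state vote:
# if the state is exactly tied with probability ~1e-5 and ties are settled
# by drawing lots, one extra vote turns a coin flip into a sure win, i.e.
# it flips the outcome with probability ~0.5 * 1e-5.
p_tie = 1e-5                # P(your state is exactly tied) -- placeholder
p_decisive = 0.5 * p_tie    # one of the factors of 1/2 mentioned above
value_of_swing = 1e9        # hypothetical dollar value of changing the outcome
expected_value = p_decisive * value_of_swing
print(expected_value)       # 5000.0
```

Even with these made-up inputs, the point stands that a tiny probability times an enormous stake need not be negligible.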

Open Thread 4

My main response is that this is worrying about very little -- it doesn't take much time to choose who to vote for once or twice every few years.

But in particular,

2) The risk you incur in going to the place where you vote (a non-trivial likelihood of dying due to unusual traffic that day).

is an overstated concern at least for the US (relative risk around 1.2 of dying on the road on election day compared to non-election days) and Australia (relative risk around 1.03 +/- error analysis I haven't done).

0 · Larks · 8y
Yes, you're right that election day doesn't add much to the danger. But the baseline risk of dying on the road is pretty high relative to other risks you probably face, so if you thought the benefits of voting were negligible this one might be a significant element of your calculus.
Open Thread 4

That's OK, even if I had perceived it as an attack, I've thought enough about this topic for it not to bother me!

Open Thread 4

As I said to Peter in our long thread, "Eh whatevs". :P

I don't think I can make anything more than a very weak defence of avoiding DAFs in this situation (the defence would go: "They seem kinda weird from a signalling perspective"). I'm terrible at finance stuff, and a DAF seems like a finance-y thing, and so I avoid them.

1 · Tom_Ash · 8y
Well I can't argue with "whatevs" ;) I hope you don't feel like Peter and I have been attacking your choice of donation - I see where you're coming from, and AMF is a great charity, RFMF concerns apart!
Open Thread 4

Probability that they'll need my money soon:

GAVI: ~0%

AMF: ~50%

SCI: ~100%

You might say "well there's a 50-percentage-point difference at each of those two steps" and think I'm being inconsistent in donating to AMF and not GAVI. But if I try some expectation-value-type calculation, I'll be multiplying the impact of AMF's work by 50% and getting something comparable to SCI, but getting something close to zero for GAVI.
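The asymmetry described above can be made concrete with a toy calculation. The "probability they'll need my money soon" figures are the ones from the comment; the per-dollar impact numbers are made-up placeholders chosen only so that bednets look somewhat more effective than deworming, as the comment assumes.

```python
# Toy expectation-value comparison. The probabilities come from the
# comment above; the impact numbers are illustrative placeholders.
charities = {
    # name: (assumed impact per dollar if used, P(needs my money soon))
    "GAVI": (1.5, 0.0),
    "AMF":  (1.5, 0.5),
    "SCI":  (0.8, 1.0),
}

for name, (impact, p_needed) in charities.items():
    print(name, impact * p_needed)
# GAVI 0.0   -- higher raw impact, but ~0% chance the money is needed
# AMF  0.75  -- comparable to SCI once the 50% discount is applied
# SCI  0.8
```

So the two 50-percentage-point gaps are not symmetric: discounting AMF by half still leaves it in SCI's range, while discounting GAVI by its ~0% chance leaves essentially nothing.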

Open Thread 4

AMF is far more likely to need the money soon than GAVI.

1 Peter Wildeford 8y
But SCI is far more likely to need the money soon than AMF.
Open Thread 4

Presumably they've already factored in the relative strength of bednets.

I don't think this is relevant to GiveWell's decision not to recommend AMF.... Immunisations are super-cost-effective, but GiveWell don't make a recommendation in this area because GAVI or UNICEF or whoever already have committed funding for this.

I've got two choices if I want to donate all my donation money this year:

  • Donate to AMF, which is likely higher impact, but maybe my money won't be spent for a couple of years.

  • Donate somewhere else, likely lower impact.

I think an AMF ... (read more)

-1 Peter Wildeford 8y
So why not donate to immunizations, then?
Open Thread 4

It's some kind of balancing act between supporting GiveWell-recommended charities as a way of supporting GiveWell, and recognising that our best guess is that bednets are substantially more cost-effective than deworming/cash transfers. (Pending the forthcoming update....)

0 Peter Wildeford 8y
Not to begrudge you too much because I'm delighted that you're donating, but do you think GiveWell is wrong about AMF? Presumably they've already factored in the relative strength of bednets.
Open Thread 4

About a quarter of my donations this year will go to AMF. I'd feel a bit weird holding on to the money instead of donating it.

0 Tom_Ash 8y
A middle ground would be to put the money into a donor-advised fund, and then wait and see if AMF gain room for more funding. That way, you can direct the money elsewhere if they don't.
1 Peter Wildeford 8y
Why AMF and not somewhere else?
Should Giving What We Can change its Pledge?

The healthcare thing was just an example (though, despite the FAQ on this topic that Owen brought up below, I would still feel dishonest withdrawing from a pledge for this reason). It's the lock-in thing that I just don't feel comfortable with.

I ramped up my donations after discovering GiveWell, and at the time it looked like it cost ~$500 to save a life. Now they reckon it's roughly ten times that amount. The overwhelming moral case for donating today feels around ten times weaker to me than it did in 2009. If the cost per life saved(-equivalent) rise... (read more)

0 Larks 8y
Yes, it seems that EAs have not really addressed this issue. Especially as GiveWell have said their estimates are still likely to be overly optimistic. At this point the 'child in the pond' example fails to accurately describe the trade-off involved.
Should Giving What We Can change its Pledge?

I'm not a GWWC member, because I don't want to lock myself in to a pledge. (I've been comfortably over 10% for a few years, and expect that to continue, but I could imagine, e.g., needing expensive medical care in the future and cutting out my donations to pay for that.) For that reason I wouldn't take the pledge in either its current or its proposed form.

My take on this is that it's okay to make a pledge in good faith if you intend to fulfil it and will make an effort to do so even if this becomes inconvenient.

That doesn't mean committing yourself come what may. If we thought we had to carry through on our promises no matter what, nobody would make promises, and the world would be a sorrier place for that. Similarly people getting married usually intend in good faith to stay with the marriage for the rest of their life, and to make an effort to make that work, but I think the process works better by allowi... (read more)

6 Toby_Ord 8y
I don't think this need stop you from taking the pledge. We think of it like making a promise to do something. It is perfectly reasonable to promise to do something (say to pick up a friend's children from school) even if there is a chance you will have to pull out (e.g. if you got sick). We don't usually think of small foreseeable chances of having to pull out as a reason not to make promises, so I wouldn't worry about that here. I think this is mentioned on our FAQ page -- if not, it should be. Another approach is to make sure you have enough health insurance (possibly supplementing your country's public insurance, though I don't think that is needed in the UK), and maybe getting income insurance too. It should be possible to have enough of both kind and still donate 10%.
Effective Altruism is a Question (not an ideology)

The point of that word being there is to reduce the strength of the claim: you're focused on being effective, you're trying hard to be effective, but to say that you are effective is different.

I don't really want to reduce the strength of my claim though[1] -- if I have to be pedantic, I'll talk about being effective in probabilistic expectation-value terms. If donating to our best guesses of the most cost-effective charities we can find today doesn't qualify as "effective", then I don't think there's much use in the word, either to describe ... (read more)

Effective Altruism is a Question (not an ideology)

Thanks for mentioning that you run EA Melbourne -- I think this difference in perspective is what's driving our -ism/-ist disagreement that I talk about in my earlier comment. I've never been to an EA meetup group (I moved away from Brisbane earlier in February, missing out by about half a year on the new group that's just starting there...), and I'd wondered what EA "looked like" in these contexts. If a lot of it is just meeting up every few weeks for a chat about EA-ish topics, then I agree that "effective altruist" is a dubious ter... (read more)

Effective Altruism is a Question (not an ideology)

Pretty passively.... Like I'll send some money GiveWell's way later this year to help find effective giving opportunities, but it doesn't feel inside of me as though I'm aspiring to something here. The GiveWell staff might aspire to find those better giving opportunities; I merely help them a bit and hope that they succeed.

I also think that describing ourselves primarily as having a never-ending aspiration is selling us short if we're actually achieving stuff.

0 Helen 8y
I think it's fair to say that "aspiring" doesn't quite fit for you. The point of that word being there is to reduce the strength of the claim: you're focused on being effective, you're trying hard to be effective, but to say that you are effective is different. Maybe the slightly poor epistemology doesn't matter enough to make up for the much clearer name... I'm not sure.
Effective Altruism is a Question (not an ideology)

I disagree with a bit of the intro and part one.

You can easily say that Effective Altruism answers a question. The question is, "What should I do with my life?" and the answer is, "As much good as possible (or at least a decent step in that direction)." Only if you take that answer as a starting premise can you then say that EA asks the question, "How do I do the most good?"

Conversely, you can just as easily say that feminism doesn't ask whether men and women should be equal (that they should be is the starting premise), it ... (read more)

1 Helen 8y
I think this is the key part of our disagreement - I don't think this is the case - and I've answered more fully in my comment in reply to Kerry [http://effective-altruism.com/ea/9s/effective_altruism_is_a_question_not_an_ideology/142] . Would love to hear your thoughts there.
1 RyanCarey 8y
Do you aspire to find donation targets that are more effective?
One month in - it's time for more introductions

I didn't make an introduction comment in the last post, so I suppose I should do one here. I'm David Barry -- one of the migrated posts from the old blog is authored by the user David_Barry, but I signed up my usual Internet handle before thinking about the account that had already been made for me. I live in Perth, where I moved for work earlier this year, having previously lived in Brisbane.

I always used to think I'd become a physicist one day, but what was supposed to be a PhD went badly for too long and I escaped with a Master's. I've now been worki... (read more)

Open Thread 2

I was too lazy to specify that I was talking about the world as it is.

A couple might have a third (or first, or...) child, or they might not. I can accept that the two possibilities lead to slightly different total or average utilities, but as I said, I am not utilitarian on this point. I think we just allow people to choose how many children they have, and we build the rest of ethics around that.

0 [anonymous] 8y
I think in the world as it is, allowing people to choose how many children they have is exactly the utilitarian thing to do. Of course, there are forms of persuasion other than coercion. Some ideas like liberal eugenics have world-improvement potential imo.
Open Thread 2

To me, the decision (freely made) to have children is morally neutral -- I am not utilitarian on this topic.

Birth rates usually fall substantially as female education levels rise and women become more empowered generally. I would be happier about the world if countries that currently have high birth rates see those birth rates fall thanks to better education levels etc. The sort of drastic fall in birth rates seen in, e.g., South Korea and Iran, is caused by large society-wide changes, and I don't think it's likely that as an outside donor I can do anyt... (read more)

1 Tom_Ash 8y
Bernadette Young wrote a great post on this decision (as made by individual parents) here [http://www.effective-altruism.com/ea/66/parenthood_and_effective_altruism/]. For my part, I think that it's healthy to have some parts of your life which you dedicate to doing what seems morally best, and some which you treat as personal, and that having kids should clearly be treated as personal (i.e. you shouldn't agonise about whether it's morally optimal). And I say that as someone who probably doesn't want kids myself, a position that's informed but not determined by ethical concerns.