All of Alexander_Berger's Comments + Replies

FWIW I think I'm an example of Type 1 (literally, in Lorenzo's data) and I also agree that abstractly more of Type 2 would be helpful (but I think there are various tradeoffs and difficulties that make it not straightforwardly clear what to do about it). 

3
yanni kyriacos
1mo
Just to be clear, my conception of Type 2 is that they're still involved in EA (e.g. through ETG, volunteering, meetups), but their job isn't, for some period of time. Which is partly why I think 25%/75% is better than 50/50

Exciting news! I worked closely with Zach at Open Phil before he left to be interim CEO of EV US, and was sad to lose him, but I was happy for EV at the time, and I'm excited now for what Zach will be able to do at the helm of CEA.

Great to hear about finding such a good fit, thanks for sharing!

Hi Dustin :)

FWIW I also don't particularly understand the normative appeal of democratizing funding within the EA community. It seems to me like the common normative basis for democracy would tend to argue for democratizing control of resources in a much broader way, rather than within the self-selected EA community. I think epistemic/efficiency arguments for empowering more decision-makers within EA are generally more persuasive, but wouldn't necessarily look like "democracy" per se and might look more like more regranting, forecasting tournaments, etc.

This is a great point, Alexander. I suspect some people, like ConcernedEAs, believe the specific ideas are superior in some way to what we do now, and it's just convenient to give them a broad label like "democratizing". (At Asana, we're similarly "democratizing" project management!) 

Others seem to believe democracy is intrinsically  superior to other forms of governance; I'm quite skeptical of that, though agree with tylermjohn that it is often the best way to avoid specific kinds of abuse and coercion. Perhaps in our context there might be more... (read more)

6
Amber Dawn
1y
Yeah, I definitely agree with this!  An idea I've been kicking around in my head for a while is 'someone should found an organization that investigates what existing humans' moral priorities are' - like, if there were a world democracy, what would it vote for?  An idea for a limited version of this within EA could be representatives for interest groups or nations. E.g., the Future Design movement suggests that in decision-making bodies, there should be some people whose role is to advocate for the interests of future generations. There could similarly be a mechanism where (eg) animals got a certain number of votes through human advocates. 
2
Guy Raveh
1y
(Sorry, the formatting here doesn't seem to work but I don't know how to fix it) I think there are two aspects that make "the EA community" a good candidate for who should make decisions:
1. The need to balance between "getting all perspectives by involving the entire world" and "making sure it's still about doing the most good possible". It's much less vetting for value-alignment than the current state, but still some. I'm not sure it's the best point on the scale, but I think it might be better than where we are currently.
1.1. Another thought about this is that maybe we ought to fix the problem where "value alignment" is, as the other post argues, actually taken much more narrowly than agreeing about "doing the most good".
2. The fact that EA is, in the end, a collaborative project and not a corporation. It seems wrong and demotivating to me that EAs have to compete and take big risks on themselves individually to try to have a say about the project they're still expected to participate in.
2.1. Maybe a way for funders to test this is to ask yourselves: if there weren't an EA community, would your plans still work as you expect them to? If not, then I think the community ought to also have some say in making decisions.

Also, the (normative, rather than instrumental) arguments for democratisation in political theory are very often based on the idea that states coerce or subjugate their members, and so the only way to justify (or eliminate) this coercion is through something like consent or agreement. Here we find ourselves in quite a radically different situation.

Just wanted to say that I thought this post was very interesting and I was grateful to read it.

Just wanted to comment to say I thought this was very well done, nice work! I agree with Charles that replication work like this seems valuable and under-supplied.

I enjoyed the book and recommend it to others!

In case of interest to EA forum folks, I wrote a long tweet thread with more substance on what I learned from it and remaining questions I have here: https://twitter.com/albrgr/status/1559570635390562305

6
William_MacAskill
2y
Thanks so much Alexander — it’s a good thread! Highlighting one aspect of it: I agree that being generally silent on prioritization across recommended actions is a way in which WWOTF lacks EA-helpfulness that it could have had. This is just a matter of time and space constraints. For chapters 2-7, my main aim was to respond to someone who says, “You’re saying we can improve the long-term future?!? That’s crazy!”, where my response is “Agree it seems crazy, but actually we can improve the long-term future in lots of ways!” I wasn’t aiming to respond to someone who says “Ok, I buy that we can improve the long-term future. But what’s top-priority?” That would take another few books to do (e.g. one book alone on the magnitude of AI x-risk), and would also be less “timeless”, as our priorities might well change over the coming years. On the “how much do AI and pandemics need longtermism” question - I respond to that line of thinking a bit here (also linked to in the OP).
5
Geoffrey Miller
2y
Good Twitter thread; thanks for sharing it.

Thanks MHR. I agree that one shouldn't need to insist on statistical significance, but if GiveWell thinks that the actual expected effect is ~12% of the MK result, then I think if you're updating on a similarly-to-MK-powered trial, you're almost to the point of updating on a coinflip because of how underpowered you are to detect the expected effect.

I agree it would be useful to do this in a more formal bayesian framework which accurately characterizes the GW priors. It wouldn't surprise me if one of the conclusions was that I'm misinterpreting GiveWell's current views, or that it's hard to articulate a formal prior that gets you from the MK results to GiveWell's current views.
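To make the "updating on a coinflip" worry concrete, here is a rough sketch of the power arithmetic (my illustrative numbers and variable names, not GiveWell's actual model or the exact Miguel-Kremer figures):

```python
from statistics import NormalDist

# Sketch: suppose a replication is sized so the original Miguel-Kremer (MK)
# effect would be detected with 80% power, but the true expected effect is
# only ~12% of the MK estimate. How much power does the trial actually have?
z = NormalDist()
z_alpha = z.inv_cdf(0.975)   # two-sided 5% test -> ~1.96
z_power = z.inv_cdf(0.80)    # 80% power -> ~0.84

# Work in standard-error units: an 80%-powered trial implies
# MK_effect = (z_alpha + z_power) * SE, i.e. about 2.8 SEs.
mk_effect_in_se = z_alpha + z_power
true_effect_in_se = 0.12 * mk_effect_in_se

# Power of that same trial against the much smaller expected effect:
power = 1 - z.cdf(z_alpha - true_effect_in_se)
print(f"power to detect 12% of MK: {power:.1%}")  # ~5%, barely above the false-positive rate
```

Under these assumptions the trial rejects the null only about 5% of the time even when the expected effect is real, so the result is close to uninformative about the effect size GiveWell actually believes in.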

Thanks, appreciate it! FWIW I sympathize with "I have an intuition that low VSLs are a problem and we shouldn't respect them" for some definition of low, but I think it's just a question of what the relevant "low" is.

Thanks Karthik. I think we might be talking past each other a bit, but replying in order on your first four replies:

  1. My key issue with higher etas isn't philosophical disagreement, it's as guidance for practical decision-making. If I had taken your post at face value and used eta=1.5 to value UK GDP relative to other ways we could spend money, I think I would have predictably destroyed a lot of value for the global poor by failing to account for the full set of spillovers (because I think doing so is somewhere between very difficult and impossible). Even wi
... (read more)
5
Karthik Tadepalli
2y
1. Got it, I think I misunderstood that point the first time. Yes, I am convinced that this is an issue that is worth choosing log over isoelastic for.
2. Yes, I agree with the first-order consequence of focusing more on saving lives. The purpose of this is just to compare different approaches that only increase income, and I was just suggesting that a high set point is a sufficient way to avoid having that spill over into unappealing implications for saving lives. It is true that a very high set point is inconsistent with revealed-preference VSLs, though. I don't have a good way to resolve that. I have an intuition that low VSLs are a problem and we shouldn't respect them, but it's not one I can defend, so I think you're right on this.
3. Agreed.
4. I'm on board with the idea of averaging over scenarios à la Weitzman; my original thinking was that a normalizing constant would shrink the scale of differences between the scenarios and thus reduce the effect of outlier etas. But I was confusing two different concepts: a high normalizing constant would reduce the % difference between them, but not the absolute difference between them, which is the important quantity for expected value.

Hey Karthik, starting separate thread for a different issue. I opened your main spreadsheet for the first time, and I'm not positive but I think the 90% reduction claim is due to a spreadsheet error? The utility gain in B5 that flows through to your bottom line takeaway is hardcoded as being in log terms, but if eta changes then the utility gain to $s at the global average should change (and by the way I think it would really matter if you were denominating in units of global average, global median, or global poverty level). In this copy I made a change to... (read more)

You... are absolutely right. That's a very good catch. I think your calculation is correct, as the utility translation only happens twice - utility from productivity growth, which I adjusted, and utility from cash transfers, which I did not. Everything else is unchanged from the original framework.

You're definitely right that it matters whether this is global average/median/poverty level. I think that the issue stems from using productivity as the input to the utility function, rather than income. This is not an issue for log utility if income is directl... (read more)

Hey Karthik,

Thanks for the thoughtful post, I really appreciate it!

Open Phil has thought some about arguments for higher eta but as far as I can find never written them up, so I'll go through some of the relevant arguments in my mind:

  • I think the #1 issue is that as eta gets large, the modeled utility at stake at high income levels approaches zero, which makes it fragile/vulnerable to errors, and those errors are easily decisive because our models do a bad job capturing empirically relevant spillovers that are close to linear rather than logarithmic or wors
... (read more)
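The fragility point about high eta can be illustrated numerically; a minimal sketch, with my own illustrative numbers and function names rather than Open Phil's actual model, assuming standard isoelastic (CRRA) utility:

```python
# Under isoelastic utility, the weight on a marginal dollar at income y,
# relative to a reference income y0, is (y0 / y) ** eta. As eta rises,
# the modeled value of dollars at high incomes collapses toward zero,
# so any unmodeled near-linear spillover easily dominates the result.
def relative_marginal_utility(y, y0, eta):
    """Weight on a marginal dollar at income y vs. reference income y0."""
    return (y0 / y) ** eta

# A dollar at a $60k rich-country income vs. $600/yr extreme poverty (100x gap):
for eta in (1.0, 1.5, 2.0):
    w = relative_marginal_utility(60_000, 600, eta)
    print(f"eta={eta}: rich-country dollar weighted {w:.0e} of a poor-country dollar")
```

With a 100x income gap, moving eta from 1 to 2 shrinks the weight on rich-country dollars from 1/100 to 1/10,000, which is why small modeling errors at high incomes become decisive.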
5
Karthik Tadepalli
2y
Thanks for the points, I should have done more due diligence into the arguments for each framework. That said, I don't see these as fatal flaws:
* I don't know if I see this as a problem. I think it's good for considerations about policy with international spillovers to be dominated by their effect on low-income countries. For example, I think that the welfare effects of US tariffs should be primarily judged by their impact on exporters in low-income countries, and that economic growth in the US is valuable primarily because of spillovers to the rest of the world. Insofar as log utility brackets this effect away, it doesn't seem like the right reasoning process.
* Even if you're uncomfortable with that philosophical commitment, you can still use high etas to evaluate policies that focus on low-income countries, such as growth advocacy. That is considerably narrower than I would like, because I think we should make that philosophical commitment, but it's still a useful set of scenarios.
* Sharpening the tradeoff between life and income is a much bigger problem to me, as I agree that it would be unattractive to place a low value on life. But I don't think that high etas intrinsically imply a low total welfare. Utility functions are not normalized to scale. We can introduce a large constant for the baseline welfare of being alive, as is done in this framework, which has a subsistence welfare s. A high value of s would increase the value of life relative to income, while still maintaining the intuition that each doubling of income is worth less than the last. That s would also be irrelevant for monetary considerations since it would cancel out when looking at the change in utility. Moreover, I think it should be possible to estimate s from IDinsight's work on beneficiary preferences, which retains tractability.
* I have to admit that I did not scrutinize the studies and I am very open to them being flawed. But I think almost everyone would agree that 10% income increa
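For concreteness, the subsistence-welfare proposal corresponds to something like the following form (my notation, following the standard isoelastic utility function; s is the baseline welfare of being alive):

```latex
u(c) = s + \frac{c^{1-\eta} - 1}{1 - \eta}, \qquad \eta > 1
```

Because s enters additively, the utility gain from an income change, $u(c_2) - u(c_1)$, is independent of s, so it cancels out of monetary comparisons while raising the total welfare that a death forgoes.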

I don't have a particularly good estimate on total time, but my impression is that most doctors recommend people plan to take a couple weeks off from office work, which would maybe be 2-3x your 52 hr estimate?

2
NicoleJaneway
2y
Since I was able to be productive right away on personal projects and errands, I didn't count this time in my estimate. I used my time off to learn JavaScript well enough to eventually switch careers from Data Scientist to Frontend Developer, so it was a pretty fruitful period for me.
4
joshcmorrison
2y
Thanks for writing this Nicole! Agree about 2ish weeks off work as the standard, though Alexander and I donated ~ten years ago, and I have some (purely anecdotal) sense that the surgery experience (and recovery) for people like Nicole who've donated since then at a big center might be better.  Also, I think this Annals of Internal Medicine meta-analysis on the risks of kidney donation is a good resource for people who feel comfortable reading academic papers.

Hi Nicole,

I think this is a cool choice and a good post - thanks for both! I agree with your bottom line that kidney donation can be a good choice for EAs and just wanted to flag a few additional resources and considerations:

  • I think these other EA forum posts about the costs and benefits of donation are worth checking out. In my mind the most important update relative to when I donated is that the best long-run studies now suggest a roughly 1 percentage point increase in later-life risk of kidney failure because of donating. I think that translates less th
... (read more)
2
NicoleJaneway
2y
Thank you for your comment, Alexander!
1. Made the update about risk of kidney failure going up 1 percentage point.
2. I felt pretty much good to work the day I left the hospital 🤷‍♀️ maybe my job was uniquely unstrenuous and undemanding. What would you estimate as the time allocated to donation?
3. My donation was nondirected but paired, so I will take all the credit for the counterfactual of making other people's chains longer lol
4. Totally agree that an important part of impact is movement building. We will hopefully have a kidney donor meetup at EAG DC where non-donors can come ask questions.

Hi MHR,

I really appreciate substantive posts like this, thanks!

This response is just speaking for myself, doing rough math on the weekend that I haven't run by anyone else. Someone (e.g., from @GiveWell) should correct me if I'm wrong, but I think you're vastly understating the difficulty and cost of running an informative replication given the situation on deworming. (My math below seems intuitively too pessimistic, so I welcome corrections!)

If you look at slide 58 here, you see that the minimum detectable effect (MDE) size with 80% power can be approximated as... (read more)
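The standard approximation being gestured at can be sketched as follows (my reconstruction of the textbook formula, not necessarily the exact expression on the slide):

```python
from math import sqrt
from statistics import NormalDist

def mde(n_per_arm, sigma, alpha=0.05, power=0.80):
    """Textbook minimum detectable effect for a two-arm RCT with equal arms.

    MDE = (z_{1-alpha/2} + z_{power}) * SE, where SE = sigma * sqrt(2 / n_per_arm).
    With alpha=0.05 and 80% power this is roughly 2.8 standard errors.
    """
    z = NormalDist()
    se = sigma * sqrt(2 / n_per_arm)
    return (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * se

# Key implication for replication cost: detecting an effect k times smaller
# requires roughly k^2 times the sample size, since MDE scales as 1/sqrt(n).
base = mde(1_000, sigma=1.0)
assert abs(mde(1_000 * 64, sigma=1.0) - base / 8) < 1e-9
```

This quadratic scaling is why a replication powered for ~12% of the original effect needs on the order of 70x the original sample, which drives the pessimistic cost math.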

2
Falk Lieder
2y
I think your estimate of how costly it would be to run a replication study is too pessimistic. In addition to the issues that MHR identified, it strikes me as unrealistic that the cost of rerunning the data collection would be more than 10,000 times as high as the cost of the original research project, because data collection usually accounts for at most 10% of the cost of research. Moreover, the cost of data collection does not scale linearly with the number of participants, but with the number of researchers who are paid to coordinate it. The most difficult parts of organizing data collection, such as developing the strategy and establishing contact with high-ranking relevant officials, only have to be done once. There are also economies of scale: once you can collect data from 1 school, it is very little effort to replicate the process with 100 or 1,000 schools, and that work can then be done by local volunteers with minimal training, for minimal pay or free of charge. It certainly won't require 10,000 times as many professors, postdocs, and graduate students as the original study, and it is almost exclusively the salaries of those people that make research expensive. On the contrary, collecting more data on an already-designed study with an existing data-analysis pipeline requires minimal work from the scientists themselves, which makes it much less expensive. I therefore think the cost of data collection was probably only ~10% of the cost of the original research project and scales only logarithmically with the sample size. Based on that line of reasoning, I believe the replication study could be conducted for one or a few million dollars.
15
MHR
2y

Thanks so much for taking the time to read the post and for really engaging with it. I very much appreciate your comment and I think there are some really good points in it. But based on my understanding of what you wrote, I’m not sure I currently agree with your conclusion. In particular, I think that looking in terms of minimum detectable effect can be a helpful shorthand, but it might be misleading more than it’s helping in this case. We don’t really care about getting statistical significance at p <0.05 in a replication, especially given that the pr... (read more)

I also hadn't seen these slides, thanks for posting! (And thanks to Michael for the post, I thought it was interesting/thought-provoking.)

Thanks for the thorough engagement, Michael. We appreciate thoughtful critical engagement with our work and are always happy to see more of it. (And thanks for flagging this to us in advance so we could think about it - we appreciate that too!)

One place where I particularly appreciate the push is on better defining and articulating what we mean by “worldviews” and how we approach worldview diversification. By worldview we definitely do not mean “a set of philosophical assumptions” - as Holden writes in the blog post where he introduced the concept, we defi... (read more)

Thanks very much for these comments! Given that Alex - who I'll refer to in the 3rd person from here - doesn’t want to engage in a written back and forth, I will respond to his main points in writing now and suggest he and I speak at some other time.

Alex’s main point seems to be that Open Philanthropy (OP) won't engage in idle philosophising: they’re willing to get stuck into the philosophy, but only if it makes a difference. I understand that - I only care about decision-relevant philosophy too. Of course, sometimes the philosophy does really matter: the ... (read more)

28
[anonymous]
2y
Set point. I think setting a neutral point on a life satisfaction scale of 5/10 is somewhere between unreasonable and unconscionable

The author doesn't argue that the neutral point is 5/10, he argues (1) that the decision about where to set the neutral point is crucial for prioritising resources, (2) you haven't defended a particular neutral point in public. 

 and OP institutionally is comfortable with the implication that saving human lives is almost always good. Given that we think the correct neutral point is low, taking your other points on boa

... (read more)
2
brb243
2y
Since you define worldview as a "set of ... beliefs that favor a certain kind of giving," it matters whether you understand income and health as "intrinsically [or] instrumentally valuable." In the latter but not the former case, if you learn that income and health do not optimize for your desired end, you would change your giving. I understand the investment recommendation implications as programs on education, relationship improvement, cooperation (with achievement outcomes), mental health, chronic pain reduction, happiness vs. life satisfaction research, conflict prevention and mitigation, companionship, employment, crime reduction, and democracy. Divestment recommendations can be understood as bednets in Kenya, GiveDirectly transfers to some but not other members of communities with a large proportion of extremely poor people, and the Centre for Pesticide Suicide Prevention. I understand that you disengage from replies, but I am interested in OP's perspective on the 0-10 life satisfaction value at which you would invest in life-satisfaction-improving rather than family planning programs. I am also wondering about your definition of health and your rationale for selecting the DALY metric to represent this state.

GiveWell could answer more confidently but FWIW my take is:

-December 2022 is totally fine relative to today.

-I currently expect this increase in marginal cost-effectiveness to persist in future years, but with a lot of uncertainty/low confidence.

3
GiveWell
2y
Nathaniel/Imma, we agree with Alexander that giving in December 2022 would not be significantly less impactful than giving now. We think the cost-effectiveness of opportunities we'll support in 2023 will probably be similar to those in 2022. If you expect to give in only one of those years, we'd suggest giving in 2022, so that donation can start having an impact earlier, but this decision would be largely up to your individual giving plans.

I wrote a long twitter thread with some replies here FWIW: https://twitter.com/albrgr/status/1532726108130377729

3
Sergei Garrison
2y
Correct me if I'm wrong-- I think part of what you're saying is that EA has innovated in bringing philanthropy down to a much 'lower' level. It's not just billionaires that can do it. If we look across at other societies with less developed economies, there are plenty of people with worse problems than our own. Even as a middle-class professional in the US, there is plenty of good you can do with a little bit of giving. Maybe part of the innovation is bringing this to a (largely, I assume) secular community and taking an international perspective? I think of church communities and mutual aid organizations as having done this for many years on highly localized and personalized scales. Also, re this: "yes, this set of self-proclaimed altruists isn’t having as much fun as they could be or other people are, that’s correct, and an intentional tradeoff they're making in pursuit of their moral goals." Are we just talking about the survivors' guilt of being born in an advanced capitalist society, benefitting from hundreds of years of imperialist exploitation of other parts of the world?  
1
[comment deleted]
2y
7
Ben Stewart
2y
I loved this bit: "comfortable modernity is consistent with levels of altruistic impact and moral seriousness that we might normally associate with moral heroism"
5
Geoffrey Miller
2y
It's a good thread, and worth a look!
5
Guy Raveh
2y
I find this tweet interesting - because rather than being neoliberal, the second half exemplifies the Marxist idea of "From each according to his ability, to each according to his needs." I do think EA is too neoliberal, but IMO this isn't it :)

Agree that the paper leaves open the ultimate impact on completed fertility and on your #3. On #2 - I think it would be a mistake to try to adjust for this and neglect long run effects, as in your estimate in fn1.

2
Vasco Grilo
2y
I agree, and have adjusted the text and footnote accordingly.

This isn't an answer to the question, but two additional considerations I think you're missing that point the opposite direction and I think would make AMF look even better than GiveWell counts it as, on the total view:

  1. There's some evidence that bednets lead to higher fertility and that channels sound somewhat intuitively plausible.
  2. Roodman's report is only counting the first generation. If preventing two under-5 deaths leads to ~one fewer birth, that's still a net one more kid making it to adulthood and being able to have kids of their own. Given
... (read more)
3
Vasco Grilo
2y
Thanks for the feedback! Some thoughts:
1. That evidence supports an increase in fertility, but only in the very short term.
* Abstract: "The effect on fertility is positive only temporarily – lasting only 1-3 years after the beginning of the ITN distribution programs – and then becomes negative. Taken together, these results suggest the ITN distribution campaigns may have caused fertility to increase unexpectedly and temporarily, or that these increases may just be a tempo effect – changes in fertility timing which do not lead to increased completed fertility".
* Conclusion: "In contrast, our findings do not support the contention that erosion of international funding for malaria control, specifically of ITNs, would lead to higher fertility rates in the short-run. While our results are suggestive that this may be the case for long-run fertility, we show the exact opposite for the short-run".
* If the results are suggestive that decreasing ITNs leads to higher long-run fertility, AMF would tend to decrease longterm fertility.
2. Yes, I wrote "neglecting longterm effects" above to signal that Roodman's report did not refer to the longterm effects on population size, but thanks for clarifying!
3. Besides modelling the longterm effects on population size, it would also be important to determine how harmful/beneficial a given change in population size is (e.g. as a function of the country).

Thanks, I thought this was interesting!

This question you called out in "Relevance" particularly struck me: "More concretely, it could help us estimate the potential market size of effective altruism. How many proto-EAs are there? Less than 0.1% of the population or more than 20%?"

How would you currently answer this question based on the research you report here? 

If a five or higher on both scales is one way to operationalize proto-EA (you said 81% of self-ID'd EAs had that or higher), do you think the NYU estimates (6%?) or MTurk estimates (14%?) are more representative of the "relevant" population?

Thank you! 

If  we operationalize proto-EAs as scoring five or higher on both scales, then I’d say the 14% estimate is closer to the actual number of proto-EAs in the general (US) population (though it’s not clear if this is the relevant population or operationalization, more on that below). 

First, the MTurk sample is much more representative of the general population than the NYU sample. The MTurk sample is also larger (n = 534) than the NYU sample (n = 96) so the MTurk number is a more robust estimate. Lastly, the NYU sample mostly consiste... (read more)

Really liked this post, thanks.


Minor comment, wanted to flag that I think "Open Philanthropy has also reduced how much they donate to GiveWell-recommended charities since 2017." was true through 2019, but not in 2020, and we're expecting more growth for the GW recs (along with other areas) in the future.

2
Benjamin_Todd
3y
Thanks! I probably should have just used the 2020 figure rather than the 2017-2019 average. My estimate was an $80m allocation by Open Phil to global health, but this would suggest $100m.

Obv disclaimer: not a tax adviser.

Seems like yes, based on this (https://www.thebalancesmb.com/can-my-business-deduct-charitable-contributions-397602); and according to this (https://www.philanthropy.com/article/nonprofits-win-extended-charitable-deductions-and-paycheck-protection-loans-in-stimulus-bill), the recent stimulus bill increased the limit for 2021 to 25% of corporate taxable income (instead of the normal 10%).

1
Will_Grover
3y
Terrific.  Thanks for this info!  Looks like I can do up to 25% this year (although less in a typical year).

Re your last paragraph, I just wanted to drop @jefftk's (IMO) amazing post here: https://www.jefftk.com/p/candy-for-nets

3
Jason Schukraft
4y
Yes, that post is fantastic!

Someone emailed me this and asked for thoughts, so I thought I'd share some cleaned up reactions here. Full disclosure--I work at Open Phil on some related issues:

  • Thanks for the post - I think it's helpful, and I agree that I would like to see the EA community engage more with Lant's arguments.
  • If we're focused primarily on near term human welfare (which seems to be the frame for the post), I think it's really important to think (and do back of the envelope calculations) more explicitly in terms of utility rather than in terms
... (read more)
12
[anonymous]
4y
Thanks for these comments Alex. I agree that it would be best to look at how growth translates into subjective wellbeing, and I am planning to do this or to get someone else to do it soon. However, I'm not sure that this defeats our main claim which is that research on and advocacy for growth are likely to be better than GW top charities. There are a few arguments for this.

(1) GW estimates that deworming is the best way to improve economic outcomes for the extreme poor, in expectation. This seems to me very unlikely to be true since deworming explain... (read more)

I think this argument is wrong for broadly the reasons that pappubahry lays out below. In particular, I think it's a mistake to deploy arguments of the form, "the benefit from this altruistic activity that I'm considering are lower than the proportional benefits from donations I'm not currently making, therefore I should not do this activity."

Ryan does it when he says:

How long would it take to create $2k of value? That's generally 1-2 weeks of work. So if kidney donation makes you lose more than 1-2 weeks of life, and those weeks constitute fun

... (read more)
3
joshcmorrison
9y
To follow up on Alexander's point a bit, I think applying the charitable-benefits standard to non-charity decisions leads to some really weird results. For example, say someone who identifies as an EA chooses to give 10% of her income each year to a GW charity, and she's choosing between being a schoolteacher for $50K a year and a job that's not especially prosocial that pays $55K a year; say she has no innate preference between them, prefers to make more money all things being equal, and that being a schoolteacher would be worth more than the $500 donation. According to the logic Alexander points to about kidney donation, when deciding whether to forgo the $5,000 to choose a socially beneficial job, the right calculus is -- 1. does giving up that money do as much good as donating to a GW charity (i.e. saving a life), and 2. if no, EAs shouldn't do it. That leads to the really weird result, though, of committing EA ideology to rejecting socially positive choices even if they involve fairly small sacrifices (here $5,000). Let me give one final thought experiment on this point, which can be a variant of the child-drowning-in-the-puddle -- let's say instead of a child drowning, it's an older woman, and you're wearing expensive clothing that'll be ruined. If the EA standard is -- don't do altruistic acts that aren't of similar value to GW charitable donations -- that principle could very well commit you to not saving the older woman, which, again, seems bizarre. To be clear, that's not to say that should mean donating a kidney -- far from it. Instead, considering kidney donation is a way of broadening the options available to EAs beyond giving money.
4
RyanCarey
9y
"The problem with these comparisons is that they're totally made up." I don't think this is true. I think Toby has been giving >50% of his funds and works on FHI full-time. I've used my savings to implement a career change that I wouldn't pursue for selfish reasons. So I do think we're bottlenecked substantially by our available resources at this point, making the comparison legitimate. I think that it's good to be a bit softer on people who are partially altruistic, though. Dewey has said that effective altruism is what he calls the part of his life where he takes the demandingness of ethics seriously. Jeff Kaufman has written about making a budget for spending on others so one does not go insane about self/other tradeoffs during every visit to the supermarket. Utilitarianism gets roundly criticised for its vulnerability to this objection of 'demandingness', and some people find it quite psychologically challenging (see Jess' recent post here). So I lean toward including people who give only a smaller fraction of themselves to others. I guess this might be the underlying disagreement. You see this as harmful because it will discourage a beneficial act (even though I don't think it's that beneficial, I admit that this is the part that gives me the most pause), whereas on balance, I think the main issue at stake here is our inclusiveness. There's a further question of how seriously to take these opportunity-cost arguments in general, which I think will be picked up in Katja's thread on vegetarianism.

I agree, and I'd add that what I see as one of the key ideas of effective altruism, that people should give substantially more than is typical, is harder to get off the ground in this framework. Singer's pond example, for all its flaws, makes the case for giving a lot quite salient, in a way that I don't think general considerations about maximizing the impact of your philanthropy in the long term are going to.

2
Benjamin_Todd
9y
That's true, though you can just present the best short-run thing as a compelling lower bound rather than an all considered answer to what maximizes your impact.

Yes, kidney selling is officially banned in nearly every country. My preference, at least in the U.S. context, would be to have the government offer benefits to donors to ensure high quality and fair allocation: http://www.nytimes.com/2011/12/06/opinion/why-selling-kidneys-should-be-legal.html