PhD in Economics (focus on applied economics and climate & resource economics in particular) & MSc in Environmental Engineering & Science. Key interests: Interface of Economics, Moral Philosophy, Policy. Public finance, incl. optimal redistribution & tax competition. Evolution. Consciousness. AI/ML/Optimization. Debunking bad Statistics & Theories. Earn my living in energy economics & finance and by writing simulation models.
Fair point, even if my personal feeling is that it would be the same even without the killing (though indeed the killing alone would suffice too).
We can amend the RC2 attempt to avoid the killing: start with a world containing the seeds for huge numbers of lives worth-living-even-if-barely-so, and propose to destroy that world for the sake of creating a world for a very few really rich and happy! (Obviously with the nuance that it is the rich few whose net happiness is slightly larger than the sum of the others'.)
My gut feeling does not change: this RC2 would still feel repugnant to many, though I admit I'm less sure now and might also be biased, as in not wanting to feel differently, oops.
Might a big portion of status-quo bias and/or omission bias (here both with similar effect) simply also be at play, helping to explain the typical classification of the conclusion as repugnant?
I suspect this might be the case: I ask myself whether many people who classify the conclusion as repugnant would not also have classified the 'opposite' conclusion as just as repugnant, had they instead been offered the same experiment 'the other way round':
Start with a world containing huge numbers of lives worth-living-even-if-barely-so, and propose to destroy them all for the sake of making a very few really rich and happy! (Obviously with the nuance that it is the rich few whose net happiness is slightly larger than the sum of the others'.) It is just a gut feeling, but I'd guess this would evoke similar feelings of repugnance very often (maybe even more so than in the original RC experiment?)! A sort of Repugnant Conclusion 2.
Interesting suggestion! It sounds plausible that "barely worth living" might intuitively be mistaken for something more akin to 'so bad they'd almost want to kill themselves, i.e. might well even have net-negative lives' (which I think would be a poignant way to put what you write).
What about this to reduce the probably often overwhelming stigma attached to showcasing one's own donations?!
Research into vegan cat food as an ideal EA cause!? It might also be ideal for a human vegan future as a 'side' effect.
EA dietitians, am I just naive or could this be a thing?
I reckon one drawback of an ideal vegan cat diet could be that many more people might want to keep cats. I then see several possibilities for the direct net impact of cats plus their food:
Whether house cats are at all net "happy" or not, I do not know.
* Calculation, based on rough values:
220 mio domestic cats (ignoring 480 mio strays)
3 kg avg. weight (might be on the slightly low side)
2% of cat weight as meat food/day
= 60 g/cat daily meat = 30 g/cat daily "extra" animal meat if quality-adjusting with 50% (see text above)
= 6 600 t/day extra meat production
And with approx. 90 g meat/day per human (beef, veal, pork, poultry and sheep, acc. to OECD) for the 8 bn humans, i.e. roughly 720 000 t/day of human meat consumption, the cats' share is
= 0.9%, a bit simplistically approximated.
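The rough arithmetic above can be replayed in a few lines; all inputs are the round numbers from the comment itself, not precise statistics.

```python
# Back-of-envelope cat-food arithmetic, using the comment's own rough values.
CATS = 220e6                              # domestic cats (ignoring ~480 mio strays)
MEAT_G_PER_CAT = 0.02 * 3000              # 2% of a 3 kg cat = 60 g meat/day
EXTRA_G_PER_CAT = MEAT_G_PER_CAT * 0.5    # 50% quality adjustment -> 30 g/day "extra"

extra_t_per_day = CATS * EXTRA_G_PER_CAT / 1e6  # grams -> tonnes
human_t_per_day = 8e9 * 90 / 1e6                # 90 g/day for 8 bn humans
cat_share = extra_t_per_day / human_t_per_day

print(f"{extra_t_per_day:.0f} t/day extra, {cat_share:.1%} of human consumption")
# → 6600 t/day extra, 0.9% of human consumption
```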
* It would be wrong to consider only the 'tax revenue lost' for the government as the effect of the tax deductibility. In expectation, in a simple model, the government will in the medium term partly respond to higher tax deductions by (i) lowering expenditures and (ii) increasing standard tax rates.
Btw, I personally would not worry about the €25 threshold. Avoiding registering/counting very small sums seems a reasonable thing, even if you're right that it becomes less relevant in the digital world.
From what I read, Snowdrift is not quite "doing this", at least not insofar as the main aim here in Mutual Matching is to ask more from a participant only if the leverage increases! But there are close links, thanks for pointing out the great project!
Snowdrift has people contribute as an increasing function of the number of co-donors, but the leverage, which remains implicit, stays constant at 2, always (except in those cases where it even declines once others' chosen upper bounds are surpassed), if my quick calculation is right (pretty sure*). This may or may not be a good idea with more or less rational contributors (either way, I think it would be valuable for transparency to state this leverage explicitly on the Snowdrift page; it's a crucial factor for donors imho). Pragmatically, it may turn out to be a really useful simplification though.
Here instead, Mutual Matching tries to motivate people by ensuring that they donate more only as the leverage really increases. I see this as the key innovation, also relative to Buchholz et al. (maybe worth looking at that paper; it might be closer to Snowdrift, as it also does not make donations directly conditional on leverage, I think, tbc). As I discuss, this has pros and cons; the main risk is that the requested donation increases quickly with the leverage and thus with the number of participants.
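To make that risk concrete, here is a toy pledge schedule of my own invention (not the actual Mutual Matching formula): if each of n participants pledges d0·2^(n−1), the marginal leverage of the n-th joiner does grow with the pool, but the individual pledge grows exponentially.

```python
def pledge(n, d0=1.0, a=2.0):
    """Hypothetical schedule: each of n participants gives d0 * a**(n-1)."""
    return d0 * a ** (n - 1)

def total(n, d0=1.0, a=2.0):
    """Total raised when all n participants follow the schedule."""
    return n * pledge(n, d0, a)

def leverage(n, d0=1.0, a=2.0):
    """Extra money raised because the n-th person joined, per dollar they give."""
    return (total(n, d0, a) - total(n - 1, d0, a)) / pledge(n, d0, a)

for n in (2, 4, 8):
    print(n, pledge(n), leverage(n))
# leverage rises (1.5, 2.5, 4.5), but the pledge explodes (2, 8, 128)
```

With a = 2 the leverage works out to (n+1)/2, so motivating donors through growing leverage comes at the price of quickly growing requested donations, which is exactly the trade-off discussed above.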
Thanks to your links I also just saw the Rational Street Performer Protocol, which I should look at too, even if it equally seems to focus on donating more as more is given in total, rather than, as here, explicitly as the leverage increases; it does make the timing question very explicit, which is a dimension I have not looked at much here yet.
Will expand the text & make the connections to both asap!
*Snowdrift: each gives 0.1 ct per participant, meaning with 1 000 (or 5 000) participants you give $1 (or $5), and thanks to you all these others together give $1 (or $5) more than without you, i.e. an extra leverage of constantly 1 on top of your own contribution itself, meaning the total leverage of your contribution is always 2.
You're right. I see two situations here:
(i) The project has a strict upper limit on funding required. In this case you must (a) limit the pool of participants, and/or (b) limit their allowed contribution scales, and/or (c) maybe indeed limit the leverage progression, meaning you might incentivize people less strongly.
(ii) The project has strongly decreasing 'utility' returns to additional money (at some point). In this case (a), (b), (c) from above may be used, or in theory you as organizer could simply not care: your funding-collection leverage still applies, but you let donors judge whether to discount the leverage for large contributions, as they deem money less valuable on the upper tail; they may then accordingly decide not to contribute, or to contribute less.
Finally, there is simply the possibility of a cutoff point above which the scheme must be cancelled, to address the issue you raise, or the one I discuss in the text: preventing individual donors from having to contribute excessive amounts when more commitments than expected are received. If that cutoff point is high enough that it is unlikely to be reached, you as organizer may be happy to accept it. Of course one could then think about dynamics, e.g. a cooling-off period before a cancelled collection can be re-run, without (too strongly) undermining the true marginal effect in a far-sighted assessment of the entire situation.
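The cutoff idea amounts to a one-line settlement rule; the function name and interface below are my own sketch, not a spec from the scheme:

```python
def settle(pledged_total, cutoff):
    """Toy settlement rule: run the collection only if total commitments stay
    within the cutoff; otherwise cancel, so that no individual donor is forced
    into an excessive contribution when participation exceeds expectations."""
    return "funded" if pledged_total <= cutoff else "cancelled"

print(settle(900, 1000))   # → funded
print(settle(1500, 1000))  # → cancelled
```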
In reality, I fear that even with this scheme, if in some cases it hopefully turns out to be practical, many public-goods problems will remain underfunded (hopefully simply a bit less strongly) rather than overfunded, so I'm so far not too worried about that one.
Agree with the "easily tens of millions a year", which, however, could also be seen to underline part of what I meant: it is really tricky to know how much we can expect from which exact effort.
I half agree with all your points, but see implicit speculative elements in them too, and hence stick with a maybe all-too-obvious statement: let's consider the idea seriously, but let's also not forget that we're obviously not the first ones to think of this; in addition to all the other uncertainties, let's keep in mind that no one seems to have made very much serious progress in this domain, despite the possibly absolutely enormous value even private firms might have been able to extract from serious progress in it.
Couldn't agree more with
In a similar direction, there's more that struck me as rather discouraging in terms of intelligent public debate:
In addition to the lie you pointed to apparently being popular*, in my experience from discussions about the initiative, the population also showed a basic inability to follow the most basic logical principles: