All of Florian Habermacher's Comments + Replies

Why do you find the Repugnant Conclusion repugnant?

Fair point, even if my personal feeling is that it would be the same even without the killing (though indeed the killing alone would suffice too).

We can amend the RC2 attempt to avoid the killing: start with the world containing the seeds for huge numbers of lives worth-living-even-if-barely-so, and propose to destroy that world for the sake of creating a world for very few really rich and happy people! (Obviously with the nuance that it is the rich few whose net happiness is slightly larger than the sum of the others'.)

My gut feeling does not change: this RC2 would still feel repugnant to many, though I admit I'm less sure and might also be biased now, as in not wanting to feel different, oops.

Why do you find the Repugnant Conclusion repugnant?

Might a big portion of status-quo bias and/or omission bias (here both with similar effect) also be at play, helping to explain the typical classification of the conclusion as repugnant?

I think this might be the case when I ask myself whether many people who classify the conclusion as repugnant would not have classified the 'opposite' conclusion as just as repugnant, had they instead been offered the same experiment 'the other way round':

Start with a world containing huge numbers of lives worth-living-even-if-barely-so, and propose to... (read more)

I think the killing would probably explain the intuitive repugnance of RC2 most of the time, though.
Why do you find the Repugnant Conclusion repugnant?

Interesting suggestion! It sounds plausible that "barely worth living" might intuitively be mistaken for something more akin to 'so bad, they'd almost want to kill themselves, i.e. might well have even net-negative lives' (which I think would be a poignant way to put what you write).

Give in Public Beta is live

What about this to reduce the probably often overwhelming stigma attached to showcasing one's own donations?!

  1. Maybe the main issue is that I'm showing off with the amount I donate, rather than with which causes I support. So: just show where to, or maybe which share goes where, without showing the absolute amount of donations.
  2. Ok, so you do your donations, I do some other donations. But: You showcase my donations, I showcase yours. No clue whether that's stupid and not much better than simply not showing any personal donations. Maybe then, a bunch of us are don
... (read more)
Florian Habermacher's Shortform

Research vegan cat food as ideal EA cause!? Might also be ideal for a human vegan future as a 'side'-effect.

  1. Cats are obligate carnivores; according to typical recommendations they must eat meat (or animal products), and cats tend to refuse most non-animal foods. At least, there seems to exist no vegan cat food that is recommended as a main diet for cats without further warnings; often cats seem not to accept mostly non-animal foods
  2. I guess - but am not sure (?) - that animals fed to cats mean significantly more animals being raised in factory farms
    1. Somewhat counte
... (read more)
Is it really that a good idea to increase tax deductibility ?

Great question!

  1. Achieving a direct matching by the government (even if it were only with, say, a +25% match factor or so, to keep it roughly in line with what tax deductibility means), instead of tax deductibility, could indeed be more just, removing the bias you mention that unnecessarily favors the rich. Spot on, imho.
  2. That said, democracies seem to love tax deductibility: stealing from the state that way feels a bit less like stealing. So deductibility can be the single most easily acceptable policy. If so, it might pragmatically be worthwhile to support it
... (read more)
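To make the rough equivalence between the two policies concrete, here is a minimal sketch (my own numbers, not from the original comment): a deduction at marginal tax rate t lets the charity receive one unit at a donor cost of (1 - t), which is the same deal as a government match of t / (1 - t).

```python
def equivalent_match(t: float) -> float:
    """Match factor giving the charity the same amount per unit of donor
    out-of-pocket cost as a tax deduction at marginal rate t."""
    return t / (1 - t)

print(equivalent_match(0.20))  # 0.25 -> a +25% match ~ a 20% deduction
print(equivalent_match(0.40))  # ~0.67 -> the same deduction is worth far
                               # more to a donor in a 40% bracket
```

The second line is exactly the bias mentioned in point 1: a flat match treats every donor alike, while a deduction implicitly matches high-bracket donors at a much higher rate.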
Incentivizing Donations through Mutual Matching

From what I read, Snowdrift is not quite "doing this", at least not insofar as the main aim here in Mutual Matching is to ask more from a participant only if leverage increases! But there are close links; thanks for pointing out the great project!

Snowdrift has people contribute as an increasing function of the # of co-donors, but the leverage, which is implicit, stays constant = 2, always (except for those cases where it even declines if others' chosen upper bounds are being surpassed), if my quick calculation is right (pretty sure*). This may or ma... (read more)
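The quick calculation can be sketched as follows (my own simplification, not Snowdrift's exact rules): suppose each of n patrons pledges the same amount c per co-patron, so everyone gives c·n and the total raised is c·n².

```python
def leverage(n: int, c: float = 0.001) -> float:
    """Increase in total funding caused by the (n+1)-th patron joining,
    divided by that patron's own contribution."""
    total_before = c * n * n
    total_after = c * (n + 1) * (n + 1)
    own_gift = c * (n + 1)
    return (total_after - total_before) / own_gift

print(leverage(10))      # ~1.91
print(leverage(10_000))  # ~2.00 -> the leverage tends to a constant 2
```

That is, each new patron roughly doubles their own money, but the multiplier never grows with participation the way Mutual Matching intends.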

Incentivizing Donations through Mutual Matching

You're right. I see two situations here:

(i) The project has a strict upper limit on funding required. In this case, you must (a) limit the pool of participants, and/or (b) limit their allowed contribution scales, and/or (c) maybe indeed flatten the leverage progression, meaning you might incentivize people less strongly.

(ii) The project has strongly decreasing 'utility'-returns for additional money (at some point). In this case, (a), (b), (c) from above may be used, or in theory you as organizer could simply not care: your funding-collection leverage still applies, b... (read more)

What EA projects could grow to become megaprojects, eventually spending $100m per year?

Agree with the "easily tens of millions a year", which, however, could also be seen to underline part of what I meant: it is really tricky to know how much we can expect from what exact effort.

I half agree with all your points, but see implicit speculative elements in them too, and hence remain with a maybe all-too-obvious statement: let's consider the idea seriously, but let's also not forget that we're obviously not the first to think of this, and, in addition to all other uncertainties, let's keep in mind that no one seems to have made serious progress in that domain, despite the possibly enormous value even private firms might have been able to capture had they made serious progress on it.

This Can't Go On

I miss a clear definition of economic growth here. The discussion strongly reminds me of the environmental-resources-focused critique of growth that started with the 1970s Club of Rome report Limits to Growth; there might be value in examining the huge literature that has been produced around that topic ever since.

Economic growth = increase in market value, is a typical definition.

Market value can increase if we paint the grey houses pink, or indeed if we design good computer games, or if we find great drugs to constantly awe us in insanely g... (read more)

What EA projects could grow to become megaprojects, eventually spending $100m per year?

I see enormous value in it and think it should be considered seriously.

On the other hand, the huge amount of value in it is also a reason I'm skeptical about it being obviously achievable: there are already individual giant firms that would see multi-million annual internal savings (not to mention the many billions the first firm marketing something like that would immediately earn) from having a convenient, simple, secure stack 'for everything', yet none seems to have anything close to it (though I guess many may have something like that in some sub-sys... (read more)

I think the budget to do this is easily tens of millions a year, for perhaps a decade, plus the ability to hire the top talent, and it likely only works as a usefully secure system if you open-source it. Are there large firms who are willing to invest $25m/year for 4-5 years on a long-term cybersecurity effort like this, even if it seems somewhat likely to pay off? I suspect not - especially if they worry (plausibly) that governments will actively attempt to interfere in some parts of this.
Give in Public Beta is live

I find it a GREAT idea (have not tested it yet)!

The Harsanyi-Rawls debate: political philosophy as decision theory under uncertainty

Thank you! I was actually always surprised by Harsanyi's mention of the taxation case as an example where maximin would be (readily) applicable.

IMHO, exactly what he explains in the rest of the article can also be used to see why optimal taxation/public finance should use a maximin principle as the proxy rule for a good redistributive process only in exceptional cases.

On the other hand, if you asked me whether I'd be happy if our actual, very flawed tax/redistribution systems were reformed so as to conform to the maximin - yes, I'd possibly very happily agree to the latter, simply as the lesser of two evils. And maybe that's part of the point; in this case, fair enough!

The Harsanyi-Rawls debate: political philosophy as decision theory under uncertainty

I find this a rather challenging post, even if I like the high-level topic a lot! I didn't read the entire linked paper, but I'd be keen to understand whether you think you can make a concise, simple argument as to why my following view may be missing something crucial that immediately follows from the Harsanyi vs. Rawls debate (if you think it does; feel free to ignore):

The Harsanyi 1975 paper which your linked post also cites (and which I recommend to any EA) is a great and rather complete rebuttal of Rawls' core maximin claim. The maximin principle, if taken ... (read more)

Thank you so much for your comment. Yeah, I think Harsanyi's review is awesome, too (particularly his criticism of Rawls's position on future generations); I don't know why it seems to be quite often neglected by philosophers. What you might be missing: notice everyone agrees that maximin sort of sucks as a decision principle - Rawls never endorsed it this way. However, I'd add:

a) It's only discussed as a decision criterion in cases where you don't have probabilities (it's Wald's criterion). Now the standard response to that would be something like "but you can build credences by applying Laplace's Principle," and I tend to agree. But I'm not sure this is always the best thing to do; not even Savage thought so. Rawls thinks that, particularly in the original position, this would not be done... and really, it's not clear why.

b) Notice that people often display higher risk-aversion when making decisions for the sake of others - they're supposed to be "prudent." Of course, I don't think this should be represented as following some sort of maximin principle - but maybe as some kind of "pessimistic" Hurwicz criterion; yet, and I can't stress this enough, my point is that people are not actually implying that negative outcomes are more likely.

c) Actually, Harsanyi himself remarks (in the addendum to the review) that the maximin (or the difference principle) is a useful proxy to use, e.g., in the theory of optimal taxation - even though it's not a "fundamental principle of morality." I think this point is more relevant than it might seem at first sight; actually, my interpretation / defense of the difference principle basically depends on it.

I think (almost) all I'd have to say on this matter is in Section 3 of the paper (especially the first part). But TL;DR: the difference principle is not a basic moral principle (and maximin is not an alternative to expected utility theory). And the problem of the original position should be seen as a complex bargaining
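The contrast between Wald's maximin, Laplace-style expected value, and a "pessimistic" Hurwicz criterion can be made concrete with a toy sketch (my own payoff numbers, purely illustrative):

```python
# Toy payoff matrix: rows are actions, columns are states of the world
# whose probabilities are unknown.
payoffs = {
    "risky": [0, 10],  # terrible in one state, great in the other
    "safe":  [4, 5],   # decent either way
}

def maximin(p):                # Wald's criterion: judge by the worst case
    return min(p)

def laplace(p):                # uniform credences -> plain expected value
    return sum(p) / len(p)

def hurwicz(p, alpha=0.8):     # 'pessimistic' Hurwicz: alpha weights the
    return alpha * min(p) + (1 - alpha) * max(p)  # worst case heavily

def best(criterion):
    return max(payoffs, key=lambda a: criterion(payoffs[a]))

print(best(maximin))   # 'safe'  (4 beats 0)
print(best(laplace))   # 'risky' (5.0 beats 4.5)
print(best(hurwicz))   # 'safe'  (4.2 beats 2.0)
```

The Hurwicz agent here agrees with maximin without claiming the bad state is more likely - which is exactly the point in b) above.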
An evaluation of Mind Ease, an anti-anxiety app

Just re Anxiety prevalence: It seems to me that Anxiety would be a kind of continuum, and you may be able to say 50% of people are suffering from anxiety or 5%, depending on where you make the cutoff. Your description implicitly seems to support exactly this view ("Globally, 284 million people—3.8% of all people—have anxiety disorders. Other estimates suggest that this might be even higher: according to the CDC, 11% of U.S. adults report regular feelings of worry, nervousness, or anxiety and ~19% had any anxiety disorder in the past year according to the N... (read more)

Hauke Hillebrandt (10 months ago):
Yes, excellent point- I go into more detail about this in the full report: "Anxiety is a highly prevalent condition, with lifetime rates for its derived mental disorders between 14.5% and 33.7% in Western countries (Alonso and Lepine, 2007; Kessler et al., 2012), and global estimates across countries between 3.8% to 25.0% (Remes et al., 2016). Many more might have trait social anxiety which is not quite clinical yet still causes suffering. Indeed, trait social anxiety may have evolved to protect our ancestors from social threat []. Similarly, generalized anxiety might have evolved to protect us from other threats. Thus, anxiety might be natural and very widespread."
Matching-donation fundraisers can be harmfully dishonest
  1. Part of your critique is mostly valid in cases where donors have a fixed donation budget and allocate it to the best cause they come across, taking into account a potential leverage factor. I wonder whether instead a lot of donors - keep in mind EAs are rare - donate on a whim, incentivized by the announcement of the matching, without their having been particularly likely to donate that money anywhere else.
  2. I see another critique to apply with the schemes that have matching "up to a specified level, say $500,000", and I think you have not me
... (read more)
Are there any 'maximum egoism' pledges?

I also wonder about the same thing. The Further Pledge does not answer this particular desire of committing to a limited personal annual consumption while potentially saving for particular - or yet to be defined - causes later on. This can make sense also if one believes one's future view on what to donate towards will be significantly more enlightened.

I could see such a pledge not to consume above X/year being valued not overly much by third parties, as we cannot trust our future selves so much, I guess, and even investing in one's own endeavors, even if officially EA... (read more)

How much does performance differ between people?

Surprised to see nothing (did I overlook it?) about the people vs. the project/job: the title and the lead sentence,

Some people seem to achieve orders of magnitudes more than others in the same job.

suggest the work focuses essentially on people's performance, but already in the motivational examples

For instance, among companies funded by Y Combinator the top 0.5% account for more than ⅔ of the total market value; and among successful bestseller authors [wait, it's their books, no?], the top 1% stay on the New York Times bestseller list more than 25 times lo

... (read more)
Some quick notes on "effective altruism"

Thanks, I think antipathy effects towards the name “Effective Altruism”, or worse, “I’m an effective altruist”, are difficult to overstate.

Also, somewhat related to what you write, I happened to think to myself just today: "I am (and most of us are) just as much an effective egoist as an effective altruist"; after all, even the holiest of us probably cannot help putting a significantly higher weight on our own welfare than on that of average strangers.

Nevertheless, some potential upside of the current term – equally I’m not sure it mat... (read more)

Agree that selection effects can be desirable and that dilution effects may matter if we choose a name that is too likable. But if we hold likability fixed, and switch to a name that is more appropriate (i.e. more descriptive), then it should select people more apt for the movement, leading to a stronger core.
Aditya Vaze (1 year ago):
Strongly agree. The potential benefits of selection effects are underrated in these discussions.
Ranking animal foods based on suffering and GHG emissions

Love the endeavor. But the calculation method really should be changed before anyone interested in quantifying the combined CO2 + animal-suffering harm uses it, in my opinion: a weighted product model is inappropriate for expressing the total harm level of two independent harms. I really think you want not to multiply the CO2 and animal-suffering harms but to sum them separately, with whichever weights the user chooses. In that sense, I fully agree with what MichaelStJules also mentioned. But I want to give an example that makes this very clear -... (read more)
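A toy sketch of the kind of example meant here (my own numbers, not the ranking's actual data): in a weighted product model, a near-zero score in one dimension wipes out arbitrarily large harm in the other, while a weighted sum does not.

```python
w_co2 = w_suffering = 0.5  # equal user-chosen weights, for illustration

def wpm(co2, suffering):
    """Weighted product model: scores are multiplied."""
    return (co2 ** w_co2) * (suffering ** w_suffering)

def weighted_sum(co2, suffering):
    """Weighted sum: independent harms are added."""
    return w_co2 * co2 + w_suffering * suffering

food_a = (0.0, 100.0)   # negligible CO2, enormous animal suffering
food_b = (10.0, 10.0)   # moderate harm on both dimensions

print(wpm(*food_a), wpm(*food_b))                    # 0.0 vs ~10: A looks harmless!
print(weighted_sum(*food_a), weighted_sum(*food_b))  # 50.0 vs 10.0: A correctly worse
```

Under the product model, food A scores zero total harm despite causing vastly more suffering; the sum ranks it as worse, as it should.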

Thank you for the feedback, Florian! I will move this issue up in my to-do list since so many of you have explained the issues with WPM.
Name for the larger EA+adjacent ecosystem?

I find "new enlightenment" very fitting, but wonder whether it might at times be perceived as a not very humble name (which need not be a problem, but I wonder whether some, me included, might at times end up feeling uncomfortable calling ourselves part of it).

As I mentioned above, cf “Brights”
Owen Cotton-Barratt (1 year ago):
I agree that this is potentially an issue. I think it's (partially) mitigated the more it's used to refer to ideas rather than people, and the more it's seen to be a big (and high prestige) thing.
Name for the larger EA+adjacent ecosystem?

Spontaneously I find "Broad Rationality" a plausible candidate (it seems to have been used as a very specific concept mainly by Elster 1983, though I find only 46 Google hits for '"broad rationality" elster'; there are of course more hits on the word combination more generally)

Dutch anti-trust regulator bans pro-animal welfare chicken cartel

Thanks, interesting case!

1. We might have loved to see the cartel here succeed, but we should probably still be thankful for the more general principle underlying the ruling:

As background, it should be mentioned that it is a common thing to use so-called green policies/standards for disguised protectionist measures, aka green protectionism: protecting local/domestic industry by imposing certain rules, often with minor environmental benefits (as here at least according to the ruling), but helping to keep out (international) competition.

So for the 'average' ... (read more)

Why I'm concerned about Giving Green

The post mentions 

Giving Green agrees with the consensus EA view that the framing of “offsetting personal emissions” is unhelpful

To some degree such a consensus seems natural, though I believe the issues with the idea of offsetting do not automatically mean that helping people who specifically seek effective (or thus maybe least ineffective) offsetting possibilities is by nature ineffective.

I wonder: is the mentioned "consensus" detailed/made most obvious in any particular place(s) - blog, article, ... ?

Indeed not; it will depend on the extent to which donors who seek offsets will be willing to donate to non-offset options if those are presented to them, and obviously also on how effective offsets are compared to non-offset alternatives. This is why I called for these things to be modelled, rather than assumed, in the post. In answer to your question, here's a post with 77 comments from a few years ago which will probably serve as a reasonable starting point.
EA Hotel with free accommodation and board for two years

Indeed, I think I'm not the only one to whom the nudge towards eating more fully vegan would seem a highly welcome side-effect of a stay in the hotel.