All of FlorianH's Comments + Replies

This post calls out a lack of diversity in EA. But rather than being attributable to EA doing something wrong, I find these patterns mainly underline a basic fact about the type of people EA tends to attract. So I don't find the post fair to EA and its structure in a very general sense.

I detect in the article an implicit, underlying view of the EA story being something like:

                'Person becoming EA -> World giving that person EA privileges'

But imho, this completely turns upside down the real story, which I mostl... (read more)

I have experience with that: eating meat at home but, rather strictly, not at restaurants, for exactly the reasons you mention: it tends to be almost impossible to find a restaurant that seems to serve animals that weren't terribly mistreated.

Doing that as a vegan-in-restaurants (instead of vegetarian-in-restaurants) is significantly more difficult, but from my experience, one can totally get used to remaining veg* outside while being non-veg* at home, where one can choose food with some expectation of net-positive animal lives.

A few particular related experiences:

  1. Even p
... (read more)

Surprised. Maybe worth giving it another try, looking longer for good imitations, given today's wealth of really good ones (besides admittedly a ton of bad ones, assuming you really need them to imitate the original so closely): I've made friends taste veg* burgers and chicken nuggets, and they were rather surprised when I told them afterwards that these had not been meat. I once had to double-check with the counter at the restaurant, as I could not believe what I had on my plate was really not chicken. Maybe that speaks against the fine taste of me and some, but I r... (read more)

Interesting. Curious: if such hair is a serious bottleneck/costly, do some hairdressers collect cut hair by default and sell/donate it for such use?

4
mikbp
6mo
I write only as a user, I don't have any further knowledge, but I have never seen it. There are hairdressers that collaborate with wig organisations, but as far as I know, they only collect the hair of people who want to donate it. In general, I don't think it is very common that people want to cut >20cm of hair in one go, and it makes the hairdresser's work somewhat less natural, as they usually don't cut all the hair at once (i.e. make a ponytail and cut it). Maybe those collaborating hairdressers would ask a customer who wants to cut their hair in one go whether they may donate it?

I tried to account for the difficulty of pinning down all relevant effects in our CBA by adding the somewhat intangible feeling that the gun might backfire (standing in for your point that there may be more general/typical but less easily quantified benefits of not censoring, etc.). Sorry if that was not clear.

More importantly:

I think your last paragraph gets to the essence: you're afraid the cost-benefit analysis is done naively, potentially ignoring the good reasons why, most often, we may not want to try to prevent the advancement of science/tech.

This does, h... (read more)

I have some sympathy with 'a simple utilitarian CBA doesn't suffice' in general, but I do not end at your conclusion; your intuition pump also doesn't lead me there.

It doesn't seem to require any staunch utilitarianism to arrive at 'if a quick look at the gun design suggests it has a 51% chance to shoot in your own face, and only a 49% chance to hit the tiger you want to hunt lest you starve to death'*, and to decide to drop the project of its development. Or to halt until a more detailed examination might allow you to update to a more precise understanding.

Yo... (read more)

2
Matthew_Barnett
6mo
In my thought experiment, we generally have a moral and legal presumption against censorship, which I argued should weigh heavily in our decision-making. By contrast, in your thought experiment with the tiger, I see no salient reason for why we should have a presumption to shoot the tiger now rather than wait until we have more information. For that reason, I don't think that your comment is responding to my argument about how we should weigh heuristics against simple cost-benefit analyses.

In the case of an AI pause, the current law is not consistent with a non-voluntary pause. Moreover, from an elementary moral perspective, inventing a new rule and forcing everyone to follow it generally requires some justification. There is no symmetry here between action vs. inaction as there would be in the case of deciding whether to shoot the tiger right now. If you don't see why, consider whether you would have had a presumption against pausing just about any other technology, such as bicycles, until they were proven safe.

My point is not that AI is just as safe as bicycles, or that we should disregard cost-benefit analyses. Instead, I am trying to point out that cost-benefit analyses can often be flawed, and relying on heuristics is frequently highly rational even when they disagree with naive cost-benefit analyses.

Two factors are being mixed up here: @kyle_fish writes about an (objective) amount of animal welfare. The concept @Jeff Kaufman refers to instead includes the weight we humans put on those animals' welfare. For a meaningful conversation about the topic, we should not mix the two up.*

Let's briefly assume a parallel world with humans2: just like us, but they simply never cared about animals at all (weight = 0). Concluding "We thus have no welfare problem" is indeed the logical conclusion for humans2, but it would not suffice to inform a genetically mutated human... (read more)

One of my favorite passages is your remark on AI being, in some ways, rather white-boxy, while humans are rather black-boxy and difficult to align. There's some often-ignored truth in that (even if, in the end, what really matters is arguably that we're so familiar with human behavior that, overall, the black-boxiness of our inner workings may matter less).

Enjoyed the post, thanks! But it starts with an invalid deduction:

Since we don’t enforce pauses on most new technologies, I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium. We should only advocate for such heavy-handed government action if it’s clear that the benefits of doing so would significantly outweigh the costs.

(I added the emphasis)

Instead, it seems more reasonable to simply advocate for such action exactly if, in expectation, the benefits seem to [even just about] outweigh the costs. Of course, we... (read more)

6
Matthew_Barnett
6mo
I agree in theory, but disagree in practice. In theory, utilitarians only care about the costs and benefits of policy. But in practice, utilitarians should generally be constrained by heuristics and should be skeptical of relying heavily on explicit cost-benefit calculations.

Consider the following thought experiment: You're the leader of a nation and are currently deciding whether to censor a radical professor for speech considered perverse. You're very confident that the professor's views are meritless. You ask your advisor to run an analysis on the costs and benefits of censorship in this particular case, and they come back with a report concluding that there is slightly more social benefit from censoring the professor than harm. Should you censor the professor?

Personally, my first reaction would be to say that the analysis probably left out second order effects from censoring the professor. For example, if we censor the professor, there will be a chilling effect on other professors in the future, whose views might not be meritless.

So, let's make the dilemma a little harder. Let's say the advisor insists they attempted to calculate second order effects. You check and can't immediately find any flaws in their analysis. Now, should you censor the professor?

In these cases, I think it often makes sense to override cost-benefit calculations. The analysis only shows a slight net-benefit, and so unless we're extremely confident in its methodology, it is reasonable to fall back on the general heuristic that professors shouldn't be censored. (Which is not to say we should never violate the principle of freedom of speech. If we learned much more about the situation, we might eventually decide that the cost-benefit calculation was indeed correct.)

Likewise, I think it makes sense to have a general heuristic like, "We shouldn't ban new technologies because of abstract arguments about their potential harm" and only override the heuristic because of strong evidence abo

Agree with the testing question. I think there's a lot of scope in trying to implement Mutual Matching (versions) in small or large scale, though I have not yet stumbled upon the occasion to test it in real life.

I would not say my original version of Mutual Matching is in every sense more general. But it does indeed allow the organizer some freedom to set up the scheme in a way that he deems conducive. It provides each contributor the ability to set (or know) her monotonically increasing contribution directly as a function of the leverage, which I think is ... (read more)

4
Filip Sondej
7mo
Ok, I think you're right, it's not strictly more general. Agreed! Yeah, that is a crucial component, but I think we need not only that, but also some natural way in which the donation saturates. Because when you keep funding some project, at some point the marginal utility from further funding decreases. (I'm not sure how original Mutual Matching deals with that.) I think Andrew Critch's S-process deals with that very elegantly, and it would be nice to take inspiration from it. (In my method here, the individual donations saturate into some maximal personal limit, which I think is nice, but is not quite the same as the full project pot saturating.)

I think what you're describing is exactly (or almost exactly) the Mutual Matching that I wrote about here on the forum a while ago: Incentivizing Donations through Mutual Matching

2
Filip Sondej
7mo
Oh yeah! They are identical in spirit, but a bit different in implementation. So they will have different results sometimes. It would be nice to test both of them in some real world setting. Edit: it seems that your method is more general? And that maybe you could set the curves in a way that the matching works the same as described here. So the method in this post maybe could be seen as a specific case of Mutual Matching, aiming to have some particular nice properties.

Great! I propose a concise one-sentence summary that gets to the core of one of the main drawbacks of QF, and a link to Mutual Matching, a 'decentralized donor matching on steroids' that overcomes some of QF's issues and might be interesting for readers of this article.

QF really is an information-eliciting mechanism, but much less a mechanism for solving the (obviously!) most notorious problem with public goods: the lack of funding due to free-riding and weak incentives to contribute.

Yes, QF elicits the WTP, helping to inform about value & o... (read more)

Thanks for the post, resonates a lot with my personal experience.

Couldn't agree more with

In my impression, the most influential argument of the camp against the initiative was that factory farming just doesn't exist in Switzerland.[2] Even if it was only one of but not the most influential argument, I think this speaks volumes about both the (current) debate culture

In a similar direction, there's more that struck me as rather discouraging in terms of intelligent public debate:

In addition to this lie you pointed to apparently being popular*, from my experience in discussions about the initiative, the population also... (read more)

Fair point, even if my personal feeling is that it would be the same even without the killing (even if the killing itself would indeed alone suffice too).

We can amend the RC2 attempt to avoid the killing: start with the world containing the seeds for huge numbers of lives worth-living-even-if-barely-so, and propose to destroy that world for the sake of creating a world for a very few really rich and happy people! (Obviously with the nuance that it is the rich few whose net happiness is slightly larger than the sum of the others'.)

My gut feeling does not change about this RC2 still feeling repugnant to many, though I admit I'm less sure and might also be biased now, as in not wanting to feel different, oops.

Might a big portion of status-quo bias and/or omission bias (here both with similar effect) also simply be at play, helping to explain the typical classification of the conclusion as repugnant?

I think this might be the case when I ask myself whether many people who classify the conclusion as repugnant would not also have classified the 'opposite' conclusion as just as repugnant, had they instead been offered the same experiment 'the other way round':

Start with a world counting huge numbers of lives worth-living-even-if-barely-so, and propose to... (read more)

3
MichaelStJules
2y
I think the killing would probably explain the intuitive repugnance of RC2 most of the time, though.

Interesting suggestion! It sounds plausible that "barely worth living" might intuitively be mistaken for something more akin to 'so bad they'd almost want to kill themselves, i.e. might well have even net-negative lives' (which I think would be a poignant way to say what you write).

What about this to reduce the probably often overwhelming stigma attached to showcasing one's own donations?!

  1. Maybe the main issue is that I'm showing off with the amount I donate, rather than the causes I donate towards. So: just show where to, or maybe which share goes where, avoiding showing the absolute amount of donations.
  2. Ok, so you do your donations, I do some other donations. But: you showcase my donations, I showcase yours. No clue whether that's stupid and not much better than simply not showing any personal donations. Maybe then, a bunch of us are don
... (read more)

Research into vegan cat food as an ideal EA cause!? Might also be ideal for a human vegan future as a 'side' effect.

  1. Cats are obligate carnivores; they must eat meat (or animal products) according to typical recommendations (and cats tend to refuse most non-animal foods). At least, there seems to exist no vegan cat food that is recommended as a main diet for cats without further warnings; often cats would seem not to accept mostly non-animal foods
  2. I guess - but am not sure (?) - that animals fed to cats mean significantly more animals being raised in factory farms
    1. Somewhat counte
... (read more)

Great question!

  1. Achieving direct matching by the government (even if it were only with, say, a +25% match factor or so, to keep it roughly in line with what tax deductibility means), instead of tax deductibility, could indeed be more just, removing the bias you mention that unnecessarily favors the rich. Spot on imho.
  2. That said, democracies seem to love "tax deductibility": stealing from the state feels a bit less like stealing. So, deductibility can be the single most easily acceptable policy. If so, it might pragmatically be worthwhile to support it
... (read more)

From what I read, Snowdrift is not quite "doing this", at least not insofar as the main aim here in Mutual Matching is to ask more from a participant only if leverage increases! But there are close links, thanks for pointing out the great project!

Snowdrift has people contribute as an increasing function of the number of co-donors, but the leverage, which is implicit, stays constant at 2, always (except in those cases where it even declines, if others' chosen upper bounds are surpassed), if my quick calculation is right (pretty sure*). This may or ma... (read more)
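If my reading of the mechanism is right, the constant leverage of 2 can be checked with a toy model. The pledge formula below (each patron pays c per other patron) is my simplification for illustration, not Snowdrift's actual formula:

```python
def total_raised(n, c=0.001):
    """Total collected when n patrons each pledge c per *other* patron."""
    return n * (n - 1) * c

def marginal_leverage(n, c=0.001):
    """Extra money raised when the n-th patron joins, divided by
    that patron's own payment (their payment plus the increase
    they trigger in everyone else's payments)."""
    own_payment = (n - 1) * c
    extra = total_raised(n, c) - total_raised(n - 1, c)
    return extra / own_payment

for n in (2, 10, 100):
    print(n, marginal_leverage(n))  # leverage stays 2 regardless of n
```

The n-th patron pays c(n-1), and the other n-1 patrons each pay c more, so the total rises by 2c(n-1): a leverage of exactly 2, independent of n, under this simplified model.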

You're right. I see two situations here:

(i) the project has a strict upper limit on funding required. In this case, you must limit (a) the pool of participants, and/or (b) their allowed contribution scales, and/or (c) maybe indeed the leverage progression, meaning you might incentivize people less strongly.

(ii) the project has strongly decreasing 'utility'-returns for additional money (at some point). In this case, (a), (b), (c) from above may be used, or in theory you as organizer could simply not care: your funding collection leverage still applies, b... (read more)

Agree with the "easily tens of millions a year", which, however, could also be seen to underline part of what I meant: it is really tricky to know how much we can expect from what exact effort.

I half agree with all your points, but see implicit speculative elements in them too, and hence remain with a maybe all-too-obvious statement: let's consider the idea seriously, but let's also not forget that we're obviously not the first to think of this. In addition to all the other uncertainties, let's keep in mind that no one seems to have made very much serious progress in that domain, despite the possibly enormous value even private firms might have been able to capture from it had they made serious progress.

I miss a clear definition of economic growth here, and the discussion strongly reminds me of the environmental-resources-focused critique of growth that started with the 1970s Club of Rome report Limits to Growth; there might be value in examining the huge literature on such topics that has been produced ever since.

Economic growth = increase in market value, is a typical definition.

Market value can increase if we paint the grey houses pink, or indeed if we design good computer games, or if we find great drugs to constantly awe us in insanely g... (read more)

I see enormous value in it and think it should be considered seriously.

On the other hand, the huge amount of value in it is also a reason I'm skeptical about it obviously being achievable: there are already individual giant firms that would internally save many millions annually (not to speak of the many billions the first firm marketing something like that would immediately earn) from having a convenient, simple, secure stack 'for everything', yet none seems to have anything close to it (though I guess many may have something like that in some sub-sys... (read more)

2
Davidmanheim
3y
I think the budget to do this is easily tens of millions a year, for perhaps a decade, plus the ability to hire the top talent, and it likely only works as a usefully secure system if you open-source it. Are there large firms who are willing to invest $25m/year for 4-5 years on a long-term cybersecurity effort like this, even if it seems somewhat likely to pay off? I suspect not - especially if they worry (plausibly) that governments will actively attempt to interfere in some parts of this.

I find it a GREAT idea (have not tested it yet)!

Thank you! I was actually always surprised by Harsanyi's mention of the taxation case as an example where maximin would be (readily) applicable.

IMHO, exactly what he explains in the rest of the article can also be used to see why optimal taxation/public finance should use a maximin principle as the proxy rule for a good redistributive process only in exceptional cases.

On the other hand, if you asked me whether I'd be happy if our actual, very flawed tax/redistribution systems were reformed so as to conform to the maximin, I'd possibly very happily agree to the latter, simply as the lesser of two evils. And maybe that's part of the point; in this case, fair enough!

I find this a rather challenging post, even if I like the high-level topic a lot! I didn't read the entire linked paper, but I'd be keen to understand whether you think you can make a concise, simple argument as to why my following view may be missing something crucial that immediately follows from the Harsanyi vs. Rawls debate (if you think it does; feel free to ignore otherwise):

The Harsanyi 1975 paper which your linked post also cites (and which I recommend to any EA) is a great and rather complete rebuttal of Rawls's core maximin claim. The maximin principle, if taken ... (read more)

2
Ramiro
3y
Thank you so much for your comment. Yeah, I think Harsanyi's review is awesome, too (particularly his criticism of Rawls's position on future generations); I don't know why it seems to be quite often neglected by philosophers. What you might be missing: notice everyone agrees that maximin sort of sucks as a decision principle - Rawls never endorsed it this way. However, I'd add:

a) It's only discussed as a decision criterion in cases where you don't have probabilities (it's Wald's criterion). Now the standard response to that would be something like "but you can build credences by applying Laplace's Principle," and I tend to agree. But I'm not sure this is always the best thing to do; not even Savage thought so. Rawls thinks that, particularly in the original position, this would not be done... and really, it's not clear why.

b) Notice that people often display higher risk-aversion when making decisions for the sake of others - they're supposed to be "prudent." Of course, I don't think this should be represented as following some sort of maximin principle - but maybe as some kind of "pessimistic" Hurwicz criterion; yet, and I can't stress this enough, my point is that people are not actually implying that negative outcomes are more likely.

c) Actually, Harsanyi himself remarks (in the addendum of the review) that the maximin (or the difference principle) is a useful proxy to use, e.g., in the theory of optimal taxation - even though it's not a "fundamental principle of morality." I think this point is more relevant than it might seem at first sight; actually, my interpretation / defense of the difference principle basically depends on that.

I think (almost) all I'd have to say on this matter is in Section 3 of the paper (especially the first part). But TL;DR: the difference principle is not a basic moral principle (and maximin is not an alternative to expected utility theory). And the problem of the original position should be seen as a complex bargaini

Just re Anxiety prevalence: It seems to me that Anxiety would be a kind of continuum, and you may be able to say 50% of people are suffering from anxiety or 5%, depending on where you make the cutoff. Your description implicitly seems to support exactly this view ("Globally, 284 million people—3.8% of all people—have anxiety disorders. Other estimates suggest that this might be even higher: according to the CDC, 11% of U.S. adults report regular feelings of worry, nervousness, or anxiety and ~19% had any anxiety disorder in the past year according to the N... (read more)

3
Hauke Hillebrandt
3y
Yes, excellent point- I go into more detail about this in the full report: "Anxiety is a highly prevalent condition, with lifetime rates for its derived mental disorders between 14.5% and 33.7% in Western countries (Alonso and Lepine, 2007; Kessler et al., 2012), and global estimates across countries between 3.8% to 25.0% (Remes et al., 2016).  Many more might have trait social anxiety which is not quite clinical yet still causes suffering. Indeed, trait social anxiety may have evolved to protect our ancestors from social threat. Similarly, generalized anxiety might have evolved to protect us from other threats. Thus, anxiety might be natural and very widespread." https://docs.google.com/document/d/1Y0Mc0pI-pDMQMPg8M4F0zA1KYiXuvW5q7MPXRH9sX7k/edit#bookmark=id.h2x1ikourvk6
  1. Part of your critique is mostly valid in cases where donors have a fixed donation budget and allocate it to the best cause they come across, taking into account a potential leverage factor. I wonder whether instead a lot of donors - mind, EAs are rare - donate on a whim, incentivized by the announcement of the matching, without any particularly high probability that they would have donated that money anywhere else.
  2. I see another critique to apply with the schemes that have matching "up to a specified level, say $500,000", and I think you have not me
... (read more)

I also wonder about the same thing. The Further Pledge does not answer this particular desire of committing to limited personal annual consumption while potentially saving for particular - or yet-to-be-defined - causes later on. This can also make sense if one believes one's future view on what to donate towards will be significantly more enlightened.

I could see such a pledge to not consume above X/year to be valued not overly much by third parties, as we cannot trust our future selves so much I guess, and even investing in own endeavors, even if officially EA... (read more)

Surprised to see nothing (did I overlook it?) about the people vs. the project/job: the title, and the lead sentence,

Some people seem to achieve orders of magnitudes more than others in the same job.

suggest the work focuses essentially on people's performance, but already in the motivational examples

For instance, among companies funded by Y Combinator the top 0.5% account for more than ⅔ of the total market value; and among successful bestseller authors [wait, it's their books, no?], the top 1% stay on the New York Times bestseller list more than 25 times lo

... (read more)

Thanks, I think antipathy effects towards the name “Effective Altruism”, or worse, “I’m an effective altruist”, are difficult to overstate.

Also, somewhat related to what you write, I happened to think to myself just today: "I am (and most of us are) just as much an effective egoist as an effective altruist" - after all, even the holiest of us probably cannot always help putting a significantly higher weight on our own welfare than on that of average strangers.

Nevertheless, some potential upside of the current term – equally I’m not sure it mat... (read more)

7
RyanCarey
3y
Agree that selection effects can be desirable and that dilution effects may matter if we choose a name that is too likable. But if we hold likability fixed, and switch to a name that is more appropriate (i.e. more descriptive), then it should select people more apt for the movement, leading to a stronger core.
2
Aditya Vaze
3y
Strongly agree. The potential benefits of selection effects are underrated in these discussions.

Love the endeavor! But the calculation method really should be changed before anyone interested in quantifying the combined CO2 + animal-suffering harm uses it, in my opinion: a weighted product model is inappropriate for expressing the total harm from two independent harms. I really think you want not to multiply the CO2 and animal-suffering harms but to sum them separately, with whatever weights the user chooses. In that sense, I fully agree with what MichaelStJules also mentioned. But I want to give an example that makes this very clear -... (read more)

2
BrownHairedEevee
2mo
I think it's the other way around. Under a weighted product model (WPM), the overall impact of both A and B is zero because either component is zero, so the WPM favors A and B over C. Whereas summing the climate and welfare components (with "reasonable" weights) would result in C being the most favorable.
2
VilleSokk
3y
Thank you for the feedback Florian! I will move this issue upwards in my to do list since so many of you have explained the issues with WPM.
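The disagreement between the two aggregation rules can be shown with a minimal sketch (the harm scores and weights below are made up for illustration, not the calculator's actual values):

```python
# Illustrative harm scores (higher = worse); weights chosen arbitrarily.
w_climate, w_welfare = 0.5, 0.5

options = {
    "A": {"climate": 10.0, "welfare": 0.0},  # very bad for climate, no animal harm
    "B": {"climate": 0.0, "welfare": 10.0},  # no climate harm, very bad for animals
    "C": {"climate": 3.0, "welfare": 3.0},   # moderate harm on both axes
}

def weighted_product(o):
    # WPM: any zero component zeroes out the total harm score.
    return o["climate"] ** w_climate * o["welfare"] ** w_welfare

def weighted_sum(o):
    # Weighted sum: independent harms accumulate.
    return w_climate * o["climate"] + w_welfare * o["welfare"]

for name, o in options.items():
    print(name, weighted_product(o), weighted_sum(o))
```

The product scores A and B as zero total harm (one factor is zero) and so ranks them above C, while the sum gives A and B a score of 5 against C's 3, correctly identifying C as least harmful: exactly the reversal BrownHairedEevee describes.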

I find "new enlightenment" very fitting. But I wonder whether it might at times be perceived as a not very humble name (it need not be a problem, but I wonder whether some, me included, might at times end up feeling uncomfortable calling ourselves part of it).

1
david_reinstein
2y
As I mentioned above, cf “Brights”
3
Owen Cotton-Barratt
3y
I agree that this is potentially an issue. I think it's (partially) mitigated the more it's used to refer to ideas rather than people, and the more it's seen to be a big (and high prestige) thing.

Spontaneously, I find "Broad Rationality" a plausible candidate (I found it used as a very specific concept mainly by Elster 1983, but Google gives me only 46 hits for '"broad rationality" elster', though there are of course more hits for the word combination more generally).

Thanks, interesting case!

1. We might have loved to see the cartel here succeed, but we should probably still be thankful for the more general principle underlying the ruling:

As background, it should be mentioned that it is a common thing to use so-called green policies/standards for disguised protectionist measures, aka green protectionism: protecting local/domestic industry by imposing certain rules, often with minor environmental benefits (as here at least according to the ruling), but helping to keep out (international) competition.

So for the 'average' ... (read more)

The post mentions 

Giving Green agrees with the consensus EA view that the framing of “offsetting personal emissions” is unhelpful

To some degree such a consensus seems natural, though I believe the issues with the idea of offsetting do not automatically mean that helping people specifically in search of effective (or thus maybe least-ineffective) offsetting possibilities is by nature ineffective.

I wonder: is the mentioned "consensus" detailed/made most obvious in any particular place(s) - blog, article, ... ?

3
alex lawsen (previously alexrjl)
3y
Indeed not, it will depend on the extent to which donors who seek offsets will be willing to donate to non-offset options if those are presented to them, and obviously also will depend on how effective offsets are compared to non-offset alternatives. This is why I called for these things to be modelled, rather than assumed, in the post. In answer to your question, [here's](https://forum.effectivealtruism.org/posts/Yix7BzSQLJ9TYaodG/ethical-offsetting-is-antithetical-to-ea) a post with 77 comments from a few years ago which will probably serve as a reasonable starting point. 

Indeed, I think I'm not the only one to whom the nudge towards eating more fully vegan would seem a highly welcome side-effect of a stay in the hotel.