I have experience with that: eating meat at home but, rather strictly, not at restaurants, for exactly the reasons you mention: it tends to be almost impossible to find a restaurant that seems to serve animals that weren't badly mistreated.
Doing that as vegan-in-restaurants (instead of vegetarian-in-restaurants) is significantly more difficult, but from my experience, one can totally get used to trying to remain veg* outside but non-veg* at home, where one can source food with some expectation of net-positive animal lives.
A few related experiences:
Surprised. Maybe worth giving it another try, looking longer for good imitations, given today's wealth of really good ones (besides, admittedly, a ton of bad ones, assuming you really need them to imitate the original that closely): I've had friends taste veg* burgers and chicken nuggets, and they were rather surprised when I told them afterwards that it had not been meat. I once had to double-check with the counter at a restaurant because I could not believe that what was on my plate was really not chicken. Maybe that speaks against the fine taste of me and some others, but I r...
Interesting. Curious: if such hair is a serious bottleneck/costly, do some hairdressers collect cut hair by default and sell/donate it for such use?
I tried to account for the difficulty of pinning down all relevant effects in our CBA by adding the somewhat intangible feeling about the gun backfiring (standing in for your point that there may be more general/typical but less easily quantified benefits of not censoring, etc.). Sorry if that was not clear.
More importantly:
I think your last paragraph gets to the essence: you're afraid the cost-benefit analysis is done naively, potentially ignoring the good reasons why we most often may not want to prevent the advancement of science/tech.
This does, h...
I have some sympathy with 'a simple utilitarian CBA doesn't suffice' in general, but I do not arrive at your conclusion; your intuition pump also doesn't lead me there.
It doesn't seem to require any staunch utilitarianism to arrive at: if a quick look at the gun's design suggests it has a 51% chance to shoot in your own face, and only a 49% chance to shoot the tiger you want to hunt as you would otherwise starve to death*, then drop the project of its development. Or halt until a more detailed examination might allow you to update to a more precise understanding.
Yo...
There are two factors mixed up here: @kyle_fish writes about an (objective) amount of animal welfare. The concept @Jeff Kaufman refers to instead includes the weight we humans put on those animals' welfare. For a meaningful conversation about the topic, we should not mix these two up.*
Let's briefly assume a parallel world with humans2: just like us, except they simply never cared about animals at all (weight = 0). Concluding "We thus have no welfare problem" is indeed the logical conclusion for humans2, but it would not suffice to inform a genetically mutated human...
One of my favorite passages is your remark that AI is in some ways rather white-boxy, while humans are rather black-boxy and difficult to align. There is some often-ignored truth in that (even if, in the end, what really matters is arguably that we're so familiar with human behavior that, overall, the black-boxiness of our inner workings may matter less).
Enjoyed the post, thanks! But it starts with an invalid deduction:
Since we don’t enforce pauses on most new technologies, I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium. We should only advocate for such heavy-handed government action if it’s clear that the benefits of doing so would significantly outweigh the costs.
(I added the emphasis)
Instead, it seems more reasonable to simply advocate for such action exactly if, in expectation, the benefits seem to [even just barely] outweigh the costs. Of course, we...
Agree with the testing question. I think there's a lot of scope for trying to implement Mutual Matching (versions) at small or large scale, though I have not yet stumbled upon the occasion to test it in real life.
I would not say my original version of Mutual Matching is in every sense more general. But it does indeed allow the organizer some freedom to set up the scheme in a way he deems conducive. It provides each contributor the ability to set (or know) her monotonically increasing contribution directly as a function of the leverage, which I think is ...
I think what you're describing is exactly (or almost exactly) Mutual Matching, which I wrote about here on the forum a while ago: Incentivizing Donations through Mutual Matching
Great! I propose a concise one-sentence summary that gets to the core of one of the main drawbacks of QF, and a link to Mutual Matching, a 'decentralized donor matching on steroids' that overcomes some of QF's issues and might be interesting for readers of this article.
QF really is an information-eliciting mechanism, but much less a mechanism for solving the (obviously!) most notorious problem with public goods: the lack of funding due to free-riding and weak incentives to contribute.
Yes, QF elicits the WTP, helping to inform about value & o...
Couldn't agree more with
In my impression, the most influential argument of the camp against the initiative was that factory farming just doesn't exist in Switzerland.[2] Even if it was only one of, rather than the most, influential arguments, I think this speaks volumes about both the (current) debate culture
In a similar direction, there's more that struck me as rather discouraging in terms of intelligent public debate:
In addition to the apparent popularity of this lie you pointed to*, from my experience in discussions about the initiative, the population also...
Fair point, even if my personal feeling is that it would be the same even without the killing (even if the killing alone would indeed suffice too).
We can amend the RC2 attempt to avoid the killing: start with a world containing the seeds for huge numbers of lives worth living, even if barely so, and propose to destroy that world for the sake of creating a world for a very few really rich and happy people! (Obviously with the nuance that the net happiness of the rich few is slightly larger than the sum of the others'.)
My gut feeling does not change: this RC2 would still feel repugnant to many, though I admit I'm less sure and might also be biased now, as in not wanting to feel different, oops.
Might not also a big portion of status-quo bias and/or omission bias (here both with similar effect) be at play, helping to explain the typical classification of the conclusion as repugnant?
I think this might be the case: when I ask myself whether many people who classify the conclusion as repugnant would not also have classified the 'opposite' conclusion as just as repugnant, had they instead been offered the same experiment 'the other way round':
Start with a world counting huge numbers of lives worth-living-even-if-barely-so, and propose to...
Interesting suggestion! It sounds plausible that "barely worth living" might intuitively be mistaken for something more akin to 'so bad they'd almost want to kill themselves, i.e. might well even have net-negative lives' (which I think is a poignant way to put what you write).
What about this to reduce the probably often overwhelming stigma attached to showcasing one's own donations?!
Research vegan cat food as an ideal EA cause!? It might also be ideal for a vegan human future as a 'side' effect.
Great question!
From what I read, Snowdrift is not quite "doing this", at least not insofar as the main aim of Mutual Matching is to ask more from a participant only if leverage increases! But there are close links; thanks for pointing out the great project!
Snowdrift has people contribute as an increasing function of the number of co-donors, but the leverage, which is implicit, stays constant at 2, always (except in those cases where it even declines, once others' chosen upper bounds are surpassed), if my quick calculation is right (pretty sure*). This may or ma...
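My quick calculation can be sketched as follows: a stylized model (assumption: each patron pledges a fixed amount per patron per month; the function names are mine, not Snowdrift's) shows the implicit leverage of joining approaching a constant 2, never exceeding it:

```python
def total_funds(n_patrons: int, pledge_per_patron: float = 0.001) -> float:
    # Stylized Snowdrift-like pool: each of the n patrons pays
    # pledge_per_patron for every patron, so the total is quadratic in n.
    return n_patrons * (pledge_per_patron * n_patrons)

def leverage_of_joining(n_existing: int, pledge: float = 0.001) -> float:
    # Extra funds your joining causes, divided by what you yourself pay.
    extra = total_funds(n_existing + 1, pledge) - total_funds(n_existing, pledge)
    own_payment = pledge * (n_existing + 1)
    return extra / own_payment

for n in (10, 100, 10_000):
    # Leverage approaches 2 from below as the patron count grows.
    print(n, round(leverage_of_joining(n), 3))
```

Algebraically the leverage is (2n+1)/(n+1), hence effectively 2 for any sizable pool, and crucially it does not increase as more people join.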
You're right. I see two situations here:
(i) The project has a strict upper limit on the funding required. In this case, you must (a) limit the pool of participants, and/or (b) limit their allowed contribution scales, and/or (c) maybe indeed limit the leverage progression, meaning you might incentivize people less strongly.
(ii) The project has strongly decreasing 'utility' returns to additional money (at some point). In this case, (a), (b), (c) from above may be used, or in theory you as organizer could simply not care: your funding-collection leverage still applies, b...
Agree with the "easily tens of millions a year", which, however, could also be seen to underline part of what I meant: it is really tricky to know how much we can expect from what exact effort.
I half agree with all your points, but see implicit speculative elements in them too, and hence remain with a maybe all-too-obvious statement: let's consider the idea seriously, but let's also not forget that we're obviously not the first to think of this. In addition to all the other uncertainties, let's keep in mind that no one seems to have made much serious progress in this domain, despite the possibly enormous value even private firms might have been able to capture from it if they had.
I miss a clear definition of economic growth here, and the discussion strongly reminds me of the environmental-resources-focused critique of growth that started with the 1970s Club of Rome report Limits to Growth; there might be value in examining the huge literature on such topics produced ever since.
Economic growth = increase in market value, is a typical definition.
Market value can increase if we paint the grey houses pink, or indeed if we design good computer games, or if we find great drugs that constantly awe us in insanely g...
I see enormous value in it and think it should be considered seriously.
On the other hand, the huge amount of value in it is also a reason I'm skeptical about it being obviously achievable: there are already individual giant firms that would internally save many millions annually (not to mention the many billions the first firm marketing something like that would immediately earn) by having a convenient, simple, secure stack 'for everything', yet none seems to have anything close to it (though I guess many may have something like that in some sub-sys...
Thank you! I was actually always surprised by H's mention of the taxation case as an example where maximin would be (readily) applicable.
IMHO, exactly what he explains in the rest of the article can also be used to see why optimal taxation/public finance should only in exceptional cases use a maximin principle as the proxy rule for a good redistributive process.
On the other hand, if you asked me whether I'd be happy if our actual, very flawed tax/redistribution systems were reformed so as to conform to maximin, I'd possibly very happily agree, simply as the lesser of two evils. And maybe that's part of the point; in this case, fair enough!
I find this a rather challenging post, even if I like the high-level topic a lot! I didn't read the entire linked paper, but I'd be keen to understand whether you think you can make a concise, simple argument as to why my following view may be missing something crucial that immediately follows from the Harsanyi vs. Rawls debate (if you think it does; feel free to ignore):
The Harsanyi 1975 paper that your linked post also cites (and which I recommend to any EA) is a great and rather complete rebuttal of Rawls' core maximin claim. The maximin principle, if taken ...
Just re anxiety prevalence: it seems to me that anxiety is a kind of continuum, and you may be able to say that 50% of people suffer from anxiety, or 5%, depending on where you make the cutoff. Your description implicitly seems to support exactly this view ("Globally, 284 million people—3.8% of all people—have anxiety disorders. Other estimates suggest that this might be even higher: according to the CDC, 11% of U.S. adults report regular feelings of worry, nervousness, or anxiety and ~19% had any anxiety disorder in the past year according to the N...
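To illustrate the cutoff point with purely made-up numbers (a hypothetical 'anxiety severity' score, normally distributed; nothing here is from the cited statistics): prevalence swings by an order of magnitude depending solely on where the diagnostic threshold is drawn.

```python
from statistics import NormalDist

# Hypothetical severity score in the population (illustrative assumption only).
severity = NormalDist(mu=50, sigma=15)

for cutoff in (60, 70, 80):
    # Prevalence = share of the population above the chosen threshold.
    prevalence = 1 - severity.cdf(cutoff)
    print(f"cutoff {cutoff}: {prevalence:.1%} count as 'having anxiety'")
```

With these toy parameters, moving the cutoff from 60 to 80 shifts 'prevalence' from roughly a quarter of the population to a few percent, without any change in the underlying distribution of suffering.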
I also wonder about the same thing. The Further Pledge does not answer this particular desire of committing to a limited personal annual consumption while potentially saving for particular, or yet-to-be-defined, causes later on. This can also make sense if one believes one's future view on where to donate will be significantly more enlightened.
I could see such a pledge not to consume above X/year being valued not overly highly by third parties, as we cannot trust our future selves so much, I guess, and even investing in one's own endeavors, even if officially EA...
Surprised to see nothing (did I overlook it?) about The People vs. The Project/Job: the title and the lead sentence,
Some people seem to achieve orders of magnitude more than others in the same job.
suggest the work focuses essentially on people's performance, but already in the motivational examples
...For instance, among companies funded by Y Combinator the top 0.5% account for more than ⅔ of the total market value; and among successful bestseller authors [wait, it's their books, no?], the top 1% stay on the New York Times bestseller list more than 25 times lo
Thanks, I think antipathy effects towards the name “Effective Altruism”, or worse, “I’m an effective altruist”, are difficult to overstate.
Also, somewhat related to what you write, I happened to think to myself just today: "I am (and most of us are) just as much an effective egoist as an effective altruist"; after all, even the holiest of us probably cannot always help putting a significantly higher weight on our own welfare than on that of average strangers.
Nevertheless, some potential upside of the current term – equally I’m not sure it mat...
Love the endeavor. But the calculation method really should be changed before anyone interested in quantifying the combined CO2 + animal-suffering harm uses it, in my opinion: a weighted product model is inappropriate for expressing the total level of two independent harms. I really think you want to not multiply the CO2 and animal-suffering harms but instead sum them separately, with whatever weights the user chooses. In that sense, I fully agree with what MichaelStJules also mentioned. But I want to give an example that makes this very clear -...
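A minimal sketch of the failure mode, with toy numbers and function names of my own (not the tool's actual formulas): under a weighted product model, a food with zero harm on one dimension scores zero total harm no matter how bad the other dimension is, while a weighted sum counts both harms independently.

```python
def product_harm(co2: float, animal: float,
                 w_co2: float = 0.5, w_animal: float = 0.5) -> float:
    # Weighted product model (the approach argued against here).
    return (co2 ** w_co2) * (animal ** w_animal)

def sum_harm(co2: float, animal: float,
             w_co2: float = 0.5, w_animal: float = 0.5) -> float:
    # Weighted sum: independent harms simply add up.
    return w_co2 * co2 + w_animal * animal

# A food with a huge CO2 harm but zero animal-suffering harm:
print(product_harm(100, 0))  # 0.0  -- the CO2 harm vanishes entirely
print(sum_harm(100, 0))      # 50.0 -- the CO2 harm is still counted
```

The product model effectively says a high-emission vegan meal is harmless, which is exactly the distortion the sum avoids.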
I find "new enlightenment" very fitting. But I wonder whether it might at times be perceived as a not very humble name (that need not be a problem, but I wonder whether some, me included, might at times end up feeling uncomfortable calling ourselves part of it).
Spontaneously, I find "Broad Rationality" a plausible candidate (I found it used as a very specific concept mainly by Elster 1983, though I get only 46 Google hits for '"broad rationality" elster'; there are of course more hits on the word combination more generally).
Thanks, interesting case!
1. We might have loved to see the cartel here succeed, but we should probably still be thankful for the more general principle underlying the ruling:
As background, it should be mentioned that it is common to use so-called green policies/standards as disguised protectionist measures, aka green protectionism: protecting local/domestic industry by imposing certain rules, often with minor environmental benefits (as here, at least according to the ruling), that help keep out (international) competition.
So for the 'average' ...
The post mentions
Giving Green agrees with the consensus EA view that the framing of “offsetting personal emissions” is unhelpful
To some degree such a consensus seems natural, though I believe the issues with the idea of offsetting do not automatically mean that helping people specifically in search of effective (or thus maybe least-ineffective) offsetting possibilities is by nature ineffective.
I wonder: is the mentioned "consensus" detailed/made most obvious in any particular place(s) - blog, article, ... ?
Indeed, I think I'm not the only one to whom the nudge towards eating more fully vegan would seem a highly welcome side-effect of a stay in the hotel.
This post calls out un-diversities in EA. Rather than being attributable to EA doing something wrong, I find these patterns mainly underline a basic fact about the type of people EA tends to attract. So I don't find the post fair to EA and its structure in a very general way.
I seem to detect in the article an implicit, underlying view of the EA story as something like:
'Person becoming EA -> World giving that person EA privileges'
But imho, this completely turns the real story upside down, which I mostl...