All of RedStateBlueState's Comments + Replies

Yes, I just would have emphasized it more. I sort of read it as "yeah, this is something you might do if you're really interested", whereas I would say "this is something you should really probably do".

Mostly agreed, but I do think that donating some money, if you are able, is a big part of being in EA. And again this doesn’t mean reorienting your entire career to become a quant and maximize your donation potential.

1
James Herbert
15d
Oh but I did put 'donate some money' in my 'hobby' list - or am I misunderstanding you?

All punishment is tragic, I guess, in that it would be a better world if we didn't have to punish anyone. But I just don't think the fact that SBF on some level "believed" in EA (whatever that means, and if that is even true) - despite not acting in accordance with the principles of EA - is a reason that his punishment is more tragic than anyone else's.

This is just not true if you read about the case: he obviously knew he was improperly taking user funds and told all sorts of incoherent lies to explain it, and it's really disappointing to see so many EAs continue to believe he was well-intentioned. You can quibble about the length of the sentence, but he broke the law, and he was correctly punished for it.

7
Brad West
1mo
Please note that my previous post took the following positions: 1. That SBF did terrible acts that harmed people. 2. That it was necessary that he be punished. To the extent that it wasn't implied by the previous comment, I clarify that what he did was illegal (EDIT: which would involve a finding of culpable mental states that would imply that his wrongdoing was no innocent or negligent mistake). 3. The post doesn't even take a position as to whether the 25 years is an appropriate sentence. All of the preceding is consistent with the proposition that he also acted with the intention of doing what he could to better the world. Like others have commented, his punishment is necessary for general deterrence purposes. However, his genuine altruistic motivations make the fact that he must be punished tragic.

Suppose someone were to convince you that the interventions GiveWell pursues are not the best way to improve "global capacity", and that a better way would be to pursue more controversial/speculative causes like population growth or long-run economic growth or whatever. I just don't see EA reorienting GHW-worldview spending toward controversial causes like this, ever. The controversial stuff will always have to compete with animal welfare and AI x-risk. If your worldview categorization does not always make the GHW worldview center on non-controversial stuf... (read more)

How are you defining global capacity, then? This is currently being argued in other replies better than I can, but I think there's a good chance that the most reasonable definition implies optimal actions very different from GiveWell's. Although I could be wrong.

I don't really think the important part is the metric - the important part is that we're aiming for interventions that agree with common sense and don't require accepting controversial philosophical positions (beyond rejecting pro-local bias, I guess).

2
Richard Y Chappell
1mo
I don't have anything as precise as a definition, but something in the general vicinity of [direct effects on individual welfare + indirect effects on total productivity, which can be expected to improve future welfare]. It's not a priori that GiveWell is justified by any reasonable EA framework. It is, and should be, open to dispute. So I see the debate as a feature, not a bug, of my framework. (I personally do think it is justified, and try to offer a rough indication of why, here. But I could be wrong on the empirics. If so, I would accept the implication that the funding should then be directed differently.) On the "important part", distinguish three steps: (i) a philosophical goal: preserving room for common sense causes on a reasonable philosophical basis. (ii) the philosophical solution: specifying that reasonable basis, e.g. valuing reliable improvements (including long-term flow-through effects) to global human capacity. (iii) providing a metric (GDP, QALYs, etc.) by which we could attempt to measure, at least approximately, how well we are achieving the specified value. I'm not making any claims about metrics. But I do think that my proposed "philosophical solution" is important, because otherwise it's not clear that the philosophical goal is realizable (without moral arbitrariness).

Love the post, don't love the names given.

I think "capacity growth" is a bit too vague, something like "tractable, common-sense global interventions" seems better.

I also think "moonshots" is a bit derogatory, something like "speculative, high-uncertainty causes" seems better.

2
Richard Y Chappell
1mo
Oops, definitely didn't mean any derogation -- I'm a big fan of moonshots, er, speculative high-uncertainty (but high EV) opportunities! [Update: I've renamed them to 'High-impact long-shots'.] I disagree on "capacity growth" though: that one actually has descriptive content, which "common-sense global interventions" lacks. (They are interventions to achieve what, exactly?)

This post is a great exemplar for why the term “AI alignment” has proven a drag on AI x-risk safety. The concern is and has always been that AI would dominate humanity like humans dominate animals. All of the talk about aligning AI to “human values” leads to pedantic posts like this one arguing about what “human values” are and how likely AIs are to pursue them.

7
Matthew_Barnett
2mo
Is there a particular part of my post that you disagree with? Or do you think the post is misleading? If so, how? I think there are a lot of ways AI could go wrong, and "AIs dominating humans like how humans dominate animals" does not exhaust the scope of potential issues.

Altman, like most people with power, doesn’t have a totally coherent vision for why him gaining power is beneficial for humanity, but can come up with some vague values or poorly-thought-out logic when pressed. He values safety, to some extent, but is skeptical of attempts to cut back on progress in the name of safety.

Hmm, I still don't think this response quite addresses the intuition. Various groups wield outsized political influence owing to their higher rates of voting - seniors, a lot of religious groups, people with post-grad degrees, etc. Nonetheless, they vote in a lot of uncompetitive races where it would seem their vote doesn't matter. It seems wrong that an individual vote of theirs has much EV in an uncompetitive race. On the other hand, it seems basically impossible to mediate strategy such that there is still a really strong norm of voting in competitive races but ... (read more)

Sorry, I shouldn't have used the phrase "the fact that". Rephrased, the sentence should say "why would the universe taking place in an incomputable continuous setting mean it's not implemented". I have no confident stance on whether the universe is continuous or not; I just find the argument presented unconvincing.

That, and/or acausal decision theory is at play for this current election.

I will say that I think most of this stuff is really just dancing around the fundamental issue, which is that expected value of your single vote really isn't the best way of thinking about it. Your vote "influences" other people's vote, either through acausal decision theory or because of norms that build up (elections are repeated games, after all!).

6
MichaelStJules
3mo
I think the local causal expected value of your vote and most other things you do is actually a decent proxy, even if you accept acausal influence. The proxy probably doesn't work well if you include options to trade acausally (unless acausal trade is much weaker than non-trade-based acausal influence and so can be practically ignored). 1. I doubt your own voting has much causal impact on the voting behaviour of others. 2. I doubt there's much acausal influence from your voting on others locally on this Earth (and this quantum branch, under many-worlds). 3. And then everything gets multiplied with correlated agents across a multiverse, not just voting. So if voting didn't look good on its local causal EV compared to other things you could do with that time, then I doubt it would look good on its acausal EV (suitably defined, in case of infinities). I guess one caveat with acausal influence across a multiverse is that the agents you're correlated with could be voting on entirely different things, with totally different local stakes (and some voting for things you'd disagree with). But the same would be true for other things you do and your acausal influence over others for them. So, it’s not clear this further favours voting in particular over other things.
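
For readers unfamiliar with the framing being debated in this thread, here is a minimal sketch of the standard "causal expected value of a vote" calculation (probability of being decisive times the stakes of the outcome). Every number below is a hypothetical placeholder, not an estimate drawn from the thread:

```python
# Sketch of the local causal EV of a single vote, with made-up numbers.

p_decisive_competitive = 1e-7   # assumed chance one vote decides a competitive race
p_decisive_safe = 1e-12         # assumed chance one vote decides an uncompetitive race
value_of_better_outcome = 1e9   # assumed stakes of the election, in arbitrary units

ev_competitive = p_decisive_competitive * value_of_better_outcome  # = 100 units
ev_safe = p_decisive_safe * value_of_better_outcome                # = 0.001 units

print(ev_competitive, ev_safe)

# The causal EV collapses in uncompetitive races, which is why the comments above
# appeal to voting norms and acausal influence to explain voting in those races.
```
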
2
Robi Rahman
3mo
So you think your influence on future voting behavior is more impactful than your effect on the election you vote in?

I may go listen to the podcast if you think it settles this more, but on reading it I'm skeptical of Joscha's argument. It seems to skip the important leap from "implemented" to "computable". Why does the fact that our universe takes place in an incomputable continuous setting mean it's not implemented? All it means is that it's not being implemented on a computer, right?

2
Vasco Grilo
3mo
Interesting point. I do not think we have any empirical evidence that the universe is:
  • Continuous, because all measurements have a finite sensitivity.
  • Infinite, because all measurements have a finite scale.
Claiming the universe is continuous or infinite requires extrapolating infinitely far from observed data. For example, to conclude that the universe is infinite, people usually extrapolate from the universe being pretty flat locally to it being perfectly flat globally. This is a huge extrapolation:
  • Modelling our knowledge about the local curvature as a continuous symmetrical distribution, even if the best guess is that the universe is perfectly flat locally, there is actually a 0% chance it has zero local curvature, a 50% chance it has negative curvature, and a 50% chance it has positive curvature.
  • We do not know whether the curvature infinitely far away is the same as the local one.
In my mind, claiming the universe is perfectly flat and infinite based on it being pretty flat locally is similar to claiming that the Earth is flat and infinite based on it being pretty flat locally.

I think there's a non-negligible chance we survive until the death of the sun or whatever, maybe even after, which is not well-modelled by any of this.

To clarify: the point of this parenthetical was to state reasons why a world without transhumanist progress may be terrible. I don't think animal welfare concerns disappear or even are remedied much with transhumanism in the picture. As long as animal welfare concerns don't get much worse, however, transhumanism changes the world either from good to amazing (if we figure out animal welfare) or from terrible to good (if we don't). Assuming AI doesn't kill us, obviously.

I think the simplest answer is not that such a world would be terrible (except for factory farming and wild animal welfare, which are major concerns), but that a world with all these transhumanist initiatives would be much better.

8
Vasco Grilo
4mo
Thanks for pointing that out. Just to elaborate a little, the table below from Newberry 2021 has some estimates of how valuable the future can be. Even if one does not endorse the total view, person-affecting views may be dominated by possibilities of large future populations of necessary people.
4
Hayven Frienby
4mo
How could AI stop factory farms (aside from making humans extinct)? I'm honestly interested in the connection there. If you're referring to cellular agriculture, I'm not sure why any form of AI would be needed to accomplish that.

I am glad somebody wrote this post. I often have the inclination to write posts like these, but I feel like advice like this is sometimes good and sometimes bad, and it would be disingenuous for me to stake out a claim in any direction. Nonetheless, I think it's a good mental exercise to explicitly state the downsides of comparative claims and the upsides of absolute claims, and then people in the comments will assuredly explain (and have explained) the opposite.

"...for most professional EA roles, and especially for "thought leadership", English-language communication ability is one of the most critical skills for doing the job well"

Is it, really? Like, this is obviously true to some extent. But I'm guessing that English communication ability isn't much more important for most professional EA roles than it is for, e.g., academics or tech startup founders. These places are much more diverse in native language than EA, I think.

7
Arepo
4mo
Yeah, I want thought leaders to be highly proficient in logic, statistics, and/or (preferably 'and') some key area of science, engineering, philosophy or social science. I really don't see any strong need for them to speak pristine English, as long as they can write clearly enough in their native language that someone can easily translate it.

How did he deal with two-envelope considerations in his calculation of moral weights for OpenPhil?

[This comment is no longer endorsed by its author]
2
CarlShulman
5mo
I have never calculated moral weights for Open Philanthropy, and as far as I know no one has claimed that. The comment you are presumably responding to began by saying I couldn't speak for Open Philanthropy on that topic, and I wasn't.

This consideration is something I had never thought of before and blew my mind. Thank you for sharing.

Hopefully I can summarize it (assuming I interpreted it correctly) in a different way that might help people who were as befuddled as I was. 

The point is that, when you give probabilistic weight to two different theories of sentience being true, you have to assign units to sentience under each theory in order to compare them.

Say you have two theories of sentience that are similarly probable, one dependent on intelligence and one depend... (read more)
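
A toy numerical sketch of that unit-fixing issue (the credences and weight ratios below are invented purely for illustration; they are not Rethink Priorities', Open Philanthropy's, or anyone else's actual estimates):

```python
# Toy illustration of the two-envelope / unit-fixing problem described above.
# All credences and ratios are made up for illustration only.

credence_A = 0.5  # theory A: a chicken has 1/100 the welfare capacity of a human
credence_B = 0.5  # theory B: a chicken has the same welfare capacity as a human

# Fix "one human" as the unit and average the chicken's weight across theories:
chicken_in_human_units = credence_A * 0.01 + credence_B * 1.0   # = 0.505

# Fix "one chicken" as the unit instead and average the human's weight:
human_in_chicken_units = credence_A * 100.0 + credence_B * 1.0  # = 50.5

print(chicken_in_human_units)      # ~0.51 humans per chicken
print(1 / human_in_chicken_units)  # ~0.02 humans per chicken

# Same credences, but the implied chicken-to-human ratio differs by a factor of
# roughly 25 depending on which unit is fixed before averaging -- this is the
# two-envelope problem the comment is pointing at.
```
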

If OpenPhil’s allocation is really so dependent on moral weight numbers, you should be spending significant money on research in this area, right? Are you doing this? Do you plan on doing more of this given the large divergence from Rethink’s numbers?

9
Emily Oehlsen
5mo
Several of the grants we’ve made to Rethink Priorities funded research related to moral weights; we’ve also conducted our own research on the topic. We may fund additional moral weights work next year, but we aren’t certain. In general, it's very hard to guarantee we'll fund a particular topic in a future year, since our funding always depends on which opportunities we find and how they compare to each other — and there's a lot we don't know about future opportunities. I unfortunately won’t have time to engage with further responses for now, but whenever we publish research relevant to these topics, we’ll be sure to cross-post it on the Forum!

Yeah, I think there’s a big difference between how Republican voters feel about it and how their elites do. Romney is, uhh, not representative of most elite Republicans, so I’d be cautious there

Do we have any idea how Republican elites feel about AI regulation?

This seems like the biggest remaining question mark which will determine how much AI regulation we get. It's basically guaranteed that Republicans will have to agree to AI regulation legislation, and Biden can't do too much without funding in legislation. Also there's a very good chance Trump wins next year and will control executive AI Safety regulation.

3
Odd anon
6mo
Copy-pasting something I wrote elsewhere: Also, Mitt Romney seemed to be very concerned about AI risk during the hearings, and I don't think he was at all alone among the Republicans present.

Politics is really important, so thank you for recognizing that and adding to discussion about Pause.

But this post confuses me. You start by talking about how protests are stronger when they are centered on something people care about rather than simply policy advocacy. Which, I don't know if I agree with, but it's an argument that you can make. But then you shift toward advocating for regulation rather than a pause. Which is also just policy advocacy, right? And I don't understand why you'd expect it to have better politics than a pause. Your point about needing companies to prove they are safe is pretty much the same point that Holly Elmore has been making, and I don't know why it applies better to regulation than to a Pause.

Reading this great thread on SBF's bio, it seems like his main problem was stimulants wrecking his brain. He was absurdly overconfident in everything he did, did not think things through, didn't sleep, and admitted to being deficient in empathy ("I don't have a soul"). Much has been written about deeper topics like naive utilitarianism and trust in response to SBF, but I wonder if the main problem might just be the drug culture that exists in certain parts of EA. Stimulants should be used with caution, and a guy like SBF probably should never have been using them, or at least nowhere near the amount he was getting.

7
Jason
7mo
Is "the amount he was getting" publicly known? I think we need to be really careful to distinguish self-medication or recreational use from legitimate medical use to [edit: avoid inadvertently criticizing] appropriate medical treatment. The Adderall and Emsam doses referenced in a recent court order are not inappropriate for the diagnoses provided, if the prescriber and patient know what they are doing. I'm also not aware of any significant risk of medical-level doses triggering erratic behavior, but havent looked at the literature specifically. (I don't encourage unauthorized use of controlled substances, but also don't want to discourage those who have mental health conditions from accessing appropriate treatment.)

I think the judgement calls used in coming up with moral weights have less to do with caring about animals and more to do with how much you think attributes like intelligence and self-awareness have to do with sentience. They're applied to animals, but I think they're really more neuroscience/philosophy intuitions. The people who have the strongest/most out-of-the-ordinary intuitions are MIRI folk, not animal lovers.

Yeah, I guess that makes sense. But uh.... have other institutions actually made large efforts to preserve such info? Which institutions? Which info?

9
Holly Morgan
10mo
Huh, maybe not. Might be worth buying a physical copy of The Knowledge too (I just have). And if anyone's looking for a big project...

This might be a dumb question, but shouldn't we be preserving more elementary resources to rebuild a flourishing society? Current EA is kind of only meaningful in a society with sufficient abundant resources to go into nonprofit work. It feels like there are bigger priorities in the case of sub-x-risk.

7
Aaron Bergman
10mo
I've definitely thought about this and short answer: depends on who "we" is. A sort of made-up particular case I was imagining is "New Zealand is fine, everywhere else totally destroyed", because I think it targets the general class of situation most in need of action (I can justify this on its own terms but I'll leave it for now). In that world, there's a lot of information that doesn't get lost: everything stored in the laptops and servers/datacenters of New Zealand (although one big caveat, and the reason I abandoned the website, is that I lost confidence that info physically encoded in e.g. a cloud server in NZ would be de facto accessible without a lot of the internet's infrastructure physically located elsewhere), everything in all its university libraries, etc. That is a gigantic amount of info, and seems to pretty clearly satisfy the "general info to rebuild society" thing. FWIW I think this holds if only a medium-sized city were to remain intact; I'm not certain if it's, say, a single town in Northern Canada, and probably not a tiny fishing village, but in the latter case it's hard to know what a tractable intervention would be. But what does get lost? Anything niche enough not to be downloaded on a random NZer's computer or in a physical book in a library. Not everything I put in the archive, to be sure, but probably most of it. Also, 21GB of the type of info I think you're getting at is in the "non EA info for the post apocalypse" folder, because why not! :)
6
Holly Morgan
10mo
That was my first thought, but I expect many other individuals/institutions have already made large efforts to preserve such info, whereas this is probably the only effort to preserve core EA ideas (at least in one place)? And it looks like the third folder - "Non-EA stuff for the post-apocalypse" - contains at least some of the elementary resources you have in mind here. But yeah, I'm much more keen to preserve arguments for radical empathy, scout mindset, moral uncertainty etc. than, say, a write-up of the research behind HLI's charity recommendations. Maybe it would also be good to have an even smaller folder within "Main content (3GB)" with just the core ideas; the "EA Handbook" (39MB) sub-folder could perhaps serve such a purpose in the meantime. Anyway, cool project! I've downloaded :)

I don't think points about timelines reflect an accurate model of how AI regulations and guardrails are actually developed. What we need is for Congress to pass a law ordering some department within the executive branch to regulate AI, e.g. by developing permitting requirements or creating guidelines for legal AI research or whatever. Once this is done, the specifics of how AI is regulated are mostly up to that executive branch, which can and will change over time.

Because of this, it is never "too soon" to order the regulation of AI. We may not know exactly ... (read more)

I think most of the animal welfare neglect comes from the fact that if people are deep enough into EA to accept all of its "weird" premises they will donate to AI safety instead. Animal welfare is really this weird midway spot between "doesn't rest on controversial claims" and "maximal impact".

8
Aaron Bergman
11mo
Definitely part of the explanation, but my strong impression from interaction irl and on Twitter is that many (most?) AI-safety-pilled EAs donate to GiveWell and much fewer to anything animal related. I think ~literally except for Eliezer (who doesn’t think other animals are sentient), this isn’t what you’d expect from the weirdness model implied. Assuming I’m not badly mistaken about others’ beliefs and the gestalt (sorry) of their donations, I just don’t think they’re trying to do the most good with their money. Tbc this isn’t some damning indictment - it’s how almost all self-identified EAs’ money is spent and I’m not at all talking about ‘normal person in rich country consumption.’

On "End high-skilled immigration programs": The thing about big-brained stuff like this is it very rarely works. Consider:

What is p(doom|immigration restrictions)-p(doom|status quo immigration)? To that end: might immigration be useful in AI Safety research as well? 

What is E[utility from AI doom]-E[utility from not AI doom]? This also probably gets into all sorts of infinite ethics/Pascal's mugging issues.

How likely are you to actually change immigration laws like this?

What is the non-AI-related utility of immigration, before AI doom or assuming AI d... (read more)

6
ColdButtonIssues
11mo
"The other stuff seems more reasonable but if you're going to restrict immigrants' ability to work on AI you might as well restrict natives' ability to work on AI as well. I doubt that the former is much easier than the latter." This part of your comment I disagree on. There are specific provisions in US law to protect domestic physicians, immigrants on H1B visas have way fewer rights and are more dependent on their employers than citizen employees, and certain federal jobs or contractor positions are limited to citizens/permanent residents. I think this isn't outlandish, but certainly not hard. The end of high-skilled immigration won't happen, I agree. Even when RW populists actually win national elections, they don't do this. 

Let me make the contrarian point here that you don't have to build AGI to get these benefits eventually. An alternative, much safer approach would be to stop AGI entirely and try to enhance human/biological intelligence with drugs or other biotech. Stopping AGI is unlikely to happen and this biological route would take a lot longer, but it's worth bringing up in any argument about the risks vs. rewards of AI.

1
Karl von Wendt
1y
I fully agree, see this post.

I am nervous about wading into partisan politics with AI safety. I think there’s a chance that AI safety becomes super associated with one party due to a stunt like this, or worse becomes a laughing stock for both parties. Partisan politics is an incredibly adversarial environment, which I fear could undermine the currently unpolarized nature of AI safety.

Ooh, now this is interesting!

Running a candidate is one thing, actually getting coverage for this candidate is another. If we could get a candidate to actually make the debate stage in one of the parties that would be a big deal, but that would also be very hard. The one person who I can think who could actually get on the debate stage is Andrew Yang, if there ends up being a Democratic primary (which I am not at all sure about). If I recall he has actually talked about AI x-risk in the past? Even if that’s wrong, I know he has interacted with EA before, s... (read more)

Ahh, I didn't read it as you talking about the effects of Eliezer's past outreach. I strongly buy "this time is different", and not just because of the salience of AI in tech. The type of media coverage we're getting is very different: the former CEO of Google advocating AI risk and a journalist asking about AI risk in the White House press briefing is just nothing like we've ever seen before. We're reaching different audiences here. The AI landscape is also very different; AI risk arguments are a lot more convincing when we have a very good AI to point to... (read more)

Not to be rude but this seems like a lot of worrying about nothing. "AI is powerful and uncontrollable and could kill all of humanity, like seriously" is not a complicated message. I'm actually quite scared if AI Safety people are hesitant to communicate because they think the misinterpretation will be as bad as you are saying here; this is a really strong assumption, an untested one at that, and the opportunity cost of not pursuing media coverage is enormous. 

The primary purpose of media coverage is to introduce the problem, not to immediately push f... (read more)

1
Closed Limelike Curves
1y
Anything longer than 8 morphemes is probably not going to survive Twitter or CNN getting their hands on it. I like the original version ("Literally everyone will die") better.
7
Linch
1y
To first order, the problem isn't that the message is complicated. "Bioterrorism might kill you, here are specific viruses that they can use, we should stop that." is also not a complicated message, but it'll be a bad idea to indiscriminately spread that message as well.  Well there was DeepMind, and then OpenAI, and then Anthropic.  I don't view this as a crux. I weakly think additional attention is a cost, not a benefit. I meant in AI. Also I feel like this might be the crux here. I currently think that past communications (like early Yudkowsky and Superintelligence) have done a lot of harm (though there might have been nontrivial upsides as well). If you don't believe this you should be more optimistic about indiscriminate AI safety comms than I am, though maybe not to quite the same extent as the OP. Tbh in contrast with the three target groups you mentioned, I feel more generally optimistic about the "public's" involvement. I can definitely see worlds where mass outreach is net positive, though of course this is a sharp departure from past attempts (and failures) in communication. 

Well, maybe to both parts; it's a good sign, but a weak one. There are also concerns about response bias, etc., especially since YouGov doesn't specialize in polling these types of questions and there's no "ground truth" here to compare to.

I would caution people against reading too much into this. If you poll people about a concept they know nothing about ("AI will cause the end of the human race") you will always get answers that don't reflect real belief. These answers are very easily swayed, they don't cause people to take action like real beliefs would, they are not going to affect how people vote or which elites they trust, etc.

This is an important warning but to be clear it also isn’t necessarily always the case. Rethink Priorities has studied low salience issue polling a lot and we think there are some good methods. I don’t think YouGov has been very good about using those methods here though.

Largely agree, but results like this (1) indicate that if AI does become more salient the public will be super concerned about risks and (2) might help nudge policy elites to be more interested in regulating AI. (And it's not like there's some other "real belief" that the survey fails to elicit-- most people just don't have 'real beliefs' on most topics.)

Part of the motivation for this post is that I think AI Safety press is substantially different from EA press as a whole. AI safety is inherently a technical issue which means you don’t get this knee-jerk antagonism that happens when people’s ideology is being challenged (ie when you tell people they should be donating to your cause instead of theirs). So while I haven’t read the whole EA press post you linked to, I think parts of it probably apply less to AI.

With all due respect I think people are reading way too far into this, Eliezer was just talking about the enforcement mechanism for a treaty. Yes, treaties are sometimes (often? always?) backed up by force. Stating this explicitly seems dumb because it leads to posts like this, but let's not make this bigger than it is.

9
dsj
1y
It varies, but most treaties are not backed up by force (by which I assume we're referring to inter-state armed conflict). They're often backed up by the possibility of mutual tit-for-tat defection or economic sanction, among other possibilities.

The point of the letter is to raise awareness for AI safety, not because they actually think a pause will be implemented. We should take the win.

Thanks!

I hate to be someone who walks into a heated debate and pretends to solve it in one short post, so I hope my post didn’t come off too authoritative (I just genuinely have never seen debate about the term). I’ll look more into these.

Note that, if you are going to start thinking about these confounders, you have to consider confounders working against this relationship as well:

  • there is often a trade-off between more lucrative and more personally rewarding jobs
  • intuitively I think people who get more stressed are harder workers, though I'm certainly not confident in this claim.

The difference, from my perspective, is that the mixing of romantic and work relationships in a poly context has much more widespread damage. In monogamous relationships, the worst that can happen is that there is one incident involving 2 or so people, which can be dealt with in a contained way. In poly relationships, when you have a relationship web spanning a large part of an organization, this can cause very large harm to the company and to potential future employees. I, frankly, would feel very uncomfortable if I was at an organization where most of my coworkers were in a polyamorous relationship.

I think a better way of looking at this is that EA is very inviting of criticism but not necessarily that responsive to it. There are like 10 million critiques on the EA Forum, most with serious discussion and replies. Probably very few elicit actual change in EA. (I am of the opinion that most criticism just isn’t very good, and that there is a reason it hasn’t been adopted, but obviously this is debatable).

5
freedomandutility
1y
I don’t think I like this framing, because being responsive to criticism isn’t inherently good, because criticism isn’t always correct. I think EA is bad at the important middle step between inviting criticism and being responsive to it, which is seriously engaging with criticism.

I have the opposite intuition, actually - I'd guess that people closer to animals have more empathy for their suffering. Either way, I think this is mostly orthogonal to the cultural values of masculinity you are talking about.

2
Peter Rautenbach
1y
As someone who spent quite a bit of time in cattle country in Canada, I can say that your intuition is right. People living by these animals do truly tend to care about them. On the other hand, killing them is central to their entire way of life and the core of their economy. Without the animals, there would be no rural economy for much of Canada. Additionally, the difficulty of even modern rural life seems to create a certain hardness that is okay with animal death/suffering, and that hardness exists alongside their love for their animals.
1
Lixiang
1y
I also have that intuition.

Small point here, but unless you think that, even after adjusting for partisanship, working-class or rural Americans are more likely to oppose animal welfare action, I would take out the part about working class and rural and just leave right-wing. Otherwise, it just detracts from epistemic value as people create stereotypes about what political parties' voting bases look like.

2
Lixiang
1y
I'd guess that does still hold after adjusting, but I did take it out. One thing is that I'd guess working class, rural people are more likely to work in some area at least adjacent to the meat/fish/food industry, and so the vegetarian movement would go against their livelihood, which might make them more likely to oppose it. To be clear, I'm not blaming those people. I think the city-dwelling meat eater who deliberately shields themselves from the unpleasant sight of the process that makes their food is much more troublesome. Also, working class areas just don't have vegan food available as much. I'm sure many farmers do care about their animals.

Yeah, in fact I think most of the domestic opposition also comes from this backlash (in poli sci it's called "negative partisanship"). The right starts to oppose animal welfare policy not on its merits but simply because the left supports it - another reason to strive not to polarize the issue.

7
Larks
1y
This fits in to Bryan Caplan's simple theory of politics, on which the defining feature of the right wing is simply opposing the left.