All of Omnizoid's Comments + Replies

Re 1, as Richard says: "Wenar scathingly criticized GiveWell—the most reliable and sophisticated charity evaluators around—for not sufficiently highlighting the rare downsides of their top charities on their front page. This is insane: like complaining that vaccine syringes don’t come with skull-and-crossbones stickers vividly representing each person who has previously died from complications. He is effectively complaining that GiveWell refrains from engaging in moral misdirection. It’s extraordinary, and really brings out why this concept matters." ...

This just seemed to be a list of false claims about things GiveWell supposedly forgot to consider, a series of ridiculous claims about philosophy, and no attempt to compare the benefits to the costs.  Yes, lots of EA charities have various small downsides, most of which are taken into account, but those are negligible compared to the hundreds of thousands of lives saved.  He suggests empowering local people, which is a good applause line, but it's vague.  Most local people are not in a position to do high-quality comparisons between different interventions.  

Thank you!  It was your speech at the OFTW meeting that largely inspired it. 

We all agree that you should get utility.  You are pointing out that FDT agents get more utility.  But once they are already in the situation where they've been created by the demon, FDT agents get less utility.  If you are the type of agent to follow FDT, you will get more utility, just as if you are the type of agent to follow CDT while being in a scenario that tortures FDTists, you'll get more utility.  The question of decision theory is: given the situation you are in, what gets you more utility--what is the rational thing to do? ...
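
The arithmetic behind this disagreement is the standard Newcomb calculation. A minimal sketch in Python, with the conventional $1,000 / $1,000,000 payoffs and an assumed predictor accuracy p (both just illustrative):

```python
# Expected value of one-boxing vs two-boxing in Newcomb's problem.
# Conventional payoffs; the predictor accuracy p is an assumption.

def expected_value(one_box: bool, p: float) -> float:
    big, small = 1_000_000, 1_000
    if one_box:
        # The opaque box is full whenever the predictor correctly foresaw one-boxing.
        return p * big
    # Two-boxers always get the small box, plus the big box when the predictor erred.
    return small + (1 - p) * big

for p in (0.5, 0.9, 0.999999):
    print(p, expected_value(True, p), expected_value(False, p))
```

On any accuracy above about 50.05%, the one-boxing type does better in expectation, even though, once the boxes are filled, two-boxing is always $1,000 richer; that is exactly the ex ante / ex post split at issue in this exchange.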

Wait, sorry, it's hard to see the broader context of this comment on account of being on my phone and comment sections being hard to navigate on the EA Forum. I don't know if I said Eliezer had 100% credence, but if I did, that was wrong.

He didn't quote it--he linked to it.  I didn't quote the broader section because it was ambiguous and confusing.  The reason not accounting for interactionist dualism matters is that it means he misstates the zombie argument, and his version is utterly unpersuasive. 

The demon case shows that there are cases where FDT loses, as is true of all decision theories.  If the question is which decision theory, when programmed into an AI, will generate the most utility, then that's an empirical question that depends on facts about the world.  If it's which act, once you're already in the situation, will get you the most utility, well, that's causal decision theory.  

Decision theories are intended as theories of what it is rational for you to do.  So they describe which choices are wise and which choices are foolish.  I think Eliez...

2
Scott Alexander
8mo
I thought we already agreed the demon case showed that FDT wins in real life, since FDT agents will consistently end up with more utility than other agents.

Eliezer's argument is that you can become the kind of entity that is programmed to do X, by choosing to do X. This is in some ways a claim about demons (they are good enough to predict even the choices you made with "your free will"). But it sounds like we're in fact positing that demons are that good - I don't know how to explain how they have a 999,999/million success rate otherwise - so I think he is right.

I don't think the demon being wrong one in a million times changes much. 999,999 of the people created by the demon will be some kind of FDT decision theorist with great precommitment skills. If you're the one who isn't, you can observe that you're the demon's rare mistake and avoid cutting off your legs, but this just means you won the lottery - it's not a generally winning strategy.

I don't understand why you think that the choices that get you more utility with no drawbacks are foolish, and the choices that cost you utility for no reason are wise. On the Newcomb's Problem post, Eliezer explicitly said that he doesn't care why other people are doing decision theory, he would like to figure out a way to get more utility. Then he did that. I think if you disagree with his goal, you should be arguing "decision theory should be about looking good, not about getting utility" (so we can all laugh at you) rather than saying "Eliezer is confidently and egregiously wrong" and hiding the fact that one of your main arguments is that he said we should try to get utility instead of failing all the time and then came up with a strategy that successfully does that.
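
For what it's worth, the two readings being traded on here can be put side by side with made-up numbers. A minimal sketch, assuming existence is worth +100 and losing your legs costs 60 (utilities invented for illustration, not taken from the thread):

```python
# Toy model of the leg-cutting demon. Assumed utilities (not from the thread):
# existing is worth +100, cutting off your legs costs 60, never existing is 0.
# The demon tries to create only agents who will cut off their legs, and it
# predicts an agent's type correctly with probability `accuracy`.

def ex_ante_utility(cuts_legs: bool, accuracy: float) -> float:
    """Expected utility of *being the type of agent* who cuts (or refuses),
    evaluated before the demon decides whether to create you."""
    u_exist, u_legs = 100.0, 60.0
    p_created = accuracy if cuts_legs else (1 - accuracy)
    return p_created * (u_exist - (u_legs if cuts_legs else 0.0))

acc = 999_999 / 1_000_000
print("cutter type: ", ex_ante_utility(True, acc))   # ~39.99996
print("refuser type:", ex_ante_utility(False, acc))  # ~0.0001

# Ex post, once you already exist, the demon's accuracy drops out:
print("cut now:   ", 100.0 - 60.0)  # 40
print("refuse now:", 100.0)         # 100
```

The first pair of numbers is the sense in which the cutter type "wins"; the second pair is the sense in which, once you exist, refusing dominates. The whole thread is a dispute over which comparison "rational" should track.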

I would agree with the statement "if Eliezer followed his decision theory, and the world was such that one frequently encountered lots of Newcomb's problems and similar, you'd end up with more utility."  I think my position is relatively like MacAskill's in the linked post, where he says that FDT is better as a theory of the agent you should want to be than of what's rational.  

But I think that rationality won't always benefit you.  I think you'd agree with that.  If there's a demon who tortures everyone who believes FDT, then believing FD...

6
Scott Alexander
8mo
I think rather than say that Eliezer is wrong about decision theory, you should say that Eliezer's goal is to come up with a decision theory that helps him get utility, and your goal is something else, and you have both come up with very nice decision theories for achieving your goal. (what is your goal?)

My opinion on your response to the demon question is "The demon would never create you in the first place, so who cares what you think?" That is, I think your formulation of the problem includes a paradox - we assume the demon is always right, but also, that you're in a perfect position to betray it and it can't stop you. What would actually happen is the demon would create a bunch of people with amputation fetishes, plus me and Eliezer who it knows wouldn't betray it, and it would never put you in the position of getting to make the choice in real life (as opposed to in an FDT algorithmic way) in the first place.

The reason you find the demon example more compelling than the Newcomb example is that it starts by making an assumption that undermines the whole problem - that is, that the demon has failed its omniscience check and created you who are destined to betray it. If your problem setup contains an implicit contradiction, you can prove anything.

I don't think this is as degenerate a case as "a demon will torture everyone who believes FDT". If that were true, and I expected to encounter that demon, I would simply try not to believe FDT (insofar as I can voluntarily change my beliefs). While you can always be screwed over by weird demons, I think decision theory is about what to choose in cases where you have all of the available knowledge and also a choice in the matter, and I think the leg demon fits that situation.

I know you said you didn't want to repeatedly go back and forth, but . . . 

Yes, I agree that if you have some psychological mechanism by which you can guarantee that you'll follow through on future promises--like programming an AI--then that's worth it.  It's better to be the kind of agent who follows FDT (in many cases).  But the way I'd think about this is that this is an example of rational irrationality, where it's rational to try to get yourself to do something irrational in the future because you get rewarded for it.  But remember...

2
Scott Alexander
8mo
I guess any omniscient demon reading this to assess my ability to precommit will have learned I can't even precommit effectively to not having long back-and-forth discussions, let alone cutting my legs off. But I'm still interested in where you're coming from here, since I don't think I've heard your exact position before.

Have you read https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality ? Do you agree that this is our crux? Would you endorse the statement "Eliezer, using his decision theory, will usually end up with more utility than me over a long life of encountering the sorts of weird demonic situations decision theorists analyze, I just think he is less formally-rational"? Or do you expect that you will, over the long run, get more utility than him?

Well put!  Though one nitpick: I didn't defer to Eliezer much.  Instead, I concluded that he was honestly summarizing the position.  So I assumed physicalism was true because I assumed, wrongly, that he was correctly summarizing the zombie argument. 

Oh sorry, yeah I misunderstood what point you were making.  I agree that you want to be the type of agent who cuts off their legs--you become better off in expectation.  But the mere fact that the type of agent who does A rather than B gets more utility on average does not mean that you should necessarily do A rather than B.  If you know you are in a situation where doing A is guaranteed to get you less utility than B, you should do B.  The question of which agent you should want to be is not the same as which agent is acting rationally...

4
Scott Alexander
8mo
Sorry if I misunderstood your point. I agree this is the strongest objection against FDT. I think there is some sense in which I can become the kind of agent who cuts off their legs (ie by choosing to cut off my legs), but I admit this is poorly specified.

I think there's a stronger case for, right now, having heard about FDT for the first time, deciding I will follow FDT in the future. Various gods and demons can observe this and condition on my decision, so when the actual future comes around, they will treat me as an FDT-following agent rather than a non-FDT-following agent. Even though future-created-me isn't exactly in a position to influence the (long-since gone) demon, current me is in a position to make this decision for future relevant situations, and should decide to follow FDT in general. Part of this decision I've made involves being the kind of person who would take the FDT option in hypothetical scenarios.

Then there's the additional question of whether to defect against the demons/gods later, and say "Haha, back in August 2023 I resolved to become an FDT agent, and I fooled you into believing me, but now that I've been created I'm just going to not cut off my legs after all". I think of this as - suppose every past being created by the demon has cut off its legs, ie the demon has a 100% predictive success rate over millions of cases. So the demon would surely predict if I would do this. That means I should (now) try really hard not to do this. Cf. Parfit's Hitchhiker.

Can I bind my future self like this? I think empirically yes - I think I have enough honor that if I tell hypothetical demon gods now that I'm going to do various things, I can actually do them when the time comes. This will be "irrational" in some sense, but I'll still end up with more utility than everyone else.

Is there some sense in which, if I decide not to cut off my legs, I would wink out of existence? I admit feeling a superstitious temptation to believe this (a non-superstit...

Thanks for this comment.  I agree with 2.  On 3, it seems flatly irrational to have super high credences when experts disagree with you and you do not have any special insights.

If an influential person who is given lots of deference is often wrong, that seems notable.  If people were largely influenced by my blog, and I was often full of shit, expressing confident views on things I didn't know about, that would be noteworthy.  

Agree with 4. 

On 5, I wasn't intending to criticize EA or rationalism.  I'm a bit lukewarm on rationa...

3
JWS
8mo
I guess on #3, I suggest reading Inadequate Equilibria. I think it's given me more insight into Eliezer's approach to making claims. The Bank of Japan example he uses in the book is probably, ironically, one of the clearest examples of an incorrect, egregious and overconfident mistake. I think the question of when to trust your own judgement over experts, of how much to incorporate expert views into your own, and how to identify experts in the first place is an open and unsolved issue (perhaps insoluble?).

Point taken on #5, it was definitely my most speculative point.

I think it comes back to Point #1 for me. If your core aim was "to show that Eliezer is worthy of much less deference than he currently is given", then I'd want you to show how much deference is given to him over and above the validity of his ideas spreading in the community, its mechanisms, and why that's a potential issue, more than litigating individual object-level cases. Instead, if your issue is the commonly-believed views in the community that you think are incorrect, then you could have argued against those beliefs without necessarily invoking or focusing on Eliezer. In a way the post suffers from kinda trying to be both of those critiques at once, at least in my opinion. That's at least the feedback I'd give if you wanted to revisit this issue (or a similar one) in the future.

If you claim to be justified in having a near-zero credence in some view, and the reason for that is that you don't know what words mean that are totally standard among people who are informed about the subject matter, and you then go on to dismiss the informed people who disagree with you, that seems pretty egregious. 

I'm sympathetic to that. I just also get a whiff of "it's my group's prerogative to talk about this and he didn't pay proper deference". As a point of comparison, I'm sympathetic to theologians who thought the new atheists were total yokels who didn't understand any of the subtleties of their religions and their arguments, because they often didn't. But I also think the new atheists were more right and I don't think it would have been a good use of time for them to understand more. I'm not trying to be insulting to academic philosophy but rather insist tha...

He didn't understand how the field was using a certain word.  If a person uses words incorrectly, based on a misreading, and then interprets arguments as being obviously wrong based on their misinterpretation, they are making errors, not just failing to agree with the consensus view.  

Misunderstanding someone else's claim doesn't strike me as an "egregious error". I don't feel he should have to understand the entirety of the academic view to have his own view. Although I agree he was mistaken to dismiss that view using words he had misunderstood.

I don't think so.  I argued in detail against each of Eliezer's views.  I think I do know that Eliezer is wrong about zombies, decision theory, and animal consciousness.  I didn't just point to what experts believe, I also explained why Eliezer is wrong. 

4
Holly_Elmore
8mo
My read on what you meant by "wrong about zombies" was that he didn't understand what the field was claiming with the use of certain words and was dismissing a strawman. 

Let's stipulate you have good evidence that you are the only being in the universe, and no one else will exist in the future.  You don't care about what happens to anyone else. 

1
Max H
8mo
OK. Simultaneously believing that and believing the truth of the original setup seems dangerously close to believing a contradiction. But anyway, you don't really need all those stipulations to decide not to chop your legs off; just don't do that if you value your legs. (You also don't need FDT to see that you should defect against CooperateBot in a prisoner's dilemma, though of course FDT will give the same answer.)

A couple of general points to keep in mind when dealing with thought experiments that involve thorny or exotic questions of (non-)existence:

* "Entities that don't exist don't care that they don't exist" is vacuously true, for most ordinary definitions of non-existence. If you fail to exist as a result of your decision process, that's generally not a problem for you, unless you also have unusual preferences over or beliefs about the precise nature of existence and non-existence.[1]
* If you make the universe inconsistent as a result of your decision process, that's also not a problem for you (or for your decision process). Though it may be a problem for the universe creator, which in the case of a thought experiment could be said to be the author of that thought experiment.

An even simpler view is that logically inconsistent universes don't actually exist at all - what would it even mean for there to be a universe (or even a thought experiment) in which, say, 1 + 2 = 4? Though if you accepted the simpler view, you'd probably also be a physicalist.

I continue to advise you to avoid confidently pontificating on decision theory thought experiments that directly involve non-existence, until you are more practiced at applying them correctly in ordinary situations.

[1] e.g. unless you're Carissa Sevar

If your action affects what happens in other Everett branches, such that there are actual, concretely existing people whose well-being is affected by your action to blackmail, then that is not relevantly like the case given by Schwarz.  That case seems relevantly like the twin case, where I think there might be a way for a causal decision theorist to accommodate the intuition, but I am not sure.  

We can reconstruct the case without torture-vs-dust-specks reasoning, because that's plausibly a confounder.  Suppose a demon is likely to create peo...

4
TAG
8mo
It's more effective to show they are confused about maths, physics and AI, since it is much easier to establish truth/consensus in those fields.
3
Scott Alexander
8mo
I don't want to get into a long back-and-forth here, but for the record I still think you're misunderstanding what I flippantly described as "other Everett branches" and missing the entire motivation behind Counterfactual Mugging. It is definitely not supposed to directly make sense in the exact situation you're in. I think this is part of why a variant of it is called "updateless", because it makes a principled refusal to update on which world you find yourself in, in order to (more flippant not-quite-right description) program the type of AIs that would win weird games played against omniscient entities.

If the demon would only create me conditional on me cutting off my legs after I existed, and it was the specific class of omniscient entity that FDT is motivated by winning games with, then I would endorse cutting off my legs in that situation. (As a not-exactly-right-but-maybe-helpful intuition pump, consider that if the demon isn't omniscient - but simply reads the EA Forum - or more strictly can predict the text that will appear on the EA Forum years in the future - it would now plan to create me but not you, and I with my decision theory would be better off than you with yours. And surely omniscience is a stronger case than just reads-the-EA-Forum!)

If this sounds completely stupid to you and you haven't yet read the LW posts on Counterfactual Mugging, I would recommend starting there; otherwise, consider finding a competent and motivated FDT proponent (ie not me) and trying to do some kind of double-crux or debate with them. I'd be interested in seeing the results.

It means your preference ordering says that it's very good for you to be alive.  

We can stipulate that you get decisive evidence that you're not in a simulation. 

1
Max H
8mo
So then chop your legs off if you care about maximizing your total amount of experience of being alive across the multiverse (though maybe check that your measure of such experience is well-defined before doing so), or don't chop them off if you care about maximizing the fraction of high-quality subjective experience of being alive that you have. This seems more like an anthropics issue than a question where you need any kind of fancy decision theory though. It's probably better to start by understanding decision theory without examples that involve existence or not, since those introduce a bunch of weird complications about the nature of the multiverse and what it even means to exist (or fail to exist) in the first place.

We could ask a physicalist too--Frankish, Richard Brown, etc. 

Just want to say, I also agree that much of the original language was inflammatory.  I think I have fixed it to make it less inflammatory, but do let me know if there are other parts that you think are inflammatory.  

In your shoes, I'd remove "egregiously" from the title, but I'm not great at titles and also occupy a different epistemic status than you (eg I think FDT is better than CDT or EDT).

Your response in the decision theory case was that there's no way that a rational agent could be in that epistemic state.  But we can just stipulate it for the purpose of the hypothetical.  

In addition, the scenario doesn't require absurdly low odds.  Suppose that a demon has a 70% chance of creating people who will chop their legs off.  You've been created and your actions will affect no one else.  FDT implies that you have strong reason to chop your legs off even though it doesn't benefit you at all.  
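
To make the arithmetic concrete: reusing the toy utilities from the sketch earlier in this thread (existence +100, leg loss 60, both invented), and reading the 70% figure as the demon's reliability, the bare expected-value comparison looks like this. Note this is only the naive calculation, not a full FDT subjunctive-dependence analysis:

```python
# Same toy model as the earlier sketch, with the demon's reliability at 70%.
# Utility assumptions (not from the post): existing +100, losing legs -60.

acc, u_exist, u_legs = 0.7, 100.0, 60.0

# Ex ante (type/policy view): weighted by the chance the demon creates you.
print("cutter type: ", acc * (u_exist - u_legs))   # 28.0
print("refuser type:", (1 - acc) * u_exist)        # 30.0

# Ex post (you have already been created): the accuracy drops out entirely.
print("cut now:   ", u_exist - u_legs)  # 40.0
print("refuse now:", u_exist)           # 100.0
```

On these made-up numbers, refusing wins on both readings at 70%, which illustrates the point being made: the case against chopping does not depend on the demon's odds being absurdly low.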

1
Max H
8mo
I did not say this.

OK, in that case, the agent in the hypothetical should probably consider whether they are in a short-lived simulation.

No, it might say that, depending on (among other things) what exactly it means to value your own existence.

Some counterarguments are sufficiently strong that they are decisive.  These are good examples.  

I have read quite a lot of philosophy.  Less than academics--I'm currently an undergrad--but my major is philosophy and it's my primary interest, such that I spend lots of time reading and writing about it.

7
David Mathers
8mo
How long have you been studying philosophy? My views sometimes changed quite radically in grad school: I used to think problem of evil was "decisive", but now I think multiverse theodicies might work. I used to think Moore's proof of the existence of the external world was question-begging garbage as an undergrad, but then I read Scott Soames' account of its historical significance in grad school one day, and decided, no, actually Moore was totally right, and it's a deep insight. I used to buy Chalmers' 2-d zombie argument against [lol I originally idiotically wrote for here] materialism, and then one day at a conference in either my 2nd year of masters or 1st year of PhD, I decided that no, actually, I am now a physicalist. When I was an undergrad, I had idealist sympathies at one point, but now I think idealism is the dumbest view ever. 

I tend to think Hanson more reliably generates true beliefs than Eliezer.

I don't think I really overgeneralized from limited data.  Eliezer talks about tons of things, most of which I don't know about.  I know a lot about maybe 6 things that he talks about and expresses strong views on.  He is deeply wrong about at least four of them. 

1
Jonas Hallgren
8mo
I didn't mean it in this sense. I think the lesson you drew from it is fair in general, I was just reacting to the things I felt you pulled under the rug, if that makes sense.

Or a sign that knowing about philosophy decreases support for Rand. 

Philosophia has, I think, an acceptance rate decently below 50%.  

I'd disagree with the notion that "this post made a really serious effort to optimize for maximizing damage to the reputation to at least one of the major Schelling points in the Rationality community."  The thing I was optimizing for was getting people to be more skeptical about Eliezer's views, not ruining his career or reputation.  In fact, as I said in the article, I think he often has interesting, clever, and unique insights and has made the world a better place. 

See also my reply to Eliezer.  In short, if you're writing a post arg...

8
Linch
8mo
Yeah it seems pretty obvious to me that there are far worse things you could've said if you wanted to optimize for reputational damage, assuming above 75th percentile creativity and/or ruthlessness. 

Yes, there are some arguments of questionable efficacy for the conclusion that zombieism entails epiphenomenalism.  But notably:

  1. Eliezer hasn't given any such argument. 
  2. Eliezer said that zombieists are by definition epiphenomenalists.  That's just flatly false. 

I tried very hard to phrase everything as clearly as possible.  But if people's takeaway is "people who know about philosophy of mind and decision theory find Eliezer's views there deeply implausible and indicative of basic misunderstandings," then I don't think that's the end of the world.  Of course, some would disagree. 

Yes, sorry I should have had it start in personal blog.  I have now removed the incendiary phrasing that you highlight.

4
Lizka
8mo
Thanks for editing your post.  I've moved the post back to Frontpage (although I don't think this changes much) — see this comment. We don't generally move posts to Frontpage if the authors mark them as Personal Blog themselves. Do you want us to move this post back? 

Hi Eliezer.  I actually do quite appreciate the reply because I think that if one writes a piece explaining why someone else is systematically in error, it's important that the other person can reply. That said . . . 

You are misunderstanding the point about causal closure.  If there were some isomorphic physical law that resulted in the same physical states of affairs as those resulting from consciousness, the physical would be causally closed.  I didn't say that your description of what a zombie is was the misrepresentation.  The poi...

5
Pseudotruth
8mo
Eliezer quoted the SEP entry as support for his position and you, in your response, cut off the part of said quote which contained the support and only responded to the remaining part which did not contain the supporting point (e.g. the key words: causal closure). This seems bad-faith to me, even though I think you're right that Eliezer did not account for interactionist dualism (though I disagree that it is necessarily a critical error; I don't think one should be expected to note every possibility, no matter how low-probability, in the course of an argument.)

'Chalmers, Goff, or Chappell'  This is stacking the deck against Eliezer rather unfairly; none of these three are physicalists, even though physicalism is the plurality, and I think still slight majority, position in the field: https://survey2020.philpeople.org/survey/results/4874 

Re Chalmers agreeing with you: he would, he said as much in the LessWrong comments, and I recently asked him in person and he confirmed it. In Yudkowsky's defense, it is a very typical move among illusionists to argue that zombieists can't really escape epiphenomenalism, not just some ignorant outsider's move (I think I recall Keith Frankish and Francois Kammerer both making arguments like this). That said, I remain frustrated that the post hasn't been updated to clarify that Chalmers disagrees with this characterization of his position.

I mean, it's always possible.  But the views I defend here are utterly mainstream.  Virtually no people in academia think either FDT, Eliezer's anti-zombie argument, or animal nonconsciousness are correct.  

I obviously disagree that this is the conclusion of the LessWrong comments, many of which I think are just totally wrong!  Notably, I haven't replied to many of them because the LessWrong bot makes it impossible for me to post more than once per hour because I have negative Karma on recent posts. 

-2
Max H
8mo
You've commented 12 times so far on that post, including on all 4 of the top responses.

My advice: try engaging from a perspective of inquiry and seeking understanding, rather than agreement / disagreement. This might take longer than making a bunch of rapid-fire responses to every negative comment, but will probably be more effective.

My own experience commenting and getting a response from you is that there's not much room for disagreement on decision theory - the issue is more that you don't have a solid grasp of the basics of the thing you're trying to criticize, and I (and others) are explaining why. I don't mind elaborating more for others, but I probably won't engage further with you unless you change your tone and approach, or articulate a more informed objection.

Putting aside whether or not what you say is correct, do you think it's possible that you have fallen prey to the overconfidence that you accuse Eliezer of? This post was very strongly written and it seems a fair number of people disagree with your arguments.

Okay yeah, fair.  Here's my friend's publication record: https://philpeople.org/profiles/amos-wollen  

Though worth noting that the other reviewer rejected it.  It's not clear how common it is for one reviewer to be willing to accept your paper after heavy revisions. 

Fair point that many rejected things probably received one "revise and resubmit".

The link to your friend's philpapers page is broken, but I googled him and I think mediocre journals is probably mostly the right answer, mixed a bit with "your friend is very talented" (though to be clear, even 5 mediocre pubs is impressive for a 2nd year undergrad, and I would predict your friend can go to a good grad school if he wants to). Philosophia is a generalist journal I never read a single paper in during the 15 or so years I was reading philosophy papers generally, ...

2
JoshuaBlake
8mo
Fixed link
2
david_reinstein
8mo
Phew. Please fix when you have a moment, thanks. (Otherwise people may start to think they are not understanding things and give up reading.)

They will still endorse the same things as side-constraints views do (e.g. not killing one to save 5). 

Eliezer talks about lots of topics that I don't know anything about.  So I can only write about the things that I do know about.  There are maybe five or six examples of that, and I think he has utterly crazy views in perhaps all except one of those cases.  

I can't fact-check him on physics or nanotech, for instance. 

Eliezer has a huge number of controversial beliefs--about AI, physics, Newcomb's problem, zombies, nanotech, etc.  Many of these are about things I know nothing about.  But there are a few things where he adopts deeply controversial views that I know something about.  And almost every time--well above half the time--that I know enough to fact-check him, he turns out to be completely wrong in embarrassing ways. 

Based on this essay it seems like by "completely wrong in embarrassing ways" you mean that he's not knowledgeable about or respectful of what the local experts think. It's not like we know they are right on most of these questions. 

Yeah, though we can imagine that everyone feels a similar urge to smoke, but it's only the people with the lesion who ultimately decide to smoke. 

2
Sylvester Kollin
8mo
As Ahmed notes (chapter 4.3.1), if the lesion doesn't work through your beliefs and desires, smoking is not a genuine option, and so this is not an argument against evidentialism.
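
For readers who don't know the case, here is the usual smoking-lesion arithmetic as a minimal sketch; every probability and utility below is invented for illustration:

```python
# Standard smoking-lesion arithmetic with illustrative numbers (all assumed):
# a lesion causes both smoking and cancer; smoking itself is harmless and
# worth a small amount of utility.

p_lesion = 0.2
u_cancer, u_smoke = -1000.0, 10.0

# Naive evidential reasoning treats your act as evidence about the lesion:
p_lesion_given_smoke, p_lesion_given_abstain = 0.9, 0.1
ev_smoke = u_smoke + p_lesion_given_smoke * u_cancer      # -890.0
ev_abstain = p_lesion_given_abstain * u_cancer            # -100.0
print(ev_smoke, ev_abstain)  # naive EDT verdict: abstain

# Causal reasoning holds P(lesion) fixed, since smoking doesn't cause it:
print(u_smoke + p_lesion * u_cancer, p_lesion * u_cancer)  # -190.0 -200.0 -> smoke
```

Ahmed's point, as cited above, bears on the first calculation: if the lesion doesn't work through your beliefs and desires, the act of smoking isn't a genuine option for a deliberating agent, so the conditional probabilities it relies on never get a grip.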

This is, I think, a really good comment.  The animal consciousness stuff I think is a bit crazy.  If Dennett thinks that as well . . . well, I never gave Dennett much deference.  

I was exaggerating a bit when I said that no undergraduate would make that error.  

I don't think that Schwarz saying he might publish it is much news.  I have a friend who is an undergraduate in his second year and he has 5 or 6 published philosophy papers--I'm also an undergraduate and I have one forthcoming.  

Do we know what journal Eliezer was publishing in?  I'd expect it not to get published in even a relatively mediocre journal, but I might be wrong. 

Thanks!

I don't know the journal Schwarz rejected it for, no. If your friend has 5 or 6 publications as an undergrad, then either they are a genius, or they are unusually talented and also very ruthless about identifying small, technical objections to things famous people have said, or they are publishing in extremely mediocre journals. The second and third things are probably not what's going on when Wolfgang gives an R&R to the Yudkowsky/Soares FDT paper. It is an attempt to give a big new fundamental theory, not a nitpick. And regardless of the part...

I know this is very late, but I wrote a piece a while ago about this.  I bite the bullet.   https://benthams.substack.com/p/against-conservatism-about-value

It was trying to argue for 2.  I think that if we give up any side constraints, which is what my piece argued for, we get something very near utilitarianism--at the very least consequentialism.  Infinitarian ethics is everyone's problem. 

If we reject any side constraints--which my argument supports--then we get something very near utilitarianism.  

1
RyanBaylon
9mo
Correct me if I'm wrong, but it doesn't seem like virtue ethics or care ethics relies on side constraints -- they seem uniquely deontic. I'm not sure that rejecting deontology implies a form of consequentialism, as virtue or feminist ethics are still viable at that point. 

Thanks for the reply!  I was focusing on the most common animals that Americans eat, though I should perhaps have noted that.  I disagree that the focus was very much on physical suffering--I talk about sleep deprivation and the sadness of being separated from parents, to give a few examples. 

4
Tyler Johnston
1y
I might be taking it too literally, but given these points, it could be worth renaming this post from a "comprehensive fact sheet of almost all the ways animals are mistreated in factory farms" (I wish such a list could fit in a few thousand words...) to something like a "fact sheet of some of the most salient causes of suffering on factory farms." Then again, I realize that's a worse title and has way less rhetorical power... maybe you could come up with something more creative than me! Thanks for writing this.

Thanks for the comment.  What I said was "Anyone who is not a moral imbecile recognizes that it’s wrong to contribute to senseless cruelty for the sake of comparatively minor benefits."  The point is that it's obvious that one shouldn't cause lots of torture for the sake of minor benefits.  If, as I claim, that is what happens when one eats meat, then this is a good case against eating meat. 

9
Ariel Simnegar
1y
I think it's worth increasing the degree to which you put your prospective reader in mind when writing essays like this. As they say, "you catch more flies with honey than you do with vinegar". I think more could have been done to avoid alienating readers who otherwise would have been inclined to listen to you. Of course, I understand (and 100% agree!) with the way you feel about this moral issue. To you, factory farming is obviously morally wrong. But front and center in your mind could be that most people, and even some EAs/rationalists, have just never thought about meat consumption this way. You're in 650 BCE trying to convince Spartans to not kill babies, 1850 trying to convince American Southerners to not own slaves, and 2023 trying to get people to care about AI x-risk. What's a better approach: Wrecking them with facts + logic, or gently guiding them to consider a perspective they haven't before?

I think I just disagree about what reasoning is.  I think that reasoning does not just make our existing beliefs more coherent, but allows us to grasp new deep truths.  For example, I think that an anti-realist who didn't originally have the intuition that FTI (Future Tuesday Indifference) is irrational could grasp it by reflection, and that one can, over time, discover that some things are just not worth pursuing and others are.  

I think 1 is right.  

On 2, I agree that it would depend on how the being is constructed.  My claim is that it's plausible that they'd be moral by default just by virtue of being smart.  

On 3, I think there is a sense in which I, and most modern people, have grasped the badness of slavery in a way most people historically did not. 
