This just seemed to be a list of false claims about things GiveWell forgot to consider, a series of ridiculous claims about philosophy, and no attempt to compare the benefits to the costs. Yes, lots of EA charities have various small downsides, most of which are taken into account, but those are undetectable compared to the hundreds of thousands of lives saved. He suggests empowering local people, which is a good applause line, but it's vague. Most local people are not in a position to do high quality comparisons between different interventions.
We all agree that you should get utility. You are pointing out that FDT agents get more utility. But once they are already in the situation where they've been created by the demon, FDT agents get less utility. If you are the type of agent to follow FDT, you will get more utility, just as if you are the type of agent to follow CDT while being in a scenario that tortures FDTists, you'll get more utility. The question of decision theory is, given the situation you are in, what gets you more utility--what is the rational thing to do. ...
Wait, sorry, it's hard to see the broader context of this comment on account of being on my phone and comment sections being hard to navigate on the EA Forum. I don't know if I said Eliezer had 100% credence, but if I did, that was wrong.
He didn't quote it--he linked to it. I didn't quote the broader section because it was ambiguous and confusing. The reason not accounting for interactionist dualism matters is because it means that he misstates the zombie argument, and his version is utterly unpersuasive.
The demon case shows that there are cases where FDT loses, as is true of all decision theories. If the question is which decision theory, when programmed into an AI, will generate the most utility, then that's an empirical question that depends on facts about the world. If it's which action, once you're already in a situation, will get you the most utility, well, that's causal decision theory.
Decision theories are intended as theories of what is rational for you to do. So it describes what choices are wise and which choices are foolish. I think Eliez...
I would agree with the statement "if Eliezer followed his decision theory, and the world was such that one frequently encountered lots of Newcomb's problems and similar, you'd end up with more utility." I think my position is relatively like MacAskill's in the linked post, where he says that FDT is better as a theory of the agent you should want to be than of what's rational.
But I think that rationality won't always benefit you. I think you'd agree with that. If there's a demon who tortures everyone who believes FDT, then believing FD...
I know you said you didn't want to repeatedly go back and forth, but . . .
Yes, I agree that if you have some psychological mechanism by which you can guarantee that you'll follow through on future promises--like programming an AI--then that's worth it. It's better to be the kind of agent who follows FDT (in many cases). But the way I'd think about this is that this is an example of rational irrationality, where it's rational to try to get yourself to do something irrational in the future because you get rewarded for it. But remember...
Well put! Though one nitpick: I didn't defer to Eliezer much. Instead, I concluded that he was honestly summarizing the position. So I assumed physicalism was true because I assumed, wrongly, that he was correctly summarizing the zombie argument.
Oh sorry, yeah I misunderstood what point you were making. I agree that you want to be the type of agent who cuts off their legs--you become better off in expectation. But the mere fact that the type of agent who does A rather than B gets more utility on average does not mean that you should necessarily do A rather than B. If you know you are in a situation where doing A is guaranteed to get you less utility than B, you should do B. The question of which agent you should want to be is not the same as which agent is acting rationally...
Thanks for this comment. I agree with 2. On 3, it seems flatly irrational to have super high credences when experts disagree with you and you do not have any special insights.
If an influential person who is given lots of deference is often wrong, that seems notable. If people were largely influenced by my blog, and I was often full of shit, expressing confident views on things I didn't know about, that would be noteworthy.
Agree with 4.
On 5, I wasn't intending to criticize EA or rationalism. I'm a bit lukewarm on rationa...
If you claim to be justified in having a near-zero credence in some view, and the reason for that is that you don't know what words mean that are totally standard among people informed about the subject matter, and then you go on to dismiss those informed people who disagree with you, that seems pretty egregious.
I'm sympathetic to that. I just also get a whiff of "it's my group's prerogative to talk about this and he didn't pay proper deference". As a point of comparison, I'm sympathetic to theologians who thought the new atheists were total yokels who didn't understand any of the subtleties of their religions and their arguments, because they often didn't. But I also think the new atheists were more right and I don't think it would have been a good use of time for them to understand more. I'm not trying to be insulting to academic philosophy but rather insist tha...
He didn't understand how the field was using a certain word. If a person uses words incorrectly, based on a misreading, and then interprets arguments as being obviously wrong based on their misinterpretation, they are making errors, not just failing to agree with the consensus view.
Misunderstanding someone else's claim doesn't strike me as an "egregious error". I don't feel he should have to understand the entirety of the academic view to have his own view. Although I agree he was mistaken to dismiss that view using words he had misunderstood.
I don't think so. I argued in detail against each of Eliezer's views. I think I do know that Eliezer is wrong about zombies, decision theory, and animal consciousness. I didn't just point to what experts believe, I also explained why Eliezer is wrong.
Let's stipulate you have good evidence that you are the only being in the universe, and no one else will exist in the future. You don't care about what happens to anyone else.
If your action affects what happens in other Everett branches, such that there are actual, concretely existing people whose well-being is affected by your action to blackmail, then that is not relevantly like the case given by Schwarz. That case seems relevantly like the twin case, where I think there might be a way for a causal decision theorist to accommodate the intuition, but I am not sure.
We can reconstruct the case without torture vs. dust specks reasoning, because that's plausibly a confounder. Suppose a demon is likely to create peo...
It means your preference ordering says that it's very good for you to be alive.
We can stipulate that you get decisive evidence that you're not in a simulation.
Just want to say, I also agree that much of the original language was inflammatory. I think I have fixed it to make it less inflammatory, but do let me know if there are other parts that you think are inflammatory.
In your shoes, I'd remove "egregiously" from the title, but I'm not great at titles and also occupy a different epistemic position than you (e.g., I think FDT is better than CDT or EDT).
Your response in the decision theory case was that there's no way that a rational agent could be in that epistemic state. But we can just stipulate it for the purpose of the hypothetical.
In addition, the scenario doesn't require absurdly low odds. Suppose that a demon has a 70% chance of creating people who will chop their legs off. You've been created and your actions will affect no one else. FDT implies that you have strong reason to chop your legs off even though it doesn't benefit you at all.
Some counterarguments are sufficiently strong that they are decisive. These are good examples.
I have read quite a lot of philosophy. Less than academics--I'm currently an undergrad--but my major is philosophy and it's my primary interest, such that I spend lots of time reading and writing about it.
I don't think I really overgeneralized from limited data. Eliezer talks about tons of things, most of which I don't know about. I know a lot about maybe 6 things that he talks about and expresses strong views on. He is deeply wrong about at least four of them.
I'd disagree with the notion that "this post made a really serious effort to optimize for maximizing damage to the reputation to at least one of the major Schelling points in the Rationality community." The thing I was optimizing for was getting people to be more skeptical about Eliezer's views, not ruining his career or reputation. In fact, as I said in the article, I think he often has interesting, clever, and unique insights and has made the world a better place.
See also my reply to Eliezer. In short, if you're writing a post arg...
Yes, there are some arguments of questionable efficacy for the conclusion that zombieism entails epiphenomenalism. But notably:
I tried very hard to phrase everything as clearly as possible. But if people's takeaway is "people who know about philosophy of mind and decision theory find Eliezer's views there deeply implausible and indicative of basic misunderstandings," then I don't think that's the end of the world. Of course, some would disagree.
Yes, sorry I should have had it start in personal blog. I have now removed the incendiary phrasing that you highlight.
Hi Eliezer. I actually do quite appreciate the reply because I think that if one writes a piece explaining why someone else is systematically in error, it's important that the other person can reply. That said . . .
You are misunderstanding the point about causal closure. If there were some isomorphic physical law that resulted in the same physical states of affairs as consciousness does, the physical would be causally closed. I didn't say that your description of what a zombie is was the misrepresentation. The poi...
'Chalmers, Goff, or Chappell' This is stacking the deck against Eliezer rather unfairly; none of these 3 are physicalists, even though physicalism is the plurality, and I think still slight majority position in the field: https://survey2020.philpeople.org/survey/results/4874
Re Chalmers agreeing with you: he would; he said as much in the LessWrong comments, and I recently asked him in person and he confirmed it. In Yudkowsky's defense, it is a very typical move among illusionists to argue that zombieists can't really escape epiphenomenalism, not just some ignorant outsider's move (I think I recall Keith Frankish and Francois Kammerer both making arguments like this). That said, I remain frustrated that the post hasn't been updated to clarify that Chalmers disagrees with this characterization of his position.
I mean, it's always possible. But the views I defend here are utterly mainstream. Virtually no people in academia think either FDT, Eliezer's anti-zombie argument, or animal nonconsciousness are correct.
I obviously disagree that this is the conclusion of the LessWrong comments, many of which I think are just totally wrong! Notably, I haven't replied to many of them because the LessWrong bot makes it impossible for me to post more than once per hour, because I have negative karma on recent posts.
Putting aside whether or not what you say is correct, do you think it's possible that you have fallen prey to the overconfidence that you accuse Eliezer of? This post was very strongly written and it seems a fair number of people disagree with your arguments.
Okay yeah, fair. Here's my friend's publication record: https://philpeople.org/profiles/amos-wollen
Though worth noting that the other reviewer rejected it. It's not clear how common it is for one reviewer to be open to a paper being resubmitted after heavy revisions.
Fair point that many rejected things probably received one "revise and resubmit".
The link to your friend's PhilPeople page is broken, but I googled him, and I think "mediocre journals" is probably mostly the right answer, mixed a bit with "your friend is very talented." (Though to be clear, even 5 mediocre pubs is impressive for a 2nd-year undergrad, and I would predict your friend can go to a good grad school if he wants to.) Philosophia is a generalist journal in which I never read a single paper during the 15 or so years I was reading philosophy papers generally, ...
They will still endorse the same things as side-constraints views do (e.g., not killing one to save five).
Eliezer talks about lots of topics that I don't know anything about. So I can only write about the things that I do know about. There are maybe five or six examples of that, and I think he has utterly crazy views in perhaps all except one of those cases.
I can't fact-check him on physics or nanotech, for instance.
Eliezer has a huge number of controversial beliefs--about AI, physics, Newcomb's problem, zombies, nanotech, etc. Many of these are about things I know nothing about. But there are a few things where he adopts deeply controversial views that I know something about. And almost every time--well above half the time--that I know enough to fact-check him, he turns out to be completely wrong in embarrassing ways.
Based on this essay it seems like by "completely wrong in embarrassing ways" you mean that he's not knowledgeable about or respectful of what the local experts think. It's not like we know they are right on most of these questions.
Yeah, though we can imagine that everyone feels a similar urge to smoke, but it's only the people with the lesion who ultimately decide to smoke.
This is I think a really good comment. The animal consciousness stuff I think is a bit crazy. If Dennett thinks that as well . . . well, I never gave Dennett much deference.
I was exaggerating a bit when I said that no undergraduate would make that error.
I don't think that Schwarz saying he might publish it is much news. I have a friend who is an undergraduate in his second year and he has 5 or 6 published philosophy papers--I'm also an undergraduate and I have one forthcoming.
Do we know what journal Eliezer was submitting to? I'd expect it not to get published in even a relatively mediocre journal, but I might be wrong.
Thanks!
I don't know the journal Schwarz rejected it for, no. If your friend has 5 or 6 publications as an undergrad, then either they are a genius, or they are unusually talented and also very ruthless about identifying small, technical objections to things famous people have said, or they are publishing in extremely mediocre journals. The second and third things are probably not what's going on when Wolfgang gives an R&R to the Yudkowsky/Soares FDT paper. It is an attempt to give a big new fundamental theory, not a nitpick. And regardless of the part...
I know this is very late, but I wrote a piece a while ago about this. I bite the bullet. https://benthams.substack.com/p/against-conservatism-about-value
It was trying to argue for 2. I think that if we give up any side constraints, which is what my piece argued for, we get something very near utilitarianism--at the very least consequentialism. Infinitarian ethics is everyone's problem.
If we reject any side constraints--which my argument supports--then we get something very near utilitarianism.
Thanks for the reply! I was focusing on the most common animals that Americans eat, though I should perhaps have noted that. I disagree that the focus was very much on physical suffering--I talk about sleep deprivation and the sadness of being separated from parents, to give a few examples.
Thanks for the comment. What I said was "Anyone who is not a moral imbecile recognizes that it’s wrong to contribute to senseless cruelty for the sake of comparatively minor benefits." The point is that it's obvious that one shouldn't cause lots of torture for the sake of minor benefits. If, as I claim, that is what happens when one eats meat, then this is a good case against eating meat.
I think I just disagree about what reasoning is. I think that reasoning does not just make our existing beliefs more coherent, but allows us to grasp new deep truths. For example, I think that an anti-realist who didn't originally have the FTI irrational intuition could grasp it by reflection, and that one can, over time, discover that some things are just not worth pursuing and others are.
I think 1 is right.
On 2, I agree that it would depend on how the being is constructed. My claim is that it's plausible that they'd be moral by default just by virtue of being smart.
On 3, I think there is a sense in which I--and most modern people, unlike most people historically--have grasped the badness of slavery.
Re 1, as Richard says: "Wenar scathingly criticized GiveWell—the most reliable and sophisticated charity evaluators around—for not sufficiently highlighting the rare downsides of their top charities on their front page. This is insane: like complaining that vaccine syringes don't come with skull-and-crossbones stickers vividly representing each person who has previously died from complications. He is effectively complaining that GiveWell refrains from engaging in moral misdirection. It's extraordinary, and really brings out why this concept matters."