
nathan98000

162 karma · Joined Jun 2017


[What] is required of the philosopher is also to provide grounding or to think about grounding upon which the intuitions pointed to by a thought experiment are consistent.

Why can't a philosopher just present a counterexample? In fact, it seems arguing from a specific alternative grounding would make Timmerman's argument weaker. As he notes (emphasis mine):

I have purposefully not made a suggestion as to how many (if any) children Lisa is obligated to rescue. I did so to make my argument as neutral as possible, as I want it to be consistent with any normative ethical view ranging from moral libertarianism to a view that only permits Lisa to indulge in a comparably insignificant good a single time.

As an analogy, if you make a general claim such as "All marbles are blue," it's enough to point to a single counterexample to show the claim is false. I don't also have to have my own view about what colors marbles come in.

Also, as a matter of interpreting Famine, Affluence, and Morality, Singer doesn't justify his principles based on any inferences from the drowning child thought experiment. Instead, he only uses that thought experiment as an application of his principles, which he takes to simply be common sense. And although Singer is himself a utilitarian, he doesn't make any argument for utilitarianism in that paper, largely for the same reason as Timmerman: he wants diverse people to agree with him regardless of their grounding for the principles he discusses.

I don't quite understand your objection to Timmerman's thought experiment. You say it's "ad hoc" and "justifies our complacency arbitrarily", but it's unclear what you mean by these terms. And it's unclear why someone should agree that it's ad hoc and arbitrary.

This seems like a good summary! Was this downvoted merely because of a wrong pronoun?

For someone not familiar with Farrell's work, what's the main problem with it?

I appreciate the post, though I think "The universe is meaningless" section wasn't so convincing. The universe is meaningless because we're the product of natural selection? I would want a better argument than that.

FWIW I think it's still the case that psychologists/neuroscientists are nowhere near developing an accurate lie detector. And the paper you cite doesn't seem to support the claim that lie detection technology is accurate. From the abstract (emphasis mine):

Analyzing the myriad issues related to fMRI lie detection, the article identifies the key limitations of the current neuroimaging of deception science as expert evidence and explores the problems that arise from using scientific evidence before it is proven scientifically valid and reliable. We suggest that courts continue excluding fMRI lie detection evidence until this potentially useful form of forensic science meets the scientific standards currently required for adoption of a medical test or device.

There are methodological challenges associated with the typical studies done on lie detection. From a 2016 paper (emphasis mine):

Great hopes and expectations were expressed regarding the potential use of brain imaging techniques for the detection of deception. Contrary to what has been advocated by many researchers as well as practitioners (e.g., Bles & Haynes, 2008; Farwell, 2012; Langleben et al., 2005), the introduction of new measures such as P300 and fMRI is by no means a solution to the problems associated with the ANS-based CQT polygraph test. The CQT has been criticized for lacking proper controls and being unstandardized. In addition, its outcome is often contaminated by prior information available to the examiner. None of these criticisms can be resolved by replacing ANS recordings with fMRI measures.

Moreover, all paradigms face a similar logical problem: deception cannot be directly inferred either from the presence of emotional arousal in the CQT or from attentional orienting or inhibition in the CIT or DoD, regardless of whether ANS, reaction times, ERPs, or fMRI measures have been used.

So I'm not sure what the basis is for saying it's an "unambiguous mistake" to think accurate lie detection technology is a long way off.

I'm personally skeptical that we'll ever "solve" what the neural basis of sentience is. That said, I think there are still some promising ways a better understanding of psychology can advance standard EA causes. Here's a paper that goes into more depth on this issue:
https://pubmed.ncbi.nlm.nih.gov/35981321/

But for the paradox's setup to make sense, the player must have, in some sense, made his decision before the prediction is made: he is either someone who is going to take both boxes or someone who is just going to take the opaque box.


This doesn't seem correct. It's possible to make a better than random guess about what a person will decide in the future, even if the person has not yet made their decision.

This is not mysterious in ordinary contexts. I can make a plan to meet with a friend and justifiably have very high confidence that they'll show up at the agreed time. But that doesn't preclude that they might in fact choose to cancel at the last minute.

I suppose I agree that humanity should generally focus more on catastrophic (non-existential) risks.

That said, I think this is often stated explicitly. For example, MacAskill, in his recent book, explicitly says that many of the actions we take to reduce x-risks will also look good even for people with shorter-term priorities.

Do you have any quote from someone who says we shouldn't care about catastrophic risks at all?
