10,000 ppm causes heavy breathing and confusion, and from 50,000 ppm upwards you can die within a couple of hours. I don't think going from the 80th to the 30th percentile on complex cognitive tasks is totally implausible with an increase from 600 to 2,500 ppm.

I don't understand why you find it so surprising given that (it seems) you had no previous knowledge of the area. Where did your prior come from? I can't be surprised that the Planck length is 1.61x10^-35 and not 2.5x10^-30, given that I had no idea what it was.

Here's another study which found the same thing: http://ehp.niehs.nih.gov/wp-content/uploads/advpub/2015/10/ehp.1510037.acco.pdf

Both studies were published in a top journal in the area (13% acceptance rate) and have respectable university professors in the area among their authors. The reason other studies did not find the same thing is that they weren't looking at that range: the common assumption was that anything below 800 ppm is equally fine. You will find some blogs in the area giving this same explanation. The effect size really is suspiciously huge, but it is not entirely implausible either, given that we already know CO2 has a brutal effect on cognition at much higher ppm and we understand the mechanism fairly well.

Anders made the following calculation. A human consumes about 45 liters of oxygen per hour, producing about 10 liters of carbon dioxide per hour. To maintain a good CO2 level you want 8-10 l/s of fresh air per person in an office. Now, if the airspeed is 0.15 m/s (about the threshold of feeling drafty), you need 0.01 / 0.15 ≈ 0.067 m², i.e. roughly 670 cm² of window area.
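As a sanity check on Anders's figures, here is the arithmetic spelled out (a toy sketch; the 10 l/s supply rate and 0.15 m/s airspeed are the numbers quoted above):

```python
# Required window area = fresh-air flow per person / airspeed.
# A person exhales ~10 liters of CO2 per hour; the standard remedy is
# to supply 8-10 liters of fresh air per second per person.
FRESH_AIR_L_PER_S = 10     # upper end of the 8-10 l/s recommendation
AIRSPEED_M_PER_S = 0.15    # roughly the threshold of a noticeable draft

flow_m3_per_s = FRESH_AIR_L_PER_S / 1000      # 10 l/s = 0.01 m^3/s
area_m2 = flow_m3_per_s / AIRSPEED_M_PER_S    # area = flow / speed
area_cm2 = area_m2 * 10_000

print(f"{area_m2:.3f} m^2 = {area_cm2:.0f} cm^2")  # 0.067 m^2 = 667 cm^2
```

Note that 0.01 / 0.15 ≈ 0.067 m², which is about 670 cm² (6.7 square decimeters): a gap of a few centimeters along a meter-wide window.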

My credence in the finding being true is high. Even if it is false, the cost would be leaving the window slightly open (the required opening is still pretty small) or opening it every other hour. I've been doing so since the last study came out, and I would add that the room also feels more pleasant that way.

My understanding of prudential reasons is that they are reasons of the same class as those I have to want to live when someone points a gun at me. They are reasons that relate me to my own preferences and survival, not as a recipient of the utilitarian good, but as the thing that I want. They are more like my desire for a back massage than like my desire for a better world. A function from my actions to my reasons to act would be partially a moral function, partially a prudential function.

That seems about right under some moral theories. I would not want to distinguish being the recipient of the utilitarian good from getting back massages; I would want to say getting back massages instantiates the utilitarian good. On this framework, the only thing these prudential reasons capture that is not already in impersonal reasons is the fact that people give more weight to themselves than to others, but I would like to argue there are impersonal reasons for allowing them to do so. If that fails, then I would call these prudential reasons purely personal reasons, but I would not remove them from the realm of moral reasons. There seem to be established moral philosophers who tinker with apparently similar types of solutions. (I do stress the “apparently”, given that I have not read them fully or fully understood what I read.)

Appearances deceive here because "that I should X" does not imply "that I think I should X". I agree that if I should X and I also think I should X, then by doing Y=/=X I'm just being unreasonable. But I deny that mere knowledge that I should X implies that I think I should X.

They need not imply it, but I would like a framework where they do under ideal circumstances. In that framework, which I paraphrase from Lewis, if I know a certain moral fact, e.g. that something is one of my fundamental values, then I will value it (this wouldn't obtain if you are a hypocrite, in which case it wouldn't be knowledge). If I value it, and if I desire as I desire to desire (which wouldn't obtain under moral akrasia), then I will desire it. If I desire it, and if this desire is not outweighed by other conflicting desires (whether due to low-level desire multiplicity or high-level moral uncertainty), and if I exercise the moral reasoning to do what serves my desires according to my beliefs (which wouldn't obtain for a psychopath), then I will pursue it. And if my relevant beliefs are near enough true, then I will pursue it as effectively as possible. I concede valuing something may not lead to pursuing it, but only if something goes wrong in this chain of deductions. Further, I claim this chain defines what value is.

I should X = A/The moral function connects my potential actions to set X. I think I should X = The convolution of the moral function and my prudential function takes my potential actions to set X.

I’m unsure I got your notation. Does =/= mean “different from”? What is the meaning of “/” in “A/The…”?

In your desert scenario, I think I should (convolution) defend myself, though I know I should (morality) not.

I would claim you are mistaken about your moral facts in this instance.

We are in disagreement. My understanding is that each of the four quadrants can be empty or full: there can be impartial reasons for personal reasons, personal reasons for impartial reasons, impartial reasons for impartial reasons, and personal reasons for personal reasons. Of course not all people will share personal reasons, and depending on which moral theory is correct, there may well be distinctions in impersonal reasons as well.

What leads you to believe we are in disagreement, if my claim was just that one of the quadrants is full?

In what sense do you mean psychopathy? I can see ways in which I would agree with you, and ways in which not.

I mean a failure to exercise moral reasoning. You would be right about what you value, you would desire as you desire to desire, have all the relevant beliefs right, have no conflicting desires or values, but you would not act to serve your desires according to your beliefs. Your instance would be more complicated given that it involves knowing a negation. Perhaps we can put it like this: you would be right that maximizing welfare is not your fundamental value, you would have the motivation to stop solely desiring to desire welfare, you would cease to desire welfare, there would be no other desire inducing a desire for welfare, there would be no other value inducing a desire for welfare, but you would fail to pursue what serves your desires. This fits well with the empirical fact that psychopaths have low IQ and low levels of achievement. Personally, I would bet your problem is more one of allowing yourself moral akrasia with moral uncertainty as the excuse.

Most of my probability mass is that maximizing welfare is not the right thing to do, but maximizing a combination of identity, complexity and welfare is.

Hence, my framework says you ought to pursue ecstatic dance every weekend.

One possibility is that morality is a function from person time slices to sets of person time slices, and the size to which you expand your moral circle is not determined a priori. This would entail that my reasons to act morally when considering only time slices that share 60%+ personal identity with me would look a lot like prudential reasons, whereas my reasons to act morally when accounting for all time slices of minds in this quantum branch and its descendants would be very distinct. The root theory would be this function.
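To make that picture concrete, here is a toy sketch of the root function (all names and the numeric identity measure are hypothetical illustrations, not part of any worked-out theory):

```python
from dataclasses import dataclass
from typing import Set

@dataclass(frozen=True)
class TimeSlice:
    """A person time slice; identity_with_me in [0, 1] is a toy stand-in
    for the degree of personal identity shared with the agent."""
    name: str
    identity_with_me: float

def moral_circle(slices: Set[TimeSlice], threshold: float) -> Set[TimeSlice]:
    """Morality, on this picture: map the available time slices (plus a
    circle-defining threshold) to the set of slices that count morally."""
    return {s for s in slices if s.identity_with_me >= threshold}

slices = {
    TimeSlice("me-tomorrow", 0.9),
    TimeSlice("me-in-30-years", 0.6),
    TimeSlice("stranger", 0.0),
}

# A high threshold yields something like prudential reasons...
print(len(moral_circle(slices, 0.6)))  # 2
# ...while threshold 0 yields fully impersonal reasons.
print(len(moral_circle(slices, 0.0)))  # 3
```

The threshold parameter is the point of the sketch: prudential and impersonal reasons fall out of one function evaluated at different circle sizes, rather than belonging to different functions.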

Why just minds? What determines the moral circle? Why does the core need to be excluded from morality? I claim these are worthwhile questions.

Seems plausible to me.

If this is true, maximizing welfare cannot be the fundamental value, because there is nothing that both can be maximized and is epistemically accessible.

Do you just mean the VNM axioms? It seems to me that at least token commensurability certainly obtains, and type commensurability quite likely obtains. The problem is that people want the commensurability ratio to be linear in the measure, which I see no justification for.

It is certainly true of VNM; I think it is true of a lot more of what we mean by rationality. I'm not sure I understood your token/type distinction, but it seems to me that token commensurability can only obtain if there is only one type. It does not matter whether the common measure is linear, exponential or whatever: if there is a common measure, that measure is the fundamental value. It might also be that the function is not continuous, which would mean rationality has a few blind spots (or that value monism has, which I claim is the same thing).
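One way to see the "linear, exponential or whatever" point: over sure outcomes, any strictly increasing rescaling of a common measure induces exactly the same ranking, so the choice of scale adds nothing. (Toy illustration with made-up scores; note that under VNM, once lotteries are in play, only positive affine transformations preserve preferences.)

```python
import math

# Options scored on a single common measure (the numbers are made up).
scores = {"A": 1.0, "B": 2.5, "C": 0.3}

def ranking(score_fn):
    """Rank options best-first under a rescaling of the common measure."""
    return sorted(scores, key=lambda o: score_fn(scores[o]), reverse=True)

linear = ranking(lambda x: 3 * x + 1)   # a linear rescaling
exponential = ranking(math.exp)         # an exponential rescaling

print(linear)       # ['B', 'A', 'C']
print(exponential)  # ['B', 'A', 'C'] -- same ranking either way
```

Since every strictly increasing transform yields the same ordering of sure outcomes, the underlying measure, not its scale, is doing all the work: which is why a common measure would amount to a fundamental value.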

I would look for one I could accept if I were given sufficient (convoluted) reasons to do so. At the moment it seems to me that all reasonable people are either some type of utilitarian in practice, or are called Bernard Williams. Until I'm pointed, thrice, to another piece that might overwhelm the sentiment I was left with, I see no reason to enter an exploration stage. For the time being, the EA in me is at peace.

I know a lot of reasonable philosophers who are not utilitarians, and even those who are utilitarians are mostly not mainstream ones. I also believe concern for the far future (e.g. Nick Beckstead) or for future generations (e.g. Samuel Scheffler) is a more general concern than welfare monism, and that many utilitarians do not share it (I'm certain I know a few). I believe that if you are more certain about the value of the future than about welfare being the single value, you ought to expand your horizons beyond utilitarianism. It would be hard to provide another Williams in terms of convincingness, but you will find an abundance of all sorts of reasonable non-utilitarian proposals. I already mentioned Jonathan Dancy (e.g. http://media.philosophy.ox.ac.uk/moral/TT15_JD.mp4), my Nozick’s Cube, value pluralism and so on. Obviously, it is not advisable to let these matters depend on being pointed.

Why not conclude “so much the worse for ought, hedonism, or impersonal morality”? There are many other moral theories built away from these notions which would not lead you to these conclusions (of course, this does not mean they ignore these notions). If this simplistic moral theory makes you want to abandon morality, please abandon the theory.

I find the idea that there are valid reasons to act that are not moral reasons weird; I think some folks call them prudential reasons. It seems that your reason to be an EA is a moral reason if utilitarianism (plus a bunch of other specific assumptions) is right, and “just a reason” if it isn't. But if increasing others' welfare is not producing value, or is not right or whatever, what is your reason for doing it? Is it due to some sort of moral akrasia? You know it is not the right thing to do, but you do it nevertheless? It seems there would only be bad reasons for you to act this way.

If you are not acting like you think you should after having complete information and moral knowledge, perfect motivation and reasoning capacity, then it does not seem like you are acting on prudential reasons, it seems you are being unreasonable. If you are acting on the best of your limited knowledge and capacities, it seems you are acting for moral reasons. These limitations might explain why you acted in a certain sub-optimal way, but they do not seem to constitute your reason to act.

Suppose you are stuck on a desert island with another starving person who has a slightly higher chance of survival (say, he is slightly healthier than you). There’s absolutely no food, and you know the best shot at having at least one of you survive is for one to eat the other. He comes to attack you. Some forms of utilitarianism would say you ought to let him kill you; any slight reaction would be immoral. If people later find out you fought for your life, killed the other person and survived, the right thing for them to say would be “He did the wrong thing and had no right to defend his life.” The intuition that you have the right to self-defence would simply be mistaken; there would be no moral basis for it.

But we need not abandon this intuition, and the fact that some forms of utilitarianism require us to do so will always count against them, just as the intuition that sentient pleasure is good counts in their favour. It would be morally right to defend yourself in many other moral systems, including more elaborate forms of utilitarianism. You may believe people ought to have the right of self-defence as a deontological principle in its own right, or even for utilitarian reasons (e.g., society works better that way). There might be impersonal reasons to have the right to put your personal interest in your survival above the interest that another person with a slightly higher life expectancy survives. Hence, even if impersonal reasons are all the moral reasons there are, insofar as there are impersonal reasons for people to have personal reasons, these latter are moral reasons.
If someone is consistently not acting like he thinks he should, and upon reflection there is no change in behaviour or cognitive dissonance, then that person is either a hypocrite (he does not really think he should act that way) or a psychopath (he is incapable of moral reasoning). Claiming one does not have the right to self-defence even though you would feel you have strong reasons not to let the other person kill you seems like an instance of hypocrisy. Being an EA while fully knowing maximizing welfare is not the right thing to do seems like an instance of psychopathy (in the odd case that EA is only about maximizing welfare).

Of course, besides these two pathologies, you might have some form of cognitive dissonance or other accidental failures. Perhaps you are not really that sure maximizing welfare is not the right thing to do. You might not have the will to commit to the things you should do in case right action consists in something more complicated than maximizing welfare. You might be overwhelmed by a strong sense that you have the right to life. It might not be practical at the time to consider these other complicated things. You might not know which moral theory is right. These are all accidental things clouding or limiting your capacity for moral reasoning, things you should prefer to overcome.

This would be a way of saving the system of morality by attributing any failure to act rightly to accidents, uncertainties or pathologies. I prefer this solution of sophisticating the way moral reasons behave to the claim that there are valid reasons to act that are not moral reasons; the latter looks, even more than the former, like shielding the system of morality from the real world. If there are objective moral truths, they had better have something to do with what people want to want to do upon reflection.

But perhaps there is no system to be had. Some other philosophers believe the limitations above are inherent to moral reason, and that it is a mistake to think moral reasoning should function the same way pure reasoning does. The right thing to do will always be an open question, and all moral reasoning can do is recommend certain actions over others, never require them. If there is more than one fundamental value, or if the one fundamental value is epistemically inaccessible, I see no way out besides this solution. Incommensurable fundamental values are incompatible with pure rationality in its classical form. Moreover, if the fundamental value is simply hard to access, this solution is at least the most practical one, and the one we should use in most of applied ethics until we come up with Theory X. (In fact, it is the solution the US Supreme Court adopts.)

I personally think there is a danger in going about believing that one believes in some simple moral theory while ignoring it whenever that feels right. Pretending to be able to abandon morality altogether would be another danger. How actually believing and following these simplistic theories fares against these latter two options is uncertain. If, as in Williams's joke, one way of acting inhumanely is to act on certain kinds of principles, it does not fare very well.

It seems to me Williams made his point, or the point I wished him to make to you. You are saying “if this is morality, I reject it”. Good. Let’s look for one you can accept.