Hi, everyone
I venture to post here a proposal that is, at the same time, vague and
ambitious, just to open it for discussion. It is nothing firm, just an idea. I
apologize for my imperfect English.
After a long life and many books read, I have come to believe that if we want
to improve human life in the direction of prosociality, the real target must be
human behaviour. If we improve our moral outlook (our ethos) toward
benevolence, altruism and non-aggression (in a rational way, of course), then
charity, economic acts and good deeds will follow as a necessary consequence of
that prior human change.
We know that moral evolution exists; humanitarian movements like EA show that.
Why not try to go further? What is the limit to moral change?
I don't see anything on this forum dealing with the possibility of improving
moral behaviour in individuals (and, consequently, in groups and societies) in
order to achieve the highest effective altruism. I mean doing the job that the
moralistic religions of the Axial Age did in the past, but now independently of
irrational religious traditions; doing it, finally, the right way:
non-political social change.
We have today the experience, the knowledge from the social sciences and the
clarity of thought needed to consider means of improving human behaviour toward
extreme prosociality, yet I find that no one is discussing the question. You
write about getting as much as possible, with charitable goals, from people as
they are. Don't you realize that you could get much more by changing people
morally first?
A person is made of motivations, feelings, rewards and desires, and moral
change can act on them; this is historical evidence. And if the outcome of that
process of change turns out to be unconventional, is that not also the usual
result of social change throughout history?
At least, it is worth discussing.
People write about diversity being a moral imperative because it is useful,
mostly for coming up with creative solutions and avoiding groupthink. But is
diversity intrinsically good, a thing worth maximizing for its own sake? Here
is a quick question to test that; I would love your thoughts:
* In situation A, Alpha and Bravo lead identical lives and do the same amount
of good in the world.
* In situation B, Alpha and Bravo lead completely different lives and do the
same amount of good in the world.
* Which, if any, situation is morally preferable?
One doubt on superrationality:
(I guess similar discussions must have happened elsewhere, but I can't find
them. I am new to decision theory and superrationality, so my thinking may very
well be wrong.)
First I present an inaccurate summary of what I want to say, to give a rough
idea:
* The claim that "if I choose to do X, then my identical counterpart will also
  do X" seems to imply (though not necessarily; see the example below) that
  there is no free will. But if we indeed assume determinism, then no decision
  theory is practically meaningful.
Then I shall elaborate with an example:
* Two AIs with identical source code, Alice and Bob, are engaging in a
  prisoner's dilemma.
* Let's first assume they have no "free will", i.e. their programs are
completely deterministic.
* Suppose that Alice defects; then Bob also defects, due to their identical
  source code.
* Now, we can vaguely imagine a world in which Alice had cooperated, and then
Bob would also cooperate, resulting in a better outcome.
* But that vaguely imagined world is not coherent, as it's just impossible
that, given the way her source code was written, Alice had cooperated.
* Therefore, it's practically meaningless to say "It would be better for
Alice to cooperate".
* What if we assume they have free will, i.e. they each have a source of
randomness, feeding random numbers into their programs as input?
* If the two sources of randomness are completely independent, then the
  decisions of Alice and Bob are also independent. Therefore, to Alice, an
  input that leads her to defect is always better than an input that leads her
  to cooperate, under both CDT and EDT.
* If, on the other hand, the two sources are somehow correlated, then it
  might indeed be better for Alice to receive an input that leads her to
  cooperate. This is the only case in which superrationality is practically
  meaningful, but here the assumption of correlation itself needs
  justification.
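The Alice-and-Bob scenario above can be sketched in code. This is only a toy illustration under my own assumptions: the shared `decide` function stands in for the agents' identical source code, and the optional random input stands in for the "source of randomness" discussed in the bullets; none of these names come from any established decision-theory library.

```python
import random

def decide(rng_value=None):
    """Shared 'source code' run by both Alice and Bob.

    With rng_value=None the procedure is fully deterministic;
    otherwise the random input can sway the decision.
    """
    if rng_value is None:
        return "defect"  # deterministic case: the code always defects
    return "cooperate" if rng_value < 0.5 else "defect"

# Deterministic case: identical code means the two choices necessarily
# match, and no alternative outcome was ever possible.
assert decide() == decide()

# Independent randomness: Alice's and Bob's inputs are drawn separately,
# so her choice tells her nothing about his.
alice = decide(random.random())
bob = decide(random.random())

# Correlated randomness (here, the extreme case of a shared input):
# the correlation that superrationality relies on reappears.
shared = random.random()
assert decide(shared) == decide(shared)
```

The last assertion is the only regime in which "if I cooperate, my counterpart cooperates" does real work, which is the point of the final bullet.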