Thanks for a thoughtful response.
Likewise :)
...My worry is with the idea that we can get around this problem by evaluating the arguments ourselves. We're not special. Academics just evaluate the arguments, like we would, but understand them better. The only way I can see myself being justified in rejecting their views is by showing they're biased. So maybe my point wasn't "the academics are right, so narrow consequentialism is wrong" but "most people who know much more about this than us don't think narrow consequentialism is right, so we don't know its
Starting a long debate about moral philosophy would be relevant here, but also out of place, so I'll refrain.
But what do you mean by "Refrain from posting things that assume that consequentialism is true"? That it's best to refrain from posting things that assume that values like e.g. justice aren't ends in themselves, or to refrain from posting things that assume that consequences and their quantity are important?
If it is something more like the latter, I would ask myself whether this would be pursuing the goal of popularity by diminishing a part...
I'm not aware of careful analysis having been done on the topic.
One thing speaking in favour of it increasing existential risk is if it leads to faster technological progress, which in turn could leave less time for research on things that specifically benefit safety, of the kind that MIRI and FHI are doing. I'm thinking that more wealthy people in previously poor countries would make it more profitable for Western countries to invest in R&D, and that these previously poor countries would fund proportionally less x-risk research than what takes place in the West (this is n...
“I think the major issue here is that you seem to be taking moral realism for granted and assume that if we look hard enough, morality will reveal itself to us in the cosmos. I'm a moral anti-realist, and I'm unable to conceive of what evidence for moral realism would even look like.”
That may be a correct assessment.
I think that, like all our knowledge about anything, statements about ethics rest on unproven assumptions, but that there are statements about some states of the world being preferable to others that we shouldn't have less confidence in than man...
So a bit of a late answer here :)
"Is this a problem? I don't think humor is inherently valuable. It happens to be valuable to humans, but an alternate world in which it weren't valuable seems acceptable."
If a species has conscious experiences that are all of a kind we are familiar with, but they lack our strongest and most valued experiences, and devalue these because they follow a strict the-less-similar-to-us-the-less-valuable policy, then I think that's regrettable. If they themselves and/or beings they create don't laugh at jokes but hav...
It appears to me that if we were a species that didn't have [insert any feeling we care about, e.g. love, friendship, humour or the feeling of eating tasty food], and someone then invented it, then many people would think of it as not being valuable. The same would go for some alien species that has different kinds of conscious experiences from us trying to evaluate our experiences. I'm convinced that they would be wrong in not valuing our experiences, and I think this shows that that way of thinking leads to mistakes. Would you agree with this (but perhap...
An important topic!
Potentially influencing lock-in is certainly among my motivations for wanting to work on AI friendliness, and doing things that could have a positive impact on a potential lock-in has a lot speaking for it, I think (and many of these things, such as improving the morality of the general populace, or creating tools or initiatives for thinking better about such questions, are things that could have significant positive effects even if no lock-in occurs).
As to the example of having more children out of far-future concerns, I think this could go ...
Cool idea and initiative to make such a calculator :) Although it doesn't quite reflect how I make estimations myself (I might make a more complicated calculator of my own at some point that does).
The way I see it, the work that is done now will be the most valuable per person, and the number of people working on this towards the end may not be so indicative (nine women cannot make a baby in one month, etc.).
So as I understand it, what MIRI is doing now is thinking about theoretical issues and strategies and writing papers about them, in the hope that the theory you develop can be made use of by others?
Does MIRI think of ever:
Also (feel free to skip this part of the question if it is too big/demanding):
Personally, I ...
Is MIRI's hope/ambition that CEV (http://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition) or something resembling CEV will be implemented, or is this not something you have a stance on?
(I'm not asking whether you think CEV should be the goal system of the first superintelligence. I know it's possible to have strategies such as first creating an oracle and then at some later point implementing something CEV-like.)
Unless there are strategic concerns I don't fully understand, I second this. I cringe a little every time I see such goal descriptions.
Personally, I would argue that the issue of greatest moral concern is ensuring that new beings who can have good experiences and a meaningful existence are brought into existence, as the quality and quantity of consciousness experienced by such not-yet-existent beings could dwarf what is experienced by currently existing beings on our small planet.
I understand that MIRI doesn't want to take a stance on all controversial ethica...
Btw, I agree with this in the sense that I'd rather have a random ethicist make decisions about an ethical question than a random person.
Great! I'm writing a text about this, and I'll add a comment with a reference to it when the...