Indra Gesink

70 · Joined Aug 2017

Bio

Participation (6)

BSc & MSc Econometrics and Operations Research

MSc Systems Biology

 

writing, teaser "Ideas to Secure Our Future": 

Posts (2)


Comments (19)

In addition, we might also want to use - and take into account - our ability to look ahead. Suppose, for example, a worthwhile task that requires two people to engage in it. The first person to engage gains zero marginal returns, while the second gets everything (all of the returns as marginal returns). The first person might, however, predict the second person's behavior and, based on the resulting expectation, engage with the task anyway. Chimpanzees, by contrast, are not able to do this; you would never see two of them cooperate to, e.g., carry a log together (research by Joseph Henrich).
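(A minimal sketch of the marginal-returns point, under assumed notation not in the original comment: let v(S) be the value produced once the set S of people has engaged, with the task paying out V > 0 only when both have.)

\[
\Delta_1 = v(\{1\}) - v(\varnothing) = 0 - 0 = 0,
\qquad
\Delta_2 = v(\{1,2\}) - v(\{1\}) = V - 0 = V.
\]

A purely myopic agent would therefore never engage first, while an agent who predicts that the second person will follow can value engaging first at the expected share of the completed task, e.g. V/2 under an equal split.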

And regarding the non-orthogonality, I was - as a moral realist - thinking more along the lines of: being organized (etc., etc.) is presumably a good value, and it would also improve your decision-making (considered neutrally, so to speak)...

Thanks for the post and for taking the time! My initial thoughts on trying to parse this are below; I think they will bring mutual understanding further.

You seem to make a distinction between intentions on the y-axis and outcomes on the x-axis. Interesting!

The terrorist example seems to imply that if you want bad outcomes you are not value-aligned (aligned with what? With good outcomes?). Terrorists are value-aligned from their own perspective. And "terrorist" is also not a value-neutral term; Nelson Mandela, for example, was once considered one, which I think would surprise most people now.

If we allow "from their own perspective", then "effectiveness" would do (with "efficiency" replacing the x-axis), but it seems we don't; in that case "altruism" (or perhaps "good", with less of an explicit tie to EA?) would do, without the ambiguity "value-aligned" brings as to whether or not we allow "from their own perspective".

(For someone who is not a moral realist, the option of "better values" is not available, so it seems one is stuck either with "from their own perspective", calling the effective terrorist value-aligned, or with moving to an explicit comparison to EA values, which I was supposing was not the purpose, and which seems even more off-putting via the alienating shortcoming in communication mentioned above.)

Next to "value-aligned" being suboptimal, which I have just supported further, you seem to accept "altruism" and "effectiveness" (I would now suggest "efficiency" instead) as appropriate labels, but you agree with the author about their shortcoming for communicating to certain audiences (alienation), with which I also agree. For other audiences, including myself, the current form perhaps has shortcomings. I would value clarity more, and call the same thing by the same name. An intentionally opaque change of words might additionally come across as deceptive, and as aligned with one's own ideas of good but not with such ideas in a broader context. And that, I think, could definitely also count as, or become, a consequential shortcoming in communication strategy.

No. I do think that combining the comments would yield less karma, which could be a bad thing and - in the spirit of this post - in need of being done better; that says nothing about your intentions. And I agree with your reply to your own comment: hence the "and". I think what you say there is actually a very good reason, which also explains why I was reading all these distinct comments by you, which is in turn why I appreciated this one among them and responded. I'm sorry if it came across as an ad hominem attack instead! Best!

And likely yields less karma overall!

One way to succinctly make a similar point, I suggest, is to insist, continually, that AI alignment and AI safety are not the same problem but are actually distinct.

Thank you for making these comments and for pushing for their relevance!
