All of somervta's Comments + Replies

I am also not an expert on designing surveys, but it seems really hard to get meaningful data on something like the 'consent philosophies' that you describe, at least without broadly-understood and theorised examples of different versions of them. Imagine trying to get an idea of the different ethical views of EAs without being able to rely on terms like 'utilitarianism' and 'consequentialism' and others that have a well-developed meaning from the philosophical literature.

Asking people to describe their ethical views in such a situation seems like a really... (read more)

anonymous question from a big fan of yours on tumblr:

"Re: Nate Soares (thanks for doing this btw, it's really nice of you), two questions. First, I understand his ethical system described in his recent "should" series and other posts to be basically a kind of moral relativism; is he comfortable with that label? Second, does he only intend it for a certain subset of humans with agreeable values, or does it apply to all value systems, even ones we would find objectionable?"

(I'm passing on questions, without comment, from anyone without an e-a.com account or who wants anonymity here.)

So8res · 9y · 6
You could call it a kind of moral relativism if you want, though it's not a term I would use. I tend to disagree with many self-proclaimed moral relativists: for example, I think it's quite possible for one to be wrong about what they value, and I am not generally willing to concede that Alice thinks murder is OK just because Alice says Alice thinks murder is OK.

Another place I depart from most moral relativists I've met is by mixing in a healthy dose of "you don't get to just make things up." Analogy: we do get to make up the rules of arithmetic, but once we do, we don't get to decide whether 7+2=9. This despite the fact that a "7" is a human concept rather than a physical object (if you grind up the universe and pass it through the finest sieve, you will find no particle of 7). Similarly, if you grind up the universe you'll find no particle of Justice, and value-laden concepts are human concoctions, but that doesn't necessarily mean they bend to our will.

My stance can roughly be summarized as "there are facts about what you value, but they aren't facts about the stars or the void, they're facts about you." (The devil's in the details, of course.)

What are MIRI's publication plans over the next few years, whether peer-reviewed or arXiv-style?

More specifically, what are (a) the long-term intentions and (b) the short-term actual plans for publishing workshop results, and what kind of priority does that have?

So8res · 9y · 4
Great question! The short version is, writing more & publishing more (and generally engaging with the academic mainstream more) are very high on my priority list.

Mainstream publications have historically been fairly difficult for us, as until last year, AI alignment research was seen as fairly kooky. (We've had a number of papers rejected from various journals due to the "weird AI motivation.") Going forward, it looks like that will be less of an issue.

That said, writing capability is a huge bottleneck right now. Our researchers are currently trying to (a) run workshops, (b) engage with & evaluate promising potential researchers, (c) attend conferences, (d) produce new research, (e) write it up, and (f) get it published. That's a lot of things for a three-person research team to juggle! Priority number 1 is to grow the research team (because otherwise nothing will ever be unblocked), and we're aiming to hire a few new researchers before the year is through. After that, increasing our writing output is likely the next highest priority.

Expect our writing output this year to be similar to last year's (i.e., a small handful of peer-reviewed papers and a larger handful of technical reports that might make it onto the arXiv), and then hopefully we'll have more & higher-quality publications starting in 2016 (the publishing pipeline isn't particularly fast).