From the forum post alone, I'm not exactly sure what this job actually involves. Based on a link in the form, the duties might be:
Hello, to clarify #1, I would say:
It could be the case that future AI systems are conscious by default, and that it would be difficult to build them without making them conscious.
Let me try to spell out my intuition here:
1. If many organisms have property X, and property X is rare amongst non-organisms, then property X is probably evolutionarily advantageous.
2. Consciousness meets this condition, so it is likely evolutionarily advantageous.
3. The advantage that consciousness gives us is most likely related to our ability to reason, adapt behaviour, control our attention, compare options, and so on. In other words, it's a "mental advantage" (as opposed to e.g. a physical or metabolic advantage).
4. We will put a lot of money into building AI that can reason, solve problems, adapt behaviour appropriately, control attention, compare options, and so on. Given that many organisms employ consciousness to efficiently achieve these tasks, there is a non-trivial chance that AI will too.
To be clear, I don't know that I would say "it's more likely than not that AI will be conscious by default".
I think that AI welfare should be an EA priority, and I'm working on it myself. I think this post is a good illustration of what that means, and 5% seems reasonable to me. I also appreciate this post, as it captures many of my core motivations. I recently spent several months thinking hard about the most effective philosophy PhD project I could work on, and concluded that it was to work on AI consciousness.
I feel like this post is missing discussion of two reasons to build conscious AI:
1. It may be extremely costly or difficult to avoid (this may not be a good reason, but it seems like a plausible explanation of why we would do it).
2. Digital minds could have morally valuable conscious experiences, and if there are very many of them, this could be extremely good (at least on some, admittedly controversial, ethical theories).
That's great, thank you for clarifying, Peter!