Hi EA Forum,
I'm Luke Muehlhauser and I'm here to answer your questions and respond to your feedback about the report on consciousness and moral patienthood I recently prepared for the Open Philanthropy Project. I'll be here today (June 28th) from 9am Pacific onward, until the flow of comments drops off or I run out of steam, whichever comes first. (But I expect to be available through at least 3pm and maybe later, with a few breaks in the middle.)
Feel free to challenge the claims, assumptions, and inferences I make in the report. Also feel free to ask questions that you worry might be "dumb questions," and questions you suspect might be answered somewhere in the report (but you're not sure where) — it's a long report! Please do limit your questions to the topics of the report, though: consciousness, moral patienthood, animal cognition, meta-ethics, moral weight, illusionism, hidden qualia, etc.
As noted in the announcement post, much of the most interesting content in the report is in the appendices and even some footnotes, e.g. on unconscious vision, on what a more satisfying theory of consciousness might look like, and a visual explanation of attention schema theory (footnote 288). I'll be happy to answer questions about those topics as well.
I look forward to chatting with you all!
EDIT: Please post different questions as separate comments, for discussion threading. Thanks!
EDIT: Alright, I think I replied to everything. My thanks to everyone who participated!
I think that functionalism is incorrect and that we are super-confused about this issue.
Specifically, there is merit to the "Explanatory gap" argument. See https://en.m.wikipedia.org/wiki/Explanatory_gap
I also sort of think I know what the missing thing is. It's that the input is connected to the algorithm that constitutes you.
If this is true, there is no objective fact of the matter about which entities are conscious (in the sense of having qualia). From my point of view, only I am conscious. From your point of view, only you are. Neither of us is wrong.
What's the explanatory gap argument?