
alexherwix

580 karma · Joined Jul 2018

Comments (118)

    This reminds me of the work on the Planungszelle in Germany, but with some more bells and whistles. One difference I see: as far as I know, a core idea in more traditional deliberation processes is that the process itself should be understandable by the average citizen. This gives it grounding and legitimacy, in that everyone involved can cross-check everyone else and make sure the outcome is not manipulated. You seem to diverge from this ideal a little, in that you appear to require sophisticated statistical techniques, which potentially cannot be understood or cross-checked by a general cross-section of the population.

    Maybe it would make sense to use a two-stage procedure, where in the first (preparation) stage you gain general agreement on what process to run in the second (work) stage? Or, looking at your model, to have the citizens' assembly involved in managing and controlling the expert modeling process, or at least to have multiple different expert teams provide models to the citizens' assembly. Otherwise, it seems like you have a single point of failure where the democratic aspect of the process could be neutralized quite easily.

    I am just speculating, though; I haven't had time to look at the white paper in detail. Maybe (probably!) you have thought about these aspects already.

    The key point I am trying to make is this: our common-sense understanding holds that animals are sentient because they are anatomically similar to us in many respects and also demonstrate the behavior we would expect of sentient creatures. You argue against that, and instead come up with your own elaborate requirements that you claim must be met before we can say anything about qualia in other beings. But then, at some point (maybe the point where you feel comfortable with your conclusions), you stop following your own line of argument through to the end (i.e., "qualia exist somewhere in the causal structure" does not imply "other humans have qualia") and revert to the "common sense" you had just argued is insufficient in this case. So your position seems somewhat selective, and potentially self-serving in supporting your prior beliefs, rather than intellectually superior to the common-sense understanding.

    But how can you assume that humans in general have qualia if all the talk about qualia tells you only that qualia exist somewhere in the causal structure? Maybe all talk about qualia derives from a single source? How would you know? To me, this seems like a reductio ad absurdum of your entire line of argument.

    Thanks for sharing your thoughts! I think you are onto an interesting angle here that could be worth exploring if you are so inclined.

    One line of work that you do not seem to be considering at the moment, but which could be interesting, is that done in the "metacrisis" (or polycrisis) space. See this presentation for an overview, but I recommend diving deeper than that to get a better sense of the space. This perspective tries to understand and address the underlying patterns that create the wicked situation we find ourselves in. It works a lot with concepts like "Moloch" (i.e., multi-polar traps in coordination games), the risk-accelerating role of AI, and the different types of civilizational failure modes (e.g., dystopia vs. catastrophe) we should guard against.
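    To make the multi-polar trap idea concrete, here is a minimal sketch in Python. The `payoff` function and all numbers are my own illustrative assumptions, not taken from the metacrisis literature; the point is only the structure: racing is individually rational no matter what others do, yet everyone racing leaves everyone worse off than nobody racing.

    ```python
    # Toy multi-polar trap ("Moloch"): each actor gains a private edge by
    # racing ahead, but every racer adds a shared cost borne by all.
    # All payoff numbers are illustrative assumptions.

    def payoff(i_race: bool, others_racing: int) -> float:
        """Payoff for one actor, given their choice and how many others race."""
        edge = 2.0 if i_race else 0.0                  # private benefit of racing
        racers = others_racing + (1 if i_race else 0)  # total number racing
        shared_cost = 1.5 * racers                     # risk that everyone bears
        return 10.0 + edge - shared_cost

    for others in (0, 4):
        print(f"with {others} others racing: "
              f"race -> {payoff(True, others):4.1f}, "
              f"hold back -> {payoff(False, others):4.1f}")

    # Racing beats holding back in both cases (a dominant strategy), yet all
    # five racing (4.5 each) is worse off than nobody racing (10.0 each).
    ```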

    Also interesting for you may be a working paper I am writing with ALLFED, in which we look at the digital transformation as a driver of systemic catastrophic risk. We start from a simulation model of specific scenarios and then generalize to a framework suggesting that the key features that make digital systems valuable also make them an inherent driver of what we call "the risk of digital fragility". Our work does not yet elaborate on the role of AI, only on the pervasive use of digital systems and services in general. My next steps are to work out the role of AI more clearly and to see if and how our digital fragility framework can be put to use to better understand how AI could contribute to systemic catastrophic risks. Feel free to reach out via PM if you are interested in having a chat about this.

    Hey Daniel,

    as I also stated in another reply to Nick, I didn't mean to diminish the point you raised, but to highlight that it is really more of a "meta point" that is only tangential to the substance of the issue outlined. My critical reaction was not directed at you or your point, but at the more general community practice/trend of focusing on such points at the expense of engaging with the subject matter itself, particularly when the topic goes against mainstream thinking. This is somewhat demonstrated by the fact that your comment is by far the most upvoted on an issue that would have far-reaching implications if accepted as having some merit.

    Hope this makes it clearer. I don't mean to criticize your argument at the object level; it's just coincidental that I picked out your comment to illustrate a development I see as problematic.

    P.S.: There is some irony in me posting a meta-critique of a meta-critique to argue for more object-level engagement, but that's life, I guess.

    Hey Nick,

    thanks for your reply. I didn't mean to say that Daniel doesn't have a point; it's a reasonable argument to make. I just wanted to highlight that this shouldn't be the only angle from which to look at such posts. If you look, his comment is by far the most upvoted, and it only addresses a point tangential to the problem at hand. Of course, getting upvoted is not his "fault". I just felt compelled to point out that overly focusing on this kind of angle only gets us so far.

    Hope that makes it clearer :)

    Your question reminded me of the following quote:

    It Is Difficult to Get a Man to Understand Something When His Salary Depends Upon His Not Understanding It

    Maybe here we are talking about an alternative version of this:

    It Is Difficult to Get a Man to Say Something When His Salary (or Relevance, Power, Influence, Status) Depends Upon His Not Saying It

    Isn't your point a little pedantic here, in the sense that you seem perfectly able to understand the key point the post was trying to make, find that point somewhat objectionable or controversial, and therefore point to issues of "framing" rather than engage deeply with the key points?

    Of course, every post could be better written, more thoughtful, etc., but let's be honest: we are here to make progress on important issues, not to win "argument style points". In particular, I find it disturbing that this technique of criticizing the style of argument seems to be used quite often to discredit, or avoid engaging with, "niche" viewpoints that criticize prevailing "mainstream" opinions in the EA community. This happened to me as well, when I suggested we should look into whether there are alternatives to purely for-profit, closed-source business models for AI ventures: some people were bending over backwards to raise concerns only tangentially related to my proposal (e.g., that government can't be trusted and is incompetent, so anything involving regulation could never work). Another case was a post on engagement with "post-growth" concepts, where I witnessed something like a wholesale character assassination of the post-growth community. I am not saying that happened here; I am simply pointing to a pattern of dismissing niche viewpoints for spurious, tangential reasons without really engaging with them.

    Altogether, wouldn't it be more productive to have more open-minded discussions and to practice ourselves what we preach to the normies out there (e.g., steel-manning instead of straw-manning)? Critiquing style is fine and has its place, but maybe let's do substance first and style second.

    Thank you for writing this post!

    I think it is really important to stay mentally flexible and not tie ourselves into race dynamics prematurely. I hope that reasonable voices such as yours can broaden the discourse and maybe even open doors that were only closed in our minds but never truly locked.

    Ok, I acknowledge that I might have misunderstood your intent. If I had understood that your point was to dispassionately explain why people (the EA community) don't engage with this topic, I might have reacted more dispassionately myself. However, reading your comments, I don't think it was very clear that this is what you were after; rather, it seemed like you were actively making the case against engaging with the topic and using strawmanning tactics to do so. I would encourage you to be clearer in this regard in the future, and I will try to be more mindful of possible misinterpretation.

    I think the key point of my comments stands, in that the position you outlined is potentially problematic and ill-informed. To take another example, you say:

    You can cite the study "justifying" limits to growth, (which I've discussed on this forum before!) but they said that there would be a collapse decades ago, so it's hard to take that seriously

    The point of simulation models is never to "predict" the future; we are not in the Foundation novels doing psychohistory here. Studies like this are used to look for and examine patterns of behavior, which is why it is so remarkable that one of the scenarios they developed mapped so closely onto subsequent developments: that was never the goal of the exercise. So the issue here is that you are misrepresenting the way people actually build their arguments. If you are again claiming that this is not how you see the situation but how other people see it, please make those people aware of the errors in their reasoning, and don't continue to propagate false, or at least misleading, information.
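    To illustrate what I mean by examining patterns rather than predicting dates, here is a minimal sketch in Python. It is not the World3 model; the `run_scenario` function and all its numbers are my own toy assumptions. The point is that you run the same simple stock-flow structure under different assumptions and compare the qualitative shapes of the trajectories (when and how hard output overshoots), not calendar dates.

    ```python
    # Toy stock-flow model in the spirit of scenario exploration: a finite
    # resource stock fuels output growth, and depletion drives decline.
    # All parameters are illustrative assumptions, not calibrated data.

    def run_scenario(extraction_rate, efficiency_growth, steps=200):
        """Simulate a toy economy drawing down a finite resource stock."""
        resource, output = 100.0, 1.0
        trajectory = []
        for t in range(steps):
            efficiency = (1 + efficiency_growth) ** t
            use = min(resource, extraction_rate * output / efficiency)
            resource -= use
            # Output grows while resource use is ample and decays as it dries up.
            output = max(0.0, output * (0.95 + 0.1 * use / extraction_rate))
            trajectory.append((t, resource, output))
        return trajectory

    # Compare the qualitative pattern (height and timing of the peak) across
    # scenarios, rather than reading off a predicted collapse date.
    scenarios = {
        "business as usual": (1.0, 0.00),
        "efficiency gains": (1.0, 0.02),
        "reduced throughput": (0.5, 0.00),
    }
    for name, params in scenarios.items():
        traj = run_scenario(*params)
        peak_t, peak_out = max(((t, out) for t, _, out in traj), key=lambda p: p[1])
        print(f"{name:>18}: output peaks at {peak_out:5.2f} around step {peak_t}")
    ```

    Each scenario produces the same overshoot-and-decline shape with different height and timing; it is that family of patterns, not any particular date, that such models are built to explore.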

    I'm sure there is a steelmanned version of this that deserves some credit, and I initially said that there are some ideas from that movement that deserve credit - but I don't understand what it has to do with the degrowth movement, which is pretty explicit about what it wants and aims for.

    The point I was trying to make is that it would do us good to go out there with a charitable mindset, look for the steelmanned versions of the arguments being made, and try to engage with them on their merits. For me, this implies also looking in "unusual" or, on the face of it, "irritating" places, and talking to people who hold different beliefs or work with different ideas, particularly if they are trying to reach out and engage with us. That happened to some degree here, and all I am advocating for is keeping an open mind and not jumping to dismissive conclusions without deliberate critical engagement.
