DDM (Dr. David Mathers)

2162 karma · Joined Dec 2021

Comments (226)
    Fair point. I personally agree that it has tended to be underdeveloped.

    Also, part (although not all) of the attraction of "more neurons = more consciousness" is, I think, a picture on which "more input = more of some physical stuff", which is wrong in this case. I actually do (tentatively!) think that consciousness is a sort of cluster-y concept, where the more of a range of properties a mind has, the more true* it is to say it is conscious, but none of those properties is definitively what being conscious "really" requires (e.g. sensory input feeding into rational belief, the ability to recognize your own sensory states, some sort of raw complexity requirement to rule out very simple systems with the previous two features, etc.). And I think larger neuron counts will roughly correlate with having more of these sorts of properties. But I doubt this will lead to a view where something with a trillion neurons is a thousand times more conscious than something with a billion.
    *Degrees of truth are also highly philosophically controversial though. 

    I'm not saying it's impossible to make sense of the idea of a metric of "how conscious" something is, just that it's unclear enough what this means that any claim employing the notion without explanation is not "commonsense". 

    'There's a common sense story of: more neurons → more compute power → more consciousness.'

    I think it is very unclear what "more consciousness" even means. "Consciousness" isn't "stuff" like water that you can have a greater weight or volume of. 

    It's hard to see how the backlash could actually destroy GiveWell or stop Moskovitz and Tuna from giving away their money through Open Phil/something that resembles Open Phil. That's a lot of EA right there.

    Good comment, but Drexler actually strikes me as both more moderate and more interesting on AI than just "same as Yudkowsky". He thinks really intelligent AIs probably won't be agents with goals at all (at least the first ones we build), and that this means that takeover worries of the Bostrom/Yudkowsky kind are overrated. It's true that he doesn't think the risks are zero, but if you look at the section titles of his FHI report, a lot of it is actually devoted to debunking various claims Bostrom/Yudkowsky make in support of the view that takeover risk is high: https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf

    I don't think this affects the point you're making; it just seemed a bit unfair on Drexler if I didn't mention this.

    'The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI.'

    What would this look like? I feel like, if all you do is say nice things, that is usually a good idea, but it won't move the dial that much (and it is also potentially lying, depending on context and your own opinions; we can't just assume all concerns about short-term harm, let alone proposed solutions, are well thought out). But if you're advocating spending actual EA money and labour on this, surely you'd first need to make a case that "dealing with the short-term harms of AI" is not just good (plausible), but also better than spending the money on other EA stuff. I feel like a hidden crux here might be that you, personally, don't believe in AI X-risk*, so you think it's an improvement if AI-related money is spent on short-term stuff, whether or not that is better than spending it on animal welfare or global health and development, or for that matter on anti-racist/feminist/socialist stuff not to do with AI. But obviously, people who do buy that AI X-risk is comparable to or better than standard near-term EA stuff or biorisk as a cause area can't take that line.

    *I am also fairly skeptical it is a good use of EA money and effort for what it's worth, though I've ended up working on it anyway. 

    Thorstad is mostly writing about X-risk from bioterror. That's slightly different from biorisk as a broader category. I suspect Thorstad is also skeptical about the latter, but that is not what the blogposts are mostly focused on. It could be that frontier AI models will make bioterror easier and this could kill a large number of people in a bad pandemic, even if X-risk from bioterror remains tiny. 
