Jonas Hallgren

324 karma · Joined Mar 2021 · Uppsala, Sweden

Comments: 32 · Topic contributions: 3

I did enjoy the discussion here in general. I hadn't heard of the "illusionist" stance before; it sounds quite interesting, yet I also find it quite confusing.

I generally find there to be a lot of confusion about how the self relates to what "consciousness" is. I went down the rabbit hole of thinking about this a lot and realised I had to probe the edges of my "self" to figure out how it truly manifested. A thousand hours into meditation, some of the existing barriers have fallen down.

The complex attractor state can actually be experienced in meditation; it is what you would generally call a case of dependent origination or a self-sustaining loop (literally, lol). You can see through it by realising that the self-property of mind is co-created by your mind and that it is "empty". This is a big part of the meditation project (alongside loving-kindness practice; please don't skip the loving-kindness practice).

Experience itself isn't mediated by this "selfing" property; rather, the self is an artificial boundary we have created around our actions in the world for the sake of simplification. (See Boundaries as a general frame for how this occurs.)

So, the self cannot be the ground of consciousness; it is rather a computationally optimal structure for behaving in the world. Yet realising this fully is easiest through your own experience, or n=1 science, meaning that to fully collect the evidence you have to discover it through your own phenomenological experience (which makes it awkward to carry into Western philosophical contexts).

So, partly because the self cannot be the ground and partly because "consciousness" is a very conflated term, I prefer to think in terms of different levels of sentience instead. At a certain threshold of sentience, the "selfing" loop is formed.

The claims and evidence he's talking about may be true, but I don't believe they justify the conclusions he draws from them.

Thank you for this post! I will make sure to read the books on the list that I haven't read yet. I'm especially excited about Joseph Henrich's book from 2020; I had read The Secret of Our Success before, but not that one.

My interest in moral progress actually comes from AI Safety. For me, the question is to some extent how we can set up AI systems so that they continuously improve "moral progress", since we don't want to leave our fingerprints on the future.

In my opinion, the larger AI Safety dangers come from a "big data hell" like the one described in Yuval Noah Harari's Homo Deus, or from Paul Christiano's slow take-off scenarios.

Therefore, we want to figure out how to set up AIs in a way that automatically improves moral progress through the structure of their use. I also believe that AI will most likely go through a process similar to the one described in The Secret of Our Success, and that we should prepare appropriate optimisation functions for it.

So, if you ever feel like we might die from AI, I would love to see some work in that direction! 
(happy to talk more about it if you're up for it.)

The number of applicants will affect the counterfactual value of applying. Now, stating your expected number might lower the number of people who apply, but I would still appreciate a range of expected applicants for the AI Safety roles.

What is the expected number of people applying for the AI Safety roles?

I'm getting the vibe that your priors are, to some extent, on the world being in a multipolar scenario in the future. I'm interested more specifically in what your predictions are for multipolarity versus a singleton given shard-theory thinking, since recursive self-improvement seems unlikely to happen in the way described, given what I understand of your model.

Great post; I enjoyed it.

I've got two things to say. The first is that GPT is a very nice brainstorming tool: it generates many more ideas than you could on your own, which you can then prune down.
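To make that first point a bit more concrete, here is a minimal sketch of the generate-then-prune loop I have in mind. It assumes the official OpenAI Python client; the model name, prompt, and helper function are illustrative placeholders rather than a fixed recipe:

```python
# Sketch of a generate-then-prune brainstorming loop.
# Assumes the `openai` Python client (v1+) and an API key in the environment;
# the model name and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def brainstorm(topic: str, n_ideas: int = 20) -> list[str]:
    """Ask the model for many short ideas on a topic, one per line."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "user",
             "content": f"Give me {n_ideas} short, distinct ideas about: {topic}. One per line."},
        ],
    )
    text = response.choices[0].message.content or ""
    return [line.strip("-• ").strip() for line in text.splitlines() if line.strip()]

if __name__ == "__main__":
    ideas = brainstorm("framings for a weekly meeting with yourself")
    # Pruning stays manual: print everything and keep only the few that resonate.
    for i, idea in enumerate(ideas, 1):
        print(f"{i}. {idea}")
```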

Secondly, I've been doing "peer coaching" with some EA people, using reclaim.ai (not sponsored) to automatically book meetings each week. We take turns being the mentor and mentee, answering the following questions:

- What's on your mind?
- When would today's session be a success?
- Where are you right now?
- How do you get where you want to go?
- What are the actions/first steps to get there?
- Ask for feedback

I really like the framing of meetings with yourself; I'll definitely try that out.

Isn't expected value calculated as the probability times the utility, and as a consequence, isn't the higher-risk part wrong if one simply looks at it like this? (Going from 20% to 10% would be 10x the impact of going from 2% to 1%.)
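To spell out the arithmetic behind that parenthetical (with $U$ standing for the disutility of the bad outcome; the notation is just mine for illustration):

$$
\Delta EV_{20\% \to 10\%} = (0.20 - 0.10)\,U = 0.10\,U,
\qquad
\Delta EV_{2\% \to 1\%} = (0.02 - 0.01)\,U = 0.01\,U,
\qquad
\frac{0.10\,U}{0.01\,U} = 10.
$$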

(I could be missing something here, please correct me in that case)

I didn't mean it in that sense. I think the lesson you drew from it is fair in general; I was just reacting to the things I felt you swept under the rug, if that makes sense.

Sorry, Pablo, I meant that I became a lot more epistemically humble; I should have thought more about how I phrased it. I went from the opinion that many-worlds is probably true to: "Oh man, there are some weird answers to the Wigner's friend thought experiment and I shouldn't give major weight to any of them." So I'm now at maybe 20% on many-worlds?

That being said, I am overconfident from time to time, and it's fair to point that out as well. Maybe you were being overconfident in saying that I was overconfident? :D

I will say that I thought the consciousness/p-zombie distinction was very interesting, and a good example of overconfidence on my part, as this didn't come across in my previous comment.
