
Jonas Hallgren

359 karma · Joined Mar 2021 · Uppsala, Sweden


I appreciate you putting out a post in support of someone who might have some EA leanings that would be good to pick up on. I may or may not have done the same in the past and then removed the post because people absolutely shat on it on the forum 😅 so respect.

I guess I felt that a lot of the post was arguing under a frame of utilitarianism, which is generally fair, I think. When it comes to "not leaving a footprint on the future", what I'm referring to is epistemic humility about the correct moral theories. I'm quite uncertain myself about what is correct when it comes to morality, with extra weight on utilitarianism. Because of this, we should be worried about being wrong and therefore try our best not to lock in whatever we're currently thinking. (The classic example: if we had locked in our values 200 years ago, we might still have slavery in the future.)

I'm a believer that virtue ethics and deontology are imperfect-information approximations of utilitarianism. Kant's categorical imperative, for example, is a way of looking at the long-term future and asking: how do we optimise society to be the best that it can be?

I guess a core crux for me is that you seem to be arguing a bit for naive utilitarianism here. I don't really believe that AGI will follow the VNM axioms, i.e. be fully rational. I think it will be an internal dynamical system weighing the different things it wants, and that it won't fully maximise utility because it won't be internally aligned. Therefore we need to get it right, or we're going to end up with weird and idiosyncratic values that are not optimal for the long-term future of the world.

I hope that makes sense, I liked your post in general.

Yes, I was on my phone, and you can't link things there easily; that was what I was referring to. 

I feel like this goes against the principle of not leaving your footprint on the future, no?

Like, a large part of what I believe to be the danger with AI is that we don't have any reflective framework for morality. I also don't believe the standard path to AGI is one of moral reflection. This would then mean leaving the values of the future up to market dynamics, which doesn't seem good given all the traps in such a situation (Moloch, for example).

If we want a shot at a long reflection or something similar, I don't think full-sending AGI is the best thing to do.

How will you address the conflict-of-interest allegations raised against your organisation? It feels like the two organisations are awfully intertwined. For God's sake, the CEOs are sleeping with each other! I bet they even do each other's taxes!

I'm joining the other EA. 

It makes sense for the dynamics of EA to naturally go this way (not that I endorse it). It is just applying the intentional stance plus the free energy principle to the community as a whole. I find myself generally agreeing with the first post, at least, and I notice the large regularisation pressure being applied to individuals in the space.

I often feel the bad vibes associated with trying hard to get into an EA organisation. As a consequence, I'm doing for-profit entrepreneurship for AI safety adjacent to EA, and it is very enjoyable (and more impactful, in my view).

I will, however, say that the community in general is very supportive and that it is easy to get help with things if one has a good case and asks for it, so maybe we should make our structures more focused on that? I echo some of the points about making it more community-focused, however that might look. Good stuff, OP, peace.

I did enjoy the discussion here in general. I hadn't heard of the "illusionist" stance before, and it does sound quite interesting, yet I also find it quite confusing.

I generally find there to be a lot of confusion about the relation of the self to what "consciousness" is. I went down a rabbit hole of thinking about it a lot and realised I had to probe the edges of my "self" to figure out how it truly manifested. A thousand hours into meditation, some of the existing barriers have fallen down.

The complex attractor state can actually be experienced in meditation; it is what you would generally call a case of dependent origination or a self-sustaining loop (literally, lol). You can see through this by the practice of realising that the self-property of mind is co-created by your mind and that it is "empty". This is a big part of the meditation project (alongside loving-kindness practice; please don't skip the loving-kindness practice).

Experience itself isn't mediated by this "selfing" property; rather, the self is an artificial boundary we have created around our actions in the world for simplification reasons. (See Boundaries as a general way of framing how this occurs.)

So the self cannot be the ground of consciousness; it is rather a computationally optimal structure for behaving in the world. Yet realising this fully is most easily done through your own experience, or n=1 science, meaning that to fully collect the evidence you have to discover it through your own phenomenological experience (which makes it awkward to bring into Western philosophical contexts).

So, since the self cannot be the ground, and since "consciousness" is a very conflated term, I like thinking about different levels of sentience instead. At a certain threshold of sentience, the "selfing" loop is formed.

The claims and evidence he's talking about may be true, but I don't believe they justify the conclusions he draws from them.

Thank you for this post! I will make sure to read the 5/5 books that I haven't read yet. I'm especially excited about Joseph Henrich's book from 2020; I had read The Secret of Our Success before but not that one.

My interest in moral progress actually comes from an AI Safety angle. For me, the question is to some extent how we can set up AI systems so that they continuously improve "moral progress", since we don't want to leave our fingerprints on the future.

In my opinion, the larger AI Safety dangers come from a "big data hell" like the one described in Yuval Noah Harari's Homo Deus, or from Paul Christiano's slow-takeoff scenarios.

Therefore we want to figure out how to set up AIs in a way that automatically improves moral progress through the structure of their use. I also believe that AI will most likely go through a process similar to the one described in The Secret of Our Success, and that we should prepare appropriate optimisation functions for it.

So, if you ever feel like we might die from AI, I would love to see some work in that direction! 
(happy to talk more about it if you're up for it.)

The number of applications will affect the counterfactual value of applying. Now, sharing your expected number might lower the number of people who apply, but I would still appreciate a range of expected applicants for the AI Safety roles.

What is the expected number of people applying for the AI Safety roles?
