
peterbarnett

508 karma · Joined May 2020

Bio

EA and AI safety

Conceptual alignment research at MIRI

https://peterbarnett.org/

Comments (18)

Do you mean the posts early last year about fundamental controllability limits?

Yep, that is what I was referring to. It does seem like you're likely to be more careful in the future, but I'm still fairly worried about advocacy done poorly. (Although, like, I also think people should be able to do advocacy if they want.)

I have similar views to Marius's comment. I did AISC in 2021 and I think it was somewhat useful for getting started in AI safety, although in hindsight my views and understanding of the problems were pretty dumb.

AISC does seem extremely cheap (at least for the budget options). Even if you put something like 80% on the "Only top talent matters" model (MATS, Astra, others) and 20% on the "Cast a wider net" model (AISC), I would still guess that AISC is a good thing to do.

My main worries here are about negative effects. These are mainly related to the "To not build uncontrollable AI" stream; 3 out of 4 of these projects seem to be about communication/politics/advocacy.[1] I'm worried about these having negative effects, making AI safety people seem crazy, uninformed, or careless. I'm mainly worried because Remmelt's recent posting on LW really doesn't seem like careful or well-thought-through communication. (In general I think people should be free to do advocacy etc., although please think about externalities.) Part of my worry is also that AISC is a place where new people come in, and new people might not realize how fringe these views are within the AI safety community.

I would be more comfortable with these projects (and they would potentially still be useful!) if they were more focused on understanding the things they are advocating for. E.g. a report on "How could lawyers and coders stop AI companies from using their data?", rather than attempting to start an underground coalition.

All the projects in the "Everything else" stream (run by Linda) seem good or fine, and likely a decent way to get involved and start thinking about AI safety, although, as always, there is a risk of wasting time on projects that end up being useless.

[ETA: I do think that AISC is likely good on net.]

  1. ^

    The other one seems like a fine/non-risky project related to domain whitelisting.

This is missing a very important point, which is that I think humans have morally relevant experience and I'm not confident that misaligned AIs would. When the next generation replaces the current one, this is somewhat ok because those new humans can experience joy, wonder, adventure, etc. My best guess is that AIs that take over and replace humans would not have any morally relevant experience, and would basically just leave the universe morally empty. (Note that this might be an ok outcome if by default you expect things to be net negative.)

I also think that there is way more overlap in the "utility functions" between humans than between humans and misaligned AIs. Most humans feel empathy and don't want to cause others harm. I think humans would generally accept small costs to improve the lives of others, and a large part of why people don't do this is that they have cognitive biases or aren't thinking clearly. This isn't to say that any random human would reflectively become a perfectly selfless total utilitarian, but rather that most humans do care about the wellbeing of other humans.

Yeah, that's reasonable; as of 5:36pm PST, November 18, 2023, it still seems like a good bet.
I'm definitely worried about either Sam Altman + Greg Brockman starting a new, less safety-focused lab, or Sam + Greg somehow returning to OpenAI and removing the safety-focused people from the board.
Even with this, it seems pretty good to have safety-focused people with some influence over OpenAI. I'm a bit confused about how to think about situations like "Yes, it was good to get influence, but it turned out you made a bad tactical mistake and ended up making things worse."

Yeah, a more quantitative survey sounds like a useful thing to have, although I don't currently have concrete plans to do one.

I'm slightly wary of causing 'survey fatigue' by constantly emailing AI safety people with surveys, but this seems like something that wouldn't be too fatiguing.

Not exactly, but it seems useful to know what other people have done if you want to do work similar to theirs.

Obviously, this comes with all the standard hedges that we don't want everyone doing exactly the same thing and thinking the same way.

That is definitely part of studying math. The thing I was trying to point to is the process of going from an idea or intuition to something that you can write down in math. For example, in linear algebra you might have a feeling about some property of a matrix, but then you actually have to prove it. Or, more relevantly, in Optimal Policies Tend to Seek Power it seems like the definition of 'power' came from formalizing the properties we would want this thing called 'power' to have.
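As a rough sketch of what that kind of formalization ends up looking like (paraphrasing from memory, so treat the notation as approximate rather than the paper's exact statement): the informal idea "power is the ability to achieve a wide range of goals" gets cashed out as roughly the average optimal value attainable from a state, over a distribution \mathcal{D} of reward functions,

\mathrm{POWER}_{\mathcal{D}}(s, \gamma) \;=\; \frac{1-\gamma}{\gamma}\,\mathbb{E}_{R \sim \mathcal{D}}\!\left[ V^{*}_{R}(s, \gamma) - R(s) \right]

where the expectation over reward functions is doing the work of "a wide range of goals," and subtracting R(s) discounts the reward you get regardless of what you do. The point is that the vague desideratum turns into a precise expectation over optimal value functions, which you can then prove theorems about.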

But I'm curious to hear your thoughts on this, and whether you think there are other useful ways to develop this 'formalization' skill.
 

I got to the same stage (and also didn't get in) and had the same experience as you. I was definitely a bit sad about not getting in, but I did appreciate the call and the feedback.

Maybe some construction megaprojects might count; I'm thinking of Notre-Dame Cathedral, which took about 100 years to complete.

This might not really count because the choir was completed after about 20 years. I'm also not sure whether it was meant to take that long.
