@niplav Interesting take; thanks for the detailed response.
Technically, I think that AI safety as a technical discipline has no "say" in who the systems should be aligned with. That's for society at large to decide.
So, if AI safety as a technical discipline should not have a say in who systems are aligned with, yet its practitioners are the ones doing the aligning, whose values are they aiming to align the systems with?
Is it naturally an extension of the values of whoever has the most compute power, best engineers, and most data?
I love the idea of society at large deciding, but then I think about humanity's track record.
@PeterSlattery I want to push back on the framing of "regular" movement building versus "meta" movement building. It sounds like you have a fair amount of experience in movement building. I'm not sure I agree that you went meta here, but even if you had, I'm not convinced that would be a bad thing, particularly given the subject matter.
I have only read one of your posts so far, but I appreciated it. I think you are wise to try to facilitate the creation of a more cohesive theory of change, especially if inadvertently doing harm is a significant risk.
As someone on the periphery who does not work in AI safety but has tried to understand it a bit, I feel pretty confused: I haven't encountered much in the way of strategy or corresponding tactics. I imagine this might be quite frustrating and demotivating for those working in the field.
I agree with the anonymous submission that broader perspectives would likely be quite valuable.
Interesting, thanks for sharing your thoughts. I guess I'm less certain that wealth has led to faster moral progress.