Interesting, thanks for sharing your thoughts. I guess I'm less certain that wealth has led to faster moral progress.
Thanks for taking the time to respond; I appreciate it.
@niplav Interesting take; thanks for the detailed response.
Strictly speaking, I think that AI safety as a technical discipline has no "say" in who the systems should be aligned with. That's for society at large to decide.
So, if AI safety as a technical discipline shouldn't have a say in who the systems are aligned with, but its practitioners are the ones aiming to align them, whose values are they aiming to align the systems with?
Is it naturally an extension of the values of whoever has the most compute, the best engineers, and the most data?
I love the idea of society at large deciding, but then I think about humanity's track record.
Interesting, thanks for answering!
Or is there a (spoken or unspoken) consensus that working on aligned AI means working on aligned superintelligent AI?
@PeterSlattery I want to push back on the idea of "regular" movement building versus "meta". It sounds like you have a fair amount of experience in movement building. I'm not sure I agree that you went meta here, but even if you had, I'm not convinced that would be a bad thing, particularly given the subject matter.
I have only read one of your posts so far, but I appreciated it. I think you are wise to try to facilitate the creation of a more cohesive theory of change, especially if inadvertently doing harm is a significant risk.
As someone on the periphery who isn't working in AI safety but has tried to understand it a bit, I feel pretty confused, as I haven't encountered much in the way of strategy and corresponding tactics. I imagine this might be quite frustrating and demotivating for those working in the field.
I agree with the anonymous submission that broader perspectives would likely be quite valuable.
Like, what is the incentive for everyone using existing models to adopt and incorporate the new, aligned AI?
If aligned AI is developed, then what happens?
Who should aligned AI be aligned with?
This is great!
Is the prediction that we will run out of text by 2040 specific to human-generated text, or does it account for generative text outputs (which, as I understand it, are also being used as training inputs)?