Thank you for writing this.
Please could you add to the top of the Google doc:
This would make it easier for people to judge for themselves how much weight to put on your advice.
Thank you for this post. I agree with its central premise and I know that Michelle is already working on an impact evaluation that will contain a lot of this sort of information.
However, your post contains a couple of misleading points that seem worth correcting.
For future reference, it would have been courteous to contact someone at Giving What We Can before posting this. In case that sounds intimidating, I can assure you they are all very friendly :)
(Disclosure: I manage Giving What We Can's website as a volunteer.)
I think that OpenAI is not worried about actors like DeepMind misusing AGI, but rather:
(a) is worried about actors that might not currently be on most people's radar misusing AGI;
(b) thinks that scaling up capabilities enables better alignment research (though it sees other benefits to scaling up capabilities too); and
(c) is earning revenue for reasons other than direct existential risk reduction, where it does not see a conflict in doing so.