Apologies if this is clearly laid out somewhere else: Is there someone I could donate to that independently investigates the AI safety space for conflicts of interest?
It has been mentioned that several large donors in the AI safety space have personal investments in AI. While I have no proof that anything improper is going on, and I really hope it is not, it seems prudent to fund at least one person, at least half-time, to look across the AI safety space for potential conflicts of interest.
I think a large, diverse group of small donors could actually have a unique opportunity here. The funded person should refuse grants from any large donors and should not accept grants that make up more than, say, 5% of their total funding, and all of this should be extremely transparent.
This does not need to be an investigative journalist; it could be anyone with a scout mindset, an ability to connect with people, and a hunch for where to look.
Thanks, that is super helpful, although some downvotes could have come from what might be perceived as a slightly infantilizing tone - haha! (No offense taken; you are right that the information is really accessible. I guess I am just a bit surprised that this is not mentioned more often on the podcasts I listen to, or perhaps I have just missed several EAF posts on this.)
Ok, so all major funders of AI safety stand to personally, and probably quite significantly, profit from the large AI companies making AI powerful and pervasive.
I guess the good thing is that as AI grows, they will have more money to put towards making it safe, so it might not be all bad.