
Apologies if this is clearly laid out somewhere else: is there someone I could donate to who independently investigates the AI safety space for conflicts of interest?

It has been mentioned that several large donors to the AI safety space have personal investments in AI. While I have no proof that this is going on, and really hope it is not, it seems smart to have at least one person funded at least half-time to look across the AI safety space for possible conflicts of interest.

I think a large, diverse group of small donors could actually have a unique opportunity here. The funded person should refuse grants from any large donors, should not accept grants that comprise more than e.g. 5% of their total funding, and all of this should be extremely transparent.

This does not need to be an investigative journalist; it could be anyone with a scout mindset, the ability to connect with people, and a good sense of where to look.

Answers

It is trivially available public information that what you are saying here is true. This isn't something for which we need an investigative journalist; it's something for which you just need basic Google skills.

Thanks, that is super helpful, although some downvotes could have come from what might be perceived as a slightly infantilizing tone - haha! (No offense taken, as you are right that the information is really accessible, but I guess I am just a bit surprised that this is not often mentioned on the podcasts I listen to; or perhaps I have just missed several EAF posts on this.)

Ok, so all major funders of AI safety are personally, and probably quite significantly, going to profit from the large AI companies making AI powerful and pervasive.

I guess the good thing, then, is that as AI grows they will have more money to put towards making it safe - it might not be all bad.

MichaelDickens
I know of only two major funders in AI safety—Jaan Tallinn and Good Ventures—and both have investments in frontier AI companies. Do you know of any others?
Comments

I'm not sure why this is tagged Community. Ticking any one of these makes it EA Community:

  • The post is about EA as a cultural phenomenon (as opposed to EA as a project of doing good)
    • I think this is clearly about doing good; it does not rely on EA at all, only AI safety.
  • The post is about norms, attitudes or practices you'd like to see more or less of within the EA community
    • This is a practice that might be relevant to AI safety independent of EA.
  • The post would be irrelevant to someone who was interested in doing good effectively, but NOT interested in the effective altruism community
    • If this is indeed something that would help AI safety, it would be highly relevant to someone interested in the topic but with no knowledge of or interest in the EA community. I would welcome any explanation of why, given this, the question is about community.
  • The post concerns an ongoing conversation, scandal or discourse that would not be relevant to someone who doesn't care about the EA community.
    • Again, this should be relevant to people who have no interest in EA but an interest in AI safety.

Community seems the right categorisation to me - the main reason to care about this is understanding the existing funding landscape in AI safety, and how much to defer to those funders and trust their decisions. And I would consider basically all the large funders in AI safety to also be in the EA space, even if they wouldn't technically identify as EA.

More abstractly, a post about conflicts of interest and other personal factors within a specific community of interest seems to fit this category.

Being categorised as community doesn't mean the post is bad, of course!

Edit: the issue raised in this comment has been fixed.
