
Apologies if this is clearly laid out somewhere else: is there someone I could donate to who independently investigates the AI safety space for conflicts of interest?

It has been mentioned that several large donors in the AI safety space have personal investments in AI. While I have no proof that this is going on, and really hope it is not, it seems smart to have at least one person funded at least half-time to look across the AI safety space for possible conflicts of interest.

I think a large, diverse group of small donors could actually have a unique opportunity here. The funded person should refuse grants from any large donors, should not accept grants that comprise more than e.g. 5% of their total funding, and all of this should be extremely transparent.

This does not need to be an investigative journalist; it could be anyone with a scout mindset, an ability to connect with people, and a hunch for "where to look".



2 Answers

It is trivially available public information that what you are saying here is true. This isn't something for which we need an investigative journalist; it's something for which you just need basic Google skills:

Thanks, that is super helpful, although some downvotes could have come from what might be perceived as a slightly infantilizing tone - haha! (No offense taken, as you are right that the information is really accessible, but I am just a bit surprised that this is not mentioned more often on the podcasts I listen to - or perhaps I have simply missed several EAF posts on this.)

Ok, so all major funders of AI safety are personally, and probably quite significantly, going to profit from the large AI companies making AI powerful and pervasive.

I guess the good thing, then, is that as AI grows they will have more money to put towards making it safe - it might not be all bad.

MichaelDickens
I know of only two major funders in AI safety—Jaan Tallinn and Good Ventures—and both have investments in frontier AI companies. Do you know of any others?
Benevolent_Rain
No, my comments are completely novice and naïve. I think I am just baffled that all of the funding of AI safety is done by individuals who will profit massively from accelerating AI. Or rather, what baffles me most is how little focus there is on this peculiar combination of incentives. I listen to a few AI podcasts and browse the forum now and then - why am I only hearing about this now, after a couple of years? Not sure what to think of it; my main feeling is just that the relative silence about this is somehow strange, especially in an environment that places importance on epistemics and biases.
MichaelDickens
I think most people don't talk about it because they don't think it's a big deal. FWIW I don't think it's a huge deal either, but it's still concerning.

FYI, weirdly timely podcast episode just out from FLI.

Comments

Not sure why this is tagged Community? Ticking one of these makes it EA Community:

  • The post is about EA as a cultural phenomenon (as opposed to EA as a project of doing good)
    • I think this is clearly about doing good; it does not rely on EA at all, only AI safety.
  • The post is about norms, attitudes or practices you'd like to see more or less of within the EA community
    • This is a practice that might be relevant to AI safety independent of EA.
  • The post would be irrelevant to someone who was interested in doing good effectively, but NOT interested in the effective altruism community
    • If this is indeed something that would help AI safety, it would be highly relevant to someone interested in the topic but without any knowledge of, or interest in, the EA community. I would welcome any explanation of why, given this, the question is about community.
  • The post concerns an ongoing conversation, scandal or discourse that would not be relevant to someone who doesn't care about the EA community.
    • Again, this should be relevant to people who have no interest in EA but an interest in AI safety.

Community seems the right categorisation to me - the main reason to care about this is understanding the existing funding landscape in AI safety, and how much to defer to those funders and trust their decisions. And I would consider basically all the large funders in AI safety to also be in the EA space, even if they wouldn't technically identify as EA.

More abstractly, a post about conflicts of interest and other personal factors in a specific community of interest seems to fit this category.

Being categorised as community doesn't mean the post is bad, of course!

edit: the issue raised in this comment has been fixed
