Kat Woods

Comments

EA needs consultancies

I'd love to talk to you about this! Sent you a DM

EA needs consultancies

Great post! Thanks for sharing this!

Nonlinear has actually been considering doing almost half of these ideas, particularly prizes, RFPs, training, recruiting, mental health, and doing market research about which services would be the most useful. We’ll definitely reach out to you privately about possible plans because we’d love to get your input on what would be most helpful for OpenPhil.

We are also looking for people or organizations that might be a good fit for these projects, and we will be able to provide mentorship, funding, and introductions. So if any of these ideas excite you and seem like a potentially good fit, please reach out to me! t.ly/sBUB

MichaelA's Shortform

Thanks for posting this! This is a gold mine of resources. This will save the Nonlinear team so much time. 

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

For sure. One example that we'll be researching is scaling up the provision of PAs for high-impact people in AI safety. It seems like one of the things bottlenecking the movement is talent. Getting more talent is one solution, which we should definitely be working on. Another is helping the talent we already have be more productive. Setting up an organization that specializes in hiring PAs and pairing them with top AI safety experts seems like a potentially great way to boost the impact of already high-impact people.

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

Thanks for the links and thoughtful question! 

From an overarching viewpoint, I am personally extremely motivated to avoid accidentally doing more harm than good. I have seen how easy it is to do that in the relatively forgiving fields of poverty and animal welfare, and in AI safety the stakes are much higher and the field much smaller. I literally (not figuratively or hyperbolically) lose sleep over this concern. So when I say we take it seriously, it's not corporate speak for appeasing the masses, but a deeply, genuinely held concern. I say this to point out that, whatever our current methods for avoiding harm are, we are motivated to find and become aware of other ways to increase our robustness.

More specifically, another approach we're using is being extremely cautious about launching things when advisors raise concerns, even if we are not convinced by their object-level arguments. Last year I was considering launching a project, but before I went for it, I asked a bunch of experts in the area. Lots of people liked the idea, but some were worried about it for various reasons. I wasn't convinced by their reasoning, but I am convinced by epistemic modesty arguments, and they had more experience in the area, so I nixed the project. We intend to keep a similar mindset moving forward, while still keeping in mind that no project will ever be universally considered good.

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

Replied about hiring full-timers above: https://forum.effectivealtruism.org/posts/fX8JsabQyRSd7zWiD/introducing-the-nonlinear-fund-ai-safety-research-incubation?commentId=ANTbuSPrNTwRHvw73

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

This is indeed part of our plan! No need to re-invent the wheel. :) 

One of our first steps will be to canvass existing AI Safety organizations and compile a comprehensive list of ideas they want done. We will do our own due diligence before launching any of them, but I would love for Nonlinear to be the organization people come to when they have a great idea that they want to see happen.

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

For hiring full-time RAs, we have plans to do that in the future. Right now we are moving slowly on hiring full-timers. We want to get feedback from external people first (thank you!) and have a more solid strategy before taking on permanent employees.

We are, however, working on developing a technical advisory board of people who are experts in ML. If you know anybody who'd be keen, please send them our way! 

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

Good and important points! 

Sorry for the miscommunication. We are not intending to do technical AI safety work; we are going to focus on non-technical work for the time being.

I am in the process of learning ML but am very far from being able to make contributions in that area. This is mostly so that I have a better understanding of the area and can better communicate with people with more technical expertise. 

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

We currently have a donor who is funding everything. In the future, we intend for it to be a combination of 1) fundraising for specific ideas when they are identified and 2) fundraising for non-earmarked donations from people who trust our research and assessment process.
