Kat Woods


Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

For sure. One example of something we'll be researching is scaling up the provision of personal assistants (PAs) for high-impact people in AI safety. One of the things bottlenecking the movement seems to be talent. Getting more talent is one solution, which we should definitely be working on. Another is helping the talent we already have be more productive. Setting up an organization that specializes in hiring PAs and pairing them with top AI safety experts seems like a potentially great way to boost the impact of already high-impact people.

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

Thanks for the links and thoughtful question! 

From an overarching viewpoint, I am personally extremely motivated to avoid accidentally doing more harm than good. I have seen how easy it is to do that in the relatively forgiving fields of poverty and animal welfare, and the stakes are much higher and the field much smaller in AI safety. I literally (not figuratively or hyperbolically) lose sleep over this concern. So when I say we take it seriously, it's not corporate speak for appeasing the masses, but a deeply, genuinely held concern. I say this to point towards the fact that, whatever our current methods are for avoiding harm, we are motivated to find other ways to increase our robustness.

More specifically, another approach we're using is being extremely cautious in launching things, even when we are not convinced by an advisor's object-level arguments. Last year I was considering launching a project, but before I went for it, I asked a bunch of experts in the area. Lots of people liked the idea, but some were worried about it for various reasons. I wasn't convinced by their reasoning, but I am convinced by epistemic modesty arguments and they had more experience in the area, so I nixed the project. We intend to keep a similar mindset moving forward, while still keeping in mind that no project will ever be universally considered good.

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

Replied to the question about hiring full-timers above: https://forum.effectivealtruism.org/posts/fX8JsabQyRSd7zWiD/introducing-the-nonlinear-fund-ai-safety-research-incubation?commentId=ANTbuSPrNTwRHvw73

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

This is indeed part of our plan! No need to re-invent the wheel. :) 

One of our first steps will be to canvass existing AI Safety organizations and compile a comprehensive list of ideas they want done. We will do our own due diligence before launching any of them, but I would love for Nonlinear to become the organization people come to when they have a great idea that they want to see happen.

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

For hiring full-time RAs, we have plans to do that in the future. Right now we are moving slowly on hiring full-timers. We want to get feedback from external people first (thank you!) and have a more solidified strategy before taking on permanent employees.

We are, however, working on developing a technical advisory board of people who are experts in ML. If you know anybody who'd be keen, please send them our way! 

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

Good and important points! 

Sorry for the miscommunication. We are not intending to do technical AI safety work. We are going to focus on non-technical work for the time being.

I am in the process of learning ML but am very far from being able to make contributions in that area. The learning is mostly so that I have a better understanding of the field and can better communicate with people who have more technical expertise.

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

We currently have a donor who is funding everything. In the future, we intend for it to be a combination of 1) fundraising for specific ideas when they are identified and 2) fundraising for non-earmarked donations from people who trust our research and assessment process.

How do you balance reading and thinking?

Thanks for writing this! I think about this a lot, and this helped clarify the problem for me.

The problem can be summarized as a couple of competing forces. On one side, there's not wanting to re-invent the wheel: humanity makes progress by standing on the shoulders of giants.

On the other side, there's 1) avoiding anchoring (not getting stuck in how people in the field already think about things) and 2) the benefits of having your own model (it forces you to think actively and helps guide your reading).

The problem we're trying to solve is how to get the benefits of both.

One potential solution is to start off with a small amount of thinking on your own, as Alex Lintz described, then spend time consuming existing knowledge. From there you can alternate between creating and consuming: start with the bulk of your time spent consuming, with short periods of creating interspersed throughout, and let the time spent creating grow longer as you progress.

Schools already work this way to a large extent. As an undergraduate, you spend most of your time simply reading existing literature and only occasionally making novel contributions. Then, as a PhD student, you focus mostly on making new contributions.

However, I do think that formal education does this suboptimally. Thinking creatively is a skill, and like all skills, the more you practice, the better you get. If you've spent the first 16 years of your education more or less regurgitating pre-packaged information, you won't be as good at coming up with new ideas once you're finally in a position to do so as you would be if you had been practicing along the way. This definitely cross-applies to EA.

Poll - what research questions do you want me to investigate while I'm in Africa?

Factory farming - go to a factory farm and see what the conditions are like there compared to those in developed countries.
