
porby

14 karma · Joined Sep 2022

Posts: 1


Comments: 5

Thanks for breaking down the details! That's very helpful. (And thanks to Lauro too!)

Continuing my efforts to annoy everyone who will listen with this genre of question, what value of X would make this proposition seem true to you?

It would be better in expectation to have an additional $X of funding available in the field in the year 2028 than an additional full-time AI safety researcher starting today.

Feel free to answer based on concrete example researchers if desired. Earlier respondents have based their answers on people like Paul Christiano.

I'd also be interested in hearing answers for a distribution of different years or different levels of research impact.

(This is a pretty difficult and high-variance forecast, so don't worry, I won't put irresponsible weight on the specifics of any particular answer! Noisy shrug-filled answers are better than none for my purposes.)
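For concreteness, here's the shape of the back-of-envelope arithmetic I'm gesturing at, as a sketch. Every number in it (the cost per researcher-year, the value multiplier, the discount on late money) is a made-up placeholder rather than an estimate I endorse:

```python
# Toy break-even calculation: how many 2028 dollars match a researcher
# starting today? All parameters are hypothetical placeholders chosen
# only to illustrate the shape of the question.
cost_per_researcher_year = 300_000  # assumed fully loaded cost, $/yr
years_of_head_start = 5             # roughly now through 2028
value_multiplier = 2.0              # assumed research value per $ of cost
late_money_discount = 0.9           # assumed per-year value decay of late funding

researcher_value_now = cost_per_researcher_year * years_of_head_start * value_multiplier
break_even_x = researcher_value_now / late_money_discount ** years_of_head_start
print(f"Break-even X ≈ ${break_even_x:,.0f}")  # ≈ $5,080,526 with these placeholders
```

A real answer would obviously vary all of these parameters, which is why I'm also interested in distributions over years and impact levels.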

In the absence of longer-term security, it would be nice to have more foresight about income.

I can tolerate quite a lot of variance on the 6+ month timescale if I know it's coming. If I know I'm not going to get another grant in 6 months, there are things I can do to compensate.

A mistake I made recently was applying relatively late due to an incorrect understanding of the funding situation and getting rejected; now I'm temporarily unable to continue safety research. (Don't worry, I'm in no danger of going homeless or starving, just lowered productivity.)

Applying for grants sooner would help mitigate this, but there are presumably limits. I would guess that grantmaking organizations would rather not commit funds to a project that won't start for another 2 years, for example.

If there were a magic way to get an informed probability on future grant funding, that would help a lot. I'm not sure how realistic this is; a lot of options seem pretty high-overhead for grantmakers and/or involve some sort of fancy prediction markets with tons of transparency and market subsidies. Having continuous updates about some smaller pieces of the puzzle could help.

Thanks for the update! This kind of information is helpful for planning. I'd also love to see projections/simulations from different organizations about future funding. Such predictions would be prone to high-variance error, but I bet the models would be better than mine.
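To illustrate what I mean by a projection, here's a minimal Monte Carlo sketch of the kind of model I have in mind. Every parameter in it (the baseline budget, the growth distribution, the donor-exit probability) is a made-up placeholder, not a claim about any real organization:

```python
import random

def simulate_funding_2028(n_samples=100_000):
    """Toy Monte Carlo over hypothetical funding trajectories.

    Every parameter below is an illustrative placeholder.
    Returns sorted samples of total field funding (in $M) for 2028.
    """
    samples = []
    for _ in range(n_samples):
        budget = 150.0  # hypothetical current annual budget, $M
        for _ in range(5):  # hypothetical yearly growth through 2028
            growth = random.gauss(0.10, 0.25)  # assumed mean 10%, sd 25%
            budget *= max(0.0, 1.0 + growth)
        if random.random() < 0.15:  # assumed chance a major donor exits
            budget *= 0.5
        samples.append(budget)
    return sorted(samples)

samples = simulate_funding_2028()
n = len(samples)
print(f"P10 ${samples[n // 10]:.0f}M | "
      f"median ${samples[n // 2]:.0f}M | "
      f"P90 ${samples[9 * n // 10]:.0f}M")
```

Even something this crude, published with an organization's actual parameter estimates, would beat the outside-view guesses I can make on my own.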

At the moment, I'm leaning towards keeping my safety work at pretty low hours (which, in my experience so far, provides more benefit per hour than working full time) while pursuing opportunities for high-throughput earning to give. I'm concerned/alarmed that there are a significant number of possible worlds where my individual earnings would swamp the current budget; that seems like a bad sign from an evidence-of-sufficient-coordination standpoint.

As one datapoint, the time spent on my entry to the original worldview prize was strictly additive. I have a grant to do AI safety stuff part time, and I still did all of that work; the work I didn't do that week was all non-AI business.

It's extremely unlikely that I would have written that post without the prize or some other financial incentive. So, to the extent that my post had value, the prize helped make it happen.

That said, when I saw another recent prize, I did notice the incentive for me to conceal information to increase the novelty of my submission. I went ahead and posted that information anyway because that's not the kind of incentive I want to pay attention to, but I can see how the competitive frame could have unwanted side effects.