We’ve also been doing media and we’re working on building capacity and gaining expertise to do more of it more effectively.
Publishing research in more traditional venues is also something we’ve been chatting about internally.
What's included in the "cost of doing business" category? $0.8M strikes me as high, but I don't have a granular understanding here.
It includes things like rent, utilities, general office expenses, furnishings/equipment, bank/processing fees, software/services, insurance, bookkeeping/accounting, and visas/legal. The largest expense within the estimated ~$0.8M is rent, which accounts for just over half.
Is it right that you're estimating 2020's compensation expenditure at ~$182,000 per employee? (($3.56M + $1.4M + $0.51M) / 3...
(I'm COO at MIRI.)
Just wanted to provide some info that might be helpful:
Thanks!
We currently have 30 staff.
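For what it's worth, the per-capita figure from the question above can be sanity-checked with a quick calculation. This is just a sketch, assuming the three budget lines quoted in the question are summed and divided by the 30-person headcount mentioned here:

```python
# Rough per-capita compensation check (illustrative only; assumes the three
# budget lines quoted in the question divide over a 30-person headcount).
budget_lines_millions = [3.56, 1.40, 0.51]  # figures quoted in the question ($M)
headcount = 30

total_millions = sum(budget_lines_millions)           # 5.47 ($M)
per_capita = total_millions / headcount * 1_000_000   # dollars per employee

print(f"Total: ${total_millions:.2f}M, per capita: ${per_capita:,.0f}")
```

That works out to roughly $182,000 per employee, consistent with the estimate in the question.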
Were most of the 12 new staff onboarded early enough in 2019 such that it makes sense to include them in a 2019 per capita expenditure estimate?
Our 2019 fundraiser post has a high-level breakdown of our spending estimates for 2020.
Thanks for the pointer – this is helpful.
Thanks :)
All grants we know we will receive (or are very likely to receive) have already been factored into our reserves estimates, which, together with our budget estimate for next year, form the basis for the $1M fundraising goal. We haven't factored in any future grants where we're uncertain whether we'll get the grant, or uncertain of its size or structure, etc.
Update: Added an announcement of our newest hire, Edward Kmett, as well as a list of links to relatively recent work we've been doing in Agent Foundations, and updated the post to reflect the fact that Giving Tuesday is over (though our matching opportunity continues)!
Yeah, I just replaced the fundraiser progress image in the post with a static version, previewed it by saving to draft first, then published the update. It seems like saving an existing post to draft and then publishing it causes the post to be republished :|
First, note that we’re not looking for “proven” solutions; that seems unrealistic. (See comments from Tsvi and Nate elsewhere.) That aside, I’ll interpret this question as asking: “if your research programs succeed, how do you ensure that the results are used in practice?” This question has no simple answer, because the right strategy would likely vary significantly depending on exactly what the results looked like, our relationships with leading AGI teams at the time, and many other factors.
For example:
To the first part of your question, most faculty at universities have many other responsibilities beyond research which can include a mix of grant writing, teaching, supervising students, and sitting on various university councils. At MIRI most of these responsibilities simply don’t apply. We also work hard to remove as many distractions from our researchers as we can so they can spend as much of their time as possible actually making research progress. [1]
Regarding incentives, as Nate has previously discussed here on the EA Forum, our researchers aren’t ...
Re 2, Sam and Eliezer have been corresponding for a while now. They’ve been exploring the possibility of pursuing a couple of different projects together, including co-authoring a book or recording a dialogue of some sort and publishing it online. Sam discussed this briefly on an episode of his podcast. We’ll mention in the newsletter if things get more finalized.
Re 3, it varies a lot month-to-month and person-to-person. Looking at the data, the average and median are pretty close at somewhere between 40–50 hours a week depending on the month. During crunc...
Over the past couple of years I’ve been excited to see the growth of the community of researchers working on technical problems related to AI alignment.
Here's a quick and non-exhaustive list of people (and associated organizations) that I'm following (besides MIRI research staff and associates), in no particular order:
When it comes to growth, at the moment our focus is on expanding the research team. As such, our next few hires are likely to be research fellows, and assistant research fellows[1] for both our agent foundations and machine learning technical agendas. We have two new research fellows who are signed on to join the team, Abram Demski and Mihály Bárász. Abram and Mihály will both be more focused on the AF agenda, so I’m hoping our next couple hires after them will be on the ML side. We’re prioritizing people who can write well and quickly; if you or someone y...
FWIW, I found this last bit confusing. In my experience chatting with folks, regardless of how much they agree with or like MIRI, they usually think MIRI is quite candid and honest in its communication.
(TBC, I do think the “Death with Dignity” post was needlessly confusing, but that’s not the same thing as dishonest.)