Malo

Chief Executive Officer @ Machine Intelligence Research Institute
Berkeley, CA, USA · malob.me


Comments (13)

I expect it's still worth MIRI being aware that almost as many people still distrust as trust MIRI as being sufficiently honest in its public communications.

FWIW, I found this last bit confusing. In my experience chatting with folks, regardless of how much they agree with or like MIRI, they usually think MIRI is quite candid and honest in its communication.

(TBC, I do think the “Death with Dignity” post was needlessly confusing, but that’s not the same thing as dishonest.)

We've also been doing media engagement, and we're working on building capacity and gaining expertise to do more of it more effectively.

Publishing research in more traditional venues is also something we’ve been chatting about internally.

Yeah, that should be a reasonably good estimate.

Malo · 4y
What's included in the "cost of doing business" category? $0.8M strikes me as high, but I don't have a granular understanding here.

It includes things like rent, utilities, general office expenses, furnishings/equipment, bank/processing fees, software/services, insurance, bookkeeping/accounting, and visas/legal. The largest expense in the estimated ~$0.8M is rent, which accounts for just over half.

Is it right that you're estimating 2020's compensation expenditure at ~$182,000 per employee? (($3.56M + $1.4M + $0.51M) / 30 employees)

No, that will be an overestimate for a few reasons:

  • The $0.51M is an estimate of what the new research staff we'll add to the team in 2020 will cost (i.e., staff beyond the 30 we have at the moment).
  • The $1.4M estimate for General Personnel assumes we'll add one new operations staff member in 2020.
  • The $3.56M estimate for Research Personnel largely represents salaries and related costs for existing research staff, but it also includes compensation for research interns and research contractors.

Were most of the 12 new staff onboarded early enough in 2019 such that it makes sense to include them in a 2019 per capita expenditure estimate?

We added 8 new staff in 2019. When I make our spending estimates, I assume new staff are added evenly throughout the year, i.e., I assume the spending on all new staff in a given year will be ~50% of their total annual cost. In practice, given that we aren't talking about very large numbers here, the accuracy of that estimate varies quite a bit. The distribution of when new staff were added in 2019 was pretty centered on the middle of the year, though the salary levels of those staff will likely complicate things here (I haven't run those numbers).
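
To illustrate that proration convention with a toy calculation (the dollar figures are made up for illustration, not our actual salary data):

```python
# Toy sketch of the "count new hires at ~50% of annual cost" convention
# described above. All dollar figures are hypothetical.

def estimate_annual_spend(existing_staff_cost, new_hire_annual_costs, proration=0.5):
    """Full annual cost for existing staff, plus new hires prorated at ~50%
    on the assumption that they join evenly throughout the year."""
    return existing_staff_cost + proration * sum(new_hire_annual_costs)

# Hypothetical example: $4.0M of existing staff costs plus 8 new hires
# at $150k/year each, counted at ~50% for a mid-year average start.
print(estimate_annual_spend(4_000_000, [150_000] * 8))  # 4600000.0
```

(This is also part of why dividing a year's spending estimate by a point-in-time headcount only gives a rough per-capita figure.)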

Malo · 4y

(I'm COO at MIRI.)

Just wanted to provide some info that might be helpful:

  • We have 30 staff at the moment.
  • Our 2019 fundraiser post has a high-level breakdown of our spending estimates for 2020.
  • In our 2018 fundraiser post there's a budget estimate for 2019. The upper end of our estimated spending for 2019 in that post was $5.5M. I expect we'll actually come in under $6M but definitely over $5.5M. (This is in line with the upper end of updated spending estimates I generated internally in Q1 2019.)
  • Our 2018 in review post has a high-level breakdown of our 2018 spending. You can also see audited financial statements on our transparency page. (Note that figures in the financial statements and in the review post might not match up for a bunch of reasons, e.g., differences in how expenses are categorized, expenses for leasehold improvements, equipment, etc. being considered fixed assets that depreciate over time on the financial statements, and so on.)
  • The notable increase in our spending after 2017 is for the most part due to doubling the size of our staff, with more new staff added in 2019 than in 2018.
  • The above doesn't include AI Impacts, which operates on its own restricted funding.

Thanks :)

All grants we know we will receive (or are very likely to receive) have already been factored into our reserves estimates, which, together with our budget estimate for next year, form the basis for the $1M fundraising goal. We haven't factored in any potential future grants where we're uncertain whether we'll receive them, or uncertain of their size or structure, etc.

Malo · 5y

Update: Added an announcement of our newest hire, Edward Kmett, as well as a list of links to relatively recent work we've been doing in Agent Foundations, and updated the post to reflect the fact that Giving Tuesday is over (though our matching opportunity continues)!

Yeah, I just replaced the fundraiser progress image in the post with a static version, previewed it by saving the post to draft first, then published the update. It seems like saving an existing post to draft and then publishing it causes the post to be republished :|

First, note that we’re not looking for “proven” solutions; that seems unrealistic. (See comments from Tsvi and Nate elsewhere.) That aside, I’ll interpret this question as asking: “if your research programs succeed, how do you ensure that the results are used in practice?” This question has no simple answer, because the right strategy would likely vary significantly depending on exactly what the results looked like, our relationships with leading AGI teams at the time, and many other factors.

For example:

  • What sort of results do we have? The strategy is different depending on whether MIRI researchers develop a generic set of tools for aligning arbitrary AGI systems versus whether they develop a set of tools that only work for developing a sufficiently aligned very limited task-directed AI, and so on.[1]
  • How dangerous do the results seem? Designs for alignable AI systems could feasibly yield insight into how to construct misaligned AI systems; in that case, we’d have to be more careful with the tools. (Bostrom wrote about issues surrounding openness here.)[2]

While the strategy would depend quite a bit on the specifics, I can say the following things in general:

  • We currently have pretty good relationships with many of the leading AI teams, and most of the leading teams are fairly safety-conscious. If we made a breakthrough in AI alignment, and an expert could easily tell that the tools were useful upon inspection, I think it is very reasonable to expect that the current leading teams would eagerly adopt those tools.
  • The “pass a law that every AGI must be built a certain way” idea does not seem feasible to me in this context.
  • In the ideal case, the world will coordinate around the creation of AGI (perhaps via a single collaborative project), in which case there would be more or less only one team that needed to adopt the tools.

In short, my answer here is “AI scientists tend to be reasonable people, and it currently seems reasonable to expect that if we develop alignment tools that clearly work then they’ll use them.”

[1] MIRI’s current focus is mainly on improving the odds that the kinds of advanced AI systems researchers develop down the road are alignable, i.e., they’re the kinds of system we can understand on a deep and detailed enough level to safely use them for various “general-AI-ish” objectives.

[2] On the other hand, sharing sufficiently early-stage alignment ideas may be useful for redirecting research energies toward safety research, or toward capabilities research on relatively alignable systems. What we would do depends not only on the results themselves, but on the state of the rest of the field.

On the first part of your question: most faculty at universities have many responsibilities beyond research, which can include a mix of grant writing, teaching, supervising students, and sitting on various university councils. At MIRI, most of these responsibilities simply don't apply. We also work hard to remove as many distractions from our researchers as we can, so they can spend as much of their time as possible actually making research progress. [1]

Regarding incentives, as Nate has previously discussed here on the EA Forum, our researchers aren’t subject to the same publish-or-perish incentives that most academics (especially early in their careers) are. This allows them to focus more on making progress on the most important problems, rather than trying to pump out as many papers as possible.

[1] For example, the ops team takes care of formatting and submitting all MIRI publications, we take on as much of the grant application and management process as is practical, we handle all the researchers' conference travel booking, we provide food at the office, etc.
