Haydn has been a Research Associate and Academic Project Manager at the University of Cambridge's Centre for the Study of Existential Risk since Jan 2017.
It seems like the majority of individual grantees (over several periods) are doing academic-related research.
Can Caleb or other fund managers say more about why "One heuristic we commonly use (especially for new, unproven grantees) is to offer roughly 70% of what we anticipate the grantee would earn in an industry role" rather than e.g. "offer roughly the same as what we anticipate the grantee would earn in academia"?
See e.g. the UC system salary scale:
Helpful post on the upcoming red-teaming event, thanks for putting it together!
Minor quibble - "That does mean I’m left with the slightly odd conclusion that all that’s happened is the Whitehouse has endorsed a community red-teaming event at a conference."
I mean they did also announce $140m, that's pretty good! That's to say, the two other announcements seem pretty promising.
The funding through the NSF to launch 7 new National AI Research Institutes is promising, especially the goal for these to provide public goods such as research into climate, agriculture, energy, public health, education, and cybersecurity. $140m is, for example, more than the UK's Foundation Models Taskforce £100m ($126m).
The final announcement was that in summer 2023 the OMB will be releasing draft policy guidance for the use of AI systems by the US government. This sounds excruciatingly boring, but will be important, as the federal government is such a big buyer/procurer and setter of standards. In the past, this guidance has been "you have to follow NIST standards", which gives those standards a big carrot. The EU AI Act is more stick, but much of the high-risk AI it focuses on is use by governments (education, health, recruitment, police, welfare etc) and they're developing standards too. So far there's a fair amount of commonality across the two. To make another invidious UK comparison, the AI white paper says that a year from now, they'll put out a report considering the need for statutory interventions. So we've got neither stick, carrot, nor standards...
Here's the Factsheet - https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/
"Today’s announcements include:
Yes this was my thought as well. I'd love a book from you Jeff but would really (!!) love one from both of you (+ mini-chapters from the kids?).
I don't know the details of your current work, but it seems worth writing one chapter as a trial run, and if you think it's going well (and maybe has good feedback) considering taking 6 months or so off.
Am I right that in this year and a half, you spent ~$2 million (£1.73m)? Seems reasonable not to continue this if you don't think it's impactful.
He listed GovAI on this (very good!) post too: https://www.lesswrong.com/posts/5hApNw5f7uG8RXxGS/the-open-agency-model
Yeah, dunno exactly what the nature of his relationship/link is.
He's at GovAI.
The point is that he reused the term, and didn't redact it by e.g. saying "n------!!!!" or "the n-word".
3 notes on the discussion in the comments:

1. OP is clearly talking about the last 4 or so years, not FHI in e.g. 2010 to 2014. So the quality of FHI or Bostrom as a manager in that period is not super relevant to the discussion. The skills needed to run a small, new, scrappy, blue-sky-thinking, obscure group are different from those needed to run a large, prominent, policy-influencing organisation in the media spotlight.

2. The OP is not relitigating the debate over the Apology (which I, like Miles, have discussed elsewhere) but instead is pointing out the practical difficulties of Bostrom staying. Commenters may have different views from the University, some FHI staff, FHI funders and FHI collaborators - that doesn't mean FHI wouldn't struggle to engage these key stakeholders.

3. In the last few weeks the heads of Open Phil and CEA have stepped aside. Before that, the leadership of CSER and 80,000 Hours has changed. There are lots of other examples in EA and beyond. Leadership change is normal and good. While there aren't a huge number of senior staff left at FHI, presumably either Ord or Sandberg could step up (and do fine given administrative help and willingness to delegate) - or someone from outside like Greaves plausibly could be Director.
This is very exciting work! Really looking forward to the first research output, and what the team goes on to do. I hope this team gets funding - if I were a serious funder I would support this.
For some context on the initial Food Security project, readers might want to take a glance at how South America does in this climate modelling from Xia et al, 2022.
From: Global food insecurity and famine from reduced crop, marine fishery and livestock production due to climate disruption from nuclear war soot injection