No, Dario Amodei and Paul Christiano were at the time employed by OpenAI, the recipient of the $30M grant. They were associated with Open Philanthropy in an advisory role.
I'm not trying to voice an opinion on whether this particular grant recommendation was unprincipled. I do think that things like this undermine trust in EA institutions, set a bad example, and make it hard to get serious concerns heard. Adopting a standard of avoiding the appearance of impropriety can head off these concerns and relieve us of having to determine, case by case, how fishy something is (without automatically accusing anyone of impropriety).
I'm mainly referring to this, at the bottom:
OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario’s sister Daniela.
Holden is Holden Karnofsky, at the time OP's Executive Director, who also joined OpenAI's board as part of the partnership initiated by the grant. Presumably he wasn't the grant investigator (who isn't named), just the chief authority at the investigator's employer. OP's description of its process does not suggest that he or the OP technical advisors from OpenAI held themselves at any remove from the investigation or the decision to recommend the grant:
OpenAI initially approached Open Philanthropy about potential funding for safety research, and we responded with the proposal for this grant. Subsequent discussions included visits to OpenAI’s office, conversations with OpenAI’s leadership, and discussions with a number of other organizations (including safety-focused organizations and AI labs), as well as with our technical advisors.
Does it? The Doing EA Better post made it sound like conflict-of-interest statements are standard (or were at one point), but recusal is not, at least for the Long-Term Future Fund. There's also this Open Philanthropy OpenAI grant, which is infamous enough that even I know about it. That was in 2017, though, so maybe that sort of thing doesn't happen anymore.
More specifically, EA shows a pattern of prioritising non-peer-reviewed publications – often shallow-dive blogposts[36] – by prominent EAs with little to no relevant expertise.
This is my first time seeing the "climate change and longtermism" report at that last link. Before having read it, I imagined the point of having a non-expert "value-aligned" longtermist applying their framework to climate change would be things like
Instead, the report spends a lot of time on
The two are interwoven, which weakens the report even as a critical literature review. When it comes to particular avenues for catastrophe, the analysis is often perfunctory and dismissive. It comes off less as a longtermist perspective on climate change than as having an insider evaluate the literature because only "we" can be trusted to reason well.
I don't know how canonical that report has become. The reception in the thread where it was posted looks pretty critical, and I don't mean to pile on. I'm commenting because this post links the report in a way that looks like a backhanded swipe, so once I read it myself I felt it was worth sketching out my reaction a bit further.
Both examples show how we can act to reduce the catastrophe rate over time, but there are also 3 key risk factors applying upward pressure on the catastrophe rate:
- The lingering nature of present threats
- Our ongoing ability to generate new threats
- Continuously lowering barriers to entry/access
In the case of AI, it's usually viewed that AI will be aligned or misaligned, meaning this risk is either solved or not. It's also possible that AI may be aligned initially, and become misaligned later[11]. The need for protection from bad AI would therefore be ongoing. In this scenario we'd need systems in place to stop AI being misappropriated or manipulated, similar to how we guard nuclear weapons from dangerous actors. This is what I term “lingering risk”.
I just want to flag one aspect of this I haven't seen mentioned, which is that much of this lingering risk naturally grows with population, since you have more potential actors. If you have acceptable risk per century with 10 BSL-4 labs, the risk with 100 labs might be too much. If you have acceptable risk with one pair of nuclear rivals in a cold war, a 20-way cold war could require much heavier policing to meet the same level of risk. I expanded on this in a note here.
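To make the scaling concrete, here's a minimal sketch (the 0.1% per-lab figure is purely hypothetical, chosen only to show the shape of the argument): with independent actors, the aggregate risk is 1 - (1 - p)^n, which grows roughly linearly in the number of actors while np is small and then saturates.

```python
def total_risk(p_per_actor: float, n_actors: int) -> float:
    """Probability of at least one catastrophe among n independent actors,
    each with per-period catastrophe probability p_per_actor."""
    return 1 - (1 - p_per_actor) ** n_actors

# Hypothetical 0.1% per-lab risk per century, only to illustrate the scaling:
print(total_risk(0.001, 10))    # ~0.010
print(total_risk(0.001, 100))   # ~0.095 (roughly 10x the risk for 10x the labs)
print(total_risk(0.001, 1000))  # ~0.63  (saturates as n keeps growing)
```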
[2023-01-19 update: there's now an expanded version of this comment here.]
Note: I've edited this comment after dashing it off this morning, mainly for clarity.
Sure, that all makes sense. I'll think about spending some more time on this. In the meantime I'll just give my quick reactions:
I found this post helpful, since lately I've been trying to understand the role of molecular nanotechnology in EA and x-risk discussions. I appreciate your laying out your thinking, but I think full-time effort here is premature.
Overall, then, adding the above probabilities implies that my guess is that there’s a 4-5% chance that advanced nanotechnology arrives by 2040. Again, this number is very made up and not stable.
This sounds astonishingly high to me (as does 1-2% without TAI). My read is that no research program active today leads to advanced nanotechnology by 2040. Absent an Apollo program, you'd need several serial breakthroughs from a small number of researchers. Echoing Peter McCluskey's comment, there's no profit motive or arms race to spur such an investment. I'd give even a megaproject slim odds: all those synthesis methods, novel molecules, assemblies, and information and power management, in the span of three graduate student generations? Simulations are too computationally expensive and not accurate enough to parallelize much of this path. I'd put the chance below 1e-4, and that feels very conservative.
Here’s a quick attempt to brainstorm considerations that seem to be feeding into my views here: "Drexler has sketched a reasonable-looking pathway and endpoint", "no-one has shown X isn't feasible even though presumably some people tried"
Scientists convince themselves that Drexler's sketch is infeasible more often than one might think. But someone who has reached that point has little reason to pursue the subject further, let alone publish on it. It's of little intrinsic scientific interest to argue an at-best marginal, at-worst pseudoscientific question, and it has nothing to offer their own research program or their career. Smalley's participation in the debate certainly didn't redound to his reputation.
So there's not much publication-quality work contesting Nanosystems or establishing tighter upper bounds on maximum capabilities. But that's at least in part because such work is self-disincentivizing. Presumably some arguments people find sufficient for themselves wouldn't go through in generality or can't be formalized enough to satisfy a demand for a physical impossibility proof, but I wouldn't put much weight on the apparent lack of rebuttals.
Interesting, thanks. I read Nanosystems as establishing a high upper bound. I don't see any of its specific proposals as plausibly workable enough to use as a lower bound in the sense that, say, a ribosome is a lower bound, but perhaps that's not what Eliezer means.
Differential response within the survey is, again, just as bad.
The response rate for the survey as a whole was about 20% (265 of 1345), and below 8% (at most 102 responses) for every individual question on which data was published across three papers (on international differences, the Flynn effect, and controversial issues).
On average, respondents attributed 47% of the U.S. black-white difference in IQ to genetic factors. On similar questions about cross-national differences, respondents on average attributed 20% of cognitive differences to genes. The U.S. question drew 86 responses; the others drew between 46 and 64.
Steve Sailer's blog was rated highest for accuracy in reporting on intelligence research, by far: the sources that got more ratings (exactly the mainstream English-language publications that were asked about) weren't even in the ballpark. It was rated by 26 respondents.
The underlying data isn't available, but this is all consistent with the (known) existence of a contingent of ISIR conference attendees who are likely to follow Sailer's blog and share strong, idiosyncratic views on specifically U.S. racial differences in intelligence. The survey is not a credible indicator of expert consensus.
(More cynically, this contingent has a history of going to lengths to make their work appear more mainstream than it is. Overrepresenting them was a predictable outcome of distributing this survey. Heiner Rindermann, the first author on these papers, can hardly have failed to consider that. Of course, what you make of that may hinge on how legitimate you think their work is to begin with. Presumably they would argue that the mainstream goes to lengths to make their work seem fringe.)