Stuart Buck

Executive Director @ Good Science Project
878 karma · Joined Dec 2021

Bio

I lead a small think tank dedicated to accelerating the pace of scientific advancement by improving the conditions of science funding. I'm also a senior advisor to the Social Science Research Council. Prior to these roles, I spent about nine years at Arnold Ventures (formerly the Laura and John Arnold Foundation) as VP of Research.

How I can help others

Science policy, reproducibility, and philanthropy. 

Comments (45)

Just a note: this post's advice could be the opposite of what's needed for people from guess culture rather than ask culture. See https://ask.metafilter.com/55153/Whats-the-middle-ground-between-FU-and-Welcome#830421

I.e., someone from ask culture might need to be warned not to bother people so much. Someone from guess culture might need to be told that it is ok to reach out to people once in a while.

"I think all of these considerations in-aggregate make me worried that a lot of current work in AI Alignment field-building and EA-community building is net-negative for the world, and that a lot of my work over the past few years has been bad for the world"

This admirably honest statement deserves more emphasis. As we know from medicine, international development, and anywhere else that runs RCTs, it is really, really hard -- even when the results of your actions are right in front of you -- to know whether you have helped someone or harmed them. There are just too many confounding factors, too much selection bias, etc.

The long-termist AGI stuff has always struck me as even worse off in this respect. How is anyone supposed to know that the actions they take today will have a beneficial impact on the world decades from now, rather than making things worse? And given the premises of AGI alignment, making things worse would be utterly catastrophic for humanity. 

I don't believe the $10m claim. Indeed, I don't even see how it would be possible to spend that much without buying a Super Bowl ad. At $12k a month, you would have to hire nearly 140 PR firms for 6 months to add up to $10m. Perhaps someone added an extra zero or two . . .
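A rough sanity check on that arithmetic, assuming each firm bills $12k/month for 6 months:

$10,000,000 ÷ ($12,000 × 6) ≈ 139 firms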

My reaction was to Google "diamondoid bacteria" and then wonder why no one on the Internet has uttered that phrase other than Eliezer or someone quoting him.

All of that seems question-begging. If we define "true AGI" as that which knows how to rewrite its own code, then that is indeed what a "true AGI" would be able to do. 

"But a true AGI could not only transform the world, it could also transform itself."

Is there a good argument for this point somewhere? It doesn't seem obvious at all. We are generally intelligent ourselves, and yet existed for hundreds of thousands of years before we even discovered that there are neurons, synapses, etc., and we are absolutely nowhere near the ability to rewire our neurons and glial cells so as to produce ever-increasing intelligence. So too, if AGI ever exists, it might be at an emergent level that has no idea it is made out of computer code, let alone knows how to rewrite its own code. 

Also: https://twitter.com/moskov/status/1624058113119645699

One issue for me is just that EA has radically different standards for what constitutes "impact." If near-term: lots of rigorous RCTs showing positive effect sizes.

If long-term: literally zero evidence that any long-termist efforts have been positive rather than negative in value, which is a hard enough question to settle even for current-day interventions where we see the results immediately... BUT if you take the enormous liberty of assuming a positive impact (even just slightly above zero), and then assume lots of people in the future, everything has a huge positive impact.
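To make that arithmetic explicit with purely illustrative numbers: assume a per-person benefit of 10^-9 (in whatever units) and 10^16 future people, and the expected total is 10^-9 × 10^16 = 10^7, dwarfing any near-term intervention; but if the assumed per-person effect is -10^-9 instead, the total flips to -10^7. Everything hinges on the sign of a number for which there is no evidence.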

Open Phil doesn't necessarily owe anyone an explanation, but the website seems fairly vacuous, and most of their "projects" are just mentioning times when they have invested in someone else's company.  Strong vibe of "all hat, no cattle." 

Definitely agree with this post. 

That said, I suspect that the underlying concern here is one of imbalance of power. As I've seen from the funder's side, there are a number of downsides when a grantee is overly dependent on one particular funder: 1) the funder might change direction in a way that is devastating to the grantee and its employees; 2) the grantee is incentivized to cater solely to that one funder while remaining silent about possible criticisms, all of which can quietly undermine the effectiveness of the work.

I wonder to what extent this happens to the EA movement broadly, given the dominance of Open Phil as a funder (all the more so after the implosion of SBF).

To be sure, Open Phil does a better job than just about any philanthropy at being transparent and welcoming of critique! 

Even so, folks are still intimidated (hence the anonymity of that long "Doing EA Better" post), and it often strikes me that some possible lines of questioning aren't raised at all.

For example, while I might have missed something, I haven't seen anyone ask why the Regranting Challenge gave $70 million to the Gates Foundation, which had a 2021 endowment of $55 billion, with Gates' net worth over $100 billion and many more billions on the way from Warren Buffett.

Why is this cause "neglected"? It may well be, of course: perhaps the folks working on a TB vaccine are doing great work but don't have the internal political capital at Gates to get even 1/1000th of the Gates wealth allocated to their team. And it may just be a matter of framing: a "collaboration" or "joint funding effort" with Gates wouldn't come across the same way as "a grant to Gates," which I guess is irrational on my part. So maybe there's nothing here at all.

Still, "giving money to Gates" is the kind of thing that would attract some questioning if anyone else did it, and the only criticism of the Regranting Challenge I've seen is from another funder who isn't dependent on future funding here: https://twitter.com/mulagostarr/status/1613911821857230848

In other words, I suspect that some folks have the internal feeling, "I think Open Phil made questionable decisions in one case or another, but it seems too high-risk to speak out despite Open Phil's willingness to consider critiques. I wish there were a way to capture the kinds of insights that are currently being stifled." 

The relative lack of truly honest feedback is something that bothered me as a funder, and it seemed really hard to address! Even anonymous surveys won't tell you what grantees and others really think (most people who have a very specific insight or critique can't say very much without making it obvious which grant they're talking about and who is speaking).
