Any living person (or list of people). Assume they can be persuaded that the problem of existential risk from AI is real and important.
I feel like this question is so much more fun if we can include dead people, so I’m gonna do just that.
Off the top of my head:
Here's what GPT-3 thinks:
No surprises there (although a bit surprised that GPT-3 doesn't know that Alan Turing is dead, and can't spell Eliezer).
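For anyone who wants to reproduce that GPT-3 query, here is a minimal sketch using the legacy openai Python client (pre-1.0). The model name and prompt wording are my own assumptions, not necessarily what was originally used:

```python
# Minimal sketch of asking GPT-3 for candidate names.
# Assumes the legacy openai Python client (openai<1.0) and a valid API key;
# the model and prompt below are illustrative guesses, not the original query.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3-era completion model
    prompt=(
        "List ten people, living or dead, who would make the strongest "
        "dream team for solving AI alignment:"
    ),
    max_tokens=150,
    temperature=0.7,
)

# The completion text contains the model's suggested names.
print(response.choices[0].text.strip())
```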
Any standard list of "top AI researchers" will do. Also look at top researchers in CS, math, stats, physics, philosophy (note the new CAIS philosophy fellowship as an example of how you might attract people from other fields). Edward Witten comes to mind. But you'll get better answers if you ask professors within these subjects or even turn to Reddit, Quora, etc.
Hmm..... Who are the leading thinkers/speakers who argue we should not further develop AI? Such folks would not need to be persuaded, and would perhaps be willing to consider the full range of options.
People who have invested heavily in AI careers are unlikely to be receptive to proposals that don't include continued AI development; that is, they are not open to the full range of options.
One way to solve AI alignment would be to stop developing AI. I know, very challenging, but then so are all other options, none of which would seem to offer such a definitive solution.
:( As far as I know, no one from EA has annoyed (randomly emailed) Terry Tao about it, despite many people saying he would be a great person to have on board.
Obviously I'm not in favour of random EAs annoying important people (and hurting the reputation of EA/AI Alignment), but I do think that, given the urgency of the situation we are in, at some point some high-up people in EA/AI Alignment have to make a serious attempt at putting together such a dream team (more).
Yes, I think the fact that they didn't go through with it is some evidence that such a list need not be counterproductive to our goal (and the EV is probably positive). Ultimately the Dream Team needs to be approached, but I'm optimistic that this can be done in a careful and coordinated manner by the relevant senior people in EA/Alignment.
One design ideation method is, instead of trying to think of good ideas, to try to think of the worst possible idea.
With that in mind, encourage the writers of "It's Always Sunny in Philadelphia" to do an episode "The Gang Solves AGI Alignment".
A bit sad that no one has actually answered the object level question and nearly all the discussion is meta. I can understand why. But I also think that we are at crunch time with this, and the stakes are as high as they can be. So this is actually a very serious question that serious people should be considering. Maybe (some) people high up in EA are considering it. I hope so!
Some ideas for identifying dream team members: