Thanks to everyone who filled out my polls on AGI de/acceleration, timelines, and doom, and especially to those who commented. I learned a lot, so I thought I would summarize the findings here. Then I discuss what could be done differently in future polls and end with some questions about something that surprised me: the probability of disempowerment.
Very quick summaries of the three polls:
The median respondent thought AI should be paused if a particular event/threshold occurred, and thought that working on AI safety in a lab was acceptable. The median timelines for AGI, singularity, and "crazy" (disruptive economic growth, unemployment, protests, misinformation, etc.) were mid to late 2030s. Most people did not consider disempowerment to be doom, and the median probabilities of catastrophe (mass loss of life), disempowerment, and doom were all 15%.
More detailed summaries of the three polls:
The big-picture poll received 39 votes. 13% want AGI never to be built, 26% want to pause AI now in some form, and another 21% would pause AI if a particular event/threshold occurred. 31% want some other regulation, 5% are neutral, and 5% want to accelerate AI in a safe US lab. So if I had to summarize the median respondent, it would be strong regulation of AI, or a pause if a particular event/threshold is met. There appears to be more evidence for the claim that EA wants AI to be paused/stopped than for the claim that EA wants AI to be accelerated.
As for the personal-actions poll, there were 27 votes. 67-74% thought it was okay to work in an AI lab in some capacity (depending on how you interpret votes that fell between defined options). About half of these were okay with the work being in capabilities, whereas the other half said it should be in safety. 26% appear to say it is not okay to invest in AI, about 15% that it is not okay to pay for AI, and about 7% that it is not okay to use AI at all. So the median EA likely thinks it is okay to do AI safety work in the labs. It appears that EAs consider personal actions that accelerate AI more acceptable than big-picture actions to do the same, but this could be because the personal question was phrased as what is permissible, whereas the big-picture question asked what would be best to do.
The median years for AGI, singularity, and crazy are 2035, 2038.5, and 2035, respectively (if the options had been continuous, it looks like the AGI median would have been ~2034). LessWrong was 2030 for AGI (sooner) and 2040 for singularity (later). About 15% expect AGI, and separately crazy, by 2030 or sooner. One thing that surprised me was that, by medians, people expected crazy ~1 year after AGI (~3 years after when I matched individual forecasts). I, on the other hand, expect crazy about 6 years before AGI, partly because of Epoch modeling indicating a 10% global economic growth rate before 2029, well before full automation (and, I think, AGI).
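The gap between ~1 year and ~3 years comes from two different aggregations: the difference between the two group medians versus the median of each respondent's own AGI-to-crazy gap. Here is a minimal sketch with made-up forecasts (not the actual poll data) showing how the two can diverge:

```python
import statistics

# Hypothetical per-respondent forecast years, purely for illustration --
# these are NOT the actual poll responses.
agi   = [2028, 2032, 2034, 2040, 2050]
crazy = [2033, 2035, 2035, 2043, 2055]

# Gap between the two aggregate medians
diff_of_medians = statistics.median(crazy) - statistics.median(agi)

# Median of each respondent's own AGI-to-crazy gap
median_of_diffs = statistics.median(c - a for a, c in zip(agi, crazy))

print(diff_of_medians)  # 1 year
print(median_of_diffs)  # 3 years
```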
There were 21 responses on the definition of doom, and ~80% of people did not consider disempowerment to be doom. There were 12 responses on the probability of catastrophe (mass loss of life), and the median was 15%; the same held for disempowerment (though the last three polls only had 5-7 responses, so I'm not sure how meaningful they are). Since the median person did not consider disempowerment to be doom, the median probability of doom was also 15%. Thus, I do think there is significant confusion caused by different definitions of P(doom), so specifying whether disempowerment is included would be helpful. The median for the minimum P(doom) at which developing AGI would be unacceptable was 1%.
What could be done differently for future polls:
For the de/accelerating AGI poll, I would include the option of pausing at AGI.
For the definitions of doom, I would add that disempowerment could be gradual, even over millions or billions of years, and could apply across the universe.
Other thoughts?
Questions:
With median estimates of 15% for disempowerment and 15% for catastrophe, that leaves roughly a 70% chance of a good outcome. How do you expect this to remain stable in the long term? Are you compelled by Carl Shulman's analogy that someone who is taking care of a powerless person will continue to carry out that person's will if they really care? Do you think this will continue for billions of years across the universe? Even if we create human brain emulations so that we could think much faster, artificial superintelligence would still surpass them in intelligence. So is the solution to modify the brain emulations to be superintelligent? Would that still count as humans maintaining control?
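For reference, the arithmetic behind the 70% figure treats catastrophe and disempowerment as non-overlapping; if they can co-occur, the good-outcome share would be somewhat higher:

$$P(\text{good}) \approx 1 - P(\text{catastrophe}) - P(\text{disempowerment}) = 1 - 0.15 - 0.15 = 0.70$$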
