Odd anon

2 karma · Joined Oct 2023

Posts (1)

Comments (3)

Wow, that article is seriously dishonest and misleading throughout. What a mess.

Copy-pasting something I wrote elsewhere:

The issue is not yet clearly polarized, but if and when it becomes so, the right will likely be the more pro-safety side, unless something changes substantially. (Polls: Ipsos found that 70% of Trump voters and 60% of Biden voters agree that "The uncontrollable effects of artificial intelligence (AI) could be so harmful that it may risk the future of humankind"; Monmouth asked "How worried are you that machines with artificial intelligence could eventually pose a threat to the existence of the human race – very, somewhat, not too, or not at all worried?" and found 31% of Republicans versus 21% of Democrats "very worried", with 31% of each "somewhat worried". Among politicians the skew is less obvious, but Sunak, von der Leyen, and Netanyahu are all right-wing within their respective systems.) This will likely end up being a problem, because academia, big tech, and the media are all held by the left.

Also, Mitt Romney seemed to be very concerned about AI risk during the hearings, and I don't think he was at all alone among the Republicans present.

AGI development is already taboo outside of tech circles. Per the September AIPI poll, only 12% disagree that "Preventing AI from quickly reaching superhuman capabilities" should be an important AI policy goal (56% strongly agree, 20% somewhat agree, 8% somewhat disagree, 4% strongly disagree, and 12% are not sure). And although they are themselves influenced by tech circles' positions, leaders around the world have been quite clear that they take the risk seriously.

The only reason AGI development hasn't been halted already is that the general public does not yet know that big tech is both trying to build AGI and actually making real progress towards it.