My understanding is that Schmidt (1) has never espoused views along the lines of "positively influencing the long-term future is a key moral priority of our time"
I don't think that's so important a distinction. Prominent longtermists have expressed the view that longtermism basically boils down to x-risk, which (again in their view) overwhelmingly boils down to AI risk. If, following their messaging, we get highly influential people doing harmful stuff in the name of AI risk, I think we should still update towards 'longtermism tends to be harmful in practice'.
Not as much as if they were explicitly waving a longtermist banner, but the more impact we believe the longtermist movement has had on society, the stronger this update should be.
My impression of Eric Schmidt is that he is not a longtermist, and if anything has done lots to accelerate AI progress.
This seems no-true-Scotsmany. It seems to have become almost commonplace for organisations that started from a longtermist seed to end up as competitors in the AI arms race, so if many people influenced by longtermist philosophy end up doing stuff that seems harmful, we should update towards 'longtermism tends to be harmful in practice' much more than towards 'those people are not longtermists'.
Hey Sam, I've added Confido to the list, thanks :)
Re EA Focusmate, I've left it off because afaik it doesn't meet the 'putting ongoing work into' criterion, which is there partly to recognise and acclaim such efforts, partly to stop the list from getting swamped with 'someone set up something one time, seems like it could be useful idk' kind of efforts.
Also, I'm biased, but if you want to come and make connections with EAs, I would strongly recommend the Gather Town over it: it's there for exactly that purpose, it's less hassle to set up, and (IMO) it's a generally warmer place.
Thoughts on including EA-oriented tools? I've added one at its creator's suggestion, though I'm not sure whether there are so many such tools that a fair inclusion policy would produce an enormous list (if so, maybe tools would deserve their own advertising thread?). Should I add e.g. Squiggle/Guesstimate/others?
Fair enough. I am still sceptical that this would translate into a commensurate increase in psychopaths in utilitarian communities*, but this seems enough to give us reason for concern.
*Also, violent psychopaths don't seem to be our problem, so their greater intelligence would mean the average IQ of the kind of emotional manipulator we're concerned about would be slightly lower still.
There's any number of possible reasons why psychopaths might not want to get involved with utilitarian communities. For example, their IQ tends to be slightly lower than average, whereas utilitarians tend to have higher than average IQs, so they might not fit in intellectually whether their intentions were benign or malign. Relatedly, you would expect communities with higher IQs to be better at policing themselves against malign actors.
I think there would be countless confounding factors like this that would dominate a single survey based on a couple of hundred (presumably WEIRD) students.
psychopaths disproportionately have utilitarian intuitions, so we should expect communities with a lot of utilitarians to have a disproportionate number of psychopaths relative to the rest of the population.
From psychopaths disproportionately having utilitarian intuitions, it doesn't follow that utilitarians disproportionately have psychopathic tendencies: psychopaths are a small fraction of the population, so even a strong association in one direction implies only a modest shift in the other. We might slightly increase our credence that they do, but probably not enough to meaningfully outweigh the experience of hanging out with utilitarians and learning firsthand about their typical personality traits.
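To make the base-rate point concrete, here's a rough Bayes calculation with purely illustrative numbers (the base rate and conditional probabilities below are assumptions for the sake of the example, not survey results):

$$P(\text{psychopath}\mid\text{utilitarian}) = \frac{P(\text{utilitarian}\mid\text{psychopath})\,P(\text{psychopath})}{P(\text{utilitarian})} \approx \frac{0.3 \times 0.01}{0.1} = 0.03$$

So even if psychopaths were, say, three times as likely as the general population to report utilitarian intuitions, only around 3% of utilitarians would be psychopaths - a tripling of the base rate in relative terms, but a small absolute shift that direct experience of the community could easily swamp.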
Are you claiming that if (they think, and we agree, that) longtermism is 80+% concerned with AI safety work, and AI safety work turns out to be bad, we shouldn't update towards 'longtermism is bad'? The first claim seems to be exactly what they think.
Scott:
You could argue that he means 'socially promote good norms on the assumption that the singularity will lock in much of society's then-standard morality', but 'shape them by trying to make AI human-compatible' seems a much more plausible reading of the last sentence to me, given the context of longtermism.
Neel:
He identifies as not a longtermist (mea culpa), but presumably considers longtermism the source of what he calls 'the core action relevant points of EA', since those points certainly didn't come from the global poverty or animal welfare wings.
Also, at EAG London, Toby Ord estimated there were 'less than 10' people in the world working full-time on general longtermism (as opposed to AI or biotech) - whereas the number of people who'd consider themselves longtermists is surely in the thousands.