No, there is no interesting new method here, it's using LLM scaffolding to copy some files and run a script. It can only duplicate itself within the machine it has been given access to.
In order for AI to spread like a virus it would have to have some way to access new sources of compute, for which it would need be able to get money or the ability to hack into other servers. Neither of which current LLMs appear to be capable of.
Successful self-replication under no human assistance is the essential step for
AI to outsmart the human beings
This seems clearly false. Replication (under their operationalization) is just another programming task that is not especially difficult. There's no clear link between this task and self improvement, which would be a much harder ML task requiring very different types of knowledge and actions.
However, I do separately think we have passed the level of capabilities where it is responsible to keep improving AIs.
This is particularly relevant given the recent letter from Anthropic on SB-1047.
I would like to see a steelman of the letter since it appears to me to significantly undermine Anthropic's entire raison d'etre (which I understood to be: "have a seat at the table by being one of the big players - use this power to advocate for safer AI policies"). And I haven't yet heard anyone in the AI Safety community defending it.
https://www.lesswrong.com/posts/s58hDHX2GkFDbpGKD/linch-s-shortform?commentId=RfJsudqwEMwTR5S5q
TL;DR
Anthropic are pushing for two key changes
Also significantly weakening whistleblower protections.
Yep, maybe. I'm responding specifically to the vibe that this particular pre-print should make us more scared about AI.