I'm a software engineer on the CEA Online team, mostly working on the EA Forum. We are currently interested in working with impactful projects as contractors/consultants; please fill in this form if you think you might be a good fit for this.
You can contact me at will.howard@centreforeffectivealtruism.org
I think this table from the paper gives a good idea of the exact methodology:
Like others, I'm not convinced this is a meaningful "red line crossing", because non-AI computer viruses have been able to replicate themselves for a long time, and the AI had pre-written scripts it could run to replicate itself.
The reasons (made up by me) that non-AI computer viruses aren't a major threat to humanity are:

1. You can shut them down fairly easily once they're found.
2. They can't improve themselves.

I don't think this paper shows these AI models making a significant advance on either of these two things. I.e. if you found one of these models self-replicating you could still shut it down easily, and this experiment doesn't in itself show the ability of the models to self-improve.
I just sent out the Forum digest, and I thought there was a higher-than-usual number of underrated (and slightly unusual) posts this week, so I'm re-sharing some of them here:
I don't work on the EAG team, but I believe applications haven't opened yet because the exact date and location haven't been decided (cc @RobertHarling)
I think it's a shame the Nucleic Acid Observatory are getting so few votes.
They are relatively cheap (~$2M/year) and are working on a unique intervention that, on the face of it, seems like it would be very important if successful. At least as far as I'm aware, there is no other (EA) org that explicitly has the goal of creating a global early warning system for pandemics.
By the logic of it being valuable to put the first few dollars into something unique/neglected, I think it looks very good (although I would want to do more research if it got close to winning).
Ah, I hadn't thought of that, and I can see how this makes the results indeterminate (because reallocating the votes from one joint-last candidate could bump the other joint-last candidate up from the bottom).
I'll have a think about how to handle this and get back to you. My initial thought is still to break ties randomly, with a stable-but-random ranking of the precedence of each candidate in a tie (rough sketch below).
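For concreteness, here's a minimal sketch of what I mean by a "stable-but-random" precedence, written in TypeScript. This isn't the actual Forum code; the function names and the seeded RNG are placeholders for illustration.

```typescript
// Sketch only: a stable-but-random tie-break for IRV-style elimination.
// Each candidate gets a random precedence drawn once (from a fixed seed),
// so the same tie is always resolved the same way within one count.

type CandidateId = string;

// Placeholder seeded RNG (a simple LCG) so the precedence is reproducible.
const seededRandom = (seed: number) => (): number => {
  seed = (seed * 1664525 + 1013904223) % 2 ** 32;
  return seed / 2 ** 32;
};

// Assign each candidate a precedence by shuffling once (Fisher-Yates).
const buildTieBreakOrder = (
  candidates: CandidateId[],
  seed: number
): Map<CandidateId, number> => {
  const rand = seededRandom(seed);
  const shuffled = [...candidates];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  // Lower precedence = eliminated first when tied
  return new Map(shuffled.map((id, index) => [id, index]));
};

// When several candidates are joint-last, eliminate the one with the
// lowest precedence; the result is deterministic given the seed.
const pickCandidateToEliminate = (
  jointLast: CandidateId[],
  precedence: Map<CandidateId, number>
): CandidateId =>
  jointLast.reduce((worst, candidate) =>
    (precedence.get(candidate) ?? Infinity) < (precedence.get(worst) ?? Infinity)
      ? candidate
      : worst
  );
```

The point of drawing the precedence once per election and reusing it is that if the same pair of candidates ends up joint-last at two different stages of the count, the tie is broken the same way both times, which avoids the indeterminacy you pointed out.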
(Discussed separately) I think it would be best to split the pot 4 ways if this happens, because there is some chance of introducing a bias by deciding when to end based on a property of the votes. Or if there is some reason we can't do this that I'm not aware of (e.g. legal constraints), then breaking the tie with a coin flip.
(@Lorenzo Buonanno🔸 You can consider this the official answer unless I hear otherwise).
I'm curating this post. This was my favourite post from Funding Strategy Week. It makes a straightforward but important point that is useful to keep in mind.
I'm one of the people who agreed with @titotal's comment, and it was because of something like this.
It's not that I'm worried per se that the survey designers will write a takeaway that puts a spin on this question (last time they just reported it neutrally). It's more that I expect this question[1] to be taken by other orgs/people as a proxy metric for the EA community's support for hits-based interventions. And because of the practicalities of how information is acted on, the subtlety of the wording of the question might be lost in the process (e.g. in an organisation someone might raise the issue at some point, but it would eventually end up as a number in a spreadsheet or BOTEC, and there is no principled way to adjust for the issue that titotal describes).
[1] And one other question about supporting low-probability/high-impact interventions.