See this explainer on why AGI could not be controlled well enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
Note: I am no longer part of EA because of the overreaches of the community and its philosophy. I still post here about AI safety.
Actually, it looks like there is a thirteenth lawsuit, filed outside the US: a class-action privacy lawsuit filed in Israel back in April 2023.
Wondering if this is still ongoing: https://www.einpresswire.com/article/630376275/first-class-action-lawsuit-against-openai-the-district-court-in-israel-approved-suing-openai-in-a-class-action-lawsuit
I agree that this implies those people are more inclined to spend the time considering their options. At the least, they like listening to other people give interesting opinions on the topic.
But we’re all just humans, interacting socially in a community. I think it’s good to stay humble about that.
If we’re not, then we make ourselves unable to identify and deal with any information cascades, peer proof, and/or peer group pressures that tend to form in communities.
Three reasons come to mind for why OpenPhil has not funded us.
Does that raise any new questions?
They're not quite doing a brand partnership.
But 80k has featured various safety researchers working at AGI labs over the years. E.g. see OpenAI.
So it's more that 80k has created free promotional content, and given its stamp of approval to working at AGI labs (of course, 'if you weigh up your options, and think it through rationally', like your friends).
Hi Conor,
Thank you.
I’m glad to see that you had already linked to clarifications before, and that you gracefully took the feedback and removed the prompt engineer role. I’m grateful for your openness here.
It makes me feel less like I’m hitting a brick wall. We can have more of a conversation.
~ ~ ~
The rest is addressed to people on the team, and not to you in particular:
There are grounded reasons why 80k’s approach of recommending work at AGI labs – with the hope of steering their trajectory – has supported AI corporations in scaling, while disabling efforts that might actually prevent AI-induced extinction.
This concerns work on your listed #1 most pressing problem. It is a crucial consideration that can flip your perceived total impact from positive to negative.
I noticed that 80k staff responses so far have started by stating disagreement (with my view) or agreement (with a colleague’s view).
That doesn’t do the discussion justice. It’s like responding to someone’s explicit reasons for concern by saying they must be “less optimistic about alignment”. This closes down reasoned conversations, rather than opening them up.
Something I would like to see more of is individual 80k staff engaging with the reasoning.
If some employees actually have the guts to whistleblow on current engineering malpractices…
There are plenty of concrete practices you can whistleblow on that would be effective in getting society to turn against these companies:
Pick what you’re in a position to whistleblow on.
Be very careful to prepare well. You are exposing a multi-billion-dollar company. First meet in person with an attorney experienced in protecting whistleblowers.
Once you start collecting information, take photographs with your personal phone rather than screenshots or USB copies, which might be tracked by software. Make sure you’re not in the line of sight of an office camera or webcam. And so on.
Preferably, before you start, talk with an experienced whistleblower about how to maintain anonymity. The more at ease you are there, the more you can bide your time, carefully collecting and storing information.
If you need information to get started, email me at remmelt.ellen[a/}protonmail<d0t>com.
~ ~ ~
But don’t wait until you can see some concrete, dependable sign of “extinction risk”. By that time, it’s too late.
If labs do engage in behavior that is flagrantly reckless, employees can act as whistleblowers.
This is the crux for me.
If some employees actually have the guts to whistleblow on current engineering malpractices, I have some hope left that having AI safety researchers at these labs still turns out “net good”.
If this doesn’t happen, then they can keep having conversations about x-risks with their colleagues, but I don’t quite see when they will put up resistance to dangerous tech scaling. If not now, when?
Internal politics might change
We’ve seen in which directions internal politics change under competitive pressures.
Nerdy intellectual researchers can wait that out as much as they like. That would confirm my concern here.
Another problem with the NIST approach is its overemphasis on solving for already-identified risks, rather than on the precautionary principle (simply not deploying scaled tech that could destabilise society at scale), or on preventing, and ensuring legal liability for, designs that cause situationalised harms.
Thanks! Also a good example of the many complaints now being prepared by individuals.