Let's say you get the chance to talk to someone in one of the following situations:
- An entrepreneur attempting to apply existing AI tech (e.g. ChatGPT) to a new niche or application (e.g. helping companies improve their sales messaging)
- The CEO of a large company or nonprofit (or, really, anyone else with decision-making power) who would like to use existing AI tools to make their operations more efficient, help generate text or images, etc.
Suppose, in each case, that the person is concerned about AI x-risk, since they've heard you or someone else mention it, but doesn't realise how big a deal it might be.
What, concretely, should you suggest they do to reduce AI x-risk?
I would be very curious to hear your thoughts.
Why you might be concerned that these activities could increase AI x-risk
- An organisation could notice how much profit (or other value) can be generated by applying recently-developed AI, or a startup could attract significant AI hype from others (e.g. users) noticing the same
- Consequently, the organisation, other organisations or individuals supporting it, or people at large might (a) invest more in AI capabilities research (without investing enough in AI safety research to offset this), or (b) oppose regulations that would slow down or limit the use of new AI tools
- Other reasons - please let me know of any you think of
Suggestions you might make, and why they don't seem satisfactory
- Just don't use AI at all.
- In some cases, this seems too cautious. (E.g. should no-one use ChatGPT?)
  - It's very unlikely to persuade someone who isn't convinced of AI risk, even if it were the decision-theoretically correct choice under uncertainty. This is especially true if they are excited about AI.
- Race dynamics mean that organisations not (aggressively) using AI may be outcompeted by those that do
- Don't build new models; only apply existing ones
  - This seems unlikely to change much; most people in this situation would already only be applying existing models, and those planning to build new models are unlikely to be persuaded, for the second and third reasons given under "Just don't use AI at all" above
- Use less powerful AI wherever possible (e.g. GPT-3 instead of GPT-4)
  - This may be a useful suggestion
- But there's probably a 'tax' in terms of user experience, accuracy, efficiency, etc.
Hey, good question!
First of all, I'd like to recommend Holden Karnofsky's "What AI companies can do today to help with the most important century", which is the closest thing I know of to a reputable answer to your question (though it doesn't address exactly the same question).
Also see his other posts, like "Spreading messages to help with the most important century", on how to approach similar problems (how to explain AI risk, and especially which failure modes to avoid).
My own (non-reputable) opinion is that the most important things are:
I know these aren't concrete, but I don't think it's realistic to meet a CEO and get them to change their plans if they're not already on board with that.
Still, to answer your question concretely, here are my thoughts in order, with the most important at the top:
I hope that helps
Welcome to the EA Forum!