
Let's say you get the chance to talk to someone in one of the following situations:

  • An entrepreneur attempting to apply existing AI tech (e.g. ChatGPT) to a new niche or application (e.g. helping companies improve their sales messaging)
  • The CEO of a large company or nonprofit (or, really, anyone else with decision-making power), who would like to use existing AI tools to make their operations more efficient, help generate text or images, etc.

Suppose, in each case, that the person is somewhat concerned about AI x-risk, since they've heard you or someone else mention it, but doesn't realise how big a deal it might be.

What, concretely, should you suggest they do to reduce AI x-risk?

I would be very curious to hear your thoughts.


Why you might be concerned that these activities could increase AI x-risk

  1. An organisation could notice how much profit (or other value) can be generated by applying recently-developed AI, or a startup could attract significant AI hype from others (e.g. users) noticing the same
  2. Consequently, the organisation, other organisations or individuals supporting it, or people at large might (a) invest more in AI capabilities research (without a sufficient offsetting investment in AI safety research), or (b) oppose regulations that would slow down or limit the use of new AI tools
  3. Other reasons - please let me know of any you think of

Suggestions you might make, and why they don't seem satisfactory

  1. Just don't use AI at all.
    1. In some cases, this seems too cautious. (E.g. should no-one use ChatGPT?)
    2. It's very unlikely to be persuasive to someone who isn't convinced of AI risk, even if it were the decision-theoretically correct outcome under uncertainty. This is especially true if they are excited about AI.
    3. Race dynamics mean that organisations not (aggressively) using AI may be outcompeted by those that do
  2. Do not build new models; only apply existing ones
    1. This seems unlikely to change much; most people in this situation would already only be applying existing models, and those planning to build new models are unlikely to be persuaded, for reasons 1.2 and 1.3
  3. Use less powerful AI wherever possible (e.g. GPT-3 instead of GPT-4)
    1. Maybe a useful suggestion
    2. But there's probably a 'tax' in terms of user experience, accuracy, efficiency, etc.

Comments

Hey, good question!

First of all, I'd like to recommend Holden's "What AI companies can do today to help with the most important century", which is the closest thing I know of to a reputable answer to your question (though it doesn't answer exactly the same question).

Also see his other posts, like "Spreading messages to help with the most important century", on how to approach related problems (how to explain AI risk, and especially which failure modes to avoid).

 

My own (non-reputable) opinion is that the most important things are:

  1. Getting people on board with this being an actual danger (if you think it is)
  2. Noticing that this is happening because many people are following their own "incentives", such as making profitable products quickly

I know these aren't concrete, but I don't think it's realistic to meet a CEO and get them to change their plans if they're not on board with that.

 

Still, to answer your question concretely, here are my thoughts in order, with the most important at the top:

  1. Don't develop new AI capabilities (that might bring us closer to "AGI")
  2. Don't share capabilities you created
  3. Don't do things that draw a lot more resources into the field, for example:
    1. Imagine the hype around ChatGPT
    2. Imagine adding a new AI org that increases race dynamics
  4. [there's more]

 

I hope that helps

 

Welcome to the EA Forum!