This is a special post for quick takes by ElliotTep. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
How organisations with low AI usage can and should be using it more
There is a lot of discussion about how everyone should be using AI more, and efforts to increase use and literacy. In the animal advocacy spaces where I work, I’ve seen the following efforts to increase usage:
Orgs provide model subscriptions to their teams.
People share the ways they’ve been using AI in slack channels or recurring meetings.
There are educational webinars or fellowships.
The above has made a real dent in AI usage, but much less than we should be aiming for given the gains left on the table. My sense is that these actions have produced only incremental improvements because:
Significantly upgrading usage requires a lot of dedicated time to experiment and learn in ways that can feel hard during a busy work week.
A great way to learn can be trying a task just outside of one’s ability with someone on hand to help, which is quite hard to set up in the age of remote work.
For folks who don’t have a coding/IT background, it’s hard to know what activities could be automated, or what supportive infrastructure is needed to pull it off.
I think the following would meaningfully improve how much individuals and organisations use AI:
Extended time for peer-to-peer co-working on solving problems with AI (e.g. every second Friday afternoon).
A full week of staff training on AI use, so that lessons can be followed by practice (HT to Eleanor McAree for this one).
Organisations with 20+ staff should hire an AI specialist who works with each team and individual on an ongoing basis to help them use AI to increase their productivity (if someone builds a technical solution, it usually requires maintenance by someone with that level of proficiency).
Smaller organisations could have fractional AI specialists on retainer to do the same thing.
What do people think? What have I missed?
Have you considered that the reason these policies are not increasing AI usage is that AI usage is not particularly useful for many applications? Particularly when it comes to something like animal advocacy, I'm struggling to think of many things you'd actually need a full model subscription for (rather than just asking the occasional question to a free model).
I think the original policies are fine: they let people evaluate and decide for themselves how useful AI models are, and adjust strategies accordingly. Trying to pressure people to use AI beyond this level is going to make your team less effective.