ElliotTep

1973 karma

Comments (69)

How organisations with low AI usage can and should be using it more

There is a lot of discussion about how everyone should be using AI more, and efforts to increase use and literacy. In the animal advocacy spaces where I work, I've seen the following efforts to increase usage:

  1. Orgs provide model subscriptions to their teams.
  2. People share the ways they’ve been using AI in slack channels or recurring meetings.
  3. There are educational webinars or fellowships. 

These efforts have made a real dent in AI usage, but much less of one than we should be aiming for, given the gains left on the table. My sense is that these actions have only produced incremental improvements because:

  1. Significantly upgrading usage requires a lot of dedicated time to experiment and learn in ways that can feel hard during a busy work week.
  2. A great way to learn can be trying a task just outside of one’s ability with someone on hand to help, which is quite hard to set up in the age of remote work.
  3. For folks who don’t have a coding/IT background, it’s hard to know what activities could be automated, or what supportive infrastructure is needed to pull it off.

I think the following would meaningfully improve how much individuals and organisations use AI:

  1. Extended time for peer-to-peer co-working on solving problems with AI (e.g. every second Friday afternoon).
  2. A full week of staff training on AI use, so that lessons can be followed by practice (HT to Eleanor McAree for this one).
  3. Organisations with 20+ staff should hire an AI specialist who goes from team to team and person to person to help them use AI to increase their productivity on an ongoing basis (I think if someone builds a technical solution, it usually requires maintenance by someone with that level of proficiency).
  4. Smaller organisations could have fractional AI specialists on retainer to do the same thing.  

What do people think? What have I missed?

This is very sweet. Made my day :)

I had in mind the EA Animal Welfare fund, where small orgs make up a reasonable part of its giving portfolio (I don't have exact numbers off the top of my head).

FWIW the main point I wanted to make in this post is that individuals should not be reaching out to Anthropic staff who don't actively indicate they want to be pitched directly. Part of our strategy is to have a high-trust call to action, but this is mostly based on conversations we've had with Anthropic staff themselves. 

I'm not particularly against newer funds coming on to the scene and agree with a lot of the comments in this post about the pros of doing so.

As a small nitpick, some of these major funds do give a lot of smaller donations to smaller, less established organizations, so I wouldn't say major funds = money goes to major orgs.

I interpreted this as the challenge of setting up a foundation with one purpose in mind, and then the people you hire executing something different because of the values they bring to the table. In general, I'd guess that people who work in philanthropic spaces skew left-wing, and so whatever mandate you set will end up skewing more left-wing than you intend (if you yourself are not left-wing). 

Apologies, by that I mean a few Anthropic staff said one thing that was missing from the donor advisor space was recommendations of what % of their donations to allocate across cause areas, so this is something I tried to make happen by advocating for a few other organisations and individuals to do this.

Hi Abraham, I'm curious what you think. One difference between FTX and this situation is that FTX hired grantmakers to do the disbursing work. My impression is that most Anthropic staff don't have the time or expertise to set this up themselves, even with a model like a giving circle, nor do they want to.

A challenge in recreating FTX's level of willingness to fund ambitious projects is that Anthropic donors would either need to spend the time setting up foundations individually, or someone with the right expertise would need to set up their own fund and join the fray on more speculative work.

FWIW my vague impression (I have less visibility into other cause areas) is that as funds anticipate an influx of funding coming into the space, funding more ambitious and speculative bets seems to be  a part of the conversation (while hopefully reducing the downsides that came with FTX funding). 

Hi Nick, thanks for engaging. I agree that in writing this, there is a level of scrutiny I've opened myself up to. I'll respond to some of the main points:

  1. I agree that everything I've said in this post conveniently aligns with my job. But I've said these things not to gatekeep, but because I think they're true and have significant implications for the future of funding in EA.
  2. I endeavour to provide services to Anthropic staff that sit at the intersection of valuable to them AND good for the world. For example, I've spent a fair bit of time advocating for recommended default splits across cause areas based on feedback from a few Anthropic staff. We've also developed resources on some of the main fund options in the animal advocacy space and run an event in SF to ask questions of the fund managers.
  3. The default preference to defer to funds has come from Anthropic staff communicating that for most of them, that's their preference due to lacking the time or expertise. If individuals at Anthropic have wanted to donate to individual organisations, we've been happy to make introductions or specific recommendations.
  4. I agree there is a collective action problem at the level of funds, and how that is navigated is important. I just think that it is a much smaller pool of pitches than at the organisation level. FWIW there have been ongoing efforts among the funders in FAW to coordinate to reduce the collective action problem. 

These posts need warnings that if you have any important work to do in the next hour not to click on them. Watching these videos is too damn tempting! Well done to the team as always! 
