I should say that I have relatively little management experience (at its largest, AI Safety ANZ when I ran it was me and Yanni part-time, a part-time ops contractor, and an intern), but that said, the key crux for me is this:
• Option 1: hire someone and severely limit their promotional potential, acknowledging the weird dynamics this might create
• Option 2: hire someone with a reasonable level of value alignment and ability to understand strategy
Option 1 might work for specialist roles (i.e. if an org needs an accountant, that person might be fine only ever being an accountant). It's worth noting that even if you do this, there's still a cost from bringing them into the field, insofar as someone else may hire them to do a role they'd be ill-suited for.
In terms of understanding strategy, it's important to realise that different people have wildly different worldviews. You can collapse these down to a few dot points and tell yourself that you understand the different perspectives, but you'd just be kidding yourself (I've made this mistake myself in the past).
I'm pretty busy, but feel free to ping me in like two weeks.
"Several participants suggested that, for most generalist roles, a competent senior professional can get to a working level of AIS context in weeks"
I'm pretty skeptical of this without a ton of individual mentorship, which I doubt anyone provides, or resources that don't currently exist. My intuition is that people who make these claims have low standards.
"We do have other expertise that we'd be happy to trade. Many AI safety folks have proposed just this: animal welfare campaigners are experienced with guerrilla campaigns that have pressured some of the world's largest companies to make modest but meaningful concessions to ethics. We could trade these services to the AI movement, using our skills to win stronger safety and alignment commitments from leading labs, in exchange for technical safety and alignment researchers giving animals their due consideration in overall alignment strategy."
I'm in favour of this proposal. I'd love to see it explored in a future post.
I expect people to update somewhat. My split was more about where people end up falling after initial exposure to arguments on both sides.
In the past, AI didn't feel so pressing to the AI crowd, so they had more space to explore; discussion of animals and global poverty didn't feel like dead weight.
I don't really know.
But that's a good point: Chesterton's fence is a pretty good heuristic.
Probably some people were being a bit pushy advertising their services?
Btw, I just thought I should say that I really appreciate you folks writing this post. I don't want you to think that I disliked your article just because I disagreed with one point (which is how these things can sometimes come across).