Chris Leong

7715 karma · Joined · Sydney NSW, Australia

Participation
7

  • Organizer of AI Safety Australia and New Zealand
  • Organizer of AI Safety Sydney
  • Completed the AGI Safety Fundamentals Virtual Program
  • Attended an EA Global conference
  • Attended more than three meetings with a local EA group
  • Attended an EAGx conference
  • Completed the In-Depth EA Virtual Program

Sequences
1

Wise AI Wednesdays

Comments
1308

Topic contributions
2

Btw, I just thought I should say that I really appreciate you folks writing this post. I don't want you to think that I disliked your article just because I disagreed on one point (which is how things can sometimes come off).

I should say that I have relatively little management experience (at its largest, AI Safety ANZ when I ran it was me and Yanni part-time, a part-time ops contractor, and an intern), but that said, the key crux for me is this:

• Option 1: hire someone and severely limit their promotional potential, acknowledging the weird dynamics this might create
• Option 2: hire someone with a reasonable level of value alignment and ability to understand strategy

Option 1 might work for specialist roles (i.e. if an org needs an accountant, that person might be fine only ever being an accountant). It's worth noting that even if you do this, there's still a cost from bringing them into the field, insofar as someone else may hire them for a role they'd be ill-suited for.

In terms of understanding strategy, it's important to realise that different people have wildly different worldviews. You can collapse these down to a few dot points and tell yourself that you understand the different perspectives, but you'd just be kidding yourself (I've made this mistake in the past).

I'm pretty busy, but feel free to ping me in like two weeks.

Sorry, autocomplete got me. I meant mentorship. I'll update it.

"Several participants suggested that, for most generalist roles, a competent senior professional can get to a working level of AIS context in weeks"

I'm pretty skeptical of this without a ton of individual mentorship (which I doubt anyone provides) or resources that don't currently exist. My intuition is that people who make these claims have low standards.

Int/a is still new, so my first-level analysis is that it's okay for it to still be in the ideation phase. My second-level analysis is that AI timelines might be short, so maybe this phase needs to be cut short.

I suspect whether it helped or hindered folks likely depends on where they were pre-EA. Did they need to learn to pay more or less attention to cost effectiveness?

I agree that the future will be profoundly weird, although it's an extra step to claim that the future will be profoundly weird in a way that changes what actions animal welfare folks should take (as opposed to being weird in some orthogonal manner).

"We do have other expertise that we’d be happy to trade. Many AI safety folks have proposed just this: animal welfare campaigners are experienced with guerilla campaigns that have pressured some of the world’s largest companies to make modest but meaningful concessions to ethics. We could trade these services to the AI movement, using our skills to win stronger safety and alignment commitments from leading labs, in exchange for technical safety and alignment researchers giving animals their due consideration in overall alignment strategy."

I'm in favour of this proposal. I'd love to see it explored in a future post.

I expect people to update somewhat. My split was more about where people end up falling after initial exposure to arguments on both sides. 

In the past, AI didn't feel so pressing to the EA crowd, so they had more space to explore, rather than the discussion of animals and global poverty feeling like dead weight.

I don't really know.

But that's a good point: Chesterton's fence is a pretty good heuristic.

Probably some people were being a bit pushy advertising their services?
