PeterSlattery

Research Fellow @ BehaviourWorks/Monash University/Ready Research
2407 karma · Joined Dec 2015 · Working (6-15 years) · Sydney NSW, Australia
www.pslattery.com/

Bio

Behaviour change researcher at BehaviourWorks Australia, Monash University.

Helping develop an EA-related course at the University of Queensland.

Part of the team at Ready Research (https://www.readyresearch.org/).

Occasional entrepreneur.

Former movement builder for EA groups at i) UNSW (Sydney, Australia), ii) Sydney, Australia, and iii) Ireland.

Marketing Lead for the 2019 EAGx Australia conference.

Current lead for the EA Behavioral Science Newsletter (https://forms.gle/cL2oJwTenwnUNRTc6).

See my LinkedIn profile for more.

Leave (anonymous) feedback here: https://forms.gle/c2N8PvNZfTPtUEom7

How others can help me

I am exploring whether I should start working on AI safety movement building and would welcome suggestions and feedback on my ideas.

How I can help others

Please feel comfortable reaching out if you would like to connect or think I can help you with something. I don't take myself too seriously and like to help people. I am very busy though and often a bit overwhelmed, so there might be a delay in response!

Things that I might be useful for:

Building a network on LinkedIn

Getting social science research experience

Running social science research projects to produce academic outputs

Mental health advice or support

Setting up/running EA groups

Changing behaviour/marketing/growing new projects

Basic advice about working with government/policymakers

Posts: 28

Sequences: 1

A proposed approach for AI safety movement building

Comments: 337

Topic Contributions: 3

Thanks for writing this - it was useful to read the pushbacks! 

As I said below, I want more synthesis of these sorts of arguments. I know that some academic groups are preparing literature reviews of the key arguments for and against AGI risk.

I really think that we should be doing that for ourselves as a community, to make sure that we are able to present busy, smart people with more compelling content than a range of arguments spread across many different forum posts.

I don't think that that is going to cut it for many people in the policy space.

Thanks for writing this. I appreciate the effort and sentiment. My quick and unpolished thoughts are below. I wrote this very quickly, so feel free to critique.

The TL;DR is that I think this is good, with some caveats, but also that we need more work on our ecosystem to be able to do outreach (and everything else) better.

I think we need a better AI Safety movement to properly do and benefit from outreach work. Otherwise, this and similar posts calling for outreach/action are somewhat like a call to arms without the strategy, weapons, and logistics structure needed to support them.

Doing the things you mention is probably better than doing nothing (some of these more than others), but it's far from what is possible in terms of risk minimisation and expected impact.
 

What do we need for the AI Safety movement to properly do and benefit from outreach work?

I think that doing effective collective outreach will require us to be more centralised and coordinated. 

Right now, we have people like you who seem to believe that we need to act urgently to engage people and raise awareness, in opposition to other influential people like Rohin Shah and Oliver Habryka, who seem to oppose movement building (though this may just be the recruitment element).

The polarisation and uncertainty promote inaction.

I therefore don't think that we will get anything close to maximally effective awareness raising about AI risk until we have a related strategy and operational plan that has enough support from key stakeholders or is led by one key stakeholder (e.g., Will/Holden/Paul) and actioned by those who trust that person's takes.

Here are my related (low confidence) intuitions (based on this and related conversations mainly) for what to do next:

We need to find/fund/choose some/more people/process to drive overall strategy and operation for the mainstream AI Safety community. For instance, we could just have some sort of survey/voting system to capture community preferences/elect someone. I don't know what makes sense now, but it's worth thinking about. 

When we know what the community/representatives see as the strategy and supporting operation, we need someone/some process to figure out who is responsible for executing the overall strategy and parts of the operations and communicating them to relevant people. We need behaviour level statements for 'who needs to do what differently'.

When we know 'who needs to do what differently', we need to determine and address the blockers and enablers to scale and sustain the strategy and operation (e.g., we likely need researchers to find what communication works with different audiences; communicators to write things and to connect with, and win over, influential/powerful people; recruiters to recruit the human resources; developers and designers to make persuasive digital media; managers to manage these groups; entrepreneurs to start and scale the project; and funders to support the whole thing).

It's a big ask, but it might be our best shot.

 

Why hasn't somebody done this already?

As I see it, the main reason for all of the above is a lack of shared language and understanding, which emerged because of how the AI safety community developed.

Movement building/field building mean different things to different people, and no-one knows what the community collectively supports or opposes in this regard. This uncertainty reduces attempts to do anything on behalf of the community, and the chances of success if anyone tries.

Perhaps because of this no-one who could curate preferences and set a direction (e.g., Will/Holden/Paul) feels confident to do so. 

It's potentially a chicken-and-egg or coincidence-of-wants problem where most people would like someone like Holden to drive the agenda, but he doesn't know this or thinks someone else would be better suited (and they don't know either). Or the people who could lead somehow know that the community doesn't want anyone to lead it in this way, but haven't communicated this, so I don't know that yet.

 

What happens if we keep going as we are?

I think that the EA community (with some exceptions) will mostly continue to function like a decentralised group of activists, posting conflicting opinions in different forums and social media channels, while doing high quality, but small scale, AI safety governance, technical and strategy work that is mostly known and respected in the communities it is produced in.

Various other more centralised groups with leaders like Sam Altman, Tristan Harris, Timnit Gebru, etc., will drive the conversations and changes. That might be for the best, but I suspect not.

 

Urgent, unplanned communication by EAs acting in isolation poses many risks. If lots of people who don't know what works for changing people's minds and behaviours post lots of things about how they feel, this could be bad.

These people could very well end up in isolated communities (e.g., just like many vegan activists I see who are mainly just reaching vegan followers on social media). 

They could poison the well and make people associate AI safety with poorly informed and overconfident pessimists. 

If people engage in civil disobedience, we could end up being feared and hated, and subsequently excluded from consideration and conversation.

Our actions could create abiding associations that will damage later attempts to persuade by more persuasive sources.

This could be the unilateralist's curse brought to life.
 

Other thoughts/suggestions

Test the communication at a small scale (e.g., with a small sample of people on Mechanical Turk or with friends) before you do large-scale outreach.

Think about taking a step back to prioritise between the behaviours and rule out the ones with more downside risk (e.g., it is better to write letters to representatives than posts to large audiences on social media if you are unsure what is persuasive).

Don’t do civil disobedience unless you have read the literature about when and where it works (and maybe just don’t do it - that could backfire badly).
 

Think about the AI Safety ecosystem and indirect ways to get more of what you want by influencing/aiding people or processes within it:

For instance, I'd like for progress on questions like:

- What are the main arguments for and against doing certain things (e.g., the AI pause/public awareness raising), and what is the expert consensus on whether a strategy/action would be a good idea or not (e.g., what do superforecasters/AI orgs recommend)?

- When we have evidence for a strategy/action, then: Who needs to do what differently? Who do we need to communicate to, and what do we know is persuasive to them/how can we test that?

- Which current AI safety projects (e.g., technical, strategy, movement building) are worth prioritising for the allocation of resources (funding, time, advocacy)? What do experts think?

- What voices/messages can we amplify when we communicate? It's much easier to share something good from an expert than to write it ourselves.

- Who could work with others for mutual benefit but doesn’t realise it yet?


I am thinking about, and doing a little of, some of these things, but I have other obligations for the next 3-6 months and some uncertainty about whether I am well suited to do them.

Thanks for taking the time to share, this was a great summary.

It seems like it could be valuable to study the link between coherence and intelligence more carefully.

He linked to his post in the comment. I presume that he believes that it explains why he disagrees. I'd consider that contribution enough for the comment not to deserve downvoting, but I see where you are coming from.

With that said, if he said, "I think we need regulation" and offered two lines of related thoughts and the same link, would people have downvoted his comment for not being useful and being impossible to engage with? Probably not, I suspect.

Anyway, I may be wrong in this case, but I still think that we probably shouldn't be so quick to downvote comments like this (or should at least be a bit more charitable about it), especially for new community members.

I see a lot of stuff on the forum get no comments at all which seems worse than getting a few comments with opinions. 

I often see low-effort disagreeing comments on a post get downvoted, but similarly low-effort agreeable comments (e.g., "this sounds great") get upvoted.

I am also influenced by other factors: discussions I have had and seen where people I know who have been involved in EA for years said that they don't like using the forum because it is too negative or because they don't get any engagement on what they write.

The expectation that lots of lurkers on the forum don't feel comfortable sharing quick thoughts or disagreements because they could get downvotes.

My experiences writing posts that almost no-one commented on where I would have welcomed a 2-minute opinion comment made without arguments or a supposedly supporting link.

But of course other people might disagree with all of that or see different trade-offs etc.

[Quick meta comment to try to influence forum norms] 

This comment was at -5 karma when I saw it, and hidden.

I disagree with Arturo's comment and disagree-voted to indicate this disagreement. I also upvoted his comment because I appreciated that he engaged with the post to express his views and that he posted something on the forum to explain those views.

I'd like other people to do something similar. I think that we should upvote people for expressing good faith disagreement and make an effort to explain that disagreement. Otherwise, the forum will become a complete echo chamber where we all just agree with each other. 

I also think that we should try particularly hard to engage with new people in the community who express reasonable disagreement. Getting lots of anonymous downvotes without useful insights generally discourages engagement in most situations, and I don't think that this is what we want.

Thanks for the thoughts, I really appreciate that you took the time to share them.
