PeterSlattery

@ BehaviourWorks/Monash University/Ready Research
Working (6-15 years of experience)
1487 karma · Sydney NSW, Australia · Joined Dec 2015

Bio

See my LinkedIn profile for what I have done/do. Here is a summary of some key points:

Behaviour change researcher at BehaviourWorks Australia in Monash University and part of the team at Ready Research (https://www.readyresearch.org/).

Occasional entrepreneur

Former movement builder for the EA groups at i) UNSW (Sydney, Australia), ii) Sydney, Australia, and iii) Ireland.

Marketing Lead for the 2019 EAGx Australia conference.

Current lead for the EA Behavioral Science Newsletter (https://forms.gle/cL2oJwTenwnUNRTc6).

Leave (anonymous) feedback here: https://forms.gle/c2N8PvNZfTPtUEom7

How others can help me

I am gradually exploring a pivot into doing more coaching and EA organisation/conference/community marketing and growth work, and would welcome advice and opportunities.
 

How I can help others

Please feel comfortable reaching out if you would like to connect or think I can help you with something. I don't take myself too seriously and like to help people. I am very busy though and often a bit overwhelmed, so there might be a delay in response!

Things that I might be useful for:

Building a network on LinkedIn

Getting social science research experience

Running social science research projects that aim to produce academic outputs

Mental health advice or support

Setting up/running EA groups

Behaviour change/marketing/growth ideas

Advice on working with government/policymakers

Comments
248

Topic Contributions
2

Hey Constance! Thank you for writing this. I am sorry to hear that this has been stressful. I have had several similar experiences where I felt that I was rejected or treated poorly by CEA or EA funders. Sometimes it really upset me and reduced my motivation for a period.

However, I also believe that the people at such orgs are generally very competent and good-natured, and that there are things I don't know or consider which they account for. They have to do difficult work when filtering applications, and they leave themselves open to public criticism that is hard to defend against. I know that I would find that last part very difficult.

Overall, I feel that mistakes are inevitable in these sorts of application processes, especially in assessments of more unusual or novel people/projects. However, I also feel that if you work hard and keep having a good impact, you will usually get the resources and opportunities you deserve.

For what it is worth, I scanned your application, and I admire and appreciate your work. I hope you persevere, and that I see you at a future conference!

Some quick thoughts:

Thanks for all your work on this; it's really great to see it finally happening! I would love it if the survey could identify and compare 'social movement' subgroups such as EA, social justice, socialism, animal welfare, etc. These could be assessed in terms of activism/participation in the subgroups and/or awareness of, and attitudes towards, them.

This would be helpful in several ways. As an example, I think it would be very helpful to better understand the relative differences in values and receptiveness to messages that exist between such groups, and how these change over time.

It could be interesting to explore how attitudes within such groups change when new books and articles are widely publicized.

From a movement building and impact perspective, it seems important to really understand our adjacent social movements. Where are the overlaps and disconnects in shared values? What are each group's major gripes and misconceptions?

I'd welcome any attempt to eventually grow this service to the point where it allows EA orgs and researchers to easily and affordably survey large samples of key audiences (e.g., AI professionals, policymakers). I think the absence of this is an upstream barrier to a lot of important research and message testing.

Thank you for working on this, and congratulations to all the winners! I just wanted to mention that it could be good to have a running competition on the forum for suggesting new cause areas, with an annual awards process. Suggesting new causes seems like a valuable activity to prompt and incentivise.

Thanks for taking the time to write this up, Vael. It's going to be very useful to me for learning about, and sharing information about, AI safety in the future.

I agree with you that we should reward impact more, and I like your suggestions. I think that having better incentives for searching for and praising/rewarding 'doers' is one model to consider. I can imagine a person at CEA being responsible for noticing people who are having underreported impact, offering them conditional grants (e.g., financial support to transition into more study or full-time work), and providing them with recognition by posting about and praising their work on the forum.

Thank you for this great write-up. I completely agree with nearly everything that you have said. I'd love to see more of the recent work from Rethink and Lucius examining public awareness of, and receptivity to, EA. I'd also like to see more audience research to understand which audiences are more or less receptive to EA, why, where they hear about us, what they think we do, etc. Ready Research are also exploring opportunities in this context.
