PeterSlattery

@ BehaviourWorks/Monash University/Ready Research
Working (6-15 years of experience)

Bio

Participation
4

Behaviour change researcher at BehaviourWorks Australia, Monash University, and part of the team at Ready Research (https://www.readyresearch.org/).

Occasional entrepreneur

Former movement builder for EA groups at i) UNSW (Sydney, Australia), ii) Sydney, Australia, and iii) Ireland.

Marketing Lead for the 2019 EAGx Australia conference.

Current lead for the EA Behavioral Science Newsletter (https://forms.gle/cL2oJwTenwnUNRTc6).

See my LinkedIn profile for more.

Leave (anonymous) feedback here: https://forms.gle/c2N8PvNZfTPtUEom7

How others can help me

I am exploring whether I should start working on AI safety movement building and would welcome suggestions and feedback on my ideas.

How I can help others

Please feel comfortable reaching out if you would like to connect or think I can help you with something. I don't take myself too seriously and like to help people. I am very busy, though, and often a bit overwhelmed, so there might be a delay in my response!

Things that I might be useful for:

Building a network on LinkedIn

Getting social science research experience

Running social science research projects to produce academic outputs

Mental health advice or support

Setting up/running EA groups

Changing behaviour/marketing/growing new projects

Basic advice about working with government/policymakers

Sequences
1

A proposed approach for AI safety movement building

Comments
296

Topic Contributions
3

I appreciate that you took the time to explain your perspective, and I am sorry to hear that you feel as you do. I think it is understandable and I sympathise. 

I hope things improve, and I think that they will.

Some very quick thoughts:

Even if you don't feel part of the community, you could perhaps still consider keeping an EA identity at the level of values.

For instance, you could continue to believe that you should i) try to do good and ii) try to do it effectively - arguably the core values that underpin EA as a philosophy. 

I think that these are rare and admirable values and that EA is just one (though maybe the best) label of many that people use to communicate that they have them. 

I don't identify very strongly with the EA community, but I identify strongly with the core values as I see them. 

I have been thinking something similar, so I will take this as a chance to say that I really appreciate all your work and commitment. I also really sympathise about the stress that recent events have probably caused. It feels a bit trite and empty to say, but I really mean it. I really hope that things calm down soon.

Thanks for sharing and for all your work!

What you have done already has been quite useful to me and I am excited to see what you do next.

Thanks for this! I liked it and found it helpful for understanding the key arguments for AI risk.

It also felt more engaging than other presentations of those arguments because it is interactive and comparative.

I think that the user experience could be improved a little but that it's probably not worth making those improvements until you have a larger number of users.

One change you could make now is to mention the number of people who have completed the tool (maybe on the first page) and also change the outputs on the conclusion page to percentages.

How do you imagine using this tool in the future? Like what are some user stories (e.g., person x wants to do y, so they use this)?

Here are some quick (possibly bad) ideas I have for potential uses (ideally after more testing):

  • As something that advocates like Robert Miles can refer relevant people to
  • As part of a longitudinal study where a panel of say 100 randomly selected AI safety researchers do this annually, and you report on changes in their responses over time.
  • Using a similar approach/structure, with new sections and arguments, to assess levels of agreement and disagreement with different AI safety research agendas within the AI Safety community and to identify the cruxes
  • As a program that new AI Safety researchers, engineers and movement builders do to understand the relevant arguments and counterarguments.

I also like the idea of people making something like this for other cause areas and appreciate the effort invested to make that easy to do.

Thanks for writing this up! 

I find the claim that all of this is an early and preliminary example of a PASTA (Process for Automating Scientific and Technological Advancement) pretty interesting.

I hadn't made that connection. Whisper and GPT-3 will almost certainly help to accelerate science (especially if they are improved and used alongside other tools), and there is already related discussion of how they are going to affect scientific work.

Now I wonder what Holden thinks the threshold for a 'PASTA' is, and whether he'd agree that this is an example.

Thank you for your update and work on this, Howie. I really appreciate you taking it on.

Thanks, Akash, I really appreciate that you reviewed them and shared that!

Thanks for writing this up, Ismam! It seems comprehensive and useful (at least to a complete novice like me). In particular, time-inhomogeneity seems very useful to model.

Is knowledge of/aptitude with CTMC common among actuaries? 

Also, have you considered doing some more work to apply it to ER with expert support? 

Related to both questions, I'd like to see new approaches like this (e.g., ones which consider time-inhomogeneity) applied to prioritising existential risk and other causes.

One reason for that is that I think a lot of the difference between how EAs and others view the priority of climate change is underpinned by different expectations about how linearly the risk grows over time (e.g., relative to other concerns like AI). It would therefore be great to see someone dive into that more.

My interest is probably not a high-quality signal of demand, though!
