
Peter

535 karma · Joined Aug 2021 · Working (0-5 years)

Bio


Interested in AI safety talent search and development. 

How others can help me

  1. Discuss charity entrepreneurship ideas, nuts & bolts. 
  2. Recommend guest speakers for virtual discussions on AI alignment, biosecurity, animal welfare, AI governance, and charity entrepreneurship.
  3. Connect me with peers, partners, or cowriters for research or fiction. 

How I can help others

Making and following through on specific concrete plans. 

Comments

  1. Interesting. Are there any examples of what we might consider a relatively small policy change that received huge amounts of coverage, i.e. something people normally wouldn't care about? These might be informative to compare against hot-button issues like abortion that tend to get a lot of coverage. I'm also curious whether any big issues somehow got less attention than expected, and how their pass/fail margins compare to other states where they got more attention. There are probably some ways to estimate this that are better than others. 
  2. I see. 
  3. I was interpreting it as "a referendum increases the likelihood of the policy existing later." My question is about the assumptions behind this view and the idea that it might be more effective to run a campaign for a policy ballot initiative once and never again. Is this estimate of the referendum effect only for the exact same policy (say, an education tax where the percentage is slightly higher or lower), or for similar policies (a fee, a subsidy, a voucher, or something even more different)? How similar do they have to be? What is the most different policy that existed later that you think would still count?

"Something relevant to EAs that I don't focus on in the paper is how to think about the effect of campaigning for a policy given that I focus on the effect of passing one conditional on its being proposed. It turns out there's a method (Cellini et al. 2010) for backing this out if we assume that the effect of passing a referendum on whether the policy is in place later is the same on your first try is the same as on your Nth try. Using this method yields an estimate of the effect of running a successful campaign on later policy of around 60% (Appendix Figure D20).

I'd be curious to hear about potential plans to address any of these, especially talent development and building the pipeline for AI safety and governance work. 

Very interesting. 
1. Did you notice an effect of how large or ambitious the ballot initiative was? I remember previous research suggesting that consecutive piecemeal initiatives were more successful at creating large change than a single large ballot initiative. 

2. Do you know how much the results vary by state?

3. How different do ballot initiatives need to be for the large first-campaign advocacy effect to apply? Does it hold as long as the policies are not identical, is it cause-specific, or something in between? Is the falloff a smooth gradient, or discontinuous after some tipping point?

This is an inspiring amount of research. I really appreciate it and am enjoying reading it. 

That's a good point, although:

1. If people leave a company to go to one that prioritizes AI safety, there are fewer workers left at the other companies who feel as strongly, so a union is less likely to improve safety there.

2. It's common for workers to take action to improve safety conditions for themselves, and much less common for them to act on issues that don't directly affect their own work, such as air pollution or carbon emissions.

3. If safety-inclined people become tagged as wanting to generally slow the company down, hiring teams will likely start filtering out many of the most safety-minded candidates. 

I've thought about this before and talked to a couple of people at labs about it. I'm pretty uncertain whether it would actually be positive. It seems possible that most ML researchers and engineers want AI development to go as fast as, or faster than, leadership does, whether because they're excited to work on cutting-edge technology, want to change the world, or hold equity. I remember articles about people leaving Google for companies like OpenAI because they thought Google was too slow and cautious and had lost its "move fast and break things" ethos. 

Really appreciate this post. Recently I've felt less certain about whether slowing down AI is feasible or helpful in the near future. 

How productive alignment and related research is right now is a key crux for me. If it's actually quite valuable at the moment, then buying more time seems better. 

It does seem easier to centralize now, while there are fewer labs and less entrenched ways of doing things, though it's possible that exponentially rising costs would lead to centralization through market dynamics anyway. Then again, that might be short-lived if some later breakthrough changed the cost of training dramatically. 

Yes, it seems difficult to pin those down. Looking forward to the deeper report!

I really want to see more discussion about this; there's clearly serious effort put into it. I've often felt that nuclear risk is perhaps overlooked or underemphasized, even within EA. 
