
Peter

527 karma · Joined Aug 2021 · Working (0-5 years)

Bio


Interested in AI safety talent search and development. 

How others can help me

  1. Discuss charity entrepreneurship ideas, nuts & bolts. 
  2. Recommend guest speakers for virtual discussions on AI alignment, biosecurity, animal welfare, AI governance, and charity entrepreneurship.
  3. Connect me with peers, partners, or cowriters for research or fiction. 

How I can help others

Making and following through on specific concrete plans. 

Comments
118

Topic Contributions
2

That's a good point. Although 1) if people leave a company for one that prioritizes AI safety, that means there are fewer workers at all the other companies who feel as strongly, so a union is less likely to improve safety there; 2) it's common for workers to take action to improve their own safety conditions, and much less common for them to act on issues that don't directly affect their work, such as air pollution or carbon pollution; and 3) if safety-inclined people become tagged as wanting to slow down the company in general, then hiring teams will likely start filtering out many of the most safety-minded candidates. 

I've thought about this before and talked to a couple of people at labs about it. I'm pretty uncertain whether it would actually be positive. It seems possible that most ML researchers and engineers want AI development to go as fast as leadership does, or faster, whether because they're excited about working on cutting-edge technology, about changing the world, or for equity reasons. I remember articles about people leaving Google for companies like OpenAI because they thought Google had become too slow and cautious and had lost its "move fast and break things" ethos. 

Really appreciate this post. Recently I've felt less certain about whether slowing down AI is feasible or helpful in the near future. 

I think how productive current alignment and related research is right now is a key crux for me. If it's actually quite valuable, then having more time would seem better. 

It does seem easier to centralize now, while there are fewer labs and fewer entrenched ways of doing things, though it's possible that exponentially rising costs could lead to centralization through market dynamics anyway. Then again, that might be short-lived: a later breakthrough could change the cost of training dramatically. 

Yes, it seems difficult to pin those down. Looking forward to the deeper report!

I really want to see more discussion of this; there's clearly serious effort put into it. I've often felt that nuclear is perhaps overlooked or underemphasized even within EA. 

Actually, they are the same type of error. EA prides itself on using evidence and reason rather than taking others' assessments at face value. So the idea that others also failed to sufficiently rely on experts, who could have obtained better evidence and reasoning to vet FTX, is less compelling to me as an after-the-fact justification for EA as a whole not doing so. I think probably no one really thought much about the possibility, and looking for this kind of social proof helps us feel less bad. 

Yeah, I do sometimes wonder if perhaps there's a reason we find it difficult to resolve this kind of inquiry. 

Yes, I think they're generally pretty wary of saying anything too exact, since it's sort of beyond conceptual comprehension: something probably beyond our ideas of existence and nonexistence. 

Glad to hear that! You're welcome :)
