To your first para - yes, I wonder how unionised the countries and relevant sectors are at bottlenecks in the compute supply chain - the Netherlands, Japan and Taiwan. I don't know enough about the efficacy of boycotts to comment on the union-led boycotts idea.
I've adjusted another comment, but I want to also address here the recurring concern that workers who join a union might organise to accelerate the development of AI. I think that symmetrical argument is unlikely - the history of unions shows a strong tradition of safety: slowing down or stopping work. I do not know of an example of a union that has instead prioritised acceleration, though there are probably some, and it would get grey as you move into the worker self-management space.
I had explicitly considered this while drafting, including whether to state that crux. If stated, it could be an empirical question of whether there is greater support from the workers or management, or greater receptiveness to change.
I did not, because I now think the question is not whether AI workers are more cautious than AI shareholders, but whether AI firms where unionised AI workers negotiate with AI shareholders would be more cautious. My answer to that question is yes.
As you have said, there are examples of individuals who have left firms because they felt their company was too cautious. Conversely, there are individuals who have left for companies that prioritise AI safety.
If we zoom out and take the outside view, it is common for individuals who form a union to take action to slow down or stop their work, or to improve safety. I do not know of an example of a union that has instead prioritised acceleration.
I don't think this is predicated on those assumptions.
My assumptions are:
AI workers who join a union are more likely to care about safety than AI workers who do not join a union. That is because the history of unions suggests that unions promote a culture of safety.
Unionised AI workers will be more organised in influencing their workplace than non-unionised AI workers. That is because of their ability to coordinate collectively.
Furthermore, these unions could be in a position to implement AI safety policies.
For further investigation
How would vetters, whether a regulatory agency or an independent initiative, screen papers? In the case of DNA synthesis, which does not account for all biosecurity-relevant dual-use research, a minimalistic approach is for vetters to utilise an encrypted database of riskier DNA sequences, as proposed by MIT's Prof Kevin Esvelt.
However, dual-use control at the publisher level would presumably not be restricted to DNA synthesis; it would include such things as studies of remote areas at the human-animal boundary.
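To make the hashed-database idea concrete, here is a minimal sketch of how a vetter might screen an order against hazard hashes without ever holding the risky sequences in plaintext. Everything here is illustrative - the window size, the hashing scheme, and the function names are my assumptions, and the real SecureDNA-style proposals use considerably more sophisticated cryptography than a plain SHA-256 lookup.

```python
import hashlib

# Hypothetical parameters for illustration only: screen every
# 20-base window of an order against a set of hashed risky windows.
WINDOW = 20

def hash_window(seq: str) -> str:
    """Hash one fixed-length window of sequence text."""
    return hashlib.sha256(seq.encode()).hexdigest()

def build_hazard_db(risky_sequences):
    """Hash every WINDOW-length substring of each risky sequence.

    The vetter stores only these hashes, not the sequences themselves.
    """
    db = set()
    for seq in risky_sequences:
        for i in range(len(seq) - WINDOW + 1):
            db.add(hash_window(seq[i:i + WINDOW]))
    return db

def screen_order(order: str, db) -> bool:
    """Return True if any window of the order matches the hazard database."""
    return any(hash_window(order[i:i + WINDOW]) in db
               for i in range(len(order) - WINDOW + 1))
```

The design point this illustrates is the minimalism of the approach: the screening side needs only set membership tests on hashes, so neither the synthesiser nor the vetter has to exchange raw hazard sequences.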
Esvelt is in dialogue with the Nuclear Threat Initiative, which is coordinating higher-level conversation in this area. If the publishers you mentioned aren't already part of that dialogue, the best next step may be to connect Nuclear Threat Initiative folks with those academic publishers. But I don't think that means this initiative shouldn't proceed in parallel. I think there is merit in taking some action now in this space, because the conversation that the Nuclear Threat Initiative and co are kindling is a slow, multilateral process - screening DNA synthesis orders is not legally required by any national government at this stage.
The cost of DNA synthesis is declining, and the fixed costs of filtering could grow as a fraction of the total cost, so the viability of a voluntary screening model may be at its highest right now.
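A toy calculation shows why a fixed screening cost looms larger as synthesis gets cheaper. All numbers here are invented for illustration - they are not real synthesis or screening prices, and the halving rate is an assumption.

```python
def screening_share(year, screening_cost=5.0, bases=1000, cost_per_base=0.10):
    """Fraction of total order cost taken up by a fixed screening fee.

    Illustrative assumption: synthesis price per base halves every two years,
    while the per-order screening cost stays constant.
    """
    synthesis = bases * cost_per_base * 0.5 ** (year / 2)
    return screening_cost / (synthesis + screening_cost)

for year in (0, 2, 4, 6, 8):
    print(f"year {year}: screening is {screening_share(year):.0%} of order cost")
```

Under these made-up numbers, screening goes from a few percent of order cost to a large minority share within a decade - which is the intuition behind acting while voluntary screening is still cheap relative to orders.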
I'm interested in being involved, but I don't know much about academic publishing or the technical genomics side, so I'm probably not a fit to be a (solo) project lead. I do know about management, health policy, public administration, stakeholder engagement, communications, etc.