
CAISID

AI Legislation
145 karma · Joined Feb 2024 · Working (6-15 years)

Bio

I am a computer scientist (to degree level) and legal scholar (to PhD level) working at the intersection between technology and law. I currently work in a legislation role at a major technology company, and as a consultant to government and industry on AI Law, Policy, Governance, and Regulation.

How others can help me

I am looking for opportunities to network with others. I have some scope to take on new projects in 2024, and I am willing to hear from potential collaborators or funders.

How I can help others

Reach out to me to spit-ball AI legislation ideas or for quick feedback. I am also interested in meeting early-career researchers in AI governance or policy.

Comments
44

This is a really interesting post, especially since realistic considerations of how AI can affect crime and terrorism are few and far between in public debate. Much of my own research is in the field of AI in National Security and Defence, and though biosecurity isn't my wheelhouse, I do have some thoughts on what you've all written that may or may not be useful.

I think the arguments in this post match really well with some types of bioterrorist but not others. I'd be interested to read more research (if it yet exists) on how the different 'types' of terrorist would utilise LLMs. I can imagine such technology would be far more useful to self-radicalised and lone actors than to those in more traditional and organised terror structures, for various reasons. The two use cases would also require very different measures to predict and prevent attacks.
 


Future chatbots seem likely to be capable of lowering the bar to such an attack. As models become increasingly “multimodal”, their training data will soon include video, such as university lectures and lab demonstrations. Such systems would not be limited to providing written instructions; they could plausibly use a camera to observe a would-be terrorist’s work and coach them through each step of viral synthesis. Future models (if not mitigated) also seem likely to be able to provide meaningful help in planning attacks, brainstorming everything from general planning, to obtaining equipment, to applying published research toward creating more-hazardous viruses, to where and how to release a virus to cause maximum impact.

The concept of chatbots lowering the bar is a good one, though it comes with the upside that it also makes attacks easier to stop, because it's an intelligence and evidence goldmine. More terrorists having webcams in their houses would be fantastic. The downside, obviously, is that knowledge is more democratised. The bioterrorism element is harder to stop than other direct-action or NBC attacks because the knowledge is 'dual-use': there are plenty of good reasons to access that information, and plenty of bad ones too, unlike some other searches.

The second point about 'meaningful help in planning attacks' is likely to be the most devastating in the short term. The ability to quickly, and at scale, map things like footfall density and security arrangements over geographical areas reduces the timelines for attack planning, which subsequently reduces the time good actors have to prevent attacks. It could also feasibly provide help in avoiding detection. This isn't really a serious infosec hazard, because plenty of would-be criminals already try to find information online or in books to conceal their crimes (there's even a fantastic Breaking Bad scene where Walter White admonishes a novice criminal for making rookie mistakes), but it helps the less 'common sense gifted' avoid common pitfalls, which slightly increases the difficulty of stopping such plots.

It is sometimes suggested that these systems won’t make a meaningful difference, because the information they are trained on is already public. However, the runaway success of chatbots stems from their ability to surface the right information at the right time.

I agree with this, and would add that non-public information becomes public information in unintended and often silly ways. There's actually a serious issue in the defence industry where people playing military simulators like Arma3 or War Thunder will leak classified documents on forums in order to win arguments. I'm not kidding. People in sensitive industries such as policing and healthcare have also been found using things like ChatGPT to answer internal queries or summarise confidential reports, which exposes people's very private data (or active investigations) to the owners of the chatbots and, even worse, to the training data. This information, despite being intended to stay private, then ends up in the databanks of LLMs and might turn up elsewhere. In relation to your post, this would be a concern for use in the pharmaceutical industry. There may be a need for regulation there, as a potential impact lever for what you discuss in your post?

Exclude certain categories of biological knowledge from chatbots and other widely accessible AIs, so as to prevent them from coaching a malicious actor through the creation of a virus. Access to AIs with hazardous knowledge should be restricted to vetted researchers.

I can see why this gets said, and I think it would be useful against self-radicalised loners who lack access to any other tools, but I imagine that larger terror organisations will be working on their own LLMs before long (if they aren't already). Larger terror groups have in the past been very successful at adopting new technologies far faster than their opponents have realistically been ready for; take ISIS and their use of social media and drones, for example. Such a policy could still be effective at reducing the scale of the threat within domestic borders, though. It's not my specialist area, so I'm happy to be corrected by someone for whom it is.

Restricted access is at odds with the established practices and norms in most scientific fields. Traditionally, the modern academic enterprise is built around openness; the purpose of academic research is to publish, and thus contribute to our collective understanding. Sometimes norms need to adapt to changing circumstances, but this is never easy.

Fortunately, there's actually a much larger existing infrastructure here than you might think. I acknowledge you said 'most' and so are probably already aware, but in terms of scale I think it's worth noting it's quite widespread. There are academic conferences geared towards potentially unsafe knowledge that are restricted, some by organisers and some by government. There are events like this, with a significant academic component, that are publicly advertised, but attendees must a) have a provably good reason to attend and b) undergo government vetting. It's not 'hard' to get in on the academic track, just quite restricted. Then there are other types of conference, again a kind of mixture between academia and frontline NS/D, which aren't publicly advertised and are invitation-only or word-of-mouth application only.

The point being that there's quite a good infosec infrastructure on a sliding scale there, which could realistically be imported into the biological sciences (and maybe already is; like I say, not really my wheelhouse). So I think the point you were hinting at here is a really good idea, and I don't think it violates academic principles. Just as you wouldn't leave a petri dish full of unsafe chemicals on a bus, you wouldn't release unsafe knowledge into the world. There are people, however, who vehemently disagree with this, and I'm sure they have their reasons.

I apologise if this comment was overly long, but this post is in a very interesting area and I felt it worth putting the effort in :)

 

This was really interesting, and thank you for being so open; it's a very useful post.

Unfortunately, you can plan perfectly, have the resources you need, and put in all the effort, and a project can still fail due to a variety of factors outside your control. There's no shame in that; it happens to many of us, and I hope this doesn't put you off doing similar work in future.

I actually don't really debate on the forums for this very reason. I too am EA-adjacent (yes I'm aware that's a bit of a meme!) and do not work in the EA sphere. I share insights and give feedback, but generally if people disagree I'm happy to leave it at that. I have a very stressful (non-EA) job and so rarely have the bandwidth for something that has no real upside like forum debate. I may make exceptions if someone seems super receptive, but I totally understand why you feel how you do. 

That's a good idea, I'd be interested in seeing that. It may also be worth reaching out to organisations who deal with this kind of work to ask whether they have any recruitment planned. Most won't respond, but you might make some useful points of contact from any who do. Lots of these places have a recruitment and onboarding team you can usually find buried in the website somewhere.

Answer by CAISID · Feb 26, 2024

I'm aware that most of these jobs are government-issued and mostly kept private or sent by referrals


You're mostly right, but with some caveats. The jobs are publicly listed because they need applicants, but those who work in those industries tend not to (at least directly) label themselves online for security reasons. They'll have their job listed on their LinkedIn, but might be more cagey on forums etc. So they won't exactly be posting 'Come work with me at nuclear reactor xyz!'.
 

It may vary between industry and nation, but when I've worked in similar fields before, most recruiting has been done by specialist recruiters and by public posts on sites like Indeed. From memory, 75% of the recruitment comes from those recruiters, because the applicants tend to already have the right security clearances and vetting in place, which significantly reduces the time and resources required to recruit them. The other 25% are fresh newbies like graduates, who have to take the 'long road' but are preferred for more long-term hires. Many of these come from graduate schemes pitched to universities as well.

I imagine most people qualified for such roles will already have a network (and, I should hope, the required clearances and vetting), and so will be found by recruiters or be recommended roles anyway.
 

That said, there's nothing to stop a list of public posts being made and updated once a week or so - though geographical restrictions might impact that a bit. It might be helpful for people to get a sense of how many roles are around, what is required, and so on.

Obviously it isn't always perfect, and good posts with niche audiences aren't always given much karma.


This was actually a surprisingly useful thing to hear, and I'm glad you included it. It can be quite disheartening (especially for first- or second-time posters) to spend weeks on a post only for it to receive single-digit karma after it goes live. I'd hate to think that ever puts anyone off; it's just something that happens. Things like the draft amnesty are good for overcoming that (I assume), especially for people who are afraid to make posts for fear of harsh criticism.

I may have a crack at taking part in the draft amnesty, if any 'would like to see' suggestions match my research areas.

Looking forward to seeing what else this generates!

This is a very interesting post. I wonder if I could ask your opinion on two things:

  • How do you think China's expansive R&D espionage (whether PLA-centric or individual) will impact this, if at all? Does the ability to (relatively easily and legally) steal AI-related R&D from others affect this prediction at all?
  • Do you think China's internal AI law and policy has a positive or negative impact on the growth rates mentioned in your post? E.g. the Internet Information Service Algorithmic Recommendation Management Provisions or the Interim Measures for Generative AI Service Management? No worries if this isn't your area and you can't answer. It's not really mine either, hence the question.

Thank you for this post :)

This is an interesting post. What are your thoughts on the relationship between AI and poverty? Does the fact that AI has a significant impact on poverty levels, and vice versa, influence your opinions in any way? I also wonder if you have the time to expand on why you think AI would solve or improve global poverty, given that it currently has an adverse effect? Not a criticism or counter-point, just looking to understand your standpoint better :)

This is actually a really well-thought-out, feasible, implementable idea. One aspect of leveraging impact to consider would be the supply-chain impacts of the Transparency Index. I can certainly see valuable buy-in from stores that buy the products and want to show they are ethically sourcing them, and so would potentially buy in to the Transparency Index model. Some would likely then have a 'minimum transparency rating' policy in their procurement and compliance rules, which would be a good avenue for impact, as it then forces producers to achieve that level or lose major contracts to competitors.

You are correct that I was referring more to the natural risks associated with disagreeing with a major funder in a public space (even though OP have a reputation for taking criticism very well), and wasn't referring to friendships. I could have been clearer, and that's on me.
