Summary
Insider threats are security risks posed by an organisation's own staff. Careless and intentional insider threats cause over 25% of cyber breaches.[1]
In a survey of existential risks involving cybersecurity and AI, I identified key actors in the AI supply chain to protect: compute manufacturers, frontier AI labs, and AI regulatory agencies.
These actions can help key actors reduce insider threats:[2]
- Individual factors:
- Be aware of “dark personality traits” in hires (self-promotion, emotional coldness, deceitfulness, aggressiveness, and more), though candidates may mask these during screening.
- Monitor employees for personal stress, financial issues, dissatisfaction, late/poor quality work, sudden wealth, bragging, suspicious office hours, and signs of substance abuse.
- Train employees on personal cybersecurity (managing logins, risks with smart home devices, financial scams, etc.).
- Organisational factors:
- Create clear whistleblowing/reporting rules. Specifically address NDA and confidentiality concerns for junior staff.
- Create a dedicated insider threat program and manager. Ensure data sharing between it and cybersecurity/HR teams.
- Reduce outsourcing, remote work, and external tools where possible.
- Reduce hierarchies. Ex: create consistent opportunities for junior staff to suggest ideas to senior leaders.
- Technical factors:
- Regularly simulate social engineering and phishing attacks.
- Use the obvious tools: spam filters, firewalls and proxies, access management, network segmentation, etc.
- Try monitoring tools for underused data (like HR or psychological data).[3]
Details on Individual Factors
It's difficult to identify demographic factors that correlate with insider threats.
- It's possible to analyse public records and state conclusions like: "criminals who performed insider threats are more often men than women."
- However, insider threats are rarely reported since companies fear reputational damage and most jurisdictions have no reporting requirements.[2] Thus, datasets likely have biases compared to the real world.
Next, there are correlations between "dark personality traits" and staff who intentionally cause insider threats.
- Examples of dark personality traits are intuitive: "the desire for control, narcissism, ... egocentricity, a socially malevolent character, self-promotion, emotional coldness, [deceitfulness], and aggressiveness"[2]
- Yet employees may mask such traits, especially during the hiring process. Thus, it's impractical to use these traits to proactively screen employees.
Instead, organisations can proactively monitor and encourage employee wellbeing.
- Ex: It helps if staff regularly check in with each other about personal matters, such as financial stability, relational stability, mental health, changes in work times or quality, and signs of substance abuse.
- Proactive strategies depend on the demographics of an organisation's employees. Those employing young parents may offer childcare benefits, whereas those with an older workforce might invest in personal cybersecurity supports. (A hacked personal account can enable fraud or coercion that causes workplace losses.)[2]
The last point is important since insider threats are more often caused by employee error than malicious intent.[1] Heavy-handed employee monitoring programs reduce privacy, trust, and wellbeing,[4] whereas proactive wellbeing supports have few side effects.
Details on Organisational Factors
First, employees who work together are best placed to observe each other's wellbeing. Thus, an organisation's first priority is to increase "see something, say something" behaviour amongst all employees. What gets in the way?
- Reporting colleagues can be seen as distrustful ("ratting out"). To fix this, employers must develop reporting channels primarily to help employees. Most often, reporting colleagues' anomalous behaviours should lead to extra support for them. Reducing malicious insider threats should be a rare side effect.
- Employees may be nervous about the confidentiality of reporting a behaviour, especially if they've signed NDAs or are reporting a senior manager. Thus, clear confidentiality policies about reporting programs are essential.
- In government settings like an AI regulatory agency, this is even more important due to strict information security clearances for public servants.
- Staff must be trained and encouraged to look for warning signs in colleagues. Training on mental health warning signs, red teaming exercises on phishing or social engineering, and leadership celebrating positive outcomes of "see something, say something" policies (like supporting staff in need) can all help.
Separately, some more tangible organisational policies also have an impact.
- For instance, a dedicated insider threat prevention program and manager create persistent focus and accountability for insider threats. This is easily affordable for frontier AI labs, AI regulatory agencies, and compute manufacturers.
- Moreover, it helps to reduce outsourcing, contracting, remote work, and external tools when possible. More people with less direct contact with an organisation's culture means a higher likelihood of misaligned actions by insiders.
Sadly, an organisation can easily do all the right things on paper and still end up with a distrustful, inefficient, and toxic workplace. Though intangible, culture is critical. Even one remark by managers can have a large impact:[5]
- More hierarchy, less cooperation: "Recently, we've had increased cyberattacks and our cybersecurity team has been busy around the clock. These experts know what they're doing, so just leave it to them and focus on your jobs."
- Less trust: "Recently, we've had increased cyberattacks and our cybersecurity team has been busy around the clock. Man, it's so hard to know who to trust these days! I sure am glad they're watching over everything."
- More community: "Recently, we've had increased cyberattacks and our cybersecurity team has been busy around the clock. I'd struggle to focus on my work without them! I'm glad they help run the tools we need to do our best jobs."
Details on Technical Factors
The general theme is that technical safeguards are necessary, but not sufficient. Many solutions are reactive ways to prevent insiders from causing damage.
There are many standard defences to briefly name:
- Spam filters to prevent social engineering against staff.
- The principle of least privilege to limit any one employee's access to sensitive information.
- Zero trust architectures to prevent colleagues' access from being exploited.
- Network segmentation, proxies, and firewalls to prevent damage from spreading.
- Intrusion detection and anomaly detection systems to spot suspicious activities like sensitive data being exported.
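To make the last of these concrete, here is a minimal sketch (Python; the log values, field names, and threshold below are all invented for illustration, not drawn from any cited tool) of flagging unusually large data exports against each employee's own baseline:

```python
from statistics import mean, stdev

# Hypothetical daily data-export volumes (MB) per employee, e.g. parsed from
# proxy or DLP logs. All names and numbers are illustrative placeholders.
export_history = {
    "alice": [12, 9, 15, 11, 10, 14, 13],
    "bob":   [40, 35, 42, 38, 900, 41, 39],  # one day with a large spike
}

Z_THRESHOLD = 2.0  # flag days far above the user's own baseline


def flag_export_anomalies(history, z_threshold=Z_THRESHOLD):
    """Return (user, day_index, volume) tuples that look anomalous
    relative to that user's own history."""
    flags = []
    for user, volumes in history.items():
        if len(volumes) < 3:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(volumes), stdev(volumes)
        if sigma == 0:
            continue
        for day, volume in enumerate(volumes):
            if (volume - mu) / sigma > z_threshold:
                flags.append((user, day, volume))
    return flags


print(flag_export_anomalies(export_history))  # flags bob's 900 MB day
```

A production system would use rolling or leave-one-out baselines and far richer features; the point is only that each employee is compared against their own history rather than a global norm.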
Many improvements to these standard technologies are being researched; here are some that are especially relevant to insider threats.
To start, it's common to monitor device-specific data for outliers. This data reveals some information about the user (unique keystroke patterns, common actions on the device, etc.). Still, it doesn't reveal much about user motivations. Complementing device-based data sources with HR data can reveal more about employee motivations.
- HR data can include business travel history, job title, past projects, salary over time (including perhaps a lack of promotions / raises), and performance reviews.
- As a personified analogy, the HR data can "proactively focus" technical systems to look out for certain anomalies. Ex: An employee on an R&D team with recent poor performance is more likely to cause insider threats by saving sensitive data to unauthorised devices than by bypassing network firewalls (since they're unlikely to have the expertise or insider knowledge to do so). Thus, device-specific logs on file access can be scrutinised more than logs on network requests (a rough sketch follows after this list).
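As a rough sketch of this "proactive focus" (Python; every field name, weight, and score below is a made-up placeholder rather than any real detector's output), HR context could re-weight which device-log anomalies an analyst reviews first:

```python
# Hypothetical device-log anomalies produced by existing monitoring tools.
# Each has a base score from the detector in [0, 1].
device_anomalies = [
    {"user": "carol", "type": "usb_file_copy", "base_score": 0.4},
    {"user": "carol", "type": "firewall_bypass_attempt", "base_score": 0.4},
    {"user": "dave",  "type": "usb_file_copy", "base_score": 0.4},
]

# Hypothetical HR context per employee; fields and values are illustrative.
hr_context = {
    "carol": {"team": "r_and_d", "recent_performance": "poor", "technical_role": False},
    "dave":  {"team": "it_ops",  "recent_performance": "good", "technical_role": True},
}


def contextual_score(anomaly, context):
    """Adjust a detector's base score using HR context.

    Toy heuristic: a struggling, non-technical R&D employee is assumed more
    likely to exfiltrate files than to bypass network controls, so their
    file-related anomalies are boosted; the weights are arbitrary placeholders.
    """
    score = anomaly["base_score"]
    ctx = context.get(anomaly["user"], {})
    if ctx.get("recent_performance") == "poor":
        if anomaly["type"] == "usb_file_copy":
            score += 0.3
        elif anomaly["type"] == "firewall_bypass_attempt" and not ctx.get("technical_role"):
            score -= 0.1  # plausible, but less likely without technical expertise
    return max(0.0, min(1.0, score))


ranked = sorted(device_anomalies,
                key=lambda a: contextual_score(a, hr_context),
                reverse=True)
for anomaly in ranked:
    print(anomaly["user"], anomaly["type"], round(contextual_score(anomaly, hr_context), 2))
```

Here the struggling R&D employee's file-copy alert rises to the top of the queue, while their (implausible) firewall-bypass alert falls below a routine alert from IT operations.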
A similar conclusion applies to psychometric data. That said, psychometric data sources are ethically contentious and not publicly available for developing defensive tools. Specifically, psychological questionnaires may be seen as cumbersome or overbearing by employees, whereas automated data collection tools like social media crawlers may be seen as privacy violations.[4]
These "contextual" data sources can make other insider threat detection systems more adaptive. For example, access management often has static policies set for each team.[6] If an employee has a low trustworthiness score due to some recent logins at suspicious times, a dynamic access management system could temporarily revoke the employee's access to certain sensitive documents.
In short, the general trend in improving technical defences against insider threats is to collect more holistic (and human) data and use it to adapt defences over time.
Personally, I've been intrigued to learn about all these human-focused best practices to reduce insider threats. I'm hoping to get more primary data by interviewing cybersecurity staff at AI labs and compute manufacturers. Any suggestions on who to reach out to are much appreciated!
- ^
G. Bassett, C. D. Hylender, P. Langlois, A. Pinto, and S. Widup, “2022 Data Breach Investigations Report,” Verizon Communications Inc., 2022. Accessed: Nov 15, 2022. [Online]. Available: https://www.verizon.com/business/resources/T501/reports/dbir/2022-data-breach-investigations-report-dbir.pdf
- ^
M. Black et al., Insider Threat and White-Collar Crime in Non-Government Organisations and Industries: A Literature Review. Santa Monica, CA: RAND Corporation, 2022. doi: 10.7249/RRA1507-1.
- ^
L. Liu, O. De Vel, Q. -L. Han, J. Zhang and Y. Xiang, "Detecting and Preventing Cyber Insider Threats: A Survey," in IEEE Communications Surveys & Tutorials, vol. 20, no. 2, pp. 1397-1417, 2018, doi: 10.1109/COMST.2018.2800740.
- ^
J. Love and F. Schmalz, ‘Companies Now Have Many Tools to Monitor Employee Productivity. When Should They Use Them?’, Kellogg Insight. Available: https://insight.kellogg.northwestern.edu/productivity-monitoring. [Accessed: Oct. 04, 2023]
- ^
A. Moore, S. Perl, J. Cowley, M. Collins, T. Cassidy, N. VanHoudnos, P. Buttles-Valdez, D. Bauer, A. Parshall, J. Savinda, E. Monaco, J. Moyes, and D. Rousseau, "The Critical Role of Positive Incentives for Reducing Insider Threats," Carnegie Mellon University, Software Engineering Institute's Digital Library. Software Engineering Institute, Technical Report CMU/SEI-2016-TR-014, 15-Dec-2016 [Online]. Available: https://doi.org/10.1184/R1/6585104.v1. [Accessed: 5-Oct-2023].
- ^
J. Crampton and M. Huth, ‘Towards an Access-Control Framework for Countering Insider Threats’, in Insider Threats in Cyber Security, C. W. Probst, J. Hunker, D. Gollmann, and M. Bishop, Eds., Boston, MA: Springer US, 2010, pp. 173–195. doi: 10.1007/978-1-4419-7133-3_8. Available: https://link.springer.com/10.1007/978-1-4419-7133-3_8. [Accessed: Oct. 05, 2023]
Overall I think this post was well done and introduces valuable approaches, especially the focus on social engineering and the limits of psychometric data collection available to most firms; it covers a lot of different and valuable topics which I expect to be unfamiliar to most readers. Have you thought about cross posting this to LessWrong, where users are friendlier to AI safety?
Something that's worth keeping in mind is that, although base rates of this seem low relative to employee incompetence, it's also true that a sufficiently sophisticated adversary will be highly capable of framing specific employees for the attack. This is important for AI labs, which will be noticed by unusually powerful adversaries, and yet nonetheless must get everything right. They should expect their top-performing security staff to start getting bumped off one by one, the same way they would expect that to happen to leadership.
Securing smart home devices isn't possible, not for anyone in or adjacent to EA anyway. The idea that you can make smart devices themselves safe is ludicrous and dangerous; unless by "securing smart home devices" you meant mitigating personal exposure to them, which I absolutely agree would reduce risk. The threat of smart home devices creating massive psychometric datasets and researching social engineering with sample sizes in the millions is a security nightmare that every developed country has gotten entangled in; just because everyone's doing it doesn't make it sensible or reasonable, just like religion or meat consumption. The current paradigm of constant smart device exposure is already wildly inadequate for the basic infosec that AI labs currently require, let alone for the transformative slow-takeoff world that many anticipate over the next 1-2 decades.
@trevor1 Thank you for the detailed response!
RE: Crossposting to LessWrong
RE: Personal Cybersecurity and IoT