
This post summarises, for a general audience, recent progress in AI-enabled malware detection.

Why read a post about AI-enabled malware detection?  It has large impacts in protecting technology for democratic elections; technology for biosecurity (ex: algorithms to screen dangerous biological products); the trained settings of AI algorithms; and more.

P.S. If you're familiar with applied AI, the technical version of this post may be more interesting to you.

Summary

  • The field's largest challenge is updating defences against ever-changing malware. This will get worse as hackers use AI to modify malware at faster rates. 
    • Predicting future malware or training AI models continuously could help.
  • Many studies in this field are questionable, slowing progress. Studies often violate best practices in AI research. Broken rules include not comparing advanced methods to simple baselines; not assessing performance in realistic conditions; and not being transparent about data sources and data processing.
  • Vulnerabilities are growing with small devices (like temperature controllers for drug production). Simple defensive algorithms are needed to protect them. Algorithms inspired by our immune systems can help.

Traditional Malware Detection

Some early malware detectors saved parts of malware files (signatures) for later. These techniques mainly worked for re-detecting known malware. Ex: an antivirus might save a sequence of bytes (the most basic 'letters' of computer code) from a malicious program. Then, it could detect that same sequence in files it scans later.
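For the programmers reading, here is a minimal sketch of the idea in Python. The signature bytes are made up for illustration; real antiviruses store large databases of them:

```python
# Hypothetical byte signature saved from a previously seen malicious program.
KNOWN_SIGNATURE = bytes.fromhex("deadbeef4242")

def contains_signature(path: str) -> bool:
    """Static check: read the file's raw bytes and search for the saved sequence."""
    with open(path, "rb") as f:
        return KNOWN_SIGNATURE in f.read()
```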

But what if a hacker updates an old malware file to change that sequence? Well, the updated file would still behave similarly to the original. Ex: a malware file could record the keys you press on your keyboard, and small edits to its bytes don't change that. Thus, the behaviour (actions) of malware files can be tracked with less variation than raw bytes.

Note how this new technique must run malware files (to record their actions), whereas older techniques looking for a 'signature' only read a malware file's content. This difference is known as static analysis (read-only) vs. dynamic analysis (run).
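As a toy illustration of the dynamic side, here is a sketch that scans a recorded action trace for a suspicious pattern. The action names are invented for this example; real tools record system calls, file writes, network activity, etc.:

```python
# Hypothetical action names recorded while running a file in a safe test environment.
SUSPICIOUS_PATTERN = ["read_keystroke", "write_file", "send_network"]

def looks_like_keylogger(action_log: list[str]) -> bool:
    """Dynamic check: does the recorded trace contain the pattern, in order?"""
    remaining = iter(action_log)
    return all(action in remaining for action in SUSPICIOUS_PATTERN)
```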

Static analysis techniques are less risky since malware is not run. However, they analyse a lot of information across an entire file. 

  • Aside: most of that information isn't related to a file's malicious actions. For example, a hacker may copy a normal mobile app, but run dangerous code only when one button is pressed. (Source)

Dynamic analysis techniques need more safety precautions when running malware. This could mean running the malware on a test computer with no personal data. Also, dynamic analysis struggles to observe all actions of a malware file. Ex: a hacker could program their malware to only run after two weeks of waiting. So any dynamic analysis that didn't wait two weeks wouldn't detect the malware. (Source)

Due to these tradeoffs, both techniques are used in practice. 


Progress With AI

Common Algorithms

AI algorithms have mainly been used for static (read-only) analysis. Here are the main ways AI approaches in the field differ. (Source)

 

Preprocessing: how to prepare information about malware files before sending it to an AI algorithm. 

Common preprocessing techniques include:

  • Extracting statistics from malware files like the 10 most common instructions. (Source)
  • No preprocessing at all to preserve raw details. (Source)
  • Condensing the file's raw bytes into higher-level alternatives. (Source)

Of these, processing raw bytes is an ideal goal. No special steps are needed and the AI algorithm is easy to update. Still, it's hard since a malware file could have millions of bytes to process.
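As a toy example of the first option, here is a sketch that summarises a file by its most common byte values. This is a stand-in for counting instructions, which would require disassembling the file first:

```python
from collections import Counter

def top_byte_stats(path: str, k: int = 10) -> list[tuple[int, int]]:
    """Summarise a file by its k most common byte values and their counts."""
    with open(path, "rb") as f:
        counts = Counter(f.read())
    return counts.most_common(k)  # e.g. [(0, 10321), (255, 8012), ...]
```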

 

Data Sources: it's rare for researchers to find high-quality data to train algorithms with. Companies may share their data with a few partners, but not everyone.

Aside box: problems with different data sources (Source)

  • Crowdsourced malware files are hard to trust. Researchers can't run tens of thousands of malware files to verify that each one actually behaves maliciously.
  • Researchers often combine malware from different sources. This makes it hard for AI algorithms to learn what malware code looks like. Oversimplified analogy - if an AI algorithm trains with malicious mobile apps from Chinese and U.S. app stores, the algorithm would have to analyse app text in two languages. 
  • (Tens of thousands of) examples of regular software files are hard to find. They're needed to show AI algorithms which files not to detect.

 

Computational Needs: some AI algorithms train for weeks on expensive hardware specially made for AI algorithms. Other algorithms could train in seconds on a regular laptop. Unfortunately, these methods aren't always compared on the same task. So, it's unclear if more expensive technology brings extra performance. (Source)

Overall, this leaves little research on securing simple devices. Examples include smart motion detectors, temperature sensors, medical monitors, etc. Yet, these devices are increasingly common in essential industries. (Source)

Some researchers are trying simple techniques to fix this gap. Here are two case studies that take inspiration from image processing techniques and our immune systems.

 

Case Study 1: Repurposing Image Processing (Source)

For a programmer, a zero-thought way to analyse a malware file might be a neural network. Oversimplified, this algorithm would receive bytes of the file, create settings to process each byte, and use the settings to decide if the file is malware. However, malware files with millions of bytes would need a lot of settings!

A variation of this algorithm can simplify these settings. Convolutional neural networks are algorithms often used for image processing. Their specialty is breaking an image into chunks and reusing the same settings to process every chunk. 

This kind of reuse would be great across a file with millions of raw bytes.  So what if we could split a large malware file into many smaller chunks, reusing the same settings to analyse every chunk?

This somewhat works, but it has issues. Specifically, files are one-dimensional byte sequences, not two-dimensional grids of numbers like images.

  • Converting 1D to 2D means raw bytes that were once right next to each other are now on different rows of the image. 
  • Also, how exactly do we choose the dimensions of the 2D chunks? Should they be 2 x 2 or 4 x 4 or 8 x 2 or ...?

Some researchers got around these issues by just using one-dimensional convolutional neural networks. The key idea is still to break up a large sequence of bytes into smaller chunks, but into one-dimensional chunks in a row instead of two-dimensional chunks in a square.

All this resulted in 10x fewer settings than even the most efficient AI models, and 30x faster training times than comparable cybersecurity algorithms.
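For the programmers reading, here is a minimal sketch of the one-dimensional idea in PyTorch. All hyperparameters (embedding size, filter count, chunk size) are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn

class TinyMalwareCNN(nn.Module):
    """A minimal 1D-CNN sketch; hyperparameters are illustrative."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(256, 8)   # each byte value -> a small vector
        # The same 'settings' (filters) are reused on every 64-byte chunk.
        self.conv = nn.Conv1d(8, 16, kernel_size=64, stride=64)
        self.fc = nn.Linear(16, 1)

    def forward(self, byte_ids):                  # byte_ids: (batch, file_length)
        x = self.embed(byte_ids).transpose(1, 2)  # -> (batch, 8, file_length)
        x = torch.relu(self.conv(x))              # one shared pass over all chunks
        x = x.max(dim=2).values                   # keep each filter's strongest response
        return torch.sigmoid(self.fc(x))          # probability the file is malware

# Usage sketch: a fake 'file' of 4,096 random bytes.
fake_file = torch.randint(0, 256, (1, 4096))
print(TinyMalwareCNN()(fake_file))  # an untrained probability near 0.5
```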

Case Study 2: Artificial Immune Systems

The last algorithm is efficient enough to run on mobile phones. But it still struggles with small devices like smart temperature sensors. The key problem is that neural networks need new computations to analyse every file.

The opposite approach is to run all computations needed for a malware detection algorithm ahead of time and save the results. Thus, only storage space is used, not processing power.

One algorithm that does this is an artificial immune system. It copies our bodies' immune systems. Specifically, our immune systems store tools called antibodies to spot harmful microorganisms later. Similarly, the artificial version stores a specific pattern (signature) from malware files to match against new files. (Source)

Still, these signatures can't simply be stored bytes, as in the early detectors described above. Raw bytes vary a lot between malware files, making such signatures unhelpful for detecting new malware. So artificial immune systems model the way that antibodies evolve to generate signatures. These signatures match more kinds of malware.

First, here are the steps to set up the algorithm: (Source)

  1. Choose data to analyse from malware files. For example, raw bytes, common instructions in the file, which permissions a file needs (ex: for a mobile phone app), any text in the file, etc. 
  2. Temporarily initialise some random signatures ('antibodies'). These are going to be plain old numbers. But the numbers may have meaning that matches the examples above. 

Next, here are the steps repeated while the algorithm is running: (Source)

  1. Compare the similarity of malware file data with current signatures. For programmers, this is done with mathematical functions like the dot product, cosine similarity, or Euclidean distance.
  2. All current signatures are modified randomly. But the least similar ones to malware file data are modified the most. This is like genetic mutation; the least important genes are the most changeable.
  3. The least similar signatures to malware file data are deleted. This is like the evolutionary survival of the fittest.

Repeating those steps eventually makes signatures that resemble malware file data. We can then save these signatures on small devices. The devices can compare the data from any incoming file against the saved signatures. If the similarity is high enough, the file is flagged as malware.
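Here is a toy end-to-end sketch in Python following the steps above. Every constant and the 'malware feature vectors' are made up for illustration; a real system would use the file data chosen in step 1:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def cosine_sim(a, b):
    """Cosine similarity between every row of a and every row of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def evolve_signatures(malware_data, n_sigs=20, n_steps=300, n_drop=5):
    """Toy clonal-selection loop; all constants are illustrative."""
    dim = malware_data.shape[1]
    sigs = rng.normal(size=(n_sigs, dim))  # random starting 'antibodies'
    for _ in range(n_steps):
        # Step 1: score each signature by its best match to any malware example.
        fitness = cosine_sim(sigs, malware_data).max(axis=1)
        # Step 2: mutate all signatures; the worst matches change the most.
        sigs = sigs + 0.1 * rng.normal(size=sigs.shape) * (1 - fitness)[:, None]
        # Step 3: 'survival of the fittest' - replace the weakest signatures.
        fitness = cosine_sim(sigs, malware_data).max(axis=1)
        sigs[np.argsort(fitness)[:n_drop]] = rng.normal(size=(n_drop, dim))
    return sigs

# Usage sketch with made-up 16-number 'malware feature vectors'.
malware_data = rng.normal(loc=2.0, size=(50, 16))
signatures = evolve_signatures(malware_data)

# On a small device: flag a new file if it's too similar to any saved signature.
new_file = rng.normal(loc=2.0, size=(1, 16))
print(cosine_sim(new_file, signatures).max() > 0.8)  # high similarity -> flag as malware
```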

Using a slightly more complex variation, some researchers detected 99 out of 100 malware samples on small devices. (Source)


Unsolved Problems

Having discussed recent advances in the field, where is improvement needed? I see two categories: technical improvements to algorithms and meta improvements to research.

P.S. This section is largely based on my personal opinions after about 100 hours of researching this topic.

Technical

The above case studies showed how malware detection algorithms are getting more efficient. Unfortunately, hackers can still change malware to get around detection systems. In technical terms, this is called creating 'adversarial examples.'

Two factors make adversarial examples in malware detection more challenging than in other AI applications. 

  1. Hackers are willing and able to adjust malware very rapidly. Many hackers globally try to trick malware detection algorithms for a living. In contrast, very few people trick common AI algorithms like language models or image processing algorithms for a living.
  2. Malware can be changed in more possible ways than most other data. Software often has millions to billions of bytes of data to modify. In contrast, images and text can have as few as thousands of data points to modify. 
    1. Also, it's harder for humans to spot suspicious software than suspicious images or text. Thus, it's easier to create adversarial examples for malware than for other data used in AI algorithms.

AI algorithms can continuously train to detect new malware examples. Still, this is reactive, not proactive. Especially if hackers use AI algorithms to generate malware, new malware will spread faster and faster. Thus, a proactive solution is preferable.

A potential next step is to proactively modify malware examples to simulate what hackers might do; the simulated examples could then train AI algorithms in advance. Though this strategy is already being researched, it has a risky side effect: hackers could use the same algorithms to modify their own malware.

Overall, more research is needed to keep malware detection algorithms working after hackers change their techniques (ideally, with few side risks).

A secondary problem is ensuring that malware detection algorithms are secured against 'backdoors.' (Source) Backdoors cause an algorithm to behave unexpectedly when given very specific inputs. Still, this is a secondary problem. It's currently much easier for hackers to bypass malware detection algorithms by updating malware than by creating backdoors. (This is because the hackers would have to influence the training process of a malware detection algorithm to create a backdoor, not just send it new input.)

Meta

In addition to the technical points noted above, there are also more general research practices that would help this field. The most important practices are noted in this paper.

First, more researchers need to compare state-of-the-art algorithms with simple algorithms. This will help determine if the complex models are 'worth it' due to extra performance. For instance, the complicated neural network in the first case study could be compared to human-made checklists of clues about malware.

Next, research papers need more transparent reports of how data were handled.

  • It's notoriously difficult to train malware detection algorithms on datasets which resemble the files they would analyse in real life. 
    • This is because in real life, someone may install hundreds of normal software files and only one malware file. This "imbalance" makes it difficult for AI algorithms to get enough examples of both regular and malicious software.
  • Researchers use many methods to get around this problem. However, these methods can create problems which need to be reported in research papers.
    • As one example, if researchers mix "malware" files from a crowdsourced website with malware files verified by a company, the first dataset might include files that shouldn't be there. Researchers should note that they mixed datasets so any resulting problems can be analysed.

Finally, research on malware detection should report a standard set of evaluation metrics: accuracy, precision, recall, ROC curves, AUC, and the count of data points in various classes.

  • Without too many details, these metrics suit tasks with the "imbalance" of malware files vs. regular software. 
  • Ex: Imagine there are 100 files, 99 of which are regular software and one of which is malware. A "malware detector" that simply says every file is safe will be right 99% of the time. Though, it'll also be useless 100% of the time. Ensuring the metrics above are reported avoids missing this (see the sketch below).
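A quick sketch of why accuracy alone misleads here, using scikit-learn on the toy example above:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0] * 99 + [1]  # 99 regular files, 1 malware file
y_pred = [0] * 100       # a 'detector' that calls every file safe

print(accuracy_score(y_true, y_pred))                    # 0.99 - looks great
print(recall_score(y_true, y_pred, zero_division=0))     # 0.0  - catches no malware
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0  - no true detections
```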

Closing Remarks

Overall, AI-enabled malware detection is an impactful problem that could be called "the ultimate robustness challenge." Personally, I expect the techniques developed in this field will help the general AI safety field to progress. 

As outlined above, however, the field is still crippled by the challenge of keeping algorithms effective even as hackers actively work against them. A lot of interesting work remains to be done to fix this. So I hope that the explanations and citations above will help more minds to work on this. If you have any questions or feedback on my writing, please feel free to comment and I will happily explain my reasoning in more depth :-)
