Machine Learning for Document Security: Defense and Attack

Nedim Šrndić


Machine learning has seen many successful applications in information security, for tasks such as the detection of network intrusions and spam email messages and the detection and clustering of malicious executables. One particularly popular attack vector in recent years has been malicious documents: non-executable computer files of various formats, e.g., Portable Document Format (PDF), HTML, Microsoft Office, Adobe Flash, etc. Traditional detection methods based on signature matching struggle with malicious documents because of the complexity and ambiguity of these formats and because massive numbers of heterogeneous malicious documents can easily be produced by embedding polymorphic malware inside them. Machine learning methods are better suited to this task.


The machine-learning-based approach called PJScan [1] detects malicious PDF files based on the JavaScript code they contain. It learns a model of malicious JavaScript from lexical token sequences, i.e., the sequences of tokens produced by lexically analyzing the JavaScript programs embedded in PDF files. PJScan source code is available here.


Fig. 1: PJScan system architecture. Figure taken from [1].
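To make the token-based idea concrete, here is a minimal sketch in Python: a crude regex lexer turns JavaScript into token sequences, token bigram counts serve as features, and a one-class SVM models the malicious class. The example scripts, the lexer, and all parameters are assumptions for illustration only; this is not the original PJScan implementation.

# Minimal sketch of PJScan's core idea (not the original implementation):
# tokenize JavaScript extracted from PDFs and learn a one-class model of
# malicious scripts from token n-gram counts.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import OneClassSVM

TOKEN_RE = re.compile(r"[A-Za-z_]\w*|\d+|\S")  # crude lexer: identifiers, numbers, symbols

def tokenize(js_code):
    """Turn a JavaScript string into a sequence of lexical tokens."""
    return TOKEN_RE.findall(js_code)

# Hypothetical training set: JavaScript snippets extracted from known-malicious PDFs.
malicious_scripts = [
    "var shellcode = unescape('%u9090%u9090'); eval(payload);",
    "function heapSpray() { var a = []; for (var i = 0; i < 1000; i++) a.push(block); }",
]

# Represent each script by counts of token bigrams (n-gram features over the token stream).
vectorizer = CountVectorizer(tokenizer=tokenize, ngram_range=(2, 2), lowercase=False)
X = vectorizer.fit_transform(malicious_scripts)

# The one-class SVM models the region of malicious scripts in feature space.
model = OneClassSVM(kernel="linear", nu=0.1).fit(X)

unknown = vectorizer.transform(["document.write('hello world');"])
print(model.predict(unknown))  # +1: resembles the learned malicious class, -1: it does not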

Another approach to detecting malicious PDF files takes advantage of differences in how malicious and benign files are built, i.e., in their document structure [2].


Fig. 2: The raw PDF file (left) is parsed by a full-fledged PDF parser and traversed using breadth-first search (BFS) to recreate its document structure tree (center). A full walk through the tree extracts all the paths and their counts (right), which are used to discriminate malicious from benign files. Figure taken from [2].
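The following short Python sketch illustrates the path-counting step from Fig. 2. In the real system the structure tree is produced by a full-fledged PDF parser; here the tree is a made-up nested dictionary, so the paths and counts are purely illustrative.

# Count every structural path in a (hypothetical) PDF document structure tree.
from collections import Counter

def count_structural_paths(node, prefix=""):
    """Recursively collect 'parent/child/...' paths and how often each occurs."""
    counts = Counter()
    if isinstance(node, dict):
        for key, child in node.items():
            path = f"{prefix}/{key}"
            counts[path] += 1
            counts.update(count_structural_paths(child, path))
    elif isinstance(node, list):
        for child in node:
            counts.update(count_structural_paths(child, prefix))
    return counts

# Hypothetical structure tree of a PDF file.
doc = {"Catalog": {"Pages": {"Kids": [{"Page": {"Contents": {}}},
                                      {"Page": {"Contents": {}, "Annots": {}}}]},
                   "OpenAction": {"JS": {}}}}

for path, n in sorted(count_structural_paths(doc).items()):
    print(n, path)
# The resulting path counts form the feature vector used to discriminate
# malicious from benign files.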

Evasion of classifiers in feature space

In security contexts, it is crucial to ensure that a learning algorithm performing a security-critical task such as malware detection cannot be influenced by attackers. Therefore, it is important to evaluate and improve the security of the learning algorithms themselves [3].


Fig. 3: Result of an evasion attack against an SVM classifier at test time. Malicious points were manipulated to move from the malicious (red) into the benign (blue) region, thereby evading detection. Figure taken from [3].
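A toy Python sketch of this kind of feature-space evasion: for a linear SVM the gradient of the decision function is the weight vector, so a malicious point can be pushed along the negative gradient until it crosses the boundary. The data, labels, and step size below are arbitrary assumptions, not the setup used in [3].

# Gradient-based test-time evasion of a linear SVM on synthetic 2D data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_benign = rng.normal(loc=[-2, -2], scale=0.5, size=(50, 2))
X_malicious = rng.normal(loc=[2, 2], scale=0.5, size=(50, 2))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 50 + [1] * 50)  # 0 = benign, 1 = malicious

clf = SVC(kernel="linear").fit(X, y)
w = clf.coef_[0]  # gradient of the decision function f(x) = w.x + b

x = X_malicious[0].copy()
step = 0.1
while clf.decision_function([x])[0] > 0:  # > 0 means the "malicious" side
    x -= step * w / np.linalg.norm(w)     # move against the gradient of f(x)

print("original:", X_malicious[0], "-> evading point:", x)
print("now classified as:", clf.predict([x])[0])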

Evasion of classifiers in data space

Moving from theoretical attacks on learning algorithms with known models in feature space to practical attacks on real-world, deployed classifier systems in data space, it was shown that an attacker able to modify only one third of the features used by the classifier PDFrate can severely degrade its performance, making it label a large share of malicious PDF files as benign [4]. Source code for the Mimicus experimental platform used for the PDFrate evasion is available here.
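The core constraint in such data-space attacks is that only some features can actually be changed in a real PDF file. The following simplified Python sketch shows the mimicry idea: set exactly the modifiable features of a malicious sample to the values of a benign target and leave the rest untouched. The feature names, values, and "modifiable" set are hypothetical and not taken from PDFrate or Mimicus.

# Mimicry in data space: imitate a benign file on the modifiable features only.
import numpy as np

feature_names = ["count_javascript", "count_page", "count_image", "pdf_size_kb"]
modifiable = {"count_page", "count_image"}  # features the attacker can change in the actual file

malicious = np.array([3.0, 1.0, 0.0, 12.0])
benign_target = np.array([0.0, 14.0, 9.0, 480.0])

mimicry = malicious.copy()
for i, name in enumerate(feature_names):
    if name in modifiable:
        mimicry[i] = benign_target[i]  # copy the benign value for this feature

print(dict(zip(feature_names, mimicry)))
# The modified feature values must then be realized in a working PDF file,
# which is what makes the attack practical rather than purely theoretical.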


References

[1] Pavel Laskov, Nedim Šrndić. Static Detection of Malicious JavaScript-Bearing PDF Documents. In Annual Computer Security Applications Conference, 2011.

[2] Nedim Šrndić, Pavel Laskov. Detection of Malicious PDF Files Based on Hierarchical Document Structure. In Network and Distributed System Security Symposium, 2013.

[3] Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, Fabio Roli. Evasion Attacks Against Machine Learning at Test Time. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2013.

[4] Nedim Šrndić, Pavel Laskov. Practical Evasion of a Learning-Based Classifier: A Case Study. In IEEE Symposium on Security and Privacy, 2014.

Contact

Nedim Šrndić, Tel.: (07071) 29-77175, nedim.srndic (at) uni-tuebingen.de