Forensicability of Deep Neural Network Inference Pipelines

Alexander Schlögl, Tobias Kupek, Rainer Böhme

We propose methods to infer properties of the execution environment of machine learning pipelines by tracing characteristic numerical deviations in observable outputs. Results from a series of proof-of-concept experiments obtained on local and cloud-hosted machines give rise to possible forensic applications, such as the identification of the hardware platform used to produce deep neural network predictions. Finally, we introduce boundary samples that amplify the numerical deviations in order to distinguish machines by their predicted label only.
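A minimal, hypothetical sketch (in NumPy, not the authors' code) of the core observation: the same fixed float32 computation can produce slightly different bit patterns depending on the BLAS backend, SIMD width, and FMA usage of the executing machine, and those deviations can serve as a platform fingerprint.

import hashlib
import numpy as np

# Identical inputs on every machine, thanks to a fixed seed.
rng = np.random.default_rng(seed=0)
x = rng.standard_normal((64, 64), dtype=np.float32)
w = rng.standard_normal((64, 64), dtype=np.float32)

# Reference result: explicit left-to-right accumulation in float32.
ref = np.zeros((64, 64), dtype=np.float32)
for k in range(64):
    ref += np.outer(x[:, k], w[k, :])

# Library result: delegated to the platform's BLAS implementation,
# which may reorder accumulations or fuse multiply-adds.
out = x @ w

# The two results are numerically close but often not bit-identical;
# the exact pattern of deviations characterizes the execution environment.
deviation = np.abs(out - ref).max()
fingerprint = hashlib.sha256(out.tobytes()).hexdigest()[:16]
print(f"max |deviation| = {deviation:.2e}, output fingerprint = {fingerprint}")

In a full inference pipeline, such per-layer deviations accumulate through the network, which is what makes the observable outputs informative about the hardware and software stack that produced them.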
