Despite the popularisation of machine learning models, they often still operate as black boxes, offering no insight into what happens inside the model. A few methods exist that visualise and explain why a model has made a certain prediction. These methods, however, only reveal the causal link between the model's input and output; they do not show how the model learns to represent the data. This paper proposes a method that addresses this issue, with a focus on visualising multi-dimensional time-series data. Experiments on a high-frequency stock market dataset show that the method produces fast and discernible visualisations. Large datasets can be visualised quickly and on a single plot, making it easy for a user to compare the learned representations of the data. The developed method successfully combines known and proven techniques to provide novel insight into the inner workings of time-series classifier models.