3KG: Contrastive Learning of 12-Lead Electrocardiograms using Physiologically-Inspired Augmentations

Bryan Gopal, Ryan W. Han, Gautham Raghupathi, Andrew Y. Ng, Geoffrey H. Tison, Pranav Rajpurkar

Self-supervised contrastive learning approaches leverage modality-specific context or invariances to pretrain models using unlabeled data. While contrastive learning has demonstrated promising results in the image domain, there has been limited work on determining how to exploit modality-specific invariances in biosignals such as the electrocardiogram. In this work, we propose 3KG, a method to generate positive pairs for contrastive learning using physiologically-inspired 3D augmentations of the 12-lead electrocardiogram. We evaluate representation quality by fine-tuning a linear layer for the downstream task of 24-class diagnosis on the PhysioNet 2020 challenge training data, and find that models trained with physiologically-inspired augmentations both outperform and complement standard time-series augmentations. Our best-performing strategy, which incorporates spatial rotation, spatial scaling, and time masking, achieves a performance increase of 0.16, 0.086, and 0.046 in mean AUROC over a randomly initialized baseline at 1%, 10%, and 100% label fractions respectively. Additionally, we show that the strength of spatial augmentations does not significantly affect the quality of the learned representations. Finally, we investigate the clinical relevance of how physiologically-inspired augmentations affect the performance of our classifier on different disease subgroupings. As expert annotations are often expensive and scarce in medical contexts, our approach highlights the potential of machine learning to tackle medical problems with large quantities of unlabeled biosignal data by exploiting their unique biological properties.
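The augmentation strategy described above can be sketched in NumPy: map the 12-lead ECG into a 3-D vectorcardiogram-like space, apply a random spatial rotation and per-axis scaling, project back to lead space, and finally mask a random time window. This is a hedged illustration, not the paper's implementation: the `TO_VCG` projection matrix here is a random placeholder standing in for a fixed physiological lead-to-VCG transform, and the augmentation ranges (`max_deg`, `max_scale`, `mask_frac`) are illustrative choices.

```python
import numpy as np

# Placeholder 3x12 matrix standing in for a 12-lead-to-VCG projection;
# a real implementation would use fixed physiological coefficients.
rng = np.random.default_rng(0)
TO_VCG = rng.standard_normal((3, 12))
TO_ECG = np.linalg.pinv(TO_VCG)  # 12x3 map back to lead space

def random_rotation(rng, max_deg=45.0):
    """Random 3-D rotation composed from per-axis rotations."""
    a = np.deg2rad(rng.uniform(-max_deg, max_deg, size=3))
    c, s = np.cos(a), np.sin(a)
    Rx = np.array([[1, 0, 0], [0, c[0], -s[0]], [0, s[0], c[0]]])
    Ry = np.array([[c[1], 0, s[1]], [0, 1, 0], [-s[1], 0, c[1]]])
    Rz = np.array([[c[2], -s[2], 0], [s[2], c[2], 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def augment(ecg, rng, max_deg=45.0, max_scale=1.5, mask_frac=0.1):
    """One stochastic view of a (12, T) ECG: rotate and scale in 3-D, then time-mask."""
    vcg = TO_VCG @ ecg                                        # (3, T)
    vcg = random_rotation(rng, max_deg) @ vcg                 # spatial rotation
    vcg = np.diag(rng.uniform(1 / max_scale, max_scale, 3)) @ vcg  # spatial scaling
    out = TO_ECG @ vcg                                        # back to (12, T)
    T = out.shape[1]
    w = int(mask_frac * T)
    start = rng.integers(0, T - w + 1)
    out[:, start:start + w] = 0.0                             # time masking
    return out

# Two independently augmented views of one recording form a positive pair.
x = rng.standard_normal((12, 500))
view1, view2 = augment(x, rng), augment(x, rng)
```

In a contrastive setup such as SimCLR-style training, `view1` and `view2` would be encoded and pulled together in embedding space while being pushed away from views of other recordings.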
