ECNR: Efficient Compressive Neural Representation of Time-Varying Volumetric Datasets

Kaiyuan Tang, Chaoli Wang

Due to its conceptual simplicity and generality, compressive neural representation has emerged as a promising alternative to traditional compression methods for managing massive volumetric datasets. The state-of-the-art neural compression solution, neurcomp, however, utilizes a single large multilayer perceptron (MLP) to encode the global volume, incurring slow training and inference. This paper presents an efficient compressive neural representation (ECNR) solution that improves upon neurcomp to handle large-scale time-varying datasets. At the heart of our approach is a multiscale structure that uses the Laplacian pyramid for adaptive signal fitting via implicit neural representation. We leverage multiple small MLPs at each scale for fitting local content or residual blocks. By assigning similar blocks to the same MLP via size uniformization, we enable balanced parallelization among MLPs to significantly speed up training and inference. A deep compression strategy is then employed to compact the resulting model. We demonstrate the effectiveness of ECNR with multiple datasets and compare it with neurcomp and two state-of-the-art conventional compression methods (SZ3 and TTHRESH). Our results position ECNR as a promising alternative to neurcomp for scientific data compression.
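The core idea in the abstract, decomposing a time-varying volume into a Laplacian pyramid and fitting each scale's local blocks or residuals with small coordinate MLPs, can be illustrated with a minimal sketch. This is not the authors' implementation; the module names, network sizes, and the one-block-per-band simplification below are assumptions made purely for illustration, and it omits the size-uniformization, balanced parallelization, and deep-compression steps described in the paper.

```python
# Hypothetical sketch of the multiscale, block-wise MLP fitting idea.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallMLP(nn.Module):
    """A small coordinate MLP mapping normalized (x, y, z) to a scalar value."""
    def __init__(self, in_dim=3, hidden=32, layers=3):
        super().__init__()
        mods = [nn.Linear(in_dim, hidden), nn.ReLU()]
        for _ in range(layers - 1):
            mods += [nn.Linear(hidden, hidden), nn.ReLU()]
        mods.append(nn.Linear(hidden, 1))
        self.net = nn.Sequential(*mods)

    def forward(self, coords):
        return self.net(coords)


def laplacian_pyramid(volume, levels=3):
    """Decompose a volume of shape (1, 1, D, H, W) into residual bands + a coarse base."""
    bands, current = [], volume
    for _ in range(levels - 1):
        down = F.avg_pool3d(current, kernel_size=2)
        up = F.interpolate(down, size=current.shape[2:], mode="trilinear",
                           align_corners=False)
        bands.append(current - up)   # residual band at this scale
        current = down
    bands.append(current)            # coarsest approximation
    return bands                     # fine residuals first, coarse base last


if __name__ == "__main__":
    vol = torch.randn(1, 1, 32, 32, 32)           # stand-in for one timestep
    bands = laplacian_pyramid(vol, levels=3)
    # One small MLP per band here for brevity; the paper assigns
    # multiple small MLPs to local blocks at each scale.
    mlps = nn.ModuleList(SmallMLP() for _ in bands)
    for band, mlp in zip(bands, mlps):
        d, h, w = band.shape[2:]
        zs, ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
            torch.linspace(-1, 1, w), indexing="ij")
        coords = torch.stack([xs, ys, zs], dim=-1).reshape(-1, 3)
        target = band.reshape(-1, 1)
        opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
        for _ in range(10):                        # a few fitting steps for illustration
            opt.zero_grad()
            loss = F.mse_loss(mlp(coords), target)
            loss.backward()
            opt.step()
```

Fitting residual bands rather than the raw volume lets each small MLP model locally simple signals, which is what makes the many-small-MLPs design faster to train and evaluate than one large global MLP.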
