RobustPointSet: A Dataset for Benchmarking Robustness of Point Cloud Classifiers

Saeid Asgari Taghanaki, Jieliang Luo, Ran Zhang, Ye Wang, Pradeep Kumar Jayaraman, Krishna Murthy Jatavallabhula

The 3D deep learning community has made significant strides in point cloud processing over the last few years. However, the datasets on which deep models are trained have largely remained the same: most comprise clean, clutter-free point clouds canonicalized for pose. Models trained on these datasets fail in uninterpretable and unintuitive ways when presented with data that contains transformations "unseen" at train time. While data augmentation enables models to be robust to "previously seen" input transformations, 1) we show that this does not help with transformations that remain unseen during inference, and 2) data augmentation makes it difficult to analyze a model's inherent robustness to transformations. To this end, we create RobustPointSet, a publicly available dataset for analyzing the robustness of point cloud classification models to input transformations, independent of data augmentation. Our experiments indicate that despite all the progress in point cloud classification, PointNet (the very first multi-layer-perceptron-based approach) outperforms other methods (e.g., graph-based and neighbor-based methods) when evaluated on transformed test sets. We also find that most current point cloud models are not robust to unseen transformations even when trained with extensive data augmentation. RobustPointSet can be accessed through
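To make the evaluation setting concrete, the sketch below shows the kinds of test-time transformations such a robustness benchmark might apply to a point cloud (an N × 3 array). The function names, parameter values, and choice of transformations here are illustrative assumptions, not the dataset's actual protocol.

```python
import numpy as np

def rotate_z(points, angle_rad):
    """Rotate a point cloud about the z-axis (a pose change unseen at train time)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T

def jitter(points, sigma=0.01, rng=None):
    """Perturb every point with Gaussian noise (simulates sensor noise)."""
    rng = np.random.default_rng() if rng is None else rng
    return points + rng.normal(scale=sigma, size=points.shape)

def drop_points(points, keep_ratio=0.9, rng=None):
    """Randomly remove a fraction of points (simulates occlusion / missing parts)."""
    rng = np.random.default_rng() if rng is None else rng
    n_keep = int(len(points) * keep_ratio)
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

# A robustness study in this spirit trains a classifier on the clean training
# set only (no augmentation), then reports accuracy separately on each
# transformed copy of the test set to expose failures on "unseen" inputs.
```

The key design point is that the transformations are applied only at test time, so any accuracy drop reflects the model's inherent (un)robustness rather than what augmentation happened to cover.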
