Learning color space adaptation from synthetic to real images of cirrus clouds

Qing Lyu, Xiang Chen

Training on synthetic data is becoming popular in vision due to the convenient acquisition of accurate pixel-level labels. However, the domain gap between synthetic and real images significantly degrades the performance of the trained model. We propose a color space adaptation method to bridge this gap. A set of closed-form operations is adopted to make color space adjustments while preserving the labels. We embed these operations into a two-stage learning approach and demonstrate the efficacy of the adaptation on the semantic segmentation of cirrus clouds.
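
To make the idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of label-preserving color space adjustments: each operation is a simple closed-form transform applied only to the image pixels, so the paired segmentation mask remains valid. The function names, parameter names, and value ranges are assumptions for illustration; in the paper's two-stage approach such parameters would be learned rather than hand-set.

```python
# Illustrative sketch of closed-form, label-preserving color adjustments.
# All names and parameter ranges here are assumptions, not the paper's API.
import numpy as np


def adjust_gamma(img, gamma):
    """Per-image gamma correction; img is float32 in [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma


def adjust_brightness_contrast(img, brightness, contrast):
    """Affine transform around the mean intensity; closed form and simple to invert."""
    mean = img.mean(axis=(0, 1), keepdims=True)
    return np.clip((img - mean) * contrast + mean + brightness, 0.0, 1.0)


def adjust_channel_gains(img, gains):
    """Scale R, G, B channels independently (a crude white-balance shift)."""
    return np.clip(img * np.asarray(gains, dtype=img.dtype), 0.0, 1.0)


def color_space_adapt(img, mask, params):
    """Apply the chained color adjustments to the image only.

    The segmentation mask is returned unchanged: none of the operations
    move pixels, they only remap color values.
    """
    out = adjust_gamma(img, params["gamma"])
    out = adjust_brightness_contrast(out, params["brightness"], params["contrast"])
    out = adjust_channel_gains(out, params["gains"])
    return out, mask


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    synthetic_img = rng.random((64, 64, 3), dtype=np.float32)   # stand-in synthetic render
    synthetic_mask = (synthetic_img[..., 0] > 0.5).astype(np.uint8)  # stand-in cirrus mask

    # Hand-picked parameters for demonstration; the paper learns them instead.
    params = {"gamma": 0.8, "brightness": 0.05, "contrast": 1.1, "gains": (1.05, 1.0, 0.95)}
    adapted_img, adapted_mask = color_space_adapt(synthetic_img, synthetic_mask, params)
    assert np.array_equal(adapted_mask, synthetic_mask)  # labels are preserved
```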
