Learning Site-specific Styles for Multi-institutional Unsupervised Cross-modality Domain Adaptation

Han Liu, Yubo Fan, Zhoubing Xu, Benoit M. Dawant, Ipek Oguz

Unsupervised cross-modality domain adaptation is a challenging task in medical image analysis, and it becomes even harder when the source and target domain data are collected from multiple institutions. In this paper, we present our solution to the multi-institutional unsupervised domain adaptation problem posed by the crossMoDA 2023 challenge. First, we perform unpaired image translation to map source domain images into the target domain, using a dynamic network that generates synthetic target domain images with controllable, site-specific styles. We then train a segmentation model on the synthetic images and further reduce the domain gap by self-training. Our solution achieved first place in both the validation and testing phases of the challenge.
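The abstract does not specify how the site-specific style conditioning is implemented. As a purely illustrative sketch (not the authors' method), one common way to give an image-translation generator a controllable, per-site style is conditional instance normalization, where an embedding of the site ID predicts per-channel scale and shift parameters. All class and parameter names below are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: inject a controllable, site-specific style into a
# generator block via conditional instance normalization. A learned embedding
# of the target site ID produces the per-channel affine parameters.
class SiteConditionalInstanceNorm(nn.Module):
    def __init__(self, num_features: int, num_sites: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # One embedding per site yields (gamma, beta) for each feature channel.
        self.embed = nn.Embedding(num_sites, num_features * 2)

    def forward(self, x: torch.Tensor, site_id: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.embed(site_id).chunk(2, dim=1)   # (B, C) each
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)           # (B, C, 1, 1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return gamma * self.norm(x) + beta


if __name__ == "__main__":
    layer = SiteConditionalInstanceNorm(num_features=64, num_sites=3)
    feats = torch.randn(2, 64, 128, 128)   # feature maps from a generator block
    sites = torch.tensor([0, 2])           # desired target-site style per sample
    out = layer(feats, sites)
    print(out.shape)                       # torch.Size([2, 64, 128, 128])
```

Changing the `site_id` at inference time would then steer the synthetic image toward the appearance of a particular institution, which is the kind of controllability the abstract describes.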
