Unsupervised Learning of Spatial Embeddings for Cell Segmentation

We present an unsupervised learning method to identify and segment cells in microscopy images. Our approach leverages assumptions that generally hold in this imaging domain: cells in one dataset tend to have a similar appearance, are randomly distributed in the image plane, and do not overlap. We show theoretically that under these assumptions it is possible to learn a spatial embedding of small image patches such that patches cropped from the same object can be identified in a simple post-processing step. Empirically, we show that these assumptions indeed hold on a diverse set of microscopy images: evaluated on six large cell segmentation datasets, the segmentations obtained with our method in a purely unsupervised way are substantially better than those of a pre-trained baseline on four datasets, and comparable on the remaining two. Furthermore, the segmentations obtained with our method constitute an excellent starting point for supervised training on small amounts of labels. Especially in low-data regimes (using less than 10% of the available annotations), this supported supervised training substantially outperforms purely supervised methods trained on the same amount of annotations.
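
To make the core idea concrete, below is a minimal PyTorch sketch of one way such a spatial embedding could be learned. Everything here is an illustrative assumption rather than the paper's actual formulation: the PatchEmbedder network, the relative-offset training signal, the patch size, and all hyperparameters are hypothetical.

```python
# Minimal sketch (hypothetical; not the paper's exact architecture or loss).
# A small CNN maps each image patch to a 2D "spatial embedding". If patches
# belonging to the same cell agree on a common reference point, instances
# can later be recovered by a simple post-processing (clustering) step.

import torch
import torch.nn as nn

class PatchEmbedder(nn.Module):
    """Maps a (1, 16, 16) grayscale patch to a 2D spatial embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),  # 2D embedding: a point in the image plane
        )

    def forward(self, x):
        return self.net(x)

def relative_offset_loss(emb_a, emb_b, offset_ab):
    """Hypothetical training signal: for two patches cropped at a known
    relative offset, the difference of their embeddings should match that
    offset. Patches on the same rigid, non-overlapping cell can then all
    agree on one reference point for that cell."""
    return ((emb_b - emb_a) - offset_ab).pow(2).mean()

# Toy usage: one gradient step on random data.
model = PatchEmbedder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patch_a = torch.randn(8, 1, 16, 16)   # batch of patches
patch_b = torch.randn(8, 1, 16, 16)   # patches cropped at known offsets
offsets = torch.randn(8, 2)           # (dy, dx) between the crop centers
loss = relative_offset_loss(model(patch_a), model(patch_b), offsets)
opt.zero_grad()
loss.backward()
opt.step()
print(f"loss: {loss.item():.4f}")
```

At inference time, one plausible instantiation of the "simple post-processing step" mentioned above would be to compute an embedding per pixel and cluster the embeddings (e.g. with mean shift), so that pixels voting for the same reference point form one cell instance.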