Self-supervision-based deep learning classification approaches have received considerable attention in the academic literature. However, the performance of such methods on remote sensing imagery domains remains under-explored. In this work, we explore contrastive representation learning methods on the task of imagery-based city classification, an important problem in urban computing. We use satellite and map imagery across 2 domains, 3 million locations and more than 1500 cities. We show that self-supervised methods can build a generalizable representation from as few as 200 cities, with representations achieving over 95% accuracy in unseen cities with minimal additional training. We also find that the performance gap between such methods and supervised methods, induced by the domain discrepancy between natural imagery and abstract imagery, is significant for remote sensing imagery. We compare all analyses against existing supervised models from the academic literature and open-source our models (https://github.com/sachith500/self-supervision-remote-sensing-abstraction) for broader usage and further criticism.
Sachith Seneviratne, Kerry A. Nice, Jasper S. Wijnands, Mark Stevenson, Jason Thompson. Self-supervision, remote sensing and abstraction: representation learning across 3 million locations
Conference: Digital Image Computing: Techniques and Applications (DICTA), Year: 2021, doi: https://doi.org/10.1109/DICTA52665.2021.9647061
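
As a rough illustration of the workflow the abstract describes (contrastive pretraining of an image encoder on map and satellite tiles from a subset of cities, followed by minimal additional training of a lightweight classifier on tiles from unseen cities), the following PyTorch sketch uses a SimCLR-style NT-Xent objective with a ResNet-50 backbone. The specific loss, backbone, projection head and hyperparameters are illustrative assumptions rather than the authors' exact configuration; the released models are available in the linked repository.

```python
# Minimal sketch: contrastive pretraining + linear probe on frozen features.
# The NT-Xent loss, ResNet-50 backbone and all hyperparameters below are
# assumptions for illustration, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class ContrastiveEncoder(nn.Module):
    """ResNet-50 backbone with a small projection head (assumed architecture)."""

    def __init__(self, proj_dim: int = 128):
        super().__init__()
        backbone = resnet50(weights=None)
        backbone.fc = nn.Identity()  # keep the 2048-d representation
        self.backbone = backbone
        self.projector = nn.Sequential(
            nn.Linear(2048, 512), nn.ReLU(inplace=True), nn.Linear(512, proj_dim)
        )

    def forward(self, x):
        h = self.backbone(x)                        # representation used downstream
        z = F.normalize(self.projector(h), dim=1)   # projection used by the loss
        return h, z


def nt_xent_loss(z1, z2, temperature: float = 0.5):
    """SimCLR-style NT-Xent loss over two augmented views of the same tiles."""
    z = torch.cat([z1, z2], dim=0)                  # (2N, d), already L2-normalized
    sim = z @ z.t() / temperature                   # pairwise cosine similarities
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))      # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)            # positive pair = the other view


if __name__ == "__main__":
    encoder = ContrastiveEncoder()
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    # One pretraining step on tiles from the "seen" cities; random tensors stand
    # in for two augmented views of a batch of map/satellite tiles.
    view1, view2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
    _, z1 = encoder(view1)
    _, z2 = encoder(view2)
    loss = nt_xent_loss(z1, z2)
    loss.backward()
    opt.step()

    # "Minimal additional training" on unseen cities: freeze the encoder and fit
    # only a linear city classifier on top of the 2048-d representations.
    num_cities = 1500  # illustrative count
    probe = nn.Linear(2048, num_cities)
    with torch.no_grad():
        feats, _ = encoder(torch.randn(8, 3, 224, 224))
    logits = probe(feats)  # train the probe with a standard cross-entropy loss
```

Freezing the backbone and fitting only the final linear layer is one common way to realise the "minimal additional training" evaluation of a pretrained representation; fully fine-tuning the encoder on the unseen cities is the heavier alternative.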