Abstract:
In remote sensing, each sensor can provide complementary or reinforcing information, so fusing the outputs of multiple sensors is valuable for boosting overall performance. Previous supervised fusion methods often require accurate pixel-level labels for the training data. However, in many remote sensing applications, pixel-level labels are difficult or infeasible to obtain. Moreover, the outputs of multiple sensors may differ in resolution and modality (e.g., rasterized hyperspectral imagery versus LiDAR 3D point clouds). This paper presents a Multiple Instance Multi-Resolution Fusion (MIMRF) framework that fuses multi-resolution and multi-modal sensor outputs while learning from ambiguously and imprecisely labeled training data. Experiments were conducted on the MUUFL Gulfport hyperspectral and LiDAR data set and a remotely sensed soybean and weed data set. Results show improved and consistent performance over traditional fusion methods on scene understanding and agricultural applications.
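To make the label-uncertainty setting concrete, the following Python sketch illustrates (hypothetically, not as the authors' MIMRF code) how a single coarse-resolution "bag-level" label can cover several fine-resolution pixels from two sensors; all array names, values, and the naive fused score are made-up assumptions for illustration only.

```python
# Minimal illustration of bag-level (multiple instance) labels for fusion training data.
# This is NOT the MIMRF algorithm; it only shows the labeling setting from the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel confidences from two sensors (e.g., hyperspectral and LiDAR classifiers)
hsi_conf = rng.random(16)
lidar_conf = rng.random(16)

# Group fine-resolution pixels into bags that each share one imprecise, coarse label.
# A bag is labeled positive if it is believed to contain at least one target pixel.
bag_size = 4
bags = [np.stack([hsi_conf[i:i + bag_size],
                  lidar_conf[i:i + bag_size]], axis=1)
        for i in range(0, 16, bag_size)]
bag_labels = [1, 0, 1, 0]  # bag-level labels only; no per-pixel ground truth

# A naive fused score per bag: average the two sensors per pixel, then take the max over pixels.
# MIMRF instead *learns* the fusion from such bag-level labels rather than using a fixed rule.
for bag, y in zip(bags, bag_labels):
    fused = bag.mean(axis=1).max()
    print(f"bag label={y}, naive fused score={fused:.2f}")
```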
Citation:
X. Du and A. Zare, “Multi-Resolution Multi-Modal Sensor Fusion For Remote Sensing Data With Label Uncertainty,” in IEEE Trans. on Geoscience and Remote Sensing (TGRS), vol. 58, no. 4, pp. 2755-2769, April 2020.
@Article{Du2019MIMRF,
  Title   = {Multi-Resolution Multi-Modal Sensor Fusion For Remote Sensing Data With Label Uncertainty},
  Author  = {Xiaoxiao Du and Alina Zare},
  Journal = {IEEE Trans. on Geoscience and Remote Sensing (TGRS)},
  Year    = {2020},
  Volume  = {58},
  Number  = {4},
  Pages   = {2755--2769},
}