{"id":3564,"date":"2018-05-03T13:01:39","date_gmt":"2018-05-03T18:01:39","guid":{"rendered":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/?p=3564"},"modified":"2026-04-06T19:53:08","modified_gmt":"2026-04-06T23:53:08","slug":"du2018multi","status":"publish","type":"post","link":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/2018\/05\/03\/du2018multi\/","title":{"rendered":"Multi-Resolution Multi-Modal Sensor Fusion For Remote Sensing Data With Label Uncertainty"},"content":{"rendered":"<h2>Abstract:<\/h2>\n<p>In remote sensing, each sensor can provide complementary or reinforcing information. It is valuable to fuse outputs from multiple sensors to boost overall performance. Previous supervised fusion methods often require accurate labels for each pixel in the training data. However, in many remote sensing applications, pixel-level labels are difficult or infeasible to obtain. In addition, outputs from multiple sensors may have different levels of resolution or modalities (such as rasterized hyperspectral imagery versus LiDAR 3D point clouds). This paper presents a Multiple Instance Multi-Resolution Fusion (MIMRF) framework that can fuse multi-resolution and multi-modal sensor outputs while learning from ambiguously and imprecisely labeled training data. Experiments were conducted on the MUUFL Gulfport hyperspectral and LiDAR data set and a remotely-sensed soybean and weed data set. 
Results show improved, consistent performance on scene understanding and agricultural applications when compared to traditional fusion methods.<\/p>\n<h2>Links:<\/h2>\n<p><a href=\"https:\/\/ieeexplore.ieee.org\/document\/8931670\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-426 size-thumbnail\" src=\"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-content\/uploads\/sites\/759\/2016\/09\/ieee-150x150.jpg\" alt=\"IEEE Xplore page for the paper\" width=\"150\" height=\"150\" \/><\/a><\/p>\n<h2><a href=\"https:\/\/arxiv.org\/abs\/1805.00930\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-470\" src=\"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-content\/uploads\/sites\/759\/2016\/09\/arxiv.png\" alt=\"arXiv page for the paper\" width=\"90\" height=\"90\" \/><\/a><\/h2>\n<h2><a href=\"https:\/\/github.com\/GatorSense\/MIMRF\"><img decoding=\"async\" src=\"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-content\/uploads\/sites\/58\/2016\/09\/GitHub-Mark.png\" alt=\"GitHub repository for the MIMRF code\" height=\"120\" \/><\/a><\/h2>\n<h2>Citation:<\/h2>\n<pre><code>X. Du and A. Zare, \u201cMulti-Resolution Multi-Modal Sensor Fusion For Remote Sensing Data With Label Uncertainty,\u201d in <em>IEEE Trans. on Geoscience and Remote Sensing (TGRS),<\/em>\u00a0vol. 58, no. 4, pp. 2755-2769, April 2020.<\/code><\/pre>\n<pre><code>@Article{Du2019MIMRF,\nTitle = {Multi-Resolution Multi-Modal Sensor Fusion For Remote Sensing Data With Label Uncertainty},\nAuthor = {Xiaoxiao Du and Alina Zare},\nJournal = {IEEE Trans. on Geoscience and Remote Sensing (TGRS)},\nYear = {2020},\nVolume = {58},\nNumber = {4},\nPages = {2755--2769},\n}\n<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Abstract: In remote sensing, each sensor can provide complementary or reinforcing information. It is valuable to fuse outputs from multiple sensors to boost overall performance. 
Previous supervised fusion methods often require accurate labels for each pixel in the training data. However, in many remote sensing applications, pixel-level labels are difficult or infeasible to obtain. In [&hellip;]<\/p>\n","protected":false},"author":28,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"single-templates\/single-sidebar-none.php","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"featured_post":"off","footnotes":"","_links_to":"","_links_to_target":""},"categories":[19],"tags":[313,365,411,421,479,487,649],"class_list":["post-3564","post","type-post","status-publish","format-standard","hentry","category-journal_paper","tag-fusion","tag-hyperspectral","tag-label-uncertainty","tag-lidar","tag-multi-resolution","tag-multiple-instance","tag-scene-understanding"],"acf":[],"_links":{"self":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts\/3564","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/users\/28"}],"replies":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/comments?post=3564"}],"version-history":[{"count":2,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts\/3564\/revisions"}],"predecessor-version":[{"id":16361,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts\/3564\/revisions\/16361"}],"wp:attachment":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/media?parent=3564"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/categories?post=3564"},{"taxonomy":"po
st_tag","embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/tags?post=3564"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}