{"id":6334,"date":"2020-03-06T10:02:24","date_gmt":"2020-03-06T15:02:24","guid":{"rendered":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/?p=6334"},"modified":"2026-02-18T11:30:07","modified_gmt":"2026-02-18T16:30:07","slug":"multi-target-multiple-instance-learning-for-hyperspectral-target-detection","status":"publish","type":"post","link":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/2020\/03\/06\/multi-target-multiple-instance-learning-for-hyperspectral-target-detection\/","title":{"rendered":"Multi-Target Multiple Instance Learning for Hyperspectral Target Detection"},"content":{"rendered":"<h2>Abstract:<\/h2>\n<p>In remote sensing, it is often challenging to acquire or collect a large dataset that is accurately labeled. This difficulty is usually due to several issues, including but not limited to the study site\u2019s spatial area and accessibility, errors in the global positioning system (GPS), and mixed pixels caused by an image\u2019s spatial resolution. We propose an approach, with two variations, that estimates multiple target signatures from training samples with imprecise labels: Multi-Target Multiple Instance Adaptive Cosine Estimator (Multi-Target MI-ACE) and Multi-Target Multiple Instance Spectral Match Filter (Multi-Target MI-SMF). The proposed methods address the problems above by directly considering the multiple-instance, imprecisely labeled dataset. They learn a dictionary of target signatures that optimizes detection against a background using the Adaptive Cosine Estimator (ACE) and Spectral Match Filter (SMF). Experiments were conducted to test the proposed algorithms using a simulated hyperspectral dataset, the MUUFL Gulfport hyperspectral dataset collected over the University of Southern Mississippi-Gulfpark Campus, and the AVIRIS hyperspectral dataset collected over Santa Barbara County, California. 
Both simulated and real hyperspectral target detection experiments show the proposed algorithms are effective at learning target signatures and performing target detection.<\/p>\n<h2>Links:<\/h2>\n<p><a href=\"https:\/\/ieeexplore.ieee.org\/document\/9375470\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-470\" src=\"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-content\/uploads\/sites\/759\/2016\/09\/ieee.jpg\" alt=\"\" width=\"90\" height=\"90\" \/><\/a> <a href=\"https:\/\/arxiv.org\/abs\/1909.03316\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-470\" src=\"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-content\/uploads\/sites\/759\/2016\/09\/arxiv.png\" alt=\"\" width=\"90\" height=\"90\" \/><\/a><\/p>\n<h2>Citation:<\/h2>\n<pre><code>S.K. Meerdink, J. Bocinsky, A. Zare, N. Kroeger, C. H. McCurley, D. Shats and P.D. Gader, \"Multi-Target Multiple Instance Learning for Hyperspectral Target Detection,\" in IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 60, pp. 1-14, Art no. 5502814, doi: 10.1109\/TGRS.2021.3060966, 2022.<\/code><\/pre>\n<pre class=\"verbatim select-on-click\" title=\"click to copy to clipboard\"><code>@Article {Meerdink2020MTMIHSI,\nauthor = {Susan Meerdink and James Bocinsky and Alina Zare and Nicholas Kroeger and Connor McCurley and Daniel Shats and Paul Gader},\ntitle = {Multitarget Multiple-Instance Learning for Hyperspectral Target Detection},\njournal = {IEEE Transactions on Geoscience and Remote Sensing (TGRS)},\nyear={2022},\nvolume={60},\nnumber={},\npages={1-14},\ndoi={10.1109\/TGRS.2021.3060966}}<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Abstract: In remote sensing, it is often challenging to acquire or collect a large dataset that is accurately labeled. 
This difficulty is usually due to several issues, including but not limited to the study site\u2019s spatial area and accessibility, errors in the global positioning system (GPS), and mixed pixels caused by an image\u2019s spatial resolution. [&hellip;]<\/p>\n","protected":false},"author":28,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"single-templates\/single-sidebar-none.php","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"featured_post":"","footnotes":"","_links_to":"","_links_to_target":""},"categories":[19,9,13],"tags":[53,61,151,273,275,365,487,621,695,697,733],"class_list":["post-6334","post","type-post","status-publish","format-standard","hentry","category-journal_paper","category-news","category-publication","tag-uncertain-imprecise-labels","tag-adaptive-cosine-estimator","tag-classification","tag-endmember","tag-endmember-variability","tag-hyperspectral","tag-multiple-instance","tag-remote-sensing","tag-spectral-matched-filter","tag-spectral-variability","tag-target-detection"],"acf":[],"_links":{"self":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts\/6334","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/users\/28"}],"replies":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/comments?post=6334"}],"version-history":[{"count":1,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts\/6334\/revisions"}],"predecessor-version":[{"id":15327,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts\/6334\/revisions\/15327"}],"wp:attachment":[{"href":"https:\/\/faculty.eng.ufl.edu
\/machine-learning\/wp-json\/wp\/v2\/media?parent=6334"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/categories?post=6334"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/tags?post=6334"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}