{"id":1126,"date":"2014-06-11T14:50:46","date_gmt":"2014-06-11T19:50:46","guid":{"rendered":"https:\/\/faculty.eng.ufl.edu\/alina-zare\/?p=1126"},"modified":"2026-02-18T11:28:02","modified_gmt":"2026-02-18T16:28:02","slug":"du2014spatial","status":"publish","type":"post","link":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/2014\/06\/11\/du2014spatial\/","title":{"rendered":"Spatial and spectral unmixing using the beta compositional model"},"content":{"rendered":"<h2>Abstract:<\/h2>\n<p>This paper introduces the beta compositional model (BCM) for hyperspectral unmixing and four algorithms for unmixing given the BCM. Hyperspectral unmixing estimates the proportion of each endmember at every pixel of a hyperspectral image. Under the BCM, each endmember is a random variable distributed according to a beta distribution. By using a beta distribution, spectral variability is accounted for during unmixing, the reflectance values of each endmember are constrained to a physically realistic range, and skew can be accounted for in the distribution. Spectral variability is incorporated to increase hyperspectral unmixing accuracy. Two BCM-based spectral unmixing approaches are presented: BCM-spectral and BCM-spatial. For each approach, two algorithms, one based on quadratic programming (QP) and one using a Metropolis-Hastings (MH) sampler, are developed. 
Results indicate that the proposed BCM unmixing algorithms are able to successfully perform unmixing on simulated data and real hyperspectral imagery while incorporating endmember spectral variability and spatial information.<\/p>\n<h2>Links:<\/h2>\n<p> <a href=\"\/\/doi.org\/10.1109\/JSTARS.2014.2330347\"><img decoding=\"async\" border=\"2\" alt=\"IEEE Link\" src=\"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-content\/uploads\/sites\/759\/2016\/09\/ieee.jpg\" height=\"50\"><\/a> <a href=\"https:\/\/github.com\/GatorSense\/Publications\/blob\/master\/du2014spatial.pdf\"><img decoding=\"async\" border=\"2\" alt=\"PDF\" src=\"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-content\/uploads\/sites\/759\/2016\/09\/pdflogo-e1482256801729.png\" height=\"50\"><\/a> <a href=\"https:\/\/github.com\/GatorSense\/BetaCompositionalModel\"><img decoding=\"async\" border=\"2\" alt=\"PDF\" src=\"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-content\/uploads\/sites\/759\/2016\/09\/GitHub-Mark-e1482256611783.png\" height=\"50\"><\/a><\/p>\n<h2>Citation:<\/h2>\n<pre><code>X. Du, A. Zare, P. Gader, and D. Dranishnikov, \u201cSpatial and spectral unmixing using the beta compositional model,\u201d IEEE J. Sel. Topics. Appl. Earth Observ., vol. 7, iss. 6, pp. 1994-2003, 2014.<\/code><\/pre>\n<pre><code>@Article{du2014spatial,\nTitle = {Spatial and spectral unmixing using the beta compositional model},\nAuthor = {Du, Xiaoxiao and Zare, Alina and Gader, Paul and Dranishnikov, Dmitri},\nJournal = {IEEE J. Sel. Topics. Appl. Earth Observ.},\nYear = {2014},\nMonth = {June},\nNumber = {6},\nPages = {1994-2003},\nVolume = {7},\nDoi = {10.1109\/JSTARS.2014.2330347},\n}\n<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Abstract: This paper introduces the beta compositional model (BCM) for hyperspectral unmixing and four algorithms for unmixing given the BCM. Hyperspectral unmixing estimates the proportion of each endmember at every pixel of a hyperspectral image. 
Under the BCM, each endmember is a random variable distributed according to a beta distribution. By using a beta distribution, [&hellip;]<\/p>\n","protected":false},"author":28,"featured_media":1400,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"single-templates\/single-sidebar-none.php","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"featured_post":"","footnotes":"","_links_to":"","_links_to_target":""},"categories":[5,19],"tags":[273,275,365,683,717,781],"class_list":["post-1126","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-featured","category-journal_paper","tag-endmember","tag-endmember-variability","tag-hyperspectral","tag-spatial","tag-superpixel","tag-unmixing"],"acf":[],"_links":{"self":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts\/1126","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/users\/28"}],"replies":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/comments?post=1126"}],"version-history":[{"count":1,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts\/1126\/revisions"}],"predecessor-version":[{"id":14627,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts\/1126\/revisions\/14627"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/media\/1400"}],"wp:attachment":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/media?parent=1126"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp
-json\/wp\/v2\/categories?post=1126"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/tags?post=1126"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
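The generative picture the abstract describes — each endmember's reflectance drawn per band from a beta distribution, and each pixel formed as a convex combination of those draws — can be sketched in a few lines of NumPy. This is a minimal illustration of the forward model only, with made-up beta parameters and pixel counts; it is not the paper's QP or Metropolis-Hastings estimation code (see the GitHub repository above for that):

```python
import numpy as np

rng = np.random.default_rng(0)

n_bands = 5       # spectral bands (small hypothetical example)
n_endmembers = 3  # number of endmembers
n_pixels = 4

# Hypothetical per-band beta shape parameters for each endmember.
# A beta distribution keeps reflectance in [0, 1] and can model skew.
alpha = rng.uniform(2.0, 8.0, size=(n_endmembers, n_bands))
beta = rng.uniform(2.0, 8.0, size=(n_endmembers, n_bands))

# One spectral realization per endmember per pixel, so each pixel
# sees its own draw — this is the endmember spectral variability.
endmember_samples = rng.beta(alpha, beta, size=(n_pixels, n_endmembers, n_bands))

# Proportions: non-negative and sum to one per pixel (Dirichlet draws
# are just a convenient way to generate valid proportion vectors here).
proportions = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)

# Each pixel is a convex combination of its endmember realizations.
pixels = np.einsum("pe,peb->pb", proportions, endmember_samples)

# Convex combinations of values in [0, 1] stay in [0, 1].
assert np.all((pixels >= 0.0) & (pixels <= 1.0))
```

Unmixing runs this model in reverse: given `pixels` and the beta parameters of each endmember, estimate `proportions` — which is what the BCM-QP and BCM-MH algorithms in the paper do.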