{"id":1558,"date":"2016-12-28T13:28:05","date_gmt":"2016-12-28T18:28:05","guid":{"rendered":"https:\/\/faculty.eng.ufl.edu\/alina-zare\/?p=1558"},"modified":"2026-02-18T11:29:06","modified_gmt":"2026-02-18T16:29:06","slug":"chen2016partial-2","status":"publish","type":"post","link":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/2016\/12\/28\/chen2016partial-2\/","title":{"rendered":"Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation"},"content":{"rendered":"<h2>Abstract:<\/h2>\n<p>Topic models (e.g., pLSA, LDA, sLDA) have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership latent Dirichlet allocation (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery where a visual word may be a mixture of multiple topics. 
Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations, a capability that previous topic modeling methods do not have.<\/p>\n<h2>Links:<\/h2>\n<p> <a href=\"http:\/\/ieeexplore.ieee.org\/document\/8002590\/\"><img decoding=\"async\" border=\"2\" alt=\"IEEE Link\" src=\"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-content\/uploads\/sites\/759\/2016\/09\/ieee.jpg\" height=\"50\"><\/a> <a href=\"https:\/\/arxiv.org\/abs\/1612.08936\"><img decoding=\"async\" src=\"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-content\/uploads\/sites\/759\/2016\/09\/arxiv.png\" alt=\"ArXiv\" height=\"50\"><\/a>  <a href=\"https:\/\/github.com\/GatorSense\/PMLDA\"><img decoding=\"async\" src=\"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-content\/uploads\/sites\/759\/2016\/09\/GitHub-Mark-e1482256611783.png\" alt=\"Code\" height=\"50\"><\/a><\/p>\n<h2>Citation:<\/h2>\n<pre><code>C. Chen, A. Zare, H. Trinh, G. Omotara, J. T. Cobb, and P. Lagaunne, \u201cPartial Membership Latent Dirichlet Allocation for Soft Image Segmentation,\u201d <em>IEEE Trans. Image Process.<\/em>, vol. 26, no. 12, pp. 5590-5602, Dec. 2017.<\/code><\/pre>\n<pre><code>@Article{chen2016partial,\nTitle = {Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation},\nAuthor = {Chen, C. and Zare, A. and Trinh, H. and Omotara, G. and Cobb, J. T. and Lagaunne, P.},\nJournal = {IEEE Trans. Image Process.},\nYear = {2017},\nMonth = {Dec.},\nVolume = {26},\nNumber = {12},\nPages = {5590-5602},\nDoi = {10.1109\/TIP.2017.2736419},\n}\n<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Abstract: Topic models (e.g., pLSA, LDA, sLDA) have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. 
Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., [&hellip;]<\/p>\n","protected":false},"author":28,"featured_media":1396,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"single-templates\/single-sidebar-none.php","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"featured_post":"","footnotes":"","_links_to":"","_links_to_target":""},"categories":[5,19],"tags":[159,367,417,659,683,717],"class_list":["post-1558","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-featured","category-journal_paper","tag-clustering","tag-image-processing","tag-latent-dirichlet-allocation","tag-segmentation","tag-spatial","tag-superpixel"],"acf":[],"_links":{"self":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts\/1558","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/users\/28"}],"replies":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/comments?post=1558"}],"version-history":[{"count":1,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts\/1558\/revisions"}],"predecessor-version":[{"id":15003,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/posts\/1558\/revisions\/15003"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/media\/1396"}],"wp:attachment":[{"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/media?parent=1558"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v
2\/categories?post=1558"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/machine-learning\/wp-json\/wp\/v2\/tags?post=1558"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}