{"id":2403,"date":"2025-02-20T12:37:49","date_gmt":"2025-02-20T12:37:49","guid":{"rendered":"https:\/\/meyn.ece.ufl.edu\/?page_id=2403"},"modified":"2026-03-21T17:27:14","modified_gmt":"2026-03-21T22:27:14","slug":"a-new-paradigm-for-web3","status":"publish","type":"page","link":"https:\/\/faculty.eng.ufl.edu\/meyn\/c3\/c3-9\/a-new-paradigm-for-web3\/","title":{"rendered":"Bayesian Learning of Optimal Policies in Markov Decision Processes with Countably Infinite State-Space"},"content":{"rendered":"<h3><b>Vijay Subramanian (University of Michigan)\u00a0<\/b><\/h3>\n<p><span style=\"font-weight: 400\">Models of many real-life applications, such as queuing models of communication networks or computing systems, have a countably infinite state-space. Algorithmic and learning procedures that have been developed to produce optimal policies mainly focus on finite state settings, and do not directly apply to these models. To overcome this lacuna, in this work we study the problem of optimal control of a family of discrete-time countable state-space Markov Decision Processes (MDPs) governed by an unknown parameter $\\theta\\in\\Theta$, and defined on a countably-infinite state space $\\mathcal{X}=\\mathbb{Z}_+^d$, with finite action space $\\mathcal A$, and an unbounded cost function. We take a Bayesian perspective with the random unknown parameter $\\boldsymbol{\\theta}^*$ generated via a given fixed prior distribution on $\\Theta$. To optimally control the unknown MDP, we propose an algorithm based on Thompson sampling with dynamically-sized episodes: at the beginning of each episode, the posterior distribution formed via Bayes&#8217; rule is used to produce a parameter estimate, which then decides the policy applied during the episode. To ensure the stability of the Markov chain obtained by following the policy chosen for each parameter, we impose ergodicity assumptions. 
Under these assumptions, and using the solution of the average-cost Bellman equation, we establish an $\\tilde O(dh^d\\sqrt{|\\mathcal A|T})$ upper bound on the Bayesian regret of our algorithm, where $T$ is the time horizon. Finally, to illustrate the applicability of our algorithm, we consider two different queuing models with unknown dynamics and show that it can be used to develop approximately optimal control policies.<\/span><\/p>\n<p><strong>Bio:<\/strong> Vijay <span class=\"style_5\">Subramanian is\u00a0an Associate Professor in the\u00a0<\/span><a class=\"style_5\" title=\"https:\/\/web.eecs.umich.edu\/\" href=\"https:\/\/web.eecs.umich.edu\/\">EECS department at the University of Michigan<\/a><span class=\"style_5\">.\u00a0 <\/span><span class=\"style_5\">After graduating, he worked in the research arm of the Networks Business Sector of Motorola in Arlington Heights, Illinois, USA, until May 2006, when he returned to academia and joined the <\/span><a class=\"style_5\" title=\"https:\/\/www.hamilton.ie\/\" href=\"https:\/\/www.hamilton.ie\/\">Hamilton Institute<\/a><span class=\"style_5\">\u00a0of the\u00a0<\/span><a class=\"style_5\" title=\"https:\/\/www.nuim.ie\/\" href=\"https:\/\/www.nuim.ie\/\">National University of Ireland, Maynooth<\/a><span class=\"style_5\">\u00a0as a Research Fellow. In the summer of 2010, he was a visiting researcher at\u00a0<\/span><a class=\"style_5\" title=\"https:\/\/lids.mit.edu\/\" href=\"https:\/\/lids.mit.edu\/\">LIDS MIT<\/a><span class=\"style_5\">. From November 2010 to October 2011, he was a Senior Research Associate in the\u00a0<\/span><a class=\"style_5\" title=\"https:\/\/www.eecs.northwestern.edu\/\" href=\"https:\/\/www.eecs.northwestern.edu\/\">EECS Department at Northwestern University<\/a><span class=\"style_5\">. 
From November 2011 until August 2014, he was a Research Assistant Professor in the\u00a0<\/span><a class=\"style_5\" title=\"https:\/\/www.eecs.northwestern.edu\/\" href=\"https:\/\/www.eecs.northwestern.edu\/\">EECS Department at Northwestern University<\/a><span class=\"style_5\">.<\/span><\/p>\n<div id=\"id7\" class=\"style_SkipStroke_2\">\n<div class=\"text-content graphic_textbox_layout_style_default_External_702_142\">\n<div class=\"graphic_textbox_layout_style_default\">\n<p class=\"paragraph_style_4\"><span class=\"style_9\">His main research interests are in stochastic modeling, communications, information theory, and applied mathematics. A large portion of his past work has been on probabilistic analysis of communication networks, especially analysis of scheduling and routing algorithms. He has also worked on applications in immunology and on coding of stochastic processes. His current research interests are in game-theoretic and economic modeling of socio-technological systems and networks, and the analysis of associated stochastic processes.\u00a0<\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Vijay Subramanian (University of Michigan)\u00a0 Models of many real-life applications, such as queuing models of communication networks or computing systems, have a countably infinite state space. Algorithmic and learning procedures that have been developed to produce optimal policies mainly focus on finite-state settings and do not directly apply to these models. 
To overcome this lacuna, [&hellip;]<\/p>\n","protected":false},"author":1347,"featured_media":0,"parent":2631,"menu_order":1,"comment_status":"closed","ping_status":"closed","template":"page-templates\/page-section-nav.php","meta":{"_acf_changed":false,"inline_featured_image":false,"featured_post":"","footnotes":"","_links_to":"","_links_to_target":""},"class_list":["post-2403","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/faculty.eng.ufl.edu\/meyn\/wp-json\/wp\/v2\/pages\/2403","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/faculty.eng.ufl.edu\/meyn\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/faculty.eng.ufl.edu\/meyn\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/meyn\/wp-json\/wp\/v2\/users\/1347"}],"replies":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/meyn\/wp-json\/wp\/v2\/comments?post=2403"}],"version-history":[{"count":2,"href":"https:\/\/faculty.eng.ufl.edu\/meyn\/wp-json\/wp\/v2\/pages\/2403\/revisions"}],"predecessor-version":[{"id":2787,"href":"https:\/\/faculty.eng.ufl.edu\/meyn\/wp-json\/wp\/v2\/pages\/2403\/revisions\/2787"}],"up":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/meyn\/wp-json\/wp\/v2\/pages\/2631"}],"wp:attachment":[{"href":"https:\/\/faculty.eng.ufl.edu\/meyn\/wp-json\/wp\/v2\/media?parent=2403"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}