{"id":1727,"date":"2023-08-31T20:41:48","date_gmt":"2023-08-31T20:41:48","guid":{"rendered":"https:\/\/smartsystems.ece.ufl.edu\/?page_id=1727"},"modified":"2023-08-31T20:41:48","modified_gmt":"2023-08-31T20:41:48","slug":"collaborative-robotics","status":"publish","type":"page","link":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/research\/projects\/collaborative-robotics\/","title":{"rendered":"Collaborative Robotics"},"content":{"rendered":"<p>Human Movement Prediction through 2D Imaging:<\/p>\n<table style=\"background-color: #ffffff;border-collapse: collapse;border: 1px solid #ffffff;color: #000000;width: 100%\" border=\"1\" cellspacing=\"3\" cellpadding=\"3\">\n<tbody>\n<tr>\n<td>This project focuses on methods to determine and use context to predict how humans will behave when working alongside robots.<br \/>\nThis foreknowledge enables the robotic system to behave safely, prevent collisions, and better support human users.<br \/>\nContext is derived from objects visible in the scene, combining results from 2D joint pose estimation and object detection into a single combined representation.<br \/>\nThis representation is then used to predict future states through deep learning.<\/td>\n<td><div style=\"width: 360px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-1727-1\" width=\"360\" height=\"640\" loop autoplay preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"http:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/659\/2023\/08\/FetchADrink.mp4?_=1\" \/><a href=\"http:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/659\/2023\/08\/FetchADrink.mp4\">http:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/659\/2023\/08\/FetchADrink.mp4<\/a><\/video><\/div><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table style=\"background-color: #ffffff;border-collapse: collapse;border: 1px solid #ffffff;color: #ffffff;width: 1426px;height: 472px\" border=\"1\" cellspacing=\"3\" 
cellpadding=\"3\">\n<tbody>\n<tr>\n<td><div style=\"width: 332px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-1727-2\" width=\"332\" height=\"472\" loop autoplay preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"http:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/659\/2023\/08\/hmp-cam3-wbb_cropped.mp4?_=2\" \/><a href=\"http:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/659\/2023\/08\/hmp-cam3-wbb_cropped.mp4\">http:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/659\/2023\/08\/hmp-cam3-wbb_cropped.mp4<\/a><\/video><\/div><\/td>\n<td><div style=\"width: 664px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-1727-3\" width=\"664\" height=\"462\" loop autoplay preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"http:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/659\/2023\/08\/hmp-cam2-wbb-cropped.mp4?_=3\" \/><a href=\"http:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/659\/2023\/08\/hmp-cam2-wbb-cropped.mp4\">http:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/659\/2023\/08\/hmp-cam2-wbb-cropped.mp4<\/a><\/video><\/div><\/td>\n<td><div style=\"width: 332px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-1727-4\" width=\"332\" height=\"472\" loop autoplay preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"http:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/659\/2023\/08\/hmp-cam1-wbb_cropped.mp4?_=4\" \/><a href=\"http:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/659\/2023\/08\/hmp-cam1-wbb_cropped.mp4\">http:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/659\/2023\/08\/hmp-cam1-wbb_cropped.mp4<\/a><\/video><\/div><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n","protected":false},"excerpt":{"rendered":"<p>Human Movement Prediction through 2D Imaging: 
This project focuses on methods to determine and use context to predict how humans will behave when working alongside robots. This foreknowledge enables the robotic system to behave safely, prevent collisions, and better support human users. Context is derived from objects visible in the scene, combining results from 2D [&hellip;]<\/p>\n","protected":false},"author":1329,"featured_media":1847,"parent":17,"menu_order":2,"comment_status":"closed","ping_status":"closed","template":"page-templates\/page-section-nav.php","meta":{"_acf_changed":false,"inline_featured_image":false,"featured_post":"","footnotes":"","_links_to":"","_links_to_target":""},"class_list":["post-1727","page","type-page","status-publish","has-post-thumbnail","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/pages\/1727","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/users\/1329"}],"replies":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/comments?post=1727"}],"version-history":[{"count":0,"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/pages\/1727\/revisions"}],"up":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/pages\/17"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/media\/1847"}],"wp:attachment":[{"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/media?parent=1727"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}