{"id":11,"date":"2015-11-03T15:53:38","date_gmt":"2015-11-03T20:53:38","guid":{"rendered":"https:\/\/test.eng.ufl.edu\/faculty-site\/?page_id=11"},"modified":"2026-03-26T16:13:11","modified_gmt":"2026-03-26T20:13:11","slug":"publications","status":"publish","type":"page","link":"https:\/\/faculty.eng.ufl.edu\/jain\/publications\/","title":{"rendered":"Publications"},"content":{"rendered":"\n<h5 class=\"wp-block-heading\">Journals<\/h5>\n\n\n\n<p><strong>2026<\/strong><\/p>\n\n\n\n<p>Ibragimov, Azim et al. \u201c<a href=\"https:\/\/arxiv.org\/abs\/2506.13882\">Toward Multimodal Privacy in XR: Design and Evaluation of Composite Privatization Methods for Gaze and Body Tracking Data<\/a>.\u201d (2025).<\/p>\n\n\n\n<p>E. Bozkir et al., &#8220;<a href=\"https:\/\/ieeexplore.ieee.org\/document\/11366239\" target=\"_blank\" rel=\"noreferrer noopener\">Eye-Tracked Virtual Reality: A Comprehensive Survey on Methods and Privacy Challenges<\/a>,&#8221; in Proceedings of the IEEE, doi: 10.1109\/JPROC.2026.3653661.<\/p>\n\n\n\n<p><strong>2025<\/strong><\/p>\n\n\n\n<p>Shaina Murphy, Shakthi Sampath, Ethan Wilson, Karina LaRubbio, Ethan Smith, Apu Kapadia, and Eakta Jain. 2025. <a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3736413\" target=\"_blank\" rel=\"noreferrer noopener\">Opto-diversity and Eye Tracking: Assumptions about ocular alignment in virtual reality eye tracking exclude users with strabismus and amblyopia<\/a>. ACM Trans. Appl. Percept. https:\/\/doi.org\/10.1145\/3736413<\/p>\n\n\n\n<p><strong>2024<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/wp-content\/uploads\/sites\/745\/2024\/02\/CAG_2024.pdf\" data-type=\"link\" data-id=\"https:\/\/faculty.eng.ufl.edu\/jain\/wp-content\/uploads\/sites\/745\/2024\/02\/CAG_2024.pdf\">Towards mitigating uncann(eye)ness in face swaps via gaze-centric loss terms<\/a>, Wilson, Ethan and Shic, Frederick and Joerg, Sophie and Jain, Eakta. 
<em>Computers and Graphics Journal Special Issue: Eye Gaze Visualization, Interaction, Synthesis, and Analysis,<\/em> doi: 10.1016\/j.cag.2024.103888<\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/wp-content\/uploads\/sites\/745\/2024\/01\/Privacy-Preserving-Gaze-Data-Streaming-in-Immersive-Interactive-Virtual-Reality-Robustness-and-User-Experience.pdf\" data-type=\"link\" data-id=\"https:\/\/faculty.eng.ufl.edu\/jain\/wp-content\/uploads\/sites\/745\/2024\/01\/Privacy-Preserving-Gaze-Data-Streaming-in-Immersive-Interactive-Virtual-Reality-Robustness-and-User-Experience.pdf\">Privacy-Preserving Gaze Data Streaming in Immersive Interactive Virtual Reality: Robustness and User Experience<\/a>, Wilson, Ethan and Ibragimov, Azim and Proulx, Michael and Tetali, Sai Deep and Butler, Kevin and Jain, Eakta, in <em>IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 5, pp. 2257-2268, May 2024, doi: 10.1109\/TVCG.2024.3372032<\/em><\/p>\n\n\n\n<p><strong>2023<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/wp-content\/uploads\/sites\/745\/2023\/03\/2023_IEEE_VR_sample_privacy_datasets.pdf\" data-type=\"link\" data-id=\"https:\/\/faculty.eng.ufl.edu\/jain\/wp-content\/uploads\/sites\/745\/2023\/03\/2023_IEEE_VR_sample_privacy_datasets.pdf\">Privacy-preserving datasets of eye-tracking samples with applications in XR<\/a>, B. David-John, K. Butler and E. Jain, in <em>IEEE Transactions on Visualization and Computer Graphics<\/em>, doi: 10.1109\/TVCG.2023.3247048.<\/p>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/abs\/2305.14080\">Eye-tracked Virtual Reality: A Comprehensive Survey on Methods and Privacy Challenges<\/a>, Bozkir, E., Suleyman, O., Wang, M., David-John, B., Gao, H., Butler, K., Jain, E., Kasneci, E. (2023). DOI: 10.48550\/arXiv.2305.14080<\/p>\n\n\n\n<p><strong>2022<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/cav.2040\">Is the Avatar Scared? 
Pupil as a Perceptual Cue,&nbsp;<\/a>Dong, Yuzhu and Joerg, Sophie and Jain, Eakta,&nbsp;<em>Computer Animation and Virtual Worlds.<\/em> 2022<\/p>\n\n\n\n<p><strong>2021<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/ascelibrary.org\/doi\/10.1061\/%28ASCE%29CO.1943-7862.0002090\">Online Hazard Recognition Training: A Comparative Case Study of Static Images, Cinemagraphs, and Videos,&nbsp;<\/a>Eiris, Ricardo and Jain, Eakta and Gheisari, Masoud and Wehle, Andrew,&nbsp;<em>ASCE Journal of Construction Engineering and Management, 147<\/em>(8): 04021082<\/p>\n\n\n\n<p><a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/33856979\/\">Fast Foveating Cameras for Dense Adaptive Resolution,&nbsp;<\/a>Tilmon, B., Jain, E., Ferrari, S., &amp; Koppal, S. J. (2021).&nbsp;<em>IEEE Transactions on Pattern Analysis and Machine Intelligence,<\/em>&nbsp;vol. PP.<\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/research\/privacy-and-security-in-eye-tracking\/#:~:text=%E2%80%9CA%20PRIVACY%2DPRESERVING%20APPROACH%20TO%20STREAMING%20EYE%2DTRACKING%20DATA%E2%80%9D%2C%20BRENDAN%20DAVID%2DJOHN%2C%20DIANE%20HOSFELT%2C%20KEVIN%20BUTLER%2C%20EAKTA%20JAIN%2C%20IEEE%20TRANSACTIONS%20ON%20VISUALIZATION%20AND%20COMPUTER%20GRAPHICS%20(TVCG%202021)%20SPECIAL%20ISSUE%20ON%20IEEE%20VR.\">A Privacy-Preserving Approach to Streaming Eye-Tracking Data,&nbsp;<\/a>David-John, Brendan and Hosfelt, Diane and Butler, Kevin and Jain, Eakta,&nbsp;<em>IEEE Transactions on Visualization and Computer Graphics (TVCG 2021) Special Issue on IEEE VR. 
(p1-p12)<\/em>&nbsp;Best Paper Nominee<\/p>\n\n\n\n<p><strong>2020<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/Privacy_and_Security_in_Eye_Tracking.html#tvcg_2020\">The Security-Utility Trade-off for Iris Authentication and Eye Animation for Social Virtual Avatars,&nbsp;<\/a>John, Brendan and Koppal, Sanjeev and J\u00f6rg, Sophie and Jain, Eakta,&nbsp;<em>IEEE Transactions on Visualization and Computer Graphics (TVCG 2020) Special Issue on IEEE VR. (p1-p11)<\/em><\/p>\n\n\n\n<p><strong>2019<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/research\/completed-projects\/measuring-behavioral-and-physiological-responses-in-videos-and-vr\/#:~:text=A%20BENCHMARK%20OF%20FOUR%20METHODS%20FOR%20GENERATING%20360%E2%97%A6%20SALIENCY%20MAPS%0AFROM%20EYE%20TRACKING%20DATA\">A Benchmark of Four Methods for Generating 360\u00b0 Saliency Maps from Eye Tracking Data,&nbsp;<\/a>John, Brendan and Le Meur, Olivier and Jain, Eakta,&nbsp;<em>International Journal of Semantic Computing 13.03<\/em><\/p>\n\n\n\n<p><a>Using Audience Physiology to Assess Engaging Conservation Messages and Animal Taxa,&nbsp;<\/a>Jain, Eakta and Jacobson, Susan K and Raiturkar, Pallavi and Morales, Nia A and Nagarajan, Archana and Chen, Beida and Sivasubramanian, Naveen and Chaturvedi, Kartik and Lee, Andrew,&nbsp;<em>Society &amp; Natural Resources,<\/em> Research Note.<\/p>\n\n\n\n<p><strong>2018<\/strong><\/p>\n\n\n\n<p><a>Love or Loss: Effective message framing to promote environmental conservation,&nbsp;<\/a>Jacobson, Susan K and Morales, Nia A and Chen, Beida and Soodeen, Rebecca and Moulton, Michael P and Jain, Eakta,&nbsp;<em>Applied Environmental Education &amp; Communication.<\/em><\/p>\n\n\n\n<p><strong>2017<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/research\/completed-projects\/eyetracking-and-comics\/#:~:text=PROJECTS-,CREATING%20SEGMENTS%20AND%20EFFECTS%20ON%20COMICS%20BY%20CLUSTERING%20GAZE,-DATA\">Creating Segments and 
Effects on Comics by Clustering Gaze Data,&nbsp;<\/a>Thirunarayanan, Ishwarya and Khetarpal, Khimya and Koppal, Sanjeev and Le Meur, Olivier and Shea, John and Jain, Eakta,&nbsp;<em>ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM).<\/em><\/p>\n\n\n\n<p><strong>2016<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/research\/completed-projects\/creating-child-like-characters\">Is the motion of a child perceivably different from the motion of an adult?&nbsp;<\/a>Eakta Jain, Lisa Anthony, Aishat Aloba, Amanda Castonguay, Isabella Cuba, Alex Shaw, Julia Woodward,&nbsp;<em>ACM Transactions on Applied Perception. <a href=\"https:\/\/jainlab.cise.ufl.edu\/documents\/jain-et-al-TAP2016.pdf\">Preprint<\/a>.<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/comics.html#predicting-moves-on-stills-for\">Predicting Moves-on-Stills for Comic Art using Viewer Gaze Data,&nbsp;<\/a>Eakta Jain, Yaser Sheikh, Jessica Hodgins,&nbsp;<em>IEEE CG&amp;A Special Issue on Quality Assessment and Perception in Computer Graphics 2016.<\/em><\/p>\n\n\n\n<p><strong>2015<\/strong><\/p>\n\n\n\n<p><a href=\"http:\/\/graphics.cs.cmu.edu\/projects\/gazedriven\/\">Gaze-driven Video Re-editing,&nbsp;<\/a>E. Jain, Y. Sheikh, A. Shamir and J. Hodgins,&nbsp;<em>ACM Transactions on Graphics.<\/em><\/p>\n\n\n\n<p><strong>2012<\/strong><\/p>\n\n\n\n<p><a href=\"http:\/\/graphics.cs.cmu.edu\/projects\/threeDproxy\/\">Three-dimensional Proxies for Hand-drawn Characters,&nbsp;<\/a>E. Jain, Y. Sheikh, M. Mahler and J. 
Hodgins,&nbsp;<em>ACM Transactions on Graphics (TOG).<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading\">Peer-reviewed Conference Proceedings<\/h5>\n\n\n\n<p><strong>2024<\/strong><\/p>\n\n\n\n<p><a href=\"http:\/\/doi.acm.org\/?doi=3658644.3690342\">&#8220;I Had Sort of a Sense that I Was Always Being Watched&#8230;Since I Was&#8221;: Examining Interpersonal Discomfort From Continuous Location-Sharing Applications,<\/a> Kevin Childs, Cassidy Gibson, Anna Crowder, Kevin Warren, Carson Stillman, Elissa M. Redmiles, Eakta Jain, Patrick Traynor, Kevin R. B. Butler. CCS &#8217;24: Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security, doi: 10.1145\/3658644.3690342<\/p>\n\n\n\n<p><strong>2023<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/documents\/GazeSynthesis_Preprint.pdf\">Real-Time Conversational Gaze Synthesis for Avatars<\/a>, Ryan Canales, Eakta Jain, Sophie Joerg, ACM\/SIGGRAPH conference on Motion, Interaction and Games 2023<\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/research\/human-horse-interaction\/\">Horse as Teacher: How human-horse interaction informs human-robot interaction,<\/a> Eakta Jain and Christina Gardner-Mccune. ACM Conference on Human Factors in Computing Systems (CHI) 2023.<\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/wp-content\/uploads\/sites\/745\/2023\/04\/ExplicitGaze_ETRA23.pdf\" data-type=\"link\" data-id=\"https:\/\/faculty.eng.ufl.edu\/jain\/wp-content\/uploads\/sites\/745\/2023\/04\/ExplicitGaze_ETRA23.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Introducing Explicit Gaze Constraints to Face Swapping<\/a>, Wilson, Ethan and Shic, Frederick and Jain, Eakta. 
<em>ACM Symposium on Eye Tracking Research &amp; Applications (ETRA 23).<\/em><\/p>\n\n\n\n<p><strong>2022<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/Privacy_and_Security_in_Eye_Tracking.html#for-your-eyes\">For Your Eyes Only: Privacy-preserving eye-tracking datasets,<\/a>&nbsp;David-John, Brendan and Butler, Kevin and Jain, Eakta.&nbsp;<em>ACM Symposium on Eye Tracking Research &amp; Applications (ETRA 22).<\/em><\/p>\n\n\n\n<p><strong>2021<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/research\/advancing-nuclear-operators-through-virtual-reality-based-training\/#:~:text=%7D-,Priorities%20and%20Considerations%20in,and%20Technology%20Expo%2C%202021.,-Conference%20Page\">Priorities and Considerations in Advancing the Training of Nuclear Reactor Operators Through Mixed Reality,<\/a>&nbsp;Eakta Jain, Andreas Enqvist.&nbsp;<em>American Nuclear Society Winter Meeting and Technology Expo,<\/em>&nbsp;2021.<\/p>\n\n\n\n<p><strong>2020<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/documents\/FoveaCam_a_mems_mirror_enabled_foveating_camera.pdf\">FoveaCam: A MEMS Mirror-Enabled Foveating Camera,<\/a>&nbsp;Tilmon, Brevin and Jain, Eakta and Ferrari, Silvia and Koppal, Sanjeev.&nbsp;<em>IEEE International Conference on Computational Photography (ICCP 2020) (p1-p10).<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/ascelibrary.org\/doi\/10.1061\/9780784482865.118\">Hazard-Recognition Training Using Omnidirectional Cinemagraphs: Comparison Between Virtual Reality and Lecture-based Techniques,<\/a>&nbsp;Eiris, Ricardo and John, Brendan and Gheisari, Masoud and Jain, Eakta and Wehle, Andrew and Memarian, Babak. In Proceedings of the ASCE Construction Research Congress (CRC 2020).<\/p>\n\n\n\n<p><a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3424636.3426909\">Adult2child: Motion Style Transfer using CycleGANs,<\/a>&nbsp;Dong, Yuzhu and Aristidou, Andreas and Shamir, Ariel and Mahler, Moshe and Jain, Eakta. 
ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG 2020).<\/p>\n\n\n\n<p><strong>2019<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/research\/privacy-and-security-in-eye-tracking\/\">Differential Privacy for EyeTracking Data,&nbsp;<\/a>Liu, Ao and Xia, Lirong and Duchowski, Andrew and Bailey, Reynold and Holmqvist, Kenneth and Jain, Eakta,&nbsp;<em>In Proceedings of ACM Symposium on Eye Tracking Research &amp; Applications (ETRA&#8217;19).<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/Privacy_and_Security_in_Eye_Tracking.html#eyeveil\">EyeVEIL: Degrading Iris Authentication in Eye Tracking Headsets,&nbsp;<\/a>John, Brendan and Koppal, Sanjeev and Jain, Eakta,&nbsp;<em>In Proceedings of ACM Symposium on Eye Tracking Research &amp; Applications (ETRA&#8217;19).<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/init.cise.ufl.edu\/wp-content\/uploads\/sites\/378\/2019\/02\/aloba-et-al-HCII2019.pdf\">Quantifying Differences between Child and Adult Motion based on Gait Features,&nbsp;<\/a>Aloba, Aishat and Luc, Annie and Woodward, Julia and Dong, Yuzhu and Zhang, Rong and Jain, Eakta and Anthony, Lisa,&nbsp;<em>2019 21st International Conference on Human-Computer Interaction.<\/em><\/p>\n\n\n\n<p><strong>2018<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/decoupling_light_reflex.html#360-degree-saliency-maps\">A Benchmark of Four Methods for Generating 360 degree Saliency Maps from Eye Tracking Data,&nbsp;<\/a>John, Brendan and Raiturkar, Pallavi and Le Meur, Olivier and Jain, Eakta,&nbsp;<em>IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR).<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/decoupling_light_reflex.html#pupillary-light-2D-VR\">An Evaluation of Pupillary Light Response Models for 2D Screens and VR HMDs,&nbsp;<\/a>John, Brendan and Raiturkar, Pallavi and Banerjee, Arunava and Jain, Eakta,&nbsp;<em>ACM 
Symposium on Virtual Reality Software and Technology (VRST).<\/em><\/p>\n\n\n\n<p><a href=\"http:\/\/people.irisa.fr\/Olivier.Le_Meur\/publi\/2018_ETRA\/index.html\">DeepComics: saliency estimation for comics,&nbsp;<\/a>Kevin Bannier, Eakta Jain, Olivier Le Meur,&nbsp;<em>ACM Symposium on Eye Tracking Research &amp; Applications (ETRA).<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/eyetrack-onlineux.html#how-many-words\">How many words is a picture worth? Attention allocation on thumbnails versus title text regions,&nbsp;<\/a>Yandandul, Chaitra and Paryani, Sachin and Le, Madison and Jain, Eakta,&nbsp;<em>ACM Symposium on Eye Tracking Research &amp; Applications. (ETRA)<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/pose-perception.html#kinder-gator:-the-uf\">Kinder-Gator: The UF Kinect Database of Child and Adult Motion,&nbsp;<\/a>Aishat Aloba, Gianne Flores, Julia Woodward, Alex Shaw, Amanda Castonguay, Isabella Cuba, Yuzhu Dong, Eakta Jain, and Lisa Anthony,&nbsp;<em>Eurographics 2018 Short Papers.<\/em><\/p>\n\n\n\n<p><strong>2017<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/pose-perception.html#adult2child:-dynamic-scaling\">Adult2Child: Dynamic Scaling Laws to Create Child-Like Motion,&nbsp;<\/a>Yuzhu Dong, Sachin Paryani, Neha Rana, Aishat Aloba, Lisa Anthony, Eakta Jain,&nbsp;<em>2017 Motion In Games.<\/em><\/p>\n\n\n\n<p><strong>2016<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/decoupling_light_reflex.html\">Decoupling Light Reflex from Pupillary Dilation to Measure Emotional Arousal in Videos,&nbsp;<\/a>Pallavi Raiturkar, Andrea Kleinsmith, Andreas Keil, Arunava Banerjee, Eakta Jain,&nbsp;<em>ACM Symposium on Applied Perception (SAP).<\/em><\/p>\n\n\n\n<p><a>The Role of Undergraduate Research in an Undergraduate Engineering Curriculum,&nbsp;<\/a>Anne Donnelly, Eakta Jain, David Lopatto, Heather Spooner, Sahadeo Ramjatan, Grace Chun,&nbsp;<em>ATINER&#8217;s Conference Paper Series 
ENGEDU2016-1957<\/em><\/p>\n\n\n\n<p><strong>2013<\/strong><\/p>\n\n\n\n<p><a href=\"http:\/\/www.cs.cmu.edu\/~hyunsoop\/socialcharge.html\">Predicting Primary Gaze Behavior using Social Saliency Fields,&nbsp;<\/a>H. S. Park, E. Jain and Y. Sheikh,&nbsp;<em>International Conference on Computer Vision (ICCV).<\/em><\/p>\n\n\n\n<p><a href=\"http:\/\/dl.acm.org\/citation.cfm?doid=2493102.2493109\">ERELT: a faster alternative to the list-based interfaces for tree exploration and searching in mobile devices,&nbsp;<\/a>A. P. Chhetri, K. Zhang and E. Jain,&nbsp;<em>Proceedings of the 6th International Symposium on Visual Information Communication and Interaction (VINCI).<\/em><\/p>\n\n\n\n<p><strong>2012<\/strong><\/p>\n\n\n\n<p><a href=\"http:\/\/www.cs.cmu.edu\/~hyunsoop\/gaze_concurrence.html\">3D Social Saliency from Head-mounted Cameras,&nbsp;<\/a>H. S. Park, E. Jain and Y. Sheikh,&nbsp;<em>Advances in Neural Information Processing Systems (NIPS).<\/em><\/p>\n\n\n\n<p><a href=\"http:\/\/graphics.cs.cmu.edu\/projects\/comics\">Inferring Artistic Intention in Comic Art through Viewer Gaze,&nbsp;<\/a>E. Jain, Y. Sheikh and J. Hodgins,&nbsp;<em>ACM Symposium on Applied Perception (SAP).&nbsp;<strong>Honorable Mention Best Paper<\/strong><\/em><\/p>\n\n\n\n<p><strong>2010<\/strong><\/p>\n\n\n\n<p><a href=\"http:\/\/graphics.cs.cmu.edu\/projects\/augmenting2d3d\/\">Augmenting Hand Animation with Three-dimensional Secondary Motion,&nbsp;<\/a>E. Jain, Y. Sheikh, M. Mahler and J. Hodgins,&nbsp;<em>Proceedings of the Symposium on Computer Animation (SCA). <strong>Best Paper Award<\/strong><\/em><\/p>\n\n\n\n<p><strong>2009<\/strong><\/p>\n\n\n\n<p><a href=\"http:\/\/graphics.cs.cmu.edu\/projects\/lifting2d3d\/\">Leveraging the Talent of Hand Animators to Create Three-Dimensional Animation,&nbsp;<\/a>E. Jain, Y. Sheikh and J. 
Hodgins,&nbsp;<em>Proceedings of the Symposium on Computer Animation (SCA).<\/em><\/p>\n\n\n\n<p><strong>2005<\/strong><\/p>\n\n\n\n<p><a href=\"http:\/\/dx.doi.org\/10.1109\/ICISIP.2005.1619407\">Hypergraphs- Organizing complex natural neural networks,&nbsp;<\/a>E. Jain, M. J. Healy, L. Saland, D. Hamilton, A. Allan, K. Caldwell and T. P. Caudell,&nbsp;<em>Proceedings of the Third International Conference on Intelligent Sensing and Information Processing.<\/em><\/p>\n\n\n\n<p><strong>2004<\/strong><\/p>\n\n\n\n<p>Dancing Puppets- An Innovative Approach to Learning Programming,&nbsp;R. Bhattacharya, Nidhi, E. Jain, U. Maitra, G. Sharma, S. B. Agravat and A. Mukherjee,&nbsp;<em>Proceedings of the International Conference on Engineering Education, University of Florida.<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading\">Peer-reviewed Workshop Proceedings<\/h5>\n\n\n\n<p><strong>2021<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/vr4sec.hcigroup.de\/proceedings\/VR4Sec_paper_7.pdf\">Let\u2019s SOUP up XR: Collected thoughts from an IEEE VR workshop on privacy in mixed reality,&nbsp;<\/a>David-John, Brendan and Hosfelt, Diane and Butler, Kevin and Jain, Eakta.&nbsp;<em>VR4Sec: 1st International Workshop on Security for XR and XR for Security, co-located with USENIX\/SOUPS 2021. p1-p3.<\/em><\/p>\n\n\n\n<p><strong>2020<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/Privacy_and_Security_in_Eye_Tracking.html#let_it_snow\">Let It Snow: Adding pixel noise to protect the user\u2019s identity,&nbsp;<\/a>John, Brendan and Liu, Ao and Xia, Lirong and Koppal, Sanjeev and Jain, Eakta.&nbsp;<em>1st International Workshop on Privacy and Ethics in Eye Tracking (PrEThics) p1-p3.<\/em><\/p>\n\n\n\n<p><strong>2020<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/research\/completed-projects\/workforce-saftey-training-in-xr\/#workforce-safety-training-in-xr\">Look Out! 
A Design Framework for Safety Training Systems and A Case Study on Omnidirectional Cinemagraphs,&nbsp;<\/a>John, B., Kalyanaraman, S., Jain, E.&nbsp;<em>IEEE VR Workshops TrainingXR p1-p7.<\/em><\/p>\n\n\n\n<p><strong>2017<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/creating-saliency-volumes.html\">3D Saliency from Eye Tracking with Tomography,&nbsp;<\/a>Ma, Bo and Jain, Eakta and Entezari, Alireza,&nbsp;<em>Eye Tracking and Visualization: Foundations, Techniques, and Applications.<\/em><\/p>\n\n\n\n<p><strong>2016<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/comics.html#a-preliminary-benchmark\">A Preliminary Benchmark of Four Saliency Algorithms on Comic Art,&nbsp;<\/a>Khimya Khetarpal, Eakta Jain,&nbsp;<em>IEEE International Conference on Multimedia and Expo MMArt Workshop.<\/em><\/p>\n\n\n\n<p><strong>2015<\/strong><\/p>\n\n\n\n<p><strong><a>3D Saliency from Eye Tracking with Tomography,&nbsp;<\/a><\/strong>Bo Ma, Eakta Jain, Alireza Entezari,&nbsp;<em>Workshop on Eye Tracking and Visualization (ETVIS) co-located with IEEE VIS.<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading\">Peer-reviewed Poster Abstracts<\/h5>\n\n\n\n<p><strong>2023<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/wp-content\/uploads\/sites\/745\/2023\/03\/2023_Personal_Safety_Bubble_VR2023_poster.pdf\">Give me some room please! Personal space bubbles for safety and performance,<\/a>&nbsp;Karina LaRubbio, Ethan Wilson, Sanjeev Koppal, Sophie J\u00f6rg, Eakta Jain, <em>IEEE Virtual Reality and 3D User Interfaces (VR), 2023.<\/em><\/p>\n\n\n\n<p><strong>2022<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/research\/advancing-nuclear-operators-through-virtual-reality-based-training\/\">Who do you look like? 
Gaze-based authentication for workers in VR,<\/a> Karina LaRubbio, Jeremiah Wright, Brendan David-John, Andreas Enqvist, Eakta Jain,<em>&nbsp;IEEE Virtual Reality and 3D User Interfaces (VR), 2022.<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/research\/optimizing-highly-automated-driving-systems-for-people-with-cognitive-disabilities\/#optimizingautodriving\">Optimizing Automated Driving Systems for People with Cognitive Impairments,<\/a>&nbsp;Koon, L., Akinwuntan, A., Bhattacharya, S., Davidow, A., Davidson, A., Depcik, C., Devos, H., Eskandar, M., Giang, W., Haug, J., Hu, B., Jain, E., Kondyli, A., Kumar, D., Liu, Y., Motamedi, S., Yao, H., Zhao, X.,&nbsp;<em>Refereed Abstract. Accepted for presentation at the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA) Conference, July, 2022.<\/em><\/p>\n\n\n\n<p><strong><a>Case-Study Comparison of Static Images, Cinemagraphs, and Videos in Online Hazard Recognition Training: Perspectives of Construction Domain Experts,<\/a><\/strong>&nbsp;Ricardo Eiris, Masoud Gheisari, Eakta Jain, Andrew Wehle,&nbsp;<em>Refereed Abstracts, CRC 2022.<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/research\/protecting-facial-privacy-through-face-swapping\/#uncannyfaceswaps\">The Uncanniness of Face Swaps,<\/a>&nbsp;Ethan Wilson, Aidan Persaud, Nicholas Esposito, Sophie Joerg, Frederick Shic, Rohit Patra, Jenny Skytta, Eakta Jain,&nbsp;<em>Journal of Vision,<\/em> 2022.<\/p>\n\n\n\n<p><strong>2020<\/strong><\/p>\n\n\n\n<p><strong><a>Effect of marker location on user detection in 360-degree panoramic images,&nbsp;<\/a><\/strong>Ricardo Eiris, Brendan John, Eakta Jain, and Masoud Gheisari,&nbsp;<em>IEEE VR 2020.<\/em><\/p>\n\n\n\n<p><strong>2019<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/documents\/2019_SAP_Poster_child2adult.pdf\">Child2adult: Revisiting dynamic scaling laws to age motion<\/a>&nbsp;Yuzhu Dong, Lisa Anthony, Eakta 
Jain,&nbsp;<em>ACM Symposium on Applied Perception.<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/documents\/Adult2Child%20Age%20Regression%20Using%20CycleGANs.pdf\">Adult2Child Age Regression Using CycleGANs<\/a>&nbsp;Thomas Domas, Yuzhu Dong, Brendan John, Arik Shamir, Andreas Aristidou, and Eakta Jain,&nbsp;<em>ACM Symposium on Applied Perception.<\/em><\/p>\n\n\n\n<p><strong>2018<\/strong><\/p>\n\n\n\n<p><strong><a>Style Translation to Create Child-like Motion,&nbsp;<\/a><\/strong>Yuzhu Dong, Aishat Aloba, Lisa Anthony and Eakta Jain,&nbsp;<em>Eurographics.<\/em><\/p>\n\n\n\n<p><strong>2016<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/comics.html#leveraging-gaze-data\">Leveraging Gaze Data for Segmentation and Effects on Comics,&nbsp;<\/a>Ishwarya Iyengar Thirunarayanan, Sanjeev Koppal, John Shea, Eakta Jain,&nbsp;<em>ACM Symposium on Applied Perception Poster (selected to be presented at SIGGRAPH ACM Student Research Poster).<\/em><\/p>\n\n\n\n<p><strong><a>Scan Path and Movie Trailers for Implicit Annotation of Videos,&nbsp;<\/a><\/strong>Pallavi Raiturkar, Andrew Lee, and Eakta Jain,&nbsp;<em>ACM Symposium on Applied Perception (SAP).<\/em><\/p>\n\n\n\n<p><strong><a>Measuring Viewers&#8217; Heart Rate Response to Environment Conservation Videos,&nbsp;<\/a><\/strong>Pallavi Raiturkar, Susan Jacobson, Beida Chen, Kartik Chaturvedi, Isabella Cuba, Andrew Lee, Melissa Franklin, Julian Tolentino, Nia Haynes, Rebecca Soodeen, and Eakta Jain,&nbsp;<em>ACM Symposium on Applied Perception (SAP) (selected to be presented at SIGGRAPH ACM Student Research Poster).<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading\">Other<\/h5>\n\n\n\n<p><strong>2025<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/wp-content\/uploads\/sites\/745\/2025\/11\/DagRep.15.1.50.pdf\">Addressing Future Challenges of Telemedicine Applications<\/a>, Matias Volonte, 
Andrew T. Duchowski, Nuria Pelechano, Catarina Moreira, and Joaquim Jorge. In Dagstuhl Reports, Volume 15, Issue 1, pp. 50-83, Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik (2025)&nbsp;https:\/\/doi.org\/10.4230\/DagRep.15.1.50<\/p>\n\n\n\n<p><a href=\"https:\/\/original-ufdc.uflib.ufl.edu\/IR00012367\/00001\" target=\"_blank\" rel=\"noreferrer noopener\">Human-in-the-loop Face Annotation Tool<\/a>, Shakthi Saravanan Sampath, Kelsey Dommer, Jenny Skytta, Sarah Corrigan, Kristen Michener, Frederick Shic, Eakta Jain<\/p>\n\n\n\n<p><a href=\"https:\/\/original-ufdc.uflib.ufl.edu\/IR00012368\/00001\" target=\"_blank\" rel=\"noreferrer noopener\">Evaluating Face Privacy via Stylized Painterly Rendering<\/a>, Ekin Ercetin, Eakta Jain<\/p>\n\n\n\n<p><strong>2023<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/abs\/2410.24131\">Transit drivers&#8217; reflections on the benefits and harms of eye tracking technology<\/a>, Murphy, Shaina, Bryce Grame, Ethan Smith, Siva Srinivasan, and Eakta Jain. (2023). DOI: 10.48550\/arXiv.2410.24131<\/p>\n\n\n\n<p><a href=\"https:\/\/europepmc.org\/article\/ppr\/ppr752749\">Technical Note: DeepLabCut-Display: open-source desktop application for visualizing and analyzing two-dimensional locomotor data in livestock<\/a>. Shirey J, Smythe MP, Dewberry LS, et al. bioRxiv; 2023. DOI: 10.1101\/2023.10.30.564795.<\/p>\n\n\n\n<p><strong>2022<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/abs\/2208.08091\" target=\"_blank\" rel=\"noreferrer noopener\">In-vehicle alertness monitoring for older adults,<\/a> Heng Yao, Sanaz Motamedi, Wayne C.W. Giang, Alexandra Kondyli, Eakta Jain. 
<em>arXiv preprint&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2208.08091\">arXiv:2208.08091,<\/a>&nbsp;2022.<\/em><\/p>\n\n\n\n<p><strong>2021<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/facial_privacy_face_swapping.html#annotation-system\">Annotation System For Aiding Automatic Face Detectors,&nbsp;<\/a>Ethan Wilson, Jenny Skytta, Frederick Shic, Eakta Jain,&nbsp;<em>University of Florida Technical Report.<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/jainlab.cise.ufl.edu\/facial_privacy_face_swapping.html#benchmarking-face\">Benchmarking Face Detectors,&nbsp;<\/a>Ethan Wilson, Jenny Skytta, Frederick Shic, Eakta Jain,&nbsp;<em>University of Florida Technical Report.<\/em><\/p>\n\n\n\n<p><strong>2020<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/trid.trb.org\/view\/1758620\">Evaluation of Intelligent School Zone Beacon and Vehicle-Cyclist Detection and Warning System,<\/a>&nbsp;Jain, Eakta, Sivaramakrishnan Srinivasan, Brendan John, Pedro Adorno, Srividya Surampudi, Tushar Mahajan, Manish Chopra, Thomas Domas, Marian Ankomah, and Clark Letter. 
2020.<\/p>\n\n\n\n<p><strong>2019<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/dl.acm.org\/authorize?N690867\">Eye Tracking and Virtual Reality,&nbsp;<\/a>Ann McNamara, Eakta Jain,&nbsp;<em>SIGGRAPH Asia 2019 Courses.<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/ufdc.ufl.edu\/IR00010924\/00001\">Omnidirectional Cinemagraphs for Safety Training,&nbsp;<\/a>Brendan John, Chesalon J Taylor, Sriram Kalyanaraman, Eakta Jain,&nbsp;<em>University of Florida Technical Report.<\/em><\/p>\n\n\n\n<p><strong>2018<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/jain\/research\/completed-projects\/perception-of-computer-generated-faces\/#perceptioncomputerfaces\">Identifying Computer-Generated Faces: An Eye Tracking Study,&nbsp;<\/a>Pallavi Raiturkar, Hany Farid, Eakta Jain,&nbsp;<em>University of Florida Technical Report.<\/em><\/p>\n\n\n\n<p><a href=\"http:\/\/drops.dagstuhl.de\/opus\/volltexte\/2019\/10057\/pdf\/dagrep_v008_i006_p077_18252.pdf\">Who watches the Watchmen: Eye tracking in XR,&nbsp;<\/a>Eakta Jain,&nbsp;<em>Dagstuhl seminar position paper.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Journals 2026 Ibragimov, Azim et al. \u201cToward Multimodal Privacy in XR: Design and Evaluation of Composite Privatization Methods for Gaze and Body Tracking Data.\u201d (2025). E. Bozkir et al., &#8220;Eye-Tracked Virtual Reality: A Comprehensive Survey on Methods and Privacy Challenges,&#8221; in Proceedings of the IEEE, doi: 10.1109\/JPROC.2026.3653661. 
2025 Shaina Murphy, Shakthi Sampath, Ethan Wilson, Karina [&hellip;]<\/p>\n","protected":false},"author":468,"featured_media":0,"parent":0,"menu_order":7,"comment_status":"closed","ping_status":"closed","template":"page-templates\/page-sidebar-none.php","meta":{"_acf_changed":false,"inline_featured_image":false,"featured_post":"","footnotes":"","_links_to":"","_links_to_target":""},"class_list":["post-11","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/pages\/11","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/users\/468"}],"replies":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/comments?post=11"}],"version-history":[{"count":3,"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/pages\/11\/revisions"}],"predecessor-version":[{"id":2371,"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/pages\/11\/revisions\/2371"}],"wp:attachment":[{"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/media?parent=11"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}