{"id":568,"date":"2022-06-29T19:03:46","date_gmt":"2022-06-29T19:03:46","guid":{"rendered":"https:\/\/faculty.eng.ufl.edu\/jain\/?page_id=568"},"modified":"2026-02-04T14:20:07","modified_gmt":"2026-02-04T19:20:07","slug":"downloads","status":"publish","type":"page","link":"https:\/\/faculty.eng.ufl.edu\/jain\/downloads\/","title":{"rendered":"Downloads"},"content":{"rendered":"<h4>Code for &#8220;Towards Privacy-preserving Photorealistic Self-avatars in Mixed Reality&#8221;:<\/h4>\n<ul class=\"p-rich_text_list p-rich_text_list__bullet p-rich_text_list--nested\" data-stringify-type=\"unordered-list\" data-list-tree=\"true\" data-indent=\"0\" data-border=\"0\">\n<li data-stringify-indent=\"0\" data-stringify-border=\"0\">Face anonymization toolkit:\u00a0<a class=\"c-link\" href=\"https:\/\/github.com\/WahahaYes\/FaceAnonEval\" target=\"_blank\" rel=\"noopener noreferrer\" data-stringify-link=\"https:\/\/github.com\/WahahaYes\/FaceAnonEval\" data-sk=\"tooltip_parent\">WahahaYes\/FaceAnonEval: A codebase to evaluate SOTA in face anonymization.<\/a><\/li>\n<li data-stringify-indent=\"0\" data-stringify-border=\"0\">Privacy-preserving implementation of the GHOST face synthesis model:\u00a0<a class=\"c-link\" href=\"https:\/\/github.com\/WahahaYes\/anonghost\" target=\"_blank\" rel=\"noopener noreferrer\" data-stringify-link=\"https:\/\/github.com\/WahahaYes\/anonghost\" data-sk=\"tooltip_parent\">WahahaYes\/anonghost: Adapting the GHOST face synthesis architecture to be privacy-preserving.<\/a><\/li>\n<li data-stringify-indent=\"0\" data-stringify-border=\"0\">Privacy-preserving implementation of Meta&#8217;s codec avatar model:\u00a0<a class=\"c-link\" href=\"https:\/\/github.com\/WahahaYes\/ava-256\" target=\"_blank\" rel=\"noopener noreferrer\" data-stringify-link=\"https:\/\/github.com\/WahahaYes\/ava-256\" data-sk=\"tooltip_parent\">WahahaYes\/ava-256: Adapting the universal codec avatar model to be 
privacy-preserving.<\/a><\/li>\n<\/ul>\n<h4>\u00a0<\/h4>\n<h4>Privacy-Preserving Gaze Data Streaming in Immersive Interactive Virtual Reality: Robustness and User Experience.<\/h4>\n<p><a href=\"https:\/\/zenodo.org\/records\/10519537\">Dataset<\/a><\/p>\n<p>&nbsp;<\/p>\n<div>\n<h4><span style=\"font-weight: 400\">DeepLabCut source code: <a href=\"https:\/\/github.com\/jakeshirey\/DeepLabCut-Display\">HERE<\/a><\/span><\/h4>\n<h4><span style=\"font-weight: 400\">NuclearGazeAuth <\/span><a style=\"font-size: 24px;text-transform: uppercase\" href=\"https:\/\/doi.org\/10.5281\/zenodo.6883466\"><span style=\"font-weight: 400\">Here<\/span><\/a><\/h4>\n<h4 id=\"ChildCharacters\">Privacy and Security in Eye Tracking:<\/h4>\n<div><a id=\"ET-DK2\" href=\"https:\/\/zenodo.org\/record\/4642612\">ET-DK2<\/a><br \/>\nNEW!!! <a id=\"EDaPT\" href=\"https:\/\/zenodo.org\/record\/6604580#.Yp450xrMKUk\">EDaPT: Eye-tracking dataset privacy toolbox<br \/>\n<\/a><\/div>\n<\/div>\n<div>\u00a0<\/div>\n<div>\n<h4>Creating Child-like Characters<\/h4>\n<div><a id=\"childAdultMotionPointLightVideo\" href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=childAdultMotionPointLightVideo\">Adult vs Child Motion, Point Light Stimuli Videos, ACM TAP 2016<\/a><br \/>\n<a id=\"childAdultMotionPositionsData\" href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=childAdultMotionPositionsData\">Adult vs Child Motion, Joint Positions Data, ACM TAP 2016<\/a><br \/>\n<a id=\"Adult2ChildCode\" href=\"https:\/\/jainlab.cise.ufl.edu\/documents\/adult2child_code.html\">Adult2Child, Survey Responses and Code, MIG 2017<\/a><br \/>\n<a id=\"Kinder-Gator\" href=\"http:\/\/jainlab.cise.ufl.edu\/documents\/dataset\/Kinder_Gator_dataset.zip\">Kinder-Gator, Dataset and RGB Videos, Eurographics 2018 Short Paper<\/a><br \/>\n<a id=\"Adult2ChildCycleGAN\" href=\"https:\/\/zenodo.org\/record\/4079507#.X4OXo25Fwy8\">Kinder-Gator 2.0, Optical Motion Capture Dataset, 
MIG 2020<\/a><\/div>\n<\/div>\n<div>\u00a0<\/div>\n<div>\n<h4>REQ Segmentation<\/h4>\n<div>\n<p><a id=\"REQSegmentation\" href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=7\">REQ Segmentation Code<\/a><\/p>\n<div>\n<p>CHANGELOG<\/p>\n<ul>\n<li>12 March, 2017 &#8211; added biased and unbiased ground truths and additional results<\/li>\n<li>10 January, 2017 &#8211; fixed a few formatting issues<\/li>\n<li>5 January, 2017 &#8211; removed stray MATLAB code<\/li>\n<li>4 January, 2017 &#8211; version 1 uploaded<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<\/div>\n<h4 id=\"AIVR2018\">Viewers&#8217; Behavioral and Physiological Responses in VR<\/h4>\n<div>Gaze &amp; Pupil Diameter\u00a0<a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=vrst_2018_data\">(Data,<\/a>\u00a0<a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=vrst_2018_code\">Code)<\/a>\u00a0<a href=\"https:\/\/jainlab.cise.ufl.edu\/publications.html#VRST_AssociatedPublication\">(Associated Publication)<\/a><br \/>\n2D grayscale intensity videos and calibration intensities\u00a0<a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=vrst_2018_stimuli\">(Stimuli)<\/a>\u00a0<a href=\"https:\/\/jainlab.cise.ufl.edu\/publications.html#VRST_AssociatedPublication\">(Associated Publication)<\/a><br \/>\nMethods for Generating Saliency Maps\u00a0<a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=aivr_2018_code\">(Code)<\/a><a href=\"https:\/\/jainlab.cise.ufl.edu\/publications.html#AIVR_AssociatedPublication\">\u00a0(Associated Publication)<\/a><\/div>\n<div>\u00a0<\/div>\n<div>\n<h4 id=\"GAZEPUPIL\">Viewers&#8217; Behavioral and Physiological Responses on Videos<\/h4>\n<div>Gaze &amp; Pupil Diameter I\u00a0<a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=13\">(Data)\u00a0<\/a><a href=\"https:\/\/jainlab.cise.ufl.edu\/publications.html#Decoupling%20Light%20Reflex%20from%20Pupillary%20Dilation%20to%20Measure%20Emotional%20Arousal%20in%20Videos\">(Associated 
Publication)<\/a><br \/>\nGaze &amp; Pupil Diameter II<a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=14\">\u00a0(Data,\u00a0<\/a><a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=15\">Code)<\/a><a href=\"https:\/\/jainlab.cise.ufl.edu\/publications.html#Decoupling%20Light%20Reflex%20from%20Pupillary%20Dilation%20to%20Measure%20Emotional%20Arousal%20in%20Videos\">\u00a0(Associated Publication)<\/a><br \/>\nGaze &amp; Pupil Diameter III<a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=17\">\u00a0(Data,\u00a0<\/a><a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=16\">Code)<\/a><a href=\"https:\/\/jainlab.cise.ufl.edu\/publications.html#Decoupling%20Light%20Reflex%20from%20Pupillary%20Dilation%20to%20Measure%20Emotional%20Arousal%20in%20Videos\">\u00a0(Associated Publication)<\/a><br \/>\n<a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=11\">Heart Rate Data<\/a><a href=\"https:\/\/jainlab.cise.ufl.edu\/documents\/hrwildlife_raiturkar_cameraready.pdf\">\u00a0(Associated Publication)<\/a><br \/>\n<a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=12\">Scan Path and Movie Trailers<\/a><a href=\"https:\/\/jainlab.cise.ufl.edu\/documents\/scanpath_raiturkar_camready.pdf\">\u00a0(Associated Publication)<\/a><br \/>\n<a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=DecouplingLightReflex-video\">Decoupling Light Reflex Submission Video<\/a><a href=\"https:\/\/jainlab.cise.ufl.edu\/publications.html#Decoupling%20Light%20Reflex%20from%20Pupillary%20Dilation%20to%20Measure%20Emotional%20Arousal%20in%20Videos\">\u00a0(Associated Publication)<\/a><\/div>\n<\/div>\n<div>\u00a0<\/div>\n<div>\n<h4>Eye-tracking for Online User Experience<\/h4>\n<div>\n<p><span style=\"text-decoration: underline\">How many words is a picture worth? 
ETRA 2018 Short Paper<\/span><br \/>\n<a id=\"words-picture-worth-vid1\" href=\"https:\/\/zenodo.org\/record\/3754785\">Videos Zip Part 1-5<\/a><br \/>\n<a id=\"words-picture-worth-vidAnn\" href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=wordsworthpicture-VideoAnnotations\">Video Annotations<\/a><br \/>\n<a id=\"words-picture-worth-R\" href=\"https:\/\/zenodo.org\/record\/3754785\">R Scripts<\/a><br \/>\n<a id=\"words-picture-worth-README\" href=\"https:\/\/zenodo.org\/record\/3754785\">README<\/a><br \/>\n<a id=\"words-picture-worth-slides\" href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=wordsworthpicture-Presentation\">Presentation Slides<\/a><\/p>\n<div>\n<p>Note:<\/p>\n<ul>\n<li>The videos above are split into multiple archive parts. Please use compression software such as<a href=\"https:\/\/www.7-zip.org\/\">\u00a07-Zip<\/a> to open them.<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<\/div>\n<h4 id=\"CGFaces\">Perception of Computer Generated Faces<\/h4>\n<div><a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=Code_for_Perception_of_CG_Faces\">Code<\/a><br \/>\n<a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=Data_for_Perception_of_CG_Faces\">Data<\/a><br \/>\n<a href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=Stimuli_for_Perception_of_CG_Faces\">Stimuli<\/a><\/div>\n<div>\u00a0<\/div>\n<div>\n<h4>Mako Eye-tracking Toolbox<\/h4>\n<div>\n<p><a id=\"Subjective\" href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=10\">Collect Subjective Self-Report Responses on Images and Videos\u00a0<br \/>\n<\/a><a id=\"EyeTracking101\" href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=9\">Eye Tracking 101: A set of scripts to get started with data collection on the SMI RED-m Eyetracker\u00a0<br \/>\n<\/a><a id=\"EyeTracking102\" href=\"http:\/\/jainlab.cise.ufl.edu\/ccount\/click.php?id=8\">Eye Tracking 102: A set of scripts to get started with data collection on the Eye Tribe 
Eyetracker\u00a0<\/a><\/p>\n<div>\n<p>CHANGELOG<\/p>\n<ul>\n<li>5 May, 2017 &#8211; version 102 uploaded<\/li>\n<li>16 June, 2016 &#8211; version 101 &amp; Self-Report uploaded<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Code for &#8220;Towards Privacy-preserving Photorealistic Self-avatars in Mixed Reality&#8221;: Face anonymization toolkit:\u00a0WahahaYes\/FaceAnonEval: A codebase to evaluate SOTA in face anonymization. Privacy-preserving implementation of the GHOST face synthesis model:\u00a0WahahaYes\/anonghost: Adapting the GHOST face synthesis architecture to be privacy-preserving. Privacy-preserving implementation of Meta&#8217;s codec avatar model:\u00a0WahahaYes\/ava-256: Adapting the universal codec avatar model to be privacy-preserving. \u00a0 Privacy-Preserving [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"parent":0,"menu_order":1,"comment_status":"closed","ping_status":"closed","template":"page-templates\/page-sidebar-none.php","meta":{"_acf_changed":false,"inline_featured_image":false,"featured_post":"","footnotes":"","_links_to":"","_links_to_target":""},"class_list":["post-568","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/pages\/568","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/types\/page"}],"replies":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/comments?post=568"}],"version-history":[{"count":1,"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/pages\/568\/revisions"}],"predecessor-version":[{"id":2245,"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/pages\/568\/revisions\/2245"}],"wp:attachment":[{"href":"https:\/\/faculty.eng.ufl.edu\/jain\/wp-json\/wp\/v2\/media?parent=568"}
],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}