Tag: COMMANet
Congratulations to Dr. Aditya Dutt for a Successful Dissertation Defense!
April 4, 2025
Congratulations to Dr. Aditya Dutt for successfully passing his PhD dissertation defense! Dr. Dutt’s research introduced the Contrastive MultiModal Alignment Network (COMMANet), a novel approach to shared manifold-based domain translation and fusion. His work addressed the challenge of limited and imbalanced labeled datasets by leveraging contrastive learning with triplet networks to align multimodal data—such as […]
Read more: Congratulations to Dr. Aditya Dutt for a Successful Dissertation Defense! »

Shared Manifold Learning Using a Triplet Network for Multiple Sensor Translation and Fusion with Missing Data
November 11, 2022
Abstract: Heterogeneous data fusion can enhance the robustness and accuracy of an algorithm on a given task. However, due to the differences among various modalities, aligning the sensors and embedding their information into discriminative and compact representations is challenging. In this paper, we propose a Contrastive learning based MultiModal Alignment Network (CoMMANet) to align data […]
Read more: Shared Manifold Learning Using a Triplet Network for Multiple Sensor Translation and Fusion with Missing Data »

Congratulations to Aditya Dutt for publishing his new paper: Contrastive learning based MultiModal Alignment Network
October 25, 2022
Congratulations to our labmates and collaborators: Aditya Dutt, Alina Zare, and Paul Gader! Their paper, “Shared Manifold Learning Using a Triplet Network for Multiple Sensor Translation and Fusion with Missing Data”, was recently accepted to the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022. In the paper, the authors developed a […]
Read more: Congratulations to Aditya Dutt for publishing his new paper: Contrastive learning based MultiModal Alignment Network »