Congratulations to Aditya Dutt on publishing his new paper: Contrastive learning based MultiModal Alignment Network

Congratulations to our labmates and collaborators: Aditya Dutt, Alina Zare, and Paul Gader! Their paper, “Shared Manifold Learning Using a Triplet Network for Multiple Sensor Translation and Fusion with Missing Data”, was recently accepted to the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022. In the paper, the authors developed a novel method called the “Contrastive learning based MultiModal Alignment Network” (COMMANet) to align data from multiple heterogeneous modalities into a common, shared, and discriminative manifold. The COMMANet architecture uses a multimodal triplet autoencoder to cluster the latent space so that samples of the same class from each heterogeneous modality are mapped close to each other. The authors proposed a multimodal triplet loss objective function to achieve this goal.
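The core idea of a triplet objective can be sketched as follows. This is a minimal illustration of the standard triplet margin loss, not the paper's full multimodal formulation: here the anchor comes from one sensor and the positive/negative from another, and the function name and toy embeddings are hypothetical.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge-style triplet loss: pull the same-class (positive) embedding
    # toward the anchor while pushing the different-class (negative)
    # embedding at least `margin` farther away.
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # squared distance to negative
    return float(np.maximum(d_pos - d_neg + margin, 0.0).mean())

# Toy 2-D embeddings: anchor from sensor A, positive (same class) and
# negative (different class) from sensor B.
anchor   = np.array([[0.0, 1.0]])
positive = np.array([[0.1, 0.9]])
negative = np.array([[1.0, 0.0]])
loss = triplet_loss(anchor, positive, negative)
```

When the triplet is already well separated (as above), the hinge clips the loss to zero; a hard triplet, where the negative sits closer than the positive, produces a positive loss that drives the encoders to re-cluster the shared space.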

Since the embeddings of multiple sensors are clustered together, a unified classification model can be developed that is independent of the sensor type. The shared embeddings of multiple sensors can also be fused for more robust classification. Additionally, COMMANet enables sensor translation, which is helpful for reconstructing missing or faulty sensor data. The authors demonstrated the effectiveness of the method by achieving a mean overall classification accuracy of 94.3% on the MUUFL dataset and a best overall classification accuracy of 71.26% on the Berlin dataset, outperforming other state-of-the-art approaches.
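Once embeddings from different sensors live in one aligned space, fusion and graceful handling of a missing sensor become simple. The sketch below illustrates that idea under stated assumptions: averaging as the fusion rule and nearest-centroid classification are illustrative stand-ins, not the paper's actual fusion or classifier, and all names and values are hypothetical.

```python
import numpy as np

def fuse_embeddings(emb_a, emb_b=None):
    # Average the shared-space embeddings of two sensors; if one sensor's
    # data is missing, fall back to the remaining embedding alone.
    if emb_b is None:
        return emb_a
    return (emb_a + emb_b) / 2.0

def classify(embedding, centroids):
    # Nearest-class-centroid decision in the shared embedding space.
    dists = np.linalg.norm(centroids - embedding, axis=1)
    return int(np.argmin(dists))

# Toy class centroids and per-sensor embeddings in the shared 2-D space.
centroids = np.array([[0.0, 1.0],   # class 0
                      [1.0, 0.0]])  # class 1
emb_lidar = np.array([0.1, 0.9])
emb_hsi   = np.array([0.0, 0.8])

pred_fused      = classify(fuse_embeddings(emb_lidar, emb_hsi), centroids)
pred_lidar_only = classify(fuse_embeddings(emb_lidar), centroids)  # HSI missing
```

Because both sensors map the same class to the same region, the classifier gives the same answer whether it sees the fused embedding or only one sensor, which is the property that makes missing-sensor operation possible.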

Check out the paper and key results here!