
Beyond Task-Driven Features for Object Detection

April 6, 2026

Abstract: Task-driven features learned by modern object detectors optimize the end-task loss yet often capture shortcut correlations that fail to reflect the underlying annotation structure. Such representations limit transfer, interpretability, and robustness when task definitions change or supervision becomes sparse. This paper introduces an annotation-guided feature augmentation framework that injects annotation embeddings into an object detection backbone. […]

Read more: Beyond Task-Driven Features for Object Detection »
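The abstract does not spell out how the annotation embeddings are injected into the backbone. As a minimal sketch, assuming one plausible mechanism (concatenating the embedding with a backbone feature and projecting back to the feature dimension with a learned matrix; all names here are illustrative, not from the paper):

```python
# Hypothetical injection mechanism: concatenate, then linearly project.
# The actual framework's injection scheme is not given in the abstract.

def inject(backbone_feat, ann_emb, weights):
    """Concatenate a backbone feature with an annotation embedding and
    project the result back to the backbone dimension via `weights`."""
    concat = backbone_feat + ann_emb  # list concatenation
    return [sum(w * x for w, x in zip(row, concat)) for row in weights]

# Toy example: 2-dim feature, 2-dim embedding, projection that keeps
# the first two concatenated values unchanged.
feat = [1.0, 2.0]
emb = [0.5, -0.5]
W = [[1, 0, 0, 0],
     [0, 1, 0, 0]]
print(inject(feat, emb, W))  # [1.0, 2.0]
```

In practice the projection would be a trained layer; the point of the sketch is only that the augmented feature remains shaped like a backbone feature, so downstream detection heads need no change.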

Task-Guided Multi-Annotation Triplet Learning for Remote Sensing Representations

April 6, 2026

Abstract: Prior multi-task triplet loss methods relied on static weights to balance supervision across annotation types. However, static weighting requires tuning and does not account for how tasks interact when shaping a shared representation. To address this, the proposed task-guided multi-annotation triplet loss removes this dependency by selecting triplets through a mutual-information criterion […]

Read more: Task-Guided Multi-Annotation Triplet Learning for Remote Sensing Representations »
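The two ingredients named in the abstract — a mutual-information criterion over annotations and a triplet loss — can each be sketched in a few lines. This is a generic illustration, not the paper's method; how the MI score actually drives triplet selection is not specified in the abstract.

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Empirical mutual information (in nats) between two discrete
    label lists, e.g. two annotation types over the same samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def triplet_loss(anchor, pos, neg, margin=1.0):
    """Standard triplet margin loss with Euclidean distance."""
    d = math.dist
    return max(0.0, d(anchor, pos) - d(anchor, neg) + margin)

# Independent label lists give (numerically) zero MI; identical lists
# give positive MI, so MI can rank annotation types by informativeness.
print(mutual_info([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0
print(triplet_loss([0, 0], [0, 1], [5, 5]))     # 0.0 (easy triplet)
```

One could then, for instance, draw triplets preferentially from the annotation type whose labels carry the most mutual information with the primary task — again an assumption about how such a criterion might be used, not a claim about the paper.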

Introducing a Ground-Truthed Multi-Resolution Drone-Based Hyperspectral Dataset for Unmixing Tasks

April 6, 2026

Abstract: Hyperspectral unmixing is fundamental for estimating material composition from remotely sensed imagery, yet objective evaluation remains difficult due to the lack of ground-truth abundances in real scenes. We present a public, drone-based hyperspectral benchmark dataset collected during the 2025 RIT Open Community eXperiment (ROCX) campaign that enables verifiable abundance ground-truth through paired multi-altitude acquisitions. […]

Read more: Introducing a Ground-Truthed Multi-Resolution Drone-Based Hyperspectral Dataset for Unmixing Tasks »
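For readers new to unmixing, the task the benchmark evaluates can be stated with the standard linear mixing model: a pixel spectrum is modeled as a convex combination of endmember spectra, and the abundances are what the dataset's ground truth makes verifiable. A minimal two-endmember sketch (generic textbook formulation; the function and variable names are illustrative and not from the dataset or paper):

```python
# Linear mixing model with two endmembers: y ≈ a*e1 + (1-a)*e2,
# with the abundance a constrained to [0, 1] (sum-to-one, non-negative).

def unmix_two(pixel, e1, e2):
    """Closed-form least-squares abundance of e1, clipped to [0, 1]."""
    diff = [a - b for a, b in zip(e1, e2)]
    num = sum((y - b) * d for y, b, d in zip(pixel, e2, diff))
    den = sum(d * d for d in diff)
    return min(1.0, max(0.0, num / den))

# A synthetic pixel that is exactly 30% endmember 1, 70% endmember 2.
e1, e2 = [1.0, 0.0], [0.0, 1.0]
pixel = [0.3 * a + 0.7 * b for a, b in zip(e1, e2)]
print(unmix_two(pixel, e1, e2))  # ≈ 0.3
```

With more endmembers this becomes a constrained least-squares problem; the benchmark's contribution is supplying real scenes where the recovered abundances can be checked against ground truth rather than only against synthetic mixtures like the one above.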