Abstract:
The fuzzy integral (FI) is frequently used as a parametric nonlinear aggregation operator for data or information fusion. To date, numerous data-driven algorithms have been put forth to learn the FI for tasks such as signal and image processing, multi-criteria decision making, logistic regression, and minimization of the sum of squared error (SSE) criterion in decision-level fusion. However, existing work has either focused on learning the densities (the worths of the individual inputs in the underlying fuzzy measure (FM)) relative to an imputation method (an algorithm that assigns values to the remainder of the FM), or the full FM is learned relative to a single criterion (e.g., SSE). Only a handful of approaches have investigated how to learn the FI relative to a minimization criterion (logistic regression or SSE) in conjunction with a second criterion, namely model complexity. Including model complexity is important because it yields solutions that are less prone to overfitting and lowers a solution's cost (financial, computational, etc.). Herein, we review and compare different indices of model (capacity) complexity. We show that there is no global best; instead, the application and goals (context) drive which index is appropriate. In addition, we put forth new indices based on functions of the Shapley index. Synthetic and real-world experiments demonstrate the range and behavior of these different indices for decision-level fusion.
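For context, the following is a minimal Python sketch (not the authors' implementation) of the two operators the abstract refers to: the discrete Choquet fuzzy integral and the Shapley index of a fuzzy measure. The dictionary-based FM representation, the function names, and the three-source example are illustrative assumptions only.

from itertools import combinations
from math import factorial

def choquet_integral(h, g):
    # Discrete Choquet integral of inputs h (dict: source -> value in [0, 1])
    # with respect to a fuzzy measure g (dict: frozenset of sources -> measure value).
    order = sorted(h, key=h.get, reverse=True)   # sort sources by decreasing input
    total, prev, A = 0.0, 0.0, set()
    for x in order:
        A.add(x)
        gA = g[frozenset(A)]
        total += h[x] * (gA - prev)              # h(x_(i)) * [g(A_i) - g(A_{i-1})]
        prev = gA
    return total

def shapley_index(g, sources):
    # Shapley value of each source: its average marginal contribution to g.
    # Exhaustive enumeration of subsets; fine for a small number of sources.
    n = len(sources)
    phi = {}
    for i in sources:
        others = [s for s in sources if s != i]
        val = 0.0
        for k in range(n):
            for K in combinations(others, k):
                K = frozenset(K)
                w = factorial(n - k - 1) * factorial(k) / factorial(n)
                val += w * (g[K | {i}] - g[K])
        phi[i] = val
    return phi

# Illustrative three-source example; g must be monotone with g({}) = 0 and g(X) = 1.
sources = ['a', 'b', 'c']
g = {frozenset(): 0.0,
     frozenset('a'): 0.3, frozenset('b'): 0.4, frozenset('c'): 0.2,
     frozenset('ab'): 0.6, frozenset('ac'): 0.5, frozenset('bc'): 0.7,
     frozenset('abc'): 1.0}
h = {'a': 0.9, 'b': 0.5, 'c': 0.7}
print(choquet_integral(h, g))      # fused value
print(shapley_index(g, sources))   # per-source worth; the values sum to g(X) = 1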
Citation:
A. J. Pinar, D. T. Anderson, T. C. Havens, A. Zare, and T. Adeyeba, "Measures of the Shapley index for learning lower complexity fuzzy integrals," Granular Computing, 2017, pp. 1-17.
@article{Pinar2017Measures,
Title = {Measures of the Shapley index for learning lower complexity fuzzy integrals},
Author = {Pinar, Anthony J. and Anderson, Derek T. and Havens, Timothy C. and Zare, Alina and Adeyeba, Titilope},
Journal = {Granular Computing},
Year = {2017},
Pages = {1--17}
}