ABSTRACT
A key component of automated algorithm selection and configuration, which in most cases are performed using supervised machine learning (ML) methods, is a well-performing predictive model. The predictive model takes the feature representation of a set of problem instances as input and predicts the performance an algorithm achieves on them. Common ML models struggle to make predictions for instances whose feature representations are not covered by the training data, resulting in poor generalization to unseen problems. In this study, we propose a workflow for estimating how well a predictive model of algorithm performance trained on one benchmark suite generalizes to another. The workflow has been tested by training predictive models across benchmark suites, and the results show that generalizability patterns in the landscape feature space are reflected in the performance space.
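To make the workflow concrete, the sketch below trains a performance regressor on landscape features from one benchmark suite and measures its prediction error on a second suite; the gap between in-suite and cross-suite error is a simple signal of (lack of) generalizability. This is a minimal illustration with placeholder data, not the authors' implementation: the variable names, the synthetic features standing in for landscape (e.g., ELA) features, and the choice of a random-forest regressor are all assumptions.

```python
# Minimal sketch of the cross-suite generalizability workflow (assumed setup).
# X_a, X_b: landscape-feature matrices for the instances of two benchmark
# suites; y_a, y_b: measured performance of a fixed algorithm on them.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Placeholder data standing in for two suites (e.g., a training suite and
# an "unseen" suite). In practice these come from landscape analysis and
# algorithm benchmarking runs.
X_a, y_a = rng.random((120, 10)), rng.random(120)  # training suite
X_b, y_b = rng.random((40, 10)), rng.random(40)    # unseen suite

# Train the performance predictive model on the first suite only.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_a, y_a)

# Compare in-suite and cross-suite error: a large gap indicates poor
# generalization to instances whose feature representations are not
# covered by the training suite.
err_in = mean_absolute_error(y_a, model.predict(X_a))
err_out = mean_absolute_error(y_b, model.predict(X_b))
print(f"in-suite MAE={err_in:.3f}, cross-suite MAE={err_out:.3f}")
```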