DOI: 10.1145/3583133.3590617

Assessing the Generalizability of a Performance Predictive Model

Published: 24 July 2023

ABSTRACT

A key component of automated algorithm selection and configuration, which in most cases are performed using supervised machine learning (ML) methods, is a well-performing predictive model. The predictive model takes the feature representation of a set of problem instances as input and predicts the performance an algorithm achieves on them. Common ML models struggle to make predictions for instances whose feature representations are not covered by the training data, resulting in poor generalization to unseen problems. In this study, we propose a workflow for estimating how well a predictive model of algorithm performance trained on one benchmark suite generalizes to another. We tested the workflow by training predictive models across benchmark suites, and the results show that generalizability patterns in the landscape feature space are reflected in the performance space.
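To make the cross-suite evaluation idea concrete, the sketch below trains a performance regressor on one benchmark suite and scores it on another. It is only a minimal illustration under stated assumptions: the random placeholder data, the random forest regressor, and the mean-absolute-error metric stand in for the actual landscape features, model, and evaluation protocol used in the paper.

```python
# Minimal sketch of cross-suite generalizability assessment.
# All data and modeling choices below are illustrative assumptions,
# not the authors' exact setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Placeholder data: rows are problem instances, columns are landscape
# features (e.g. ELA-style features); targets are the performance an
# algorithm achieved on each instance.
X_suite_a, y_suite_a = rng.normal(size=(100, 20)), rng.normal(size=100)  # training suite
X_suite_b, y_suite_b = rng.normal(size=(50, 20)), rng.normal(size=50)    # unseen suite

# Train the performance predictor on one benchmark suite ...
model = RandomForestRegressor(random_state=0).fit(X_suite_a, y_suite_a)

# ... and measure how well it transfers to another suite.
error_on_unseen_suite = mean_absolute_error(y_suite_b, model.predict(X_suite_b))
print(f"MAE on unseen benchmark suite: {error_on_unseen_suite:.3f}")
```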


Published in

GECCO '23 Companion: Proceedings of the Companion Conference on Genetic and Evolutionary Computation
July 2023, 2519 pages
ISBN: 9798400701207
DOI: 10.1145/3583133

Copyright © 2023 Owner/Author(s)

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 24 July 2023

Qualifiers

Poster

Acceptance Rates

Overall Acceptance Rate: 1,669 of 4,410 submissions, 38%
