DOI: 10.1145/3583131.3590424
GECCO Conference Proceedings
Research article

Algorithm Instance Footprint: Separating Easily Solvable and Challenging Problem Instances

Published: 12 July 2023

ABSTRACT

In black-box optimization, it is essential to understand why an algorithm instance works on some problem instances while failing on others, and to explain its behavior. We propose a methodology for formulating an algorithm instance footprint: a set of problem instances that are easy for the algorithm instance to solve and a set that are difficult. This behavior is then linked to the landscape properties of the problem instances, explaining which properties make some instances easy or challenging. The methodology uses meta-representations that embed the landscape properties of the problem instances and the performance of the algorithm in the same vector space. These meta-representations are obtained by training a supervised machine-learning regression model for algorithm performance prediction and applying model explainability techniques to assess the importance of the landscape features to the performance predictions. Deterministic clustering of the meta-representations then shows that they capture algorithm performance across the problem space, detecting regions of poor and good algorithm performance together with an explanation of which landscape properties lead to them.
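The pipeline the abstract describes (performance regression → per-instance feature attributions as meta-representations → deterministic clustering into easy and challenging regions) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the features, targets, and cluster count are invented, and a simple occlusion-style attribution stands in for the Shapley values the paper uses.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Synthetic "landscape features" for 60 problem instances (hypothetical data).
X = rng.normal(size=(60, 5))
# Synthetic algorithm performance, driven mainly by features 0 and 1.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=60)

# Step 1: supervised regression model for algorithm performance prediction.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Step 2: per-instance attributions as meta-representations.
# (The paper uses Shapley values; here a crude occlusion-style proxy:
# how much the prediction changes when a feature is replaced by its mean.)
def attributions(model, X):
    base = model.predict(X)
    out = np.empty_like(X)
    for j in range(X.shape[1]):
        Xj = X.copy()
        Xj[:, j] = X[:, j].mean()
        out[:, j] = base - model.predict(Xj)
    return out

meta = attributions(model, X)

# Step 3: deterministic clustering of the meta-representations
# (agglomerative clustering has no random initialization).
labels = AgglomerativeClustering(n_clusters=2).fit_predict(meta)

# Instances in the cluster with the better mean performance form the "easy"
# part of the footprint; the other cluster the "challenging" part.
mean_perf = {int(c): y[labels == c].mean() for c in np.unique(labels)}
easy_cluster = max(mean_perf, key=mean_perf.get)
print("easy-cluster size:", int((labels == easy_cluster).sum()))
```

Because each meta-representation combines feature values with their contribution to the predicted performance, inspecting the per-cluster attribution means indicates which landscape properties drive the good or poor performance in that region.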


Published in

GECCO '23: Proceedings of the Genetic and Evolutionary Computation Conference
July 2023, 1667 pages
ISBN: 9798400701191
DOI: 10.1145/3583131

Copyright © 2023 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


          Acceptance Rates

Overall acceptance rate: 1,669 of 4,410 submissions (38%)
