Data has been playing an ever-growing role in artificial/computational intelligence. This role extends beyond its typical use in neural networks and learning systems to encompass evolutionary and other metaheuristic optimization algorithms. The objective of this symposium is to provide a unique and vibrant forum for sharing and experiencing emerging methodologies and applications of data-driven artificial/computational intelligence. It will offer keynotes, invited lectures and discussion groups led by experts from Exeter, Leiden and other high-profile institutions. It will provide a unique opportunity for participants to 1) learn about artificial/computational intelligence approaches and their applications; 2) interact with world-renowned experts in computational intelligence; and 3) communicate with experts and peers from a broad range of backgrounds to exchange ideas and form new collaborations.
It will take place as part of the outreach activities of the Alan Turing Institute and the Institute for Data Science and Artificial Intelligence (AI) at the University of Exeter. Both institutes actively foster a culture of effective interaction to promote data science and AI in addressing global challenges across disciplines.
Note that all times are given in Central European Time (CET).
The symposium will be organized as a one-day event with the following agenda.
9:00 – 9:10: Introduction talk by Ke Li/Hao Wang
9:10 – 10:00
Prof. Michael Emmerich | Leiden University, Netherlands
Lipschitz Models versus Gaussian Process Models in Data-Driven Multi-objective Optimization
Bio
Michael Emmerich is a Germany-born computer scientist who currently lives in Finland and in the Netherlands. Since 2016 he has been Associate Professor at Leiden University, the Netherlands, where he leads the Multicriteria Optimization and Decision Analytics Group, and since 2019 he has been a visiting researcher in the Multiobjective and Industrial Optimization Group at Jyväskylä University, Finland. He is also Lead AI Scientist at SILO.ai, a provider of AI solutions in the Nordic countries. He received his Doctorate in Natural Sciences from the Technical University of Dortmund on the topic of Gaussian processes for surrogate-assisted multiobjective design optimization (2005) under the supervision of Prof. Dr.-Ing. H.-P. Schwefel. He also worked as a visiting fellow at the Center for Applied Systems Analysis, ICD e.V. Dortmund, the Institut für Erstarrung unter Schwerelosigkeit e.V. (Aachen), the Institute for Fundamental Research of Matter (FOM) Amsterdam, the University of the Algarve, IST Lisbon, and Jyväskylä University. His main contributions are in the field of indicator-based and Bayesian multiobjective optimization, and in the application of multiobjective optimization to chemical engineering and architectural design.
The quantification of uncertainty is an important topic when it comes to modeling function landscapes based on previously evaluated input-output pairs. Gaussian process regression and the closely related Kriging method are arguably the best-known class of surrogate models supporting uncertainty quantification. Here the uncertainty stems from the assumption that outputs are correlated by means of a correlation function that depends on distance in the input space. Such knowledge can be used to compute probabilistic confidence bounds that quantify the uncertainty of predictions. Lipschitz continuity (and the more general Hölder continuity), on the other hand, makes assumptions about bounded rates of change. It, too, is based on distances in the input space, but its model assumptions yield deterministic upper and lower bounds on the uncertainty ranges in prediction tasks.
We contrast these two techniques, reveal commonalities and differences, and comment on their usefulness when integrated into (multiobjective) Bayesian optimization frameworks. A special focus will be on variants of the expected improvement; we show that a Lipschitzian interpretation of the expected improvement is almost equivalent to Shubert's algorithm. Computations in the Lipschitzian case are far easier and more efficient, while many of the interesting properties of Gaussian process models are preserved.
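To make the contrast concrete, the following is a minimal sketch (with invented sample data, an assumed squared-exponential kernel and an assumed Lipschitz constant, not material from the talk itself) of the two kinds of uncertainty bounds on a one-dimensional function:

```python
import numpy as np

# Illustrative evaluated input-output pairs for a 1-D function.
X = np.array([0.0, 0.3, 0.7, 1.0])
y = np.sin(2 * np.pi * X)

def gp_posterior(x, X, y, length=0.2, noise=1e-8):
    """GP posterior mean/std under a squared-exponential kernel (prior variance 1)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(np.atleast_1d(x), X)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ y
    var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))

def lipschitz_bounds(x, X, y, L=2 * np.pi):
    """Deterministic bounds: f(x) must lie in [y_i - L|x-x_i|, y_i + L|x-x_i|] for all i."""
    d = np.abs(np.atleast_1d(x)[:, None] - X[None, :])
    upper = np.min(y[None, :] + L * d, axis=1)
    lower = np.max(y[None, :] - L * d, axis=1)
    return lower, upper

xs = np.linspace(0.0, 1.0, 5)
mean, std = gp_posterior(xs, X, y)   # probabilistic band: mean +/- 2*std
lo, hi = lipschitz_bounds(xs, X, y)  # hard interval: [lo, hi]
```

Both bounds are driven by input-space distances, but the GP band is a probabilistic statement under a correlation assumption, whereas the Lipschitz interval is a hard guarantee whenever the assumed Lipschitz constant is valid.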
10:00 – 10:50
Prof. Yaochu Jin | University of Bielefeld, Germany
Privacy-Preserving Data-driven Evolutionary Optimization
Bio
Yaochu Jin is an Alexander von Humboldt Professor for Artificial Intelligence endowed by the German Federal Ministry of Education and Research, Faculty of Technology, Bielefeld University, Germany. He is also a Distinguished Chair in Computational Intelligence, Department of Computer Science, University of Surrey, Guildford, U.K. He was a "Finland Distinguished Professor", University of Jyväskylä, Finland, "Changjiang Distinguished Visiting Professor", Northeastern University, China, and "Distinguished Visiting Scholar", University of Technology Sydney, Australia. His main research interests include evolutionary optimization, evolutionary learning, trustworthy machine learning, and evolutionary developmental systems.
Prof. Jin is presently the Editor-in-Chief of Complex & Intelligent Systems. He was an IEEE Distinguished Lecturer and the Vice President for Technical Activities of the IEEE Computational Intelligence Society. He was named a Highly Cited Researcher by the Web of Science consecutively from 2019 to 2021. He is a Member of Academia Europaea and a Fellow of IEEE.
Data-driven optimization has received increasing interest over the past decade due to its practical importance in many industrial sectors and scientific research fields. Little attention, however, has been paid to preserving the privacy of the data used for optimization. This talk presents our recent work on privacy-preserving data-driven evolutionary optimization based on federated learning techniques and infill criteria. The proposed framework is applicable to both single- and multi-objective optimization.
10:50 – 11:40
Prof. Thomas Bäck | Leiden University, Netherlands
Evolutionary Computation meets Algorithm Configuration (and Applications)
Bio
Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, the Netherlands, where he has headed the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others.
Thomas has more than 300 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996) and Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and the Handbook of Natural Computing, and co-editor-in-chief of Springer's Natural Computing book series. He is also an editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.
Direct global optimization algorithms based on evolutionary computation have demonstrated great success in a wide range of application domains, for example engineering design optimization.
In machine learning, the optimization of hyperparameters (also called the algorithm configuration problem) is an important task. I will briefly explain this problem and provide some examples illustrating that this task can be handled by direct global optimization algorithms as well. While algorithm configuration is commonly applied to machine learning algorithms, algorithm configuration for evolution strategies is also an exciting application domain. I will give a simple example of how a combinatorial design space of 4608 configuration variants of evolution strategies can be explored and investigated using data mining. This approach provides an opportunity for discovering the unexplored areas of the optimization algorithm design space. Conversely, direct global optimization methods can also be used as algorithm configurators, or even for addressing the combined algorithm selection and hyperparameter optimization (CASH) task in machine learning. I will provide some insight into research in this direction, too.
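As a toy illustration of this idea (not the experimental setup from the talk; the test function, evaluation budget and parameter ranges are all invented for this sketch), an optimizer's own parameters can be treated as the search space of an outer configurator:

```python
import random

def one_plus_one_es(sigma, decay, budget=200, dim=5, seed=0):
    """Toy (1+1)-ES minimizing the sphere function; returns best value found."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = sum(v * v for v in x)
    for _ in range(budget):
        cand = [v + sigma * rng.gauss(0, 1) for v in x]
        fc = sum(v * v for v in cand)
        if fc <= fx:
            x, fx = cand, fc
        else:
            sigma *= decay  # crude step-size adaptation: shrink on rejection
    return fx

def configure(trials=50, seed=1):
    """Outer loop: random search over the inner ES's own hyperparameters."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        sigma = 10 ** rng.uniform(-2, 1)  # log-uniform initial step size
        decay = rng.uniform(0.5, 0.999)   # shrink factor applied on rejection
        score = one_plus_one_es(sigma, decay)
        if best is None or score < best[0]:
            best = (score, sigma, decay)
    return best

score, sigma, decay = configure()
```

The same two-level structure carries over when the outer loop is itself a direct global optimizer, or when the configuration space additionally includes a choice of algorithm, as in the CASH setting.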
To conclude, I will return to real-world application examples and illustrate a few of those that my group has worked on over more than 20 years.
14:00 – 14:50
Prof. Juergen Branke | University of Warwick, UK
Bayesian Optimisation and Input Uncertainty Reduction
Bio
Juergen Branke is Professor of Operational Research and Systems at Warwick Business School, University of Warwick (UK). His main research interests include metaheuristics and Bayesian optimisation applied to problems under uncertainty, such as simulation optimisation, dynamically changing problems, and multi-objective problems. Prof. Branke is Editor of ACM Transactions on Evolutionary Learning and Optimization, Area Editor of the Journal of Heuristics and the Journal on Multi-Criteria Decision Analysis, as well as Associate Editor of IEEE Transactions on Evolutionary Computation and the Evolutionary Computation Journal.
Simulators often require calibration inputs estimated from real-world data, and the estimate can significantly affect simulation output. Particularly when performing simulation optimisation to find an optimal solution, the uncertainty in the inputs significantly affects the quality of the solution found. One remedy is to search for the solution that performs best on average over the uncertain range of inputs, yielding an optimal compromise solution. We consider the more general setting in which a user may choose between either running simulations or instead querying an external data source, improving the input estimate and enabling the search for a more targeted, less compromised solution. We explicitly examine the trade-off between simulation and real data collection in order to find the optimal solution of the simulator with the true inputs. Using a value of information procedure, we propose a novel unified simulation optimisation procedure called Bayesian Information Collection and Optimisation (BICO) that, in each iteration, automatically determines which of the two actions (running simulations or data collection) is more beneficial. Numerical experiments demonstrate that the proposed algorithm is able to automatically determine an appropriate balance between optimisation and data collection.
14:50 – 15:40
Prof. Kaisa Miettinen | University of Jyväskylä, Finland
Perspectives to Data-driven Multiobjective Optimization with Interactive Methods
Bio
Kaisa Miettinen is Professor of Industrial Optimization at the University of Jyvaskyla. Her research interests include theory, methods, applications and software of nonlinear multiobjective optimization including interactive and evolutionary approaches. She heads the Research Group on Multiobjective Optimization and is the director of the thematic research area called Decision Analytics utilizing Causal Models and Multiobjective Optimization (DEMO, www.jyu.fi/demo). She has authored over 200 refereed journal, proceedings and collection papers, edited 18 proceedings, collections and special issues and written a monograph Nonlinear Multiobjective Optimization. She is a member of the Finnish Academy of Science and Letters, Section of Science and has served as the President of the International Society on Multiple Criteria Decision Making (MCDM). She belongs to the editorial boards of seven international journals and the Steering Committee of Evolutionary Multiobjective Optimization. She has previously worked at IIASA, International Institute for Applied Systems Analysis in Austria, KTH Royal Institute of Technology in Stockholm, Sweden and Helsinki School of Economics, Finland. She has received the Georg Cantor Award of the International Society on MCDM for independent inquiry in developing innovative ideas in the theory and methodology.
In data analytics, we can use descriptive analytics to understand the data or predictive analytics to make predictions, but to know what actions to take to reach desired outcomes, we need prescriptive analytics. To make optimized recommendations or decisions based on the data, we can fit models to the data and derive optimization problems. In many cases, the real decisions to be made are characterized by multiple conflicting objectives to be optimized, and we can support decision making by applying appropriate multiobjective optimization methods. This we can call decision analytics.
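As a minimal single-objective illustration of that chain (the observations below are invented for this sketch), a predictive model fitted to data can be turned into a prescriptive optimization problem:

```python
import numpy as np

# Illustrative observations: (process setting, measured cost) pairs.
settings = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
cost = np.array([5.1, 2.9, 2.2, 3.1, 5.0])

# Predictive analytics: fit a quadratic cost model to the data.
a, b, c = np.polyfit(settings, cost, deg=2)

# Prescriptive analytics: derive and solve an optimization problem
# from the fitted model (the minimizer is the vertex of the convex parabola).
best_setting = -b / (2 * a)
predicted_cost = np.polyval([a, b, c], best_setting)
```

With several conflicting fitted objectives instead of one, the derived problem becomes a multiobjective optimization problem, which is where the interactive methods discussed in the talk come in.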
In this talk, I discuss different elements of a seamless chain from data to data-driven decision support involving multiobjective optimization. Eventually, the derived multiobjective optimization problem is solved with an appropriate interactive method. In that way, the decision maker with domain expertise can augment the information contained in the data and direct the solution process with one's preferences. At the same time, the decision maker gains insight into the interdependencies and trade-offs among the conflicting objectives, and can become convinced of the quality of the most preferred solution. In addition, I give some examples of data-driven decision making problems. Finally, I give an overview of the modular, open-source software framework DESDEO containing different interactive methods.
We are grateful for the support of the UKRI Future Leaders Fellowship (MR/S017062/1), the European Network Fund@Exeter (No. GP ENF5.10) and a Turing Fellowship.