# Tutorials

## Introduction to High-Performance Computing

Tutorial by Nouredine Melab

**International Conference on Metaheuristics and Nature Inspired Computing (META 2018)**

**Marrakech, 27–31 October 2018**

On the road to the exascale era, modern supercomputers, as ranked by the Top500 list, are increasingly large and heterogeneous, combining multi-core processors with various architectures, accelerators (e.g. GPUs) and other low-energy resources (e.g. FPGAs). This hardware diversity translates, at the software level, into more complex programming that requires various parallelization models, programming paradigms, languages and tools. The objective of this tutorial, whose scope is broader than metaheuristics, is to give an introduction to High Performance Computing (HPC) with a focus on parallel computing. The tutorial starts with a short overview of the different families of HPC platforms. It then presents the basics of parallel algorithm design and programming for these platforms using standard parallel programming environments. Finally, the major performance roadblocks on the way to the exascale era are pointed out and some initiatives to address them are sketched.

## Synergy between Metaheuristics and Machine Learning

Organized by El-Ghazali Talbi

**The 8th International Conference on Bioinspired Optimization Methods and their Applications (BIOMA 2018)**

**Paris, 16–18 May 2018**

## Parallel and Distributed Evolutionary Algorithms

Organized by El-Ghazali Talbi

**IEEE World Congress on Computational Intelligence (WCCI 2018)**

**Rio de Janeiro, 8–13 July 2018**

On the one hand, optimization and machine learning problems are increasingly complex and their resource requirements keep growing. Real-life optimization problems are often NP-hard and CPU- and/or memory-intensive. Although the use of metaheuristics and evolutionary algorithms makes it possible to significantly reduce the computational complexity of the search process, the search remains time-consuming for many problems in diverse application domains, where the objective function and the constraints associated with the problem are resource-intensive (e.g. in CPU or memory) and the search space is huge. Moreover, increasingly complex and resource-intensive metaheuristics are being developed (e.g. hybrid metaheuristics, multi-objective metaheuristics, metaheuristics under uncertainty, metaheuristics for bi-level optimization, surrogate-assisted evolutionary algorithms).

On the other hand, the rapid development of technology in processors (e.g. multi-core processors, dedicated architectures), networks (e.g. local area networks such as Myrinet and InfiniBand, and wide area networks such as optical networks) and data storage has made parallel computing ever more popular. Such architectures represent an effective platform for the design and implementation of parallel metaheuristics and evolutionary algorithms. Indeed, sequential architectures are reaching their physical limits (speed of light, thermodynamics). Nowadays, even laptops and workstations are equipped with multi-core processors, which are themselves a class of parallel architecture. Moreover, the cost/performance ratio is constantly decreasing. The proliferation of powerful workstations and fast communication networks has led to the emergence of GPUs, multi-core architectures, clusters of workstations (COWs), networks of workstations (NOWs) and large-scale networks of machines (grids) as platforms for high-performance computing.

Parallel and distributed computing can be used in the design and implementation of metaheuristics and evolutionary algorithms for the following reasons:

- **Speed up the search:** one of the main goals in parallelizing a metaheuristic is to reduce the search time. This helps in designing real-time and interactive optimization methods, which matters for classes of problems with hard requirements on search time, such as dynamic optimization problems and time-critical control problems (e.g. "real-time" planning).
- **Improve the quality of the obtained solutions:** some parallel models for metaheuristics and evolutionary algorithms improve the quality of the search. Exchanging information between cooperating metaheuristics alters their behavior when exploring the landscape associated with the problem, and the main goal of such cooperation is to obtain better solutions. Both better convergence and reduced search time may result; indeed, a parallel model may be more effective than a sequential metaheuristic even when executed on a single processor.
- **Improve robustness:** a parallel evolutionary algorithm may be more robust in solving, in an effective manner, different optimization problems and different instances of a given problem. Robustness may also be measured in terms of the sensitivity of the algorithm to its parameters.
- **Solve large-scale problems:** parallel metaheuristics make it possible to solve large-scale instances of complex optimization problems. One challenge is to solve very large instances that cannot be handled on a sequential machine. A related challenge is to solve more accurate mathematical models of optimization problems, since improving the accuracy of a model generally increases the size of the problem to be solved. Moreover, some optimization problems, such as data mining and machine learning problems, require the manipulation of huge databases.
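As a concrete, deliberately simplified illustration of the speedup argument, an iteration-level parallel model evaluates the individuals of a population concurrently, since their evaluations are independent of one another. The sketch below uses Python's standard `concurrent.futures` with a toy sphere objective standing in for an expensive evaluation; for truly CPU-bound objective functions one would typically switch to a process-based executor.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def fitness(solution):
    # Toy objective: sphere function (to be minimized); it stands in for an
    # expensive simulation or model evaluation.
    return sum(x * x for x in solution)

def evaluate_population(population, workers=4):
    # Iteration-level parallelism: individuals are independent, so their
    # evaluations can run concurrently without changing the search behavior.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))

random.seed(0)
population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(8)]
scores = evaluate_population(population)
best = min(scores)
```

Because the parallel evaluation returns exactly the same fitness values as a sequential loop would, the model reduces wall-clock time without altering the search trajectory.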

The main goal of this tutorial is to provide a unified view of parallel metaheuristics and evolutionary algorithms. It presents the main design questions and search components for all families of metaheuristics, covering not only their design but also their implementation using a software framework. This encourages the reuse of both the design and the code of existing search components, with a high level of transparency with respect to the target applications and parallel architectures.

## Parallel and Distributed Evolutionary Algorithms

Tutorial by El-Ghazali Talbi

**IEEE Congress on Evolutionary Computation (CEC 2017)**

**Donostia, San Sebastián, 5 June 2017**

Parallel and distributed computing can be used in the design and implementation of evolutionary algorithms to speed up the search, improve the quality of the obtained solutions, improve their robustness, and solve large-scale problems.

From the algorithmic design point of view, we will present the main parallel models for evolutionary algorithms (algorithmic level, iteration level, solution level). We will also address:

- Parallel hybrid models with exact methods.
- Parallel models for multi-objective optimization.
- Illustrations solving large challenging applications in networks, logistics and transportation and bioinformatics.
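The algorithmic-level model mentioned above can be sketched, for instance, as an island model: several populations evolve independently and periodically exchange their best individuals along a ring topology. The toy code below simulates the islands sequentially for clarity (in a real deployment each island would run on its own processor); the objective, population sizes and migration policy are illustrative assumptions, not part of the tutorial material.

```python
import random

def fitness(x):  # toy one-dimensional objective, to be minimized
    return (x - 3.0) ** 2

def evolve_island(pop, steps=20):
    # Deliberately simple evolution: mutate the best, keep the child
    # if it improves on the current worst individual.
    for _ in range(steps):
        parent = min(pop, key=fitness)
        child = parent + random.gauss(0, 0.5)
        worst = max(pop, key=fitness)
        if fitness(child) < fitness(worst):
            pop[pop.index(worst)] = child
    return pop

def migrate(islands):
    # Ring topology: each island sends its best individual to the next one,
    # replacing that island's worst individual.
    bests = [min(pop, key=fitness) for pop in islands]
    for i, pop in enumerate(islands):
        incoming = bests[(i - 1) % len(islands)]
        worst = max(pop, key=fitness)
        pop[pop.index(worst)] = incoming
    return islands

random.seed(1)
islands = [[random.uniform(-10, 10) for _ in range(5)] for _ in range(4)]
initial_best = min((x for pop in islands for x in pop), key=fitness)
for _ in range(10):  # alternate isolated evolution and migration
    islands = [evolve_island(pop) for pop in islands]
    islands = migrate(islands)
best = min((x for pop in islands for x in pop), key=fitness)
```

Migration is what distinguishes this cooperative model from independent restarts: good building blocks found on one island diffuse to the others between epochs.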

From the implementation point of view, we here concentrate on the parallelization of evolutionary algorithms on general-purpose parallel and distributed architectures, since these are the most widespread computational platforms. The rapid evolution of technology in terms of processors (GPUs, multi-core), networks (InfiniBand) and architectures (grids, clusters, clouds) makes these architectures very popular nowadays.

Different architectural criteria that affect the efficiency of the implementation will be considered: shared vs. distributed memory, homogeneous vs. heterogeneous, dedicated vs. non-dedicated, local vs. large-scale network. Indeed, these criteria have a strong impact on the deployment techniques employed, such as load balancing and fault tolerance.

Finally, some software frameworks for parallel evolutionary algorithms, such as PARADISEO, are presented. These frameworks allow the design of parallel and hybrid metaheuristics for mono-objective and multi-objective optimization, and their transparent implementation on different parallel and distributed architectures using adapted middleware.

## Visualization in Multiobjective Optimization

Tutorial by Bogdan Filipič and Tea Tušar

**IEEE Congress on Evolutionary Computation (CEC 2017)**

**Donostia, San Sebastián, 5 June 2017**

Multiobjective optimization algorithms usually produce a set of trade-off solutions approximating the Pareto front where no solution from the set is better than any other in all objectives (this is called an approximation set). While there exist many measures to assess the quality of approximation sets, no measure is as effective as visualization, especially if the Pareto front is known and can be visualized as well. Visualization in evolutionary multiobjective optimization is relevant in many respects, such as estimating the location, range, and shape of the Pareto front, assessing conflicts and trade-offs between objectives, selecting preferred solutions, monitoring the progress or convergence of an optimization run, and assessing the relative performance of different algorithms. This tutorial will provide a comprehensive overview of methods used in multiobjective optimization for visualizing either individual approximation sets or the probabilistic distribution of multiple approximation sets through the empirical attainment function (EAF).
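For readers unfamiliar with the notion, extracting an approximation set from a collection of points amounts to keeping only the mutually non-dominated solutions. A minimal sketch, assuming minimization in every objective:

```python
def dominates(a, b):
    # a dominates b (minimization): a is no worse in every objective
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def approximation_set(points):
    # Keep only the points that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]

points = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = approximation_set(points)
# (3.0, 3.0) is dominated by (2.0, 2.0); the other three points are
# mutually non-dominated and form the approximation set.
```

It is sets of this kind, often in more than two or three objectives, that the visualization methods surveyed in the tutorial are designed to display.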

The tutorial will build on the well-attended 2016 tutorial edition and extend it with a taxonomy of visualization methods and ten additional visualization methods not presented last year: aggregation trees, distance-based and dominance-based mappings, level diagrams with asymmetric norm, moGrams, polar plots, tetrahedron coordinates model, trade-off region maps, treemaps, visualization following Shneiderman's mantra, and 3D-RadVis. We will demonstrate how each of the introduced methods visualizes the benchmark approximation sets. In addition, animations will be used where possible to show the additional capabilities of the visualization methods. Tutorial attendees will become aware of the many ways of visualizing approximation sets and the EAF, and learn about the advantages and limitations of each method.

## Meta-Model Assisted (Evolutionary) Optimization

Tutorial by Boris Naujoks, Jörg Stork, Martin Zaefferer, and Thomas Bartz-Beielstein

**14th International Conference on Parallel Problem Solving from Nature (PPSN 2016)**

**Edinburgh, 18 September 2016**

Meta-model assisted optimization is a well-recognized research area. When the evaluation of an objective function is expensive, meta-model assisted optimization yields huge improvements in optimization time or cost in a large number of different scenarios. Hence, it is extremely useful for numerous real-world applications. These include, but are not limited to, the optimization of designs like airfoils or ship propulsion systems, chemical processes, biogas plants, composite structures, and electromagnetic circuit design.

This tutorial is largely focused on evolutionary optimization assisted by meta-models and has the following aims. Firstly, we will provide a detailed understanding of the established concepts and prominent methods in meta-model assisted optimization, presenting an overview of current research and open issues in the field. Moreover, we aim for a practical approach: the tutorial should enable participants to apply up-to-date meta-modelling approaches to the actual problems at hand, and we will discuss typical problems and their solutions with the participants. Finally, the tutorial offers new perspectives by looking into areas where links to meta-modelling concepts have been established more recently, e.g., the application of meta-models in multi-objective optimization or in combinatorial search spaces.
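As a rough illustration of the core idea, a surrogate-assisted loop fits a cheap model to the points already evaluated and lets that model pre-screen candidates, so that only the most promising candidate is sent to the expensive objective at each iteration. The sketch below uses the simplest conceivable meta-model, 1-nearest-neighbour regression; the objective function, budgets and sampling scheme are illustrative assumptions, not methods endorsed by the tutorial.

```python
import random

def expensive(x):
    # Stand-in for a costly simulation; minimization target.
    return (x - 2.0) ** 2

def surrogate(x, archive):
    # Cheapest possible meta-model: predict the value of the nearest
    # already-evaluated point (1-nearest-neighbour regression).
    nearest = min(archive, key=lambda rec: abs(rec[0] - x))
    return nearest[1]

random.seed(2)
# Initial design: a few true (expensive) evaluations.
archive = [(x, expensive(x)) for x in (-4.0, 0.0, 4.0)]

for _ in range(15):
    # Generate many candidates, but spend only one true evaluation per
    # iteration: the surrogate pre-screens, the winner gets evaluated.
    candidates = [random.uniform(-5, 5) for _ in range(20)]
    promising = min(candidates, key=lambda x: surrogate(x, archive))
    archive.append((promising, expensive(promising)))

best_x, best_f = min(archive, key=lambda rec: rec[1])
```

Established approaches replace the nearest-neighbour predictor with models such as Kriging or radial basis functions and add an explicit exploration criterion, but the evaluate-few/screen-many structure is the same.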

**Please download the presentation here.**

## Visualization in Multiobjective Optimization

Tutorial by Bogdan Filipič and Tea Tušar

**The Genetic and Evolutionary Computation Conference (GECCO 2016)**

**Denver, 20 July 2016**

Multiobjective optimization algorithms usually produce a set of trade-off solutions approximating the Pareto front where no solution from the set is better than any other in all objectives (this is called an approximation set). While there exist many measures to assess the quality of approximation sets, no measure is as effective as visualization, especially if the Pareto front is known and can be visualized as well. This tutorial provides a comprehensive overview of methods used in multiobjective optimization for visualizing either individual approximation sets or the probabilistic distribution of multiple approximation sets through the empirical attainment function (EAF).

This tutorial is among the first comprehensive presentations of visualization methods for multiobjective optimization and is valuable to students, researchers, algorithm designers and end users dealing with multiobjective optimization problems.

Contents of the tutorial:

- Requirements for visualization methods
- Benchmark approximation sets that can be used for comparing visualization methods, in the same way that performance metrics and benchmark test problems are used for comparing optimization algorithms
- Two groups of methods used for visualizing individual approximation sets (for each method we will demonstrate its outcome on the benchmark approximation sets and discuss its advantages and limitations):
- General methods (not designed for multiobjective optimization): scatter plot matrix, bubble chart, radial coordinate visualization, parallel coordinates, heatmaps, Sammon mapping, neuroscale, self-organizing maps, principal component analysis and isomap
- Specific methods (able to handle the unique features of approximation sets): distance and distribution charts, interactive decision maps, hyper-space diagonal counting, two-stage mapping, level diagrams, hyper-radial visualization, Pareto shells, seriated heatmaps, multidimensional scaling and prosections
- Visualization of EAF values and differences

We demonstrate how each of the introduced methods visualizes the benchmark approximation sets. In addition, for some of the methods (such as parallel coordinates, prosections and direct volume rendering) animations are used to show their additional capabilities.

**Please download the presentation here.**

## Combining Metaheuristics with Mathematical Programming and Machine Learning

Tutorial by El-Ghazali Talbi

**6th International Conference on Metaheuristics and Nature Inspired Computing (META 2016)**

**Marrakech, 28 October 2016**

In recent years, interest in hybrid metaheuristics has risen considerably in the fields of optimization and machine learning. The best results found for many optimization problems in science and industry are obtained by hybrid optimization algorithms. Combinations of optimization tools such as metaheuristics, mathematical programming, constraint programming and machine learning have provided very efficient optimization algorithms. Four types of combinations are considered:

- Combining metaheuristics with complementary metaheuristics.
- Combining metaheuristics with exact methods from mathematical programming, which are mostly used in the operations research community.
- Combining metaheuristics with constraint programming approaches developed in the artificial intelligence community.
- Combining metaheuristics with machine learning and data mining techniques.
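The fourth combination can be illustrated, in a very reduced form, by an estimation-of-distribution algorithm: a probabilistic model is learned from the elite solutions of each generation and then sampled to produce the next population. The sketch below runs a minimal univariate model on a toy OneMax problem; all parameter values are illustrative assumptions.

```python
import random

def onemax(bits):               # toy objective: number of ones, maximized
    return sum(bits)

random.seed(3)
n, pop_size, elite = 20, 40, 10
probs = [0.5] * n               # learned model: per-bit probability of a 1

for _ in range(30):
    # Sample a population from the current probabilistic model.
    pop = [[1 if random.random() < p else 0 for p in probs]
           for _ in range(pop_size)]
    # "Machine learning" step: re-estimate the model from the elites.
    pop.sort(key=onemax, reverse=True)
    best = pop[:elite]
    probs = [sum(ind[i] for ind in best) / elite for i in range(n)]
    # Keep probabilities away from 0 and 1 so the model can still explore.
    probs = [min(max(p, 0.05), 0.95) for p in probs]

best_fitness = onemax(pop[0])
```

Here the "metaheuristic" is the sample-select loop and the "learning" is the repeated re-estimation of the model; richer hybrids use clustering, regression or classification models in the same role.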