Research Topics

My research interests are centered around various aspects of Distributed Parallel Computing, with a particular focus on interdisciplinary solutions to basic and advanced research issues in this field. More specifically, I am interested in: 1) Parallel Programming & Parallel Architectures; 2) Autonomic Parallel Computing; 3) High-Performance Data Stream Processing.

Parallel Programming and Parallel Architectures

With the advent of modern multi-/many-core CPUs and GPUs, Parallel Computing has become an increasingly active field of research. Applications of Parallel Computing are pervasive in our everyday life: parallelism is available almost everywhere, from tiny devices such as our smartphones and PDAs, to desktop CPUs and high-performance computing architectures such as servers and clusters. I am interested in all aspects of the design and implementation of parallel programs. From a methodological viewpoint, the definition of parallel patterns, their cost models (both analytical and empirical), and the evaluation of their expressive power to model real-life applications is a central research issue on which I am currently focusing. Furthermore, the design of efficient run-time supports for such patterns is another crucial point of my research. I am interested in the design of highly efficient mechanisms enabling fine-grained parallelism on modern CPUs. This aspect is stimulating, since it requires a deep understanding of the properties and behavior of modern architectures: the exploitation of the memory hierarchy, cache coherence, interconnection networks, and the clever use of memory controllers and hardware multi-threading are key points affecting run-time support design and implementation. In conclusion, my goal is to contribute to the definition of novel programming models and tools enabling the development of parallel applications on both today’s and next-generation parallel architectures.
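As an illustration of the pattern-based approach, the following is a minimal sketch of a farm pattern, where independent tasks are distributed to a pool of workers and results are collected. This is a deliberately simple stand-in, not taken from any actual framework; the function names and the task body are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def worker(task):
    # Placeholder task body: any pure function over an input item.
    return sum(i * i for i in range(task))

def farm(tasks, parallelism):
    # Farm pattern: independent tasks are distributed to `parallelism`
    # workers and results are collected in input order. A first-order
    # analytical cost model is T_farm ~= max(T_emitter, T_worker / parallelism).
    # (A thread pool is used here for brevity; CPU-bound Python code
    # would use a process pool to sidestep the GIL.)
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(worker, tasks))
```

The cost model in the comment shows why fine-grained parallelism stresses the run-time support: as the per-task work T_worker shrinks, the emitter and the distribution mechanisms become the bottleneck.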

Autonomic Parallel Computing

Since my Ph.D. work, I have been interested in interdisciplinary research merging Autonomic Computing methodologies with Parallel Computing. Distributed parallel applications executed on heterogeneous and dynamic environments like Data Centers, Grids and Clouds need advanced techniques to manage and maintain their Quality of Service in the face of unexpected execution events and workloads. The problem of developing and programming autonomic parallel applications is extremely attractive and presents several challenging issues. First of all, the autonomic behavior is enabled by: i) the definition of decision-making strategies that map execution conditions to corresponding corrective actions on the system; ii) a run-time support properly designed to enable autonomic reconfigurations, e.g., dynamic modifications of the parallelism degree of a parallel computation, changes in the mapping of execution processes/threads onto the corresponding physical resources, and switches between alternative operating modes (parallel versions). The first point can be addressed by studying advanced techniques inspired by Control Theory and Artificial Intelligence. The goal is to define decision-making strategies able to achieve important properties of the autonomic process. Examples are reconfiguration stability (the number and frequency of reconfigurations), reconfiguration amplitude (minimizing the “size” of reconfigurations, e.g., in terms of the number of allocated/deallocated resources), and control optimality (achieving desired trade-offs between performance, memory usage, power consumption and efficiency of resource utilization). Complementary to these aspects, efficient reconfiguration mechanisms are extremely important to enact the autonomic behavior.
Reconfigurations should be as unintrusive as possible, minimizing the delay needed to apply a change in the current configuration while preserving the consistency and correctness of the computation. My goal is to contribute to both sides of the problem by studying advanced control-theoretic strategies and lightweight reconfiguration mechanisms enabling Autonomic Parallel Computing on emerging distributed parallel environments.
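To make the interplay between reconfiguration stability and amplitude concrete, here is a hedged sketch of a threshold-based decision-making strategy: a deliberately simple stand-in for the control-theoretic strategies discussed above. The thresholds, names and parameters are all illustrative assumptions.

```python
def decide_parallelism(current_degree, observed_util, lo=0.5, hi=0.9, max_degree=16):
    # Threshold-based decision making with a dead zone [lo, hi]:
    # - the dead zone limits the frequency of reconfigurations
    #   (reconfiguration stability);
    # - the +/-1 step bounds the number of resources changed per
    #   decision (reconfiguration amplitude).
    if observed_util > hi and current_degree < max_degree:
        return current_degree + 1   # scale out by one worker
    if observed_util < lo and current_degree > 1:
        return current_degree - 1   # scale in by one worker
    return current_degree           # inside the dead zone: no change
```

A real controller would replace these fixed thresholds with a model-based policy (e.g., a predictive controller) optimizing the trade-offs listed above, but even this sketch shows where stability and amplitude enter the decision logic.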

High-Performance Data Stream Processing

Recently, I have investigated parallel approaches to Data Stream Processing (DaSP) problems. DaSP is a vivid research area that originated from the application of database queries to data streams (unbounded sequences of input values) instead of classic relational tables. Several important on-line and real-time applications can be modeled as DaSP, including network traffic analysis, financial trading, data mining, and many others. DaSP applications can usually be modeled as directed graphs, in which arcs are data streams of tuples and vertices are operators (selection, projection, aggregate functions, sorting, join, skyline and other complex functions). The efficient execution of such queries on streams (referred to as Continuous Queries in the literature) is a very challenging research issue. DaSP introduces complex correlations between stream elements, windowing methods (with tumbling/sliding and count-based/time-based semantics) and other typical computational patterns that require novel parallelism models and related design and implementation techniques on emerging highly parallel architectures (multi-/many-core processing elements combined in large systems and heterogeneous clusters). The problem is further complicated by the strong performance requirements of typical DaSP scenarios: high throughput and low latency are unavoidable constraints that imply a careful parallelization design and the definition of novel cost models of parallel patterns supporting the parallelization of DaSP operators. The aim of my work is to study innovative parallel programming techniques and run-time supports enabling High-Performance Data Stream Processing.

I am Guest Editor of the Special Issue “Parallel Applications for Edge/Fog/In-situ Computing on the Next Generation Computing Platforms” scheduled to appear in the International Journal of High Performance Computing Applications (IJHPCA). The special issue is in progress.
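As a minimal illustration of the windowing semantics mentioned above, the following sketch implements a sequential count-based sliding-window average. The names and parameters are illustrative; a real DaSP run-time would parallelize this operator across keys or window partitions.

```python
from collections import deque

def sliding_window_avg(stream, size=4, slide=2):
    # Count-based sliding window: once `size` tuples have arrived,
    # emit the mean of the last `size` tuples every `slide` arrivals.
    # With slide == size the window follows the tumbling semantics
    # (non-overlapping windows).
    window, out = deque(maxlen=size), []
    for i, tup in enumerate(stream, start=1):
        window.append(tup)
        if i >= size and (i - size) % slide == 0:
            out.append(sum(window) / size)
    return out
```

For example, on the stream 1..8 with size 4 and slide 2 the operator emits the averages of (1,2,3,4), (3,4,5,6) and (5,6,7,8): consecutive windows overlap, which is exactly the correlation between stream elements that makes parallelizing these operators non-trivial.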

Gabriele Mencagli