Research Topics

My research interests are centered on various aspects of Distributed Parallel Computing, with a particular focus on interdisciplinary solutions to both fundamental and advanced research issues in this field. More specifically, I am interested in: 1) Parallel Programming & Parallel Architectures; 2) Autonomic Parallel Computing; 3) High-Performance Data Stream Processing.

Parallel Programming and Parallel Architectures

With the advent of modern multi-/many-core CPUs and GPUs, Parallel Computing has become an increasingly active field of research. Parallel applications are now pervasive in our everyday life: parallelism is available almost everywhere, from tiny devices such as our smartphones to desktop CPUs and high-performance computing architectures such as servers and clusters. I am interested in all aspects of the design and implementation of parallel programs. From a methodological viewpoint, the definition of parallel patterns, their cost models (both analytical and empirical), and the evaluation of their expressive power in modeling real-world applications are central research issues. Furthermore, the design of efficient run-time supports for parallel computing is a crucial point of my research. I am interested in the design of highly efficient mechanisms enabling fine-grained parallelism on modern CPUs and accelerators (GPUs and FPGAs). This aspect is challenging, since it requires a deep understanding of the properties and behavior of modern architectures: the exploitation of the memory hierarchy, cache coherence, interconnection networks, and the clever use of memory controllers and hardware multi-threading are key points affecting the design and implementation of the run-time support.
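As a simple illustration of what an analytical cost model for a parallel pattern looks like, the following sketch models the steady-state service time of a classical task-farm pattern (the function names and parameters are hypothetical, chosen for this example, and not taken from any specific framework):

```python
import math

def farm_service_time(t_emitter, t_worker, t_collector, n):
    """Steady-state service time of a task-farm with n workers:
    the pattern proceeds at the pace of its slowest stage."""
    return max(t_emitter, t_worker / n, t_collector)

def optimal_degree(t_emitter, t_worker, t_collector):
    """Smallest parallelism degree at which the workers stop
    being the bottleneck of the farm."""
    return max(1, math.ceil(t_worker / max(t_emitter, t_collector)))
```

For instance, with an emitter and collector taking 1 time unit per task and a worker taking 10, the model predicts that 10 workers suffice to remove the worker bottleneck; adding more would not improve throughput. Empirical cost models refine such predictions with measurements taken at run-time.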

Autonomic Parallel Computing

I am interested in interdisciplinary research merging Autonomic Computing methodologies with Parallel Computing. Distributed parallel applications executed in heterogeneous and dynamic environments such as Data Centers, Clouds, and Cyber-Physical Systems need advanced techniques to manage and maintain their Quality of Service in the face of unexpected execution conditions. The problem of developing and programming autonomic parallel applications is extremely attractive and presents several challenges. First of all, the autonomic behavior is enabled by: i) the definition of decision-making strategies providing a correspondence between execution conditions and corrective actions to be applied to the system; ii) a run-time support properly designed to enable autonomic reconfigurations, e.g., dynamic modifications of the parallelism degree of a parallel computation, changes in the mapping of execution processes/threads onto the corresponding physical resources, and switching between alternative operating modes (parallel versions). The first point can be addressed by studying advanced techniques inspired by Control Theory and Artificial Intelligence. The goal is to define decision-making strategies able to achieve important properties of the autonomic process, such as reconfiguration stability. Complementary to these aspects, efficient reconfiguration mechanisms are extremely important to enable the autonomic behavior. Reconfigurations should be as unintrusive as possible, minimizing the delay needed to apply any change in the current configuration while preserving the consistency and correctness of the computation.
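As a minimal sketch of a decision-making strategy (a deliberately simple threshold-based rule, not one of the Control-Theory or AI techniques studied in my work), the following hypothetical function decides one reconfiguration step of the parallelism degree, with a hysteresis band to favor reconfiguration stability:

```python
def adapt_degree(current_degree, observed_tput, target_tput,
                 max_degree, hysteresis=0.1):
    """One decision step of a threshold-based autonomic manager:
    scale up when throughput falls below the target, scale down
    when it exceeds the target by more than the hysteresis band.
    The band avoids oscillating reconfigurations near the target."""
    if observed_tput < target_tput * (1 - hysteresis):
        return min(current_degree + 1, max_degree)  # add a worker
    if observed_tput > target_tput * (1 + hysteresis):
        return max(current_degree - 1, 1)           # release a worker
    return current_degree                           # stable: no change
```

Applying the returned decision is then the job of the reconfiguration mechanisms of the run-time support, whose delay and intrusiveness determine how aggressively such a controller can react.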

High-Performance Data Stream Processing

I am investigating parallel approaches to Data Stream Processing (DSP) problems. DSP is an active research area that originated from the application of database queries to data streams (unbounded sequences of input values). Several important on-line and real-time applications can be modeled as DSP programs, including network traffic analysis, financial trading, data mining, and many others. Streaming applications are usually modeled as directed graphs, in which arcs are data streams and vertices are operators transforming inputs into outputs. The efficient execution of such applications is a very challenging research issue. DSP introduces complex correlations between stream elements, windowing methods (tumbling/sliding and count-based/time-based semantics), and other typical computational patterns that require novel parallelism models and related design and implementation techniques on emerging highly parallel architectures (multi-/many-core processing elements combined in large systems and heterogeneous clusters, potentially equipped with accelerators). The problem is further complicated by the strong performance requirements of typical DSP scenarios: high throughput and low latency are unavoidable constraints that demand careful parallelization. The aim of my work is to study innovative parallel programming techniques and run-time supports enabling High-Performance Data Stream Processing. The results of my research activities on this topic are continuously integrated into the WindFlow parallel library for data streaming on multicores and GPUs.
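To make the windowing semantics concrete, the sketch below shows a sequential count-based tumbling-window operator (an illustrative example, not the WindFlow API): it buffers a fixed number of tuples and applies an aggregation function to each full, non-overlapping window. Parallelizing such operators while preserving window semantics is precisely what makes high-performance DSP challenging.

```python
class TumblingCountWindow:
    """Count-based tumbling window: collects `size` consecutive tuples,
    applies `func` to each complete window, then starts an empty one."""
    def __init__(self, size, func):
        self.size = size
        self.func = func
        self.buf = []

    def process(self, item):
        """Feed one tuple; return the window result when a window
        closes, or None while the window is still filling up."""
        self.buf.append(item)
        if len(self.buf) == self.size:
            result = self.func(self.buf)
            self.buf = []  # tumbling: windows do not overlap
            return result
        return None
```

For example, feeding the integers 1..6 into `TumblingCountWindow(3, sum)` produces the two aggregates 6 and 15. A sliding window would instead retain part of the buffer between activations, and a time-based window would close on timestamps rather than counts.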

© Gabriele Mencagli