Research

On this page you can find information about my research profile: the research projects in which I have been involved, my editorial activities, and a brief description of my main research interests.

Research Projects

This is a list of past and current research projects:

RePhrase (H2020): “Refactoring Parallel Heterogeneous Resource-Aware Applications - a Software Engineering Approach” (01-04-2015, 36 months). The RePhrase project is aimed at producing new software engineering tools that help software developers tackle the ongoing challenges of multi-core computing.

REPARA (EU STREP FP7): “Reengineering and Enabling Performance and Power of Applications” (01-09-2013, 36 months). The REPARA project aimed at helping the transformation and deployment of new and legacy applications on parallel heterogeneous computing architectures while maintaining a balance between application performance, energy efficiency and source code maintainability.

GBG-Lab (Industrial Collaboration): Joint laboratory between the Department of Computer Science, University of Pisa, and the Italian company List-group S.p.A. (01-01-2012, 18 months). The collaboration focused on research into new architectural support and run-time mechanisms for the development of streaming computations (High-Frequency Trading) on heterogeneous and special-purpose architectures (e.g., network processors like Tilera CMPs and Netlogic networking boards).

In.Sy.Eme (Italian MIUR, FIRB): “Integrated System for Emergency” (10-10-2007, 36 months). The goal of the In.Sy.Eme research project was to define an efficient integrated system able to support emergency operations in different scenarios (e.g., earthquakes, floods, avalanches).

Editorial Activity

Guest Editor of the Special Issue “New Landscapes of the Data Stream Processing in the era of Fog Computing” scheduled to appear in the Elsevier journal Future Generation Computer Systems (FGCS). The CFP is still open!

Guest Editor of the Special Issue “Parallel Applications for Edge/Fog/In-situ Computing on the Next Generation Computing Platforms” scheduled to appear in the International Journal of High Performance Computing Applications (IJHPCA). The special issue is in progress.

Member of the Editorial Board of Computing and Informatics (CAI).

Member of the Editorial Board of Scalable Computing: Practice and Experience (SCPE).

Associate Editor of the International Journal of Cloud Applications and Computing (IJCAC), IGI Global.

Member of the Editorial Board of the International Journal of Adaptive, Resilient and Autonomic Systems (IJARAS), IGI Global.

Member of the Editorial Board of the International Journal of Computer & Software Engineering (IJCSE), Graphy Publications.

Organization Roles and Participation in Program Committees

Global Workshops Chair of the international conference Euro-Par 2018 (European Conference on Parallel Processing), Turin, Italy. The role involves the scientific and organizational responsibility for all the Euro-Par workshops.

Member of the program committee of the international conference ICBDSC 2018 (International Conference on Big Data and Smart Computing), Casablanca, Morocco.

Member of the program committee of the Track “Data Streams” of the international conference SAC 2018 (ACM Symposium on Applied Computing), Pau, France.

Member of the program committee of the Special Session “Parallel Numerical Methods and Libraries for Heterogeneous Multi/Manycores”, held in conjunction with PDP 2018 (Euromicro International Conference on Parallel, Distributed and Network-Based Processing), Cambridge, UK.

Member of the steering committee and of the program committee of the international conference PDP 2018 (Euromicro International Conference on Parallel, Distributed and Network-Based Processing), Cambridge, UK.

Member of the program committee of the international conference UCC 2017 (IEEE/ACM International Conference on Utility and Cloud Computing), Austin, Texas, USA.

Member of the program committee of the international workshop WAMCA 2017 (8th Workshop on Applications for Multi-core Architectures), Campinas, Brazil.

Co-chair of the international workshop MPP 2017 (6th Workshop on Parallel Programming Models - Special Edition on Fog and In-Situ Computing), Campinas, Brazil.

Member of the program committee of the international conference DependSys 2017 (International Symposium on Dependability in Sensor, Cloud, and Big Data Systems and Applications), Guangzhou, China.

Member of the program committee of the international conference ISPA 2017 (IEEE International Symposium on Parallel and Distributed Processing with Applications), Guangzhou, China.

Member of the program committee of the international conference CANDAR 2017 (International Symposium on Computing and Networking), Aomori, Japan.

Member of the program committee of the international workshop ASBDA 2017 (Autonomic Systems for Big Data Analytics), held in conjunction with ICCAC 2017 (International Conference on Cloud and Autonomic Computing), Tucson, USA.

Co-chair of the international workshop Auto-DaSP 2017 (Autonomic Solutions for Parallel and Distributed Data Stream Processing), held in conjunction with Euro-Par 2017 (European Conference on Parallel Processing), Santiago de Compostela, Spain.

Co-chair of the international workshop APPMM 2017 (Advancements in Parallel Programming Models and Frameworks for the Multi-/Many-core Era), held in conjunction with HPCS 2017 (High Performance Computing & Simulations), Genova, Italy.

Publicity chair of the international conference ScalCom 2017 (IEEE International Conference on Scalable Computing and Communications), San Francisco, USA.

Member of the program committee of the international conference HPCS 2017 (High Performance Computing & Simulations), Genova, Italy.

Member of the program committee of the international conference ICCAC 2017 (International Conference on Cloud and Autonomic Computing), Tucson, USA.

Member of the program committee of the international conference HPCS 2016 (High Performance Computing & Simulations), Innsbruck, Austria.

Member of the program committee of the international conference ScalCom 2015 (IEEE International Conference on Scalable Computing and Communications), Beijing, China.

Member of the program committee of the international workshop OrmaCloud 2014 (International Workshop on Optimization techniques for Resources Management in Clouds), Vancouver, Canada.

Member of the program committee of the international workshop OrmaCloud 2013 (International Workshop on Optimization techniques for Resources Management in Clouds), New York City, USA.

Research Interests

My research interests are centered on various aspects of Distributed Parallel Computing, with a particular focus on interdisciplinary solutions to basic and advanced research issues in this field. More specifically, I am interested in: 1) Parallel Programming & Parallel Architectures; 2) Autonomic Parallel Computing; 3) High-Performance Data Stream Processing.

Parallel Programming & Parallel Architectures

With the advent of modern multi-/many-core CPUs and GPUs, Parallel Computing has become an increasingly active field of research. Applications of Parallel Computing have become increasingly pervasive in our everyday life: parallelism is available almost everywhere, from tiny devices such as our smartphones and PDAs, to desktop CPUs and high-performance computing architectures such as servers and clusters. I am interested in every aspect of the design and implementation of parallel programs. From a methodological viewpoint, the definition of parallel patterns, their cost models (both analytical and empirical), and the evaluation of their expressive power in modeling real-life applications are central research issues on which I am currently focusing. Furthermore, the design of efficient run-time supports for such patterns is another crucial point of my research. I am interested in the design of highly efficient mechanisms enabling fine-grained parallelism on modern CPUs. This aspect is stimulating, since it requires a deep understanding of the properties and the behavior of modern architectures: the exploitation of the memory hierarchy, cache coherence, interconnection networks, and the clever use of memory controllers and hardware multi-threading are key points affecting the design and implementation of run-time supports. In conclusion, my goal is to contribute to the definition of novel programming models and tools enabling the development of parallel applications on both today’s and next-generation parallel architectures.
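
As a simple illustration of what an analytical cost model of a parallel pattern can look like, the following C++ sketch (a minimal example with hypothetical per-item times, not a real implementation) computes the ideal service time of a classic task-farm pattern and the smallest parallelism degree that removes the worker bottleneck.

#include <algorithm>
#include <cmath>
#include <cstdio>

// Ideal service time of a farm with n workers: the slowest stage dominates.
// T_e, T_w, T_c are the per-item times of emitter, generic worker and collector.
double farm_service_time(double T_e, double T_w, double T_c, int n) {
    return std::max({T_e, T_w / n, T_c});
}

// Smallest parallelism degree at which the workers stop being the bottleneck.
int optimal_degree(double T_e, double T_w, double T_c) {
    return static_cast<int>(std::ceil(T_w / std::max(T_e, T_c)));
}

int main() {
    double T_e = 1.0, T_w = 8.0, T_c = 2.0;  // hypothetical per-item times (e.g., microseconds)
    int n = optimal_degree(T_e, T_w, T_c);   // here: ceil(8/2) = 4 workers
    std::printf("n = %d, service time = %.2f\n", n, farm_service_time(T_e, T_w, T_c, n));
    return 0;
}

An empirical cost model would play the same role, but replace the closed-form expressions with measurements collected at run-time.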

Autonomic Parallel Computing

Since my Ph.D. work, I have been interested in interdisciplinary research merging Autonomic Computing methodologies with Parallel Computing. Distributed parallel applications executed in heterogeneous and dynamic environments such as Data Centers, Grids and Clouds need advanced techniques to manage and maintain their Quality of Service in the face of unexpected execution events and workload variations. The problem of developing and programming autonomic parallel applications is extremely attractive and presents several challenging issues. First of all, the autonomic behavior is enabled by: i) the definition of decision-making strategies that map execution conditions onto corresponding corrective actions on the system; ii) a run-time support properly designed to enable autonomic reconfigurations, e.g., dynamic modifications of the parallelism degree of a parallel computation, changes in the mapping of execution processes/threads onto the corresponding physical resources, and switches between alternative operating modes (parallel versions). The first point can be addressed by studying advanced techniques inspired by Control Theory and Artificial Intelligence. The goal is to define decision-making strategies able to achieve important properties of the autonomic process. Examples are reconfiguration stability (number and frequency of reconfigurations), reconfiguration amplitude (minimizing the “size” of reconfigurations, e.g., in terms of the number of allocated/deallocated resources), and control optimality (achieving desired trade-offs between performance, memory usage, power consumption and efficiency of resource utilization). Complementary to these aspects, efficient reconfiguration mechanisms are extremely important to enact the autonomic behavior. Reconfigurations should be as unintrusive as possible, minimizing the delay needed to apply a change to the current configuration while preserving the consistency and correctness of the computation. My goal is to contribute to both sides of the problem by studying advanced control-theoretic strategies and lightweight reconfiguration mechanisms enabling Autonomic Parallel Computing in emerging distributed parallel environments.
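
To give a concrete flavor of such decision-making strategies, the following C++ sketch (a deliberately simplified, hypothetical controller, not one of the strategies discussed above) adapts the parallelism degree of a parallel stage to track a target throughput, using a tolerance band as a crude way to limit the frequency of reconfigurations.

#include <algorithm>
#include <cstdio>

// Simplified MAPE-style controller: analyze the last monitoring sample and
// plan a new parallelism degree; the run-time support would enact it.
struct Controller {
    double target_throughput;   // items/s the application should sustain
    double band;                // relative tolerance before reconfiguring
    int degree;                 // current parallelism degree
    int min_degree, max_degree;

    int plan(double measured_throughput) {
        double err = (target_throughput - measured_throughput) / target_throughput;
        if (err >  band) degree = std::min(degree + 1, max_degree);  // under-performing: add a worker
        if (err < -band) degree = std::max(degree - 1, min_degree);  // over-provisioned: remove a worker
        return degree;
    }
};

int main() {
    Controller c{1000.0, 0.10, 4, 1, 32};                // hypothetical target, band and bounds
    double samples[] = {820.0, 910.0, 1005.0, 1200.0};   // hypothetical monitored throughputs
    for (double s : samples)
        std::printf("measured=%.0f -> degree=%d\n", s, c.plan(s));
    return 0;
}

Real strategies are of course far richer (e.g., model-predictive or learning-based policies reasoning about stability, amplitude and optimality), but they plug into the same monitor-analyze-plan-execute loop.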

High-Performance Data Stream Processing

Recently, I have investigated parallel approaches to Data Stream Processing (DaSP) problems. DaSP is a vibrant research area that originated from the application of database queries to data streams (unlimited sequences of input values) rather than to classic relational tables. Several important on-line and real-time applications can be modeled as DaSP problems, including network traffic analysis, financial trading, data mining, and many others. DaSP applications can usually be modeled as directed graphs, in which arcs are data streams of tuples and vertices are operators (selection, projection, aggregate functions, sorting, join, skyline and other complex functions). The efficient execution of such queries on streams (referred to as Continuous Queries in the literature) is a very challenging research issue. DaSP introduces complex correlations between stream elements, windowing methods (tumbling/sliding and count-based/time-based semantics) and other typical computational patterns that require novel parallelism models and related design and implementation techniques on emerging highly parallel architectures (multi-/many-core processing elements combined in large systems and heterogeneous clusters). The problem is further complicated by the strong performance requirements of typical DaSP scenarios: high throughput and low latency are unavoidable constraints that imply a careful parallelization design and the definition of novel cost models of parallel patterns supporting the parallelization of DaSP operators. The aim of my work is to study innovative parallel programming techniques and run-time supports enabling High-Performance Data Stream Processing.
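
As a concrete, simplified example of the count-based sliding-window semantics mentioned above, the following C++ sketch (illustrative only; the tuple type and the aggregate are hypothetical) emits the average of the most recent tuples after every fixed number of new arrivals.

#include <cstddef>
#include <cstdio>
#include <deque>
#include <numeric>

// Count-based sliding window: keeps the last `size` tuples and triggers an
// aggregate (here, the average) every `slide` new arrivals.
class SlidingAverage {
    std::deque<double> window;
    const std::size_t size, slide;
    std::size_t arrived = 0;
public:
    SlidingAverage(std::size_t size, std::size_t slide) : size(size), slide(slide) {}

    // Process one input tuple; return true (and set `out`) when a result fires.
    bool process(double value, double &out) {
        window.push_back(value);
        if (window.size() > size) window.pop_front();       // expire the oldest tuple
        if (++arrived % slide != 0 || window.size() < size) return false;
        out = std::accumulate(window.begin(), window.end(), 0.0) / window.size();
        return true;
    }
};

int main() {
    SlidingAverage op(4, 2);   // window of 4 tuples, sliding by 2
    double inputs[] = {1, 2, 3, 4, 5, 6, 7, 8};
    double out;
    for (double v : inputs)
        if (op.process(v, out)) std::printf("window average = %.2f\n", out);
    return 0;
}

A time-based window would expire tuples by timestamp rather than by count, and a parallel implementation would partition or replicate windows across workers, which is exactly where the parallel patterns and cost models discussed above come into play.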

Gabriele Mencagli 2017