Here you can find a consolidated (a.k.a. slowly updated) list of my publications. A frequently updated (and possibly noisy) list of works is available on my Google Scholar profile.
Please find below a short selection of highlighted publications from my recent activity.
@conference{Ceni2023,
title = {Improving Fairness via Intrinsic Plasticity in Echo State Networks},
author = {Andrea Ceni and Davide Bacciu and Valerio De Caro and Claudio Gallicchio and Luca Oneto},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Cossu2023,
title = {A Protocol for Continual Explanation of SHAP},
author = {Andrea Cossu and Francesco Spinnato and Riccardo Guidotti and Davide Bacciu},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{carta2021ex,
title = {Ex-Model: Continual Learning from a Stream of Trained Models},
author = {Antonio Carta and Andrea Cossu and Vincenzo Lomonaco and Davide Bacciu},
url = {https://arxiv.org/pdf/2112.06511.pdf, Arxiv},
year = {2022},
date = {2022-06-20},
urldate = {2022-06-20},
booktitle = {Proceedings of the CVPR 2022 Workshop on Continual Learning},
journal = {arXiv preprint arXiv:2112.06511},
pages = {3790-3799},
organization = {IEEE},
abstract = {Learning continually from non-stationary data streams is a challenging research topic of growing popularity in the last few years. Being able to learn, adapt, and generalize continually in an efficient, effective, and scalable way is fundamental for a sustainable development of Artificial Intelligent systems. However, an agent-centric view of continual learning requires learning directly from raw data, which limits the interaction between independent agents, the efficiency, and the privacy of current approaches. Instead, we argue that continual learning systems should exploit the availability of compressed information in the form of trained models. In this paper, we introduce and formalize a new paradigm named "Ex-Model Continual Learning" (ExML), where an agent learns from a sequence of previously trained models instead of raw data. We further contribute with three ex-model continual learning algorithms and an empirical setting comprising three datasets (MNIST, CIFAR-10 and CORe50), and eight scenarios, where the proposed algorithms are extensively tested. Finally, we highlight the peculiarities of the ex-model paradigm and we point out interesting future research directions. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{Bacciu2022,
title = {Explaining Deep Graph Networks via Input Perturbation},
author = {Davide Bacciu and Danilo Numeroso},
doi = {10.1109/TNNLS.2022.3165618},
year = {2022},
date = {2022-04-21},
urldate = {2022-04-21},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
abstract = {Deep Graph Networks are a family of machine learning models for structured data which are finding heavy application in life-sciences (drug repurposing, molecular property predictions) and on social network data (recommendation systems). The privacy and safety-critical nature of such domains motivates the need for developing effective explainability methods for this family of models. So far, progress in this field has been challenged by the combinatorial nature and complexity of graph structures. In this respect, we present a novel local explanation framework specifically tailored to graph data and deep graph networks. Our approach leverages reinforcement learning to generate meaningful local perturbations of the input graph, whose prediction we seek an interpretation for. These perturbed data points are obtained by optimising a multi-objective score taking into account similarities both at a structural level as well as at the level of the deep model outputs. By this means, we are able to populate a set of informative neighbouring samples for the query graph, which is then used to fit an interpretable model for the predictive behaviour of the deep network locally to the query graph prediction. We show the effectiveness of the proposed explainer by a qualitative analysis on two chemistry datasets, TOS and ESOL, and by quantitative results on a benchmark dataset for explanations, CYCLIQ.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{BacciuCAIP2021,
title = {Towards Functional Safety Compliance of Recurrent Neural Networks},
author = {Davide Bacciu and Antonio Carta and Daniele Di Sarli and Claudio Gallicchio and Vincenzo Lomonaco and Salvatore Petroni},
url = {https://aiforpeople.org/conference/assets/papers/CAIP21-P09.pdf, Open Access PDF},
year = {2021},
date = {2021-11-20},
booktitle = {Proceedings of the International Conference on AI for People (CAIP 2021)},
abstract = {Deploying Autonomous Driving systems requires facing some novel challenges for the Automotive industry. One of the most critical aspects that can severely compromise their deployment is Functional Safety. The ISO 26262 standard provides guidelines to ensure Functional Safety of road vehicles. However, this standard is not suitable to develop Artificial Intelligence-based systems such as systems based on Recurrent Neural Networks (RNNs). To address this issue, in this paper we propose a new methodology, composed of three steps. The first step is the robustness evaluation of the RNN against input perturbations. Then, a proper set of safety measures must be defined according to the model’s robustness, where less robust models will require stronger mitigation. Finally, the functionality of the entire system must be extensively tested according to Safety Of The Intended Functionality (SOTIF) guidelines, providing quantitative results about the occurrence of unsafe scenarios, and by evaluating appropriate Safety Performance Indicators.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@periodical{Bacciu2021e,
title = {Supporting Privacy Preservation by Distributed and Federated Learning on the Edge},
author = {Davide Bacciu and Patrizio Dazzi and Alberto Gotta},
editor = {Erwin Schoitsch and Georgios Mylonas},
url = {https://ercim-news.ercim.eu/en127/r-i/supporting-privacy-preservation-by-distributed-and-federated-learning-on-the-edge},
year = {2021},
date = {2021-09-30},
urldate = {2021-09-30},
issuetitle = {ERCIM News},
volume = {127},
keywords = {},
pubstate = {published},
tppubtype = {periodical}
}
@workshop{Macher2021,
title = {Dependable Integration Concepts for Human-Centric AI-based Systems},
author = {G. Macher and S. Akarmazyan and E. Armengaud and D. Bacciu and C. Calandra and H. Danzinger and P. Dazzi and C. Davalas and M.C. De Gennaro and A. Dimitriou and J. Dobaj and M. Dzambic and L. Giraudi and S. Girbal and D. Michail and R. Peroglio and R. Potenza and F. Pourdanesh and M. Seidl and C. Sardianos and K. Tserpes and J. Valtl and I. Varlamis and O. Veledar },
year = {2021},
date = {2021-09-07},
urldate = {2021-09-07},
booktitle = {Proceedings of the 40th International Conference on Computer Safety, Reliability and Security (SafeComp 2021)},
pages = {11-23},
publisher = {Springer},
note = {Invited discussion paper},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@workshop{Macher2021b,
title = {Dependable Integration Concepts for Human-Centric AI-based Systems},
author = {Georg Macher and Eric Armengaud and Davide Bacciu and Jürgen Dobaj and Maid Dzambic and Matthias Seidl and Omar Veledar},
year = {2021},
date = {2021-09-07},
booktitle = {Proceedings of the 16th International Workshop on Dependable Smart Embedded Cyber-Physical Systems and Systems-of-Systems (DECSoS 2021)},
abstract = {The rising demand to integrate adaptive, cloud-based and/or AI-based systems is also increasing the need for associated dependability concepts. However, the practical processes and methods covering the whole life cycle still need to be instantiated. The assurance of dependability continues to be an open issue with no common solution. That is especially the case for novel AI and/or dynamical runtime-based approaches. This work focuses on engineering methods and design patterns that support the development of dependable AI-based autonomous systems. The paper presents the related body of knowledge of the TEACHING project and multiple automotive domain regulation activities and industrial working groups. It also considers the dependable architectural concepts and their impactful applicability to different scenarios to ensure the dependability of AI-based Cyber-Physical Systems of Systems (CPSoS) in the automotive domain. The paper shines the light on potential paths for dependable integration of AI-based systems into the automotive domain through identified analysis methods and targets. },
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@conference{Bacciu2021d,
title = {TEACHING - Trustworthy autonomous cyber-physical applications through human-centred intelligence},
editor = {Davide Bacciu and Siranush Akarmazyan and Eric Armengaud and Manlio Bacco and George Bravos and Calogero Calandra and Emanuele Carlini and Antonio Carta and Pietro Cassara and Massimo Coppola and Charalampos Davalas and Patrizio Dazzi and Maria Carmela Degennaro and Daniele Di Sarli and Jürgen Dobaj and Claudio Gallicchio and Sylvain Girbal and Alberto Gotta and Riccardo Groppo and Vincenzo Lomonaco and Georg Macher and Daniele Mazzei and Gabriele Mencagli and Dimitrios Michail and Alessio Micheli and Roberta Peroglio and Salvatore Petroni and Rosaria Potenza and Farank Pourdanesh and Christos Sardianos and Konstantinos Tserpes and Fulvio Tagliabò and Jakob Valtl and Iraklis Varlamis and Omar Veledar},
doi = {10.1109/COINS51742.2021.9524099},
year = {2021},
date = {2021-08-23},
urldate = {2021-08-23},
booktitle = {Proceedings of the 2021 IEEE International Conference on Omni-Layer Intelligent Systems (COINS)},
abstract = {This paper discusses the perspective of the H2020 TEACHING project on the next generation of autonomous applications running in a distributed and highly heterogeneous environment comprising both virtual and physical resources spanning the edge-cloud continuum. TEACHING puts forward a human-centred vision leveraging the physiological, emotional, and cognitive state of the users as a driver for the adaptation and optimization of the autonomous applications. It does so by building a distributed, embedded and federated learning system complemented by methods and tools to enforce its dependability, security and privacy preservation. The paper discusses the main concepts of the TEACHING approach and singles out the main AI-related research challenges associated with it. Further, we provide a discussion of the design choices for the TEACHING system to tackle the aforementioned challenges.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@unpublished{Ferrari2021,
title = {Addressing Fairness, Bias and Class Imbalance in Machine Learning: the FBI-loss},
author = {Elisa Ferrari and Davide Bacciu},
url = {https://arxiv.org/abs/2105.06345, Arxiv},
year = {2021},
date = {2021-05-13},
urldate = {2021-05-13},
abstract = {Resilience to class imbalance and confounding biases, together with the assurance of fairness guarantees are highly desirable properties of autonomous decision-making systems with real-life impact. Many different targeted solutions have been proposed to address separately these three problems, however a unifying perspective seems to be missing. With this work, we provide a general formalization, showing that they are different expressions of unbalance. Following this intuition, we formulate a unified loss correction to address issues related to Fairness, Biases and Imbalances (FBI-loss). The correction capabilities of the proposed approach are assessed on three real-world benchmarks, each associated to one of the issues under consideration, and on a family of synthetic data in order to better investigate the effectiveness of our loss on tasks with different complexities. The empirical results highlight that the flexible formulation of the FBI-loss leads also to competitive performances with respect to literature solutions specialised for the single problems.},
howpublished = {Online on Arxiv},
keywords = {},
pubstate = {published},
tppubtype = {unpublished}
}