Here you can find a consolidated (a.k.a. slowly updated) list of my publications. A frequently updated (and possibly noisy) list of works is available on my Google Scholar profile.
Below is a short list of highlighted publications from my recent activity.
Pasquali, Alex; Lomonaco, Vincenzo; Bacciu, Davide; Paganelli, Federica
Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit Conference Forthcoming
Proceedings of the IEEE International Conference on Machine Learning for Communication and Networking 2024 (IEEE ICMLCN 2024), IEEE, Forthcoming.
@conference{Pasquali2024,
title = {Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit},
author = {Alex Pasquali and Vincenzo Lomonaco and Davide Bacciu and Federica Paganelli},
year = {2024},
date = {2024-05-05},
urldate = {2024-05-05},
booktitle = {Proceedings of the IEEE International Conference on Machine Learning for Communication and Networking 2024 (IEEE ICMLCN 2024)},
publisher = {IEEE},
keywords = {},
pubstate = {forthcoming},
tppubtype = {conference}
}
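For context on how such a toolkit is typically driven: if DeepNetSlice exposes Gymnasium-style environments (an assumption here, not a statement about its actual API), an episode rollout for a placement agent would look roughly like the sketch below. The environment id and observation semantics are hypothetical.

# Hypothetical sketch of driving a slice-placement environment with a
# Gymnasium-style loop. The API shape is an assumption about how such
# a toolkit is used, not DeepNetSlice's documented interface.
import gymnasium as gym  # needed for the commented-out usage below

def rollout(env, policy=None, seed=0):
    # Run one episode; without a policy, sample placement actions
    # uniformly at random from the environment's action space.
    obs, info = env.reset(seed=seed)
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample() if policy is None else policy(obs)
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        done = terminated or truncated
    return total_reward

# Hypothetical usage -- the environment id below is made up for
# illustration and is not the toolkit's actual registry name:
# env = gym.make("NetworkSlicePlacement-v0")
# print(rollout(env))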
De Caro, Valerio; Di Mauro, Antonio; Bacciu, Davide; Gallicchio, Claudio
Communication-Efficient Ridge Regression in Federated Echo State Networks Conference
Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2023.
@conference{Caro2023,
title = {Communication-Efficient Ridge Regression in Federated Echo State Networks},
author = {Valerio De Caro and Antonio Di Mauro and Davide Bacciu and Claudio Gallicchio},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
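One plausible reading of "communication-efficient" in this setting: an echo state network readout is fit by ridge regression on reservoir states, so each client can transmit only the fixed-size products H^T H and H^T Y instead of raw data or state sequences, and the server solves the regression once. The sketch below illustrates that idea under these assumptions; it is not the paper's exact protocol.

import numpy as np

def local_statistics(H, Y):
    # H: (T, R) reservoir states collected on one client, Y: (T, O)
    # targets. Only the (R, R) and (R, O) products leave the device,
    # so the upload size is independent of the sequence length T.
    return H.T @ H, H.T @ Y

def server_readout(stats, lam=1e-3):
    # Sum the per-client statistics and solve the ridge regression
    # normal equations (A + lam * I) W = B for the shared readout W.
    A = sum(a for a, _ in stats)
    B = sum(b for _, b in stats)
    return np.linalg.solve(A + lam * np.eye(A.shape[0]), B)

# Toy usage with two simulated clients (50 reservoir units, 1 output).
rng = np.random.default_rng(0)
clients = [(rng.standard_normal((200, 50)), rng.standard_normal((200, 1)))
           for _ in range(2)]
W = server_readout([local_statistics(H, Y) for H, Y in clients])
print(W.shape)  # (50, 1)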
Lomonaco, Vincenzo; De Caro, Valerio; Gallicchio, Claudio; Carta, Antonio; Sardianos, Christos; Varlamis, Iraklis; Tserpes, Konstantinos; Coppola, Massimo; Marpena, Mina; Politi, Sevasti; Schoitsch, Erwin; Bacciu, Davide
AI-Toolkit: a Microservices Architecture for Low-Code Decentralized Machine Intelligence Conference
Proceedings of the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, 2023.
@conference{Lomonaco2023,
title = {AI-Toolkit: a Microservices Architecture for Low-Code Decentralized Machine Intelligence},
author = {Vincenzo Lomonaco and Valerio De Caro and Claudio Gallicchio and Antonio Carta and Christos Sardianos and Iraklis Varlamis and Konstantinos Tserpes and Massimo Coppola and Mina Marpena and Sevasti Politi and Erwin Schoitsch and Davide Bacciu},
year = {2023},
date = {2023-06-04},
urldate = {2023-06-04},
booktitle = {Proceedings of the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing},
abstract = {Artificial Intelligence and Machine Learning toolkits such as Scikit-learn, PyTorch and TensorFlow today provide a solid starting point for the rapid prototyping of R&D solutions. However, they can hardly be ported to heterogeneous, decentralised hardware and real-world production environments. A common practice involves outsourcing deployment to scalable cloud infrastructures such as Amazon SageMaker or Microsoft Azure. In this paper, we propose an open-source microservices-based architecture for decentralised machine intelligence that aims to bring R&D and deployment functionalities closer together, following a low-code approach. Such an approach guarantees flexible integration of cutting-edge functionalities while preserving complete control over the deployed solutions at negligible cost and maintenance effort.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
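To make the microservices idea concrete, below is a minimal sketch of what a single model-serving microservice could look like, using FastAPI and scikit-learn. This is purely illustrative and is not the AI-Toolkit codebase; the endpoint, payload schema, and stand-in model are assumptions.

# Illustrative sketch of one model-serving microservice; NOT the
# AI-Toolkit API. Endpoint name, payload schema and the stand-in
# model are assumptions made for the example.
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Train a stand-in model at startup; a real service would instead
# load a versioned artifact from a model registry.
X = np.random.default_rng(0).standard_normal((100, 4))
y = (X[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X, y)

class Features(BaseModel):
    values: list[float]  # one 4-dimensional input vector

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict(np.array([features.values]))
    return {"prediction": int(prediction[0])}

# Run with: uvicorn service:app --port 8000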
Corti, Francesco; Entezari, Rahim; Hooker, Sara; Bacciu, Davide; Saukh, Olga
Studying the impact of magnitude pruning on contrastive learning methods Workshop
ICML 2022 workshop on Hardware Aware Efficient Training (HAET 2022), 2022.
@workshop{Corti2022,
title = {Studying the impact of magnitude pruning on contrastive learning methods},
author = {Francesco Corti and Rahim Entezari and Sara Hooker and Davide Bacciu and Olga Saukh},
year = {2022},
date = {2022-07-23},
urldate = {2022-07-23},
booktitle = {ICML 2022 workshop on Hardware Aware Efficient Training (HAET 2022)},
abstract = {We study the impact of different pruning techniques on the representations learned by deep neural networks trained with contrastive loss functions. We find that at high sparsity levels, contrastive learning results in a higher number of misclassified examples than models trained with traditional cross-entropy loss. To understand this pronounced difference, we use metrics such as the number of PIEs, qscore and pdepth to measure the impact of pruning on the quality of the learned representation. Our analysis suggests that the pruning schedule matters. We find that the negative impact of sparsity on the quality of the learned representation is highest when pruning is introduced early in the training phase.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
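For readers unfamiliar with the mechanism under study, the sketch below shows global magnitude pruning with PyTorch's built-in utilities: the smallest-magnitude weights across all layers are zeroed out. The toy model and sparsity level are placeholders; the paper's contrastive training setup and its PIEs/qscore/pdepth metrics are not reproduced here.

# Minimal sketch of global magnitude pruning with PyTorch's built-in
# utilities; the tiny model and the 80% sparsity level are
# placeholders, not the paper's experimental setup.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Collect every Linear weight and zero out the 80% of entries with
# the smallest absolute value, ranked globally across all layers.
params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.8)

# Each pruned module now carries a binary mask that is re-applied on
# every forward pass, so the sparsity pattern survives further training.
zeros = sum(int((m.weight == 0).sum()) for m, _ in params)
total = sum(m.weight.numel() for m, _ in params)
print(f"global sparsity: {zeros / total:.1%}")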