Massidda, Riccardo; Cinquini, Martina; Landolfi, Francesco Constraint-Free Structure Learning with Smooth Acyclic Orientations Conference The Twelfth International Conference on Learning Representations, 2024.
Pasquali, Alex; Lomonaco, Vincenzo; Bacciu, Davide; Paganelli, Federica Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit Conference Proceedings of the IEEE International Conference on Machine Learning for Communication and Networking 2024 (IEEE ICMLCN 2024), IEEE, Forthcoming.
Ninniri, Matteo; Podda, Marco; Bacciu, Davide Classifier-free graph diffusion for molecular property targeting Workshop 4th workshop on Graphs and more Complex structures for Learning and Reasoning (GCLR) at AAAI 2024, 2024.
Lepri, Marco; Bacciu, Davide; Santina, Cosimo Della Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems Journal Article In: IEEE Control Systems Letters, 2023.
Gravina, Alessio; Lovisotto, Giulio; Gallicchio, Claudio; Bacciu, Davide; Grohnfeldt, Claas Effective Non-Dissipative Propagation for Continuous-Time Dynamic Graphs Workshop Temporal Graph Learning Workshop, NeurIPS 2023, 2023.
Georgiev, Dobrik; Numeroso, Danilo; Bacciu, Davide; Lio, Pietro Neural Algorithmic Reasoning for Combinatorial Optimisation Proceeding PMLR, 2023.
Errica, Federico; Bacciu, Davide; Micheli, Alessio PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs Journal Article In: Journal of Open Source Software, vol. 8, no. 90, pp. 5713, 2023.
Errica, Federico; Gravina, Alessio; Bacciu, Davide; Micheli, Alessio Hidden Markov Models for Temporal Graph Representation Learning Conference Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2023.
Landolfi, Francesco; Bacciu, Davide; Numeroso, Danilo A Tropical View of Graph Neural Networks Conference Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2023.
Ceni, Andrea; Bacciu, Davide; Caro, Valerio De; Gallicchio, Claudio; Oneto, Luca Improving Fairness via Intrinsic Plasticity in Echo State Networks Conference Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2023.
Cossu, Andrea; Spinnato, Francesco; Guidotti, Riccardo; Bacciu, Davide A Protocol for Continual Explanation of SHAP Conference Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2023.
Caro, Valerio De; Mauro, Antonio Di; Bacciu, Davide; Gallicchio, Claudio Communication-Efficient Ridge Regression in Federated Echo State Networks Conference Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2023.
Bacciu, Davide; Errica, Federico; Micheli, Alessio; Navarin, Nicolò; Pasa, Luca; Podda, Marco; Zambon, Daniele Graph Representation Learning Conference Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2023.
Ceni, Andrea; Cossu, Andrea; Liu, Jingyue; Stölzle, Maximilian; Santina, Cosimo Della; Gallicchio, Claudio; Bacciu, Davide Randomly Coupled Oscillators Workshop Proceedings of the ECML/PKDD Workshop on Deep Learning meets Neuromorphic Hardware, 2023.
Gravina, Alessio; Gallicchio, Claudio; Bacciu, Davide Non-Dissipative Propagation by Randomized Anti-Symmetric Deep Graph Networks Workshop Proceedings of the ECML/PKDD Workshop on Deep Learning meets Neuromorphic Hardware, 2023.
Cosenza, Emanuele; Valenti, Andrea; Bacciu, Davide Graph-based Polyphonic Multitrack Music Generation Conference Proceedings of the 32nd INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI 2023), 2023.
Hemati, Hamed; Lomonaco, Vincenzo; Bacciu, Davide; Borth, Damian Partial Hypernetworks for Continual Learning Conference Proceedings of the International Conference on Lifelong Learning Agents (CoLLAs 2023), Proceedings of Machine Learning Research, 2023.
Hemati, Hamed; Cossu, Andrea; Carta, Antonio; Hurtado, Julio; Pellegrini, Lorenzo; Bacciu, Davide; Lomonaco, Vincenzo; Borth, Damian Class-Incremental Learning with Repetition Conference Proceedings of the International Conference on Lifelong Learning Agents (CoLLAs 2023), Proceedings of Machine Learning Research, 2023.
Caro, Valerio De; Bacciu, Davide; Gallicchio, Claudio Decentralized Plasticity in Reservoir Dynamical Networks for Pervasive Environments Workshop Proceedings of the 2023 ICML Workshop on Localized Learning: Decentralized Model Updates via Non-Global Objectives, 2023.
Ceni, Andrea; Cossu, Andrea; Liu, Jingyue; Stölzle, Maximilian; Santina, Cosimo Della; Gallicchio, Claudio; Bacciu, Davide Randomly Coupled Oscillators for Time Series Processing Workshop Proceedings of the 2023 ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, 2023.
Massidda, Riccardo; Landolfi, Francesco; Cinquini, Martina; Bacciu, Davide Differentiable Causal Discovery with Smooth Acyclic Orientations Workshop Proceedings of the 2023 ICML Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators, 2023.
Simone, Lorenzo; Bacciu, Davide ECGAN: generative adversarial network for electrocardiography Conference Proceedings of Artificial Intelligence In Medicine 2023 (AIME 2023), 2023.
Lomonaco, Vincenzo; Caro, Valerio De; Gallicchio, Claudio; Carta, Antonio; Sardianos, Christos; Varlamis, Iraklis; Tserpes, Konstantinos; Coppola, Massimo; Marpena, Mina; Politi, Sevasti; Schoitsch, Erwin; Bacciu, Davide AI-Toolkit: a Microservices Architecture for Low-Code Decentralized Machine Intelligence Conference Proceedings of 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, 2023.
Caro, Valerio De; Danzinger, Herbert; Gallicchio, Claudio; Könczöl, Clemens; Lomonaco, Vincenzo; Marmpena, Mina; Politi, Sevasti; Veledar, Omar; Bacciu, Davide Prediction of Driver's Stress Affection in Simulated Autonomous Driving Scenarios Conference Proceedings of 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, 2023.
Gravina, Alessio; Bacciu, Davide; Gallicchio, Claudio Anti-Symmetric DGN: a stable architecture for Deep Graph Networks Conference Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023), 2023.
Numeroso, Danilo; Bacciu, Davide; Veličković, Petar Dual Algorithmic Reasoning Conference Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023), 2023, (Notable Spotlight paper).
Massidda, Riccardo; Geiger, Atticus; Icard, Thomas; Bacciu, Davide Causal Abstraction with Soft Interventions Conference Proceedings of the 2nd Conference on Causal Learning and Reasoning (CLeaR 2023), PMLR, 2023.
Gravina, Alessio; Bacciu, Davide; Gallicchio, Claudio Non-Dissipative Propagation by Anti-Symmetric Deep Graph Networks Workshop Proceedings of the Ninth International Workshop on Deep Learning on Graphs: Method and Applications (DLG-AAAI’23), 2023, (Winner of the Best Student Paper Award at DLG-AAAI23).
Bacciu, Davide; Conte, Alessio; Landolfi, Francesco Generalizing Downsampling from Regular Data to Graphs Conference Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023.
Bacciu, Davide; Errica, Federico; Gravina, Alessio; Madeddu, Lorenzo; Podda, Marco; Stilo, Giovanni Deep Graph Networks for Drug Repurposing with Multi-Protein Targets Journal Article In: IEEE Transactions on Emerging Topics in Computing, 2023.
Lanciano, Giacomo; Galli, Filippo; Cucinotta, Tommaso; Bacciu, Davide; Passarella, Andrea Extending OpenStack Monasca for Predictive Elasticity Control Journal Article In: Big Data Mining and Analytics, 2023.
Caro, Valerio De; Gallicchio, Claudio; Bacciu, Davide Continual adaptation of federated reservoirs in pervasive environments Journal Article In: Neurocomputing, pp. 126638, 2023, ISSN: 0925-2312.
Lanciano, Giacomo; Andreoli, Remo; Cucinotta, Tommaso; Bacciu, Davide; Passarella, Andrea A 2-phase Strategy For Intelligent Cloud Operations Journal Article In: IEEE Access, pp. 1-1, 2023.
Caro, Valerio De; Gallicchio, Claudio; Bacciu, Davide Federated Adaptation of Reservoirs via Intrinsic Plasticity Conference Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), 2022.
Bacciu, Davide; Errica, Federico; Navarin, Nicolò; Pasa, Luca; Zambon, Daniele Deep Learning for Graphs Conference Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), 2022.
Valenti, Andrea; Bacciu, Davide Modular Representations for Weak Disentanglement Conference Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), 2022.
Matteoni, Federico; Cossu, Andrea; Gallicchio, Claudio; Lomonaco, Vincenzo; Bacciu, Davide Continual Learning for Human State Monitoring Conference Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), 2022.
Massidda, Riccardo; Bacciu, Davide Knowledge-Driven Interpretation of Convolutional Neural Networks Conference Proceedings of the 2022 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2022), 2022.
Lagani, Gabriele; Bacciu, Davide; Gallicchio, Claudio; Falchi, Fabrizio; Gennaro, Claudio; Amato, Giuseppe Deep Features for CBIR with Scarce Data using Hebbian Learning Conference Proc. of the 19th International Conference on Content-based Multimedia Indexing (CBMI2022), 2022.
Corti, Francesco; Entezari, Rahim; Hooker, Sara; Bacciu, Davide; Saukh, Olga Studying the impact of magnitude pruning on contrastive learning methods Workshop ICML 2022 workshop on Hardware Aware Efficient Training (HAET 2022), 2022.
Sangermano, Matteo; Carta, Antonio; Cossu, Andrea; Lomonaco, Vincenzo; Bacciu, Davide Sample Condensation in Online Continual Learning Conference Proceedings of the 2022 IEEE World Congress on Computational Intelligence, IEEE, 2022.
Valenti, Andrea; Bacciu, Davide Leveraging Relational Information for Learning Weakly Disentangled Representations Conference Proceedings of the 2022 IEEE World Congress on Computational Intelligence, IEEE, 2022.
Castellana, Daniele; Errica, Federico; Bacciu, Davide; Micheli, Alessio The Infinite Contextual Graph Markov Model Conference Proceedings of the 39th International Conference on Machine Learning (ICML 2022), 2022.
Semola, Rudy; Lomonaco, Vincenzo; Bacciu, Davide Continual-Learning-as-a-Service (CLaaS): On-Demand Efficient Adaptation of Predictive Models Workshop Proc. of the 1st International Workshop on Pervasive Artificial Intelligence, 2022 IEEE World Congress on Computational Intelligence, 2022.
Dukic, Haris; Mokarizadeh, Shahab; Deligiorgis, Georgios; Sepe, Pierpaolo; Bacciu, Davide; Trincavelli, Marco Inductive-Transductive Learning for Very Sparse Fashion Graphs Journal Article In: Neurocomputing, 2022, ISSN: 0925-2312.
Sattar, Asma; Bacciu, Davide Graph Neural Network for Context-Aware Recommendation Journal Article In: Neural Processing Letters, 2022.
Carta, Antonio; Cossu, Andrea; Lomonaco, Vincenzo; Bacciu, Davide Ex-Model: Continual Learning from a Stream of Trained Models Conference Proceedings of the CVPR 2022 Workshop on Continual Learning, IEEE, 2022.
Serramazza, Davide Italo; Bacciu, Davide Learning image captioning as a structured transduction task Conference Proceedings of the 23rd International Conference on Engineering Applications of Neural Networks (EANN 2022), vol. 1600, Communications in Computer and Information Science, Springer, 2022.
Lucchesi, Nicolò; Carta, Antonio; Lomonaco, Vincenzo; Bacciu, Davide Avalanche RL: a Continual Reinforcement Learning Library Conference Proceedings of the 21st International Conference on Image Analysis and Processing (ICIAP 2021), 2022.
Ferrari, Elisa; Gargani, Luna; Barbieri, Greta; Ghiadoni, Lorenzo; Faita, Francesco; Bacciu, Davide A causal learning framework for the analysis and interpretation of COVID-19 clinical data Journal Article In: Plos One, vol. 17, no. 5, 2022.
2024
@conference{cosmo2024,
title = {Constraint-Free Structure Learning with Smooth Acyclic Orientations},
author = {Riccardo Massidda and Martina Cinquini and Francesco Landolfi},
url = {https://openreview.net/forum?id=KWO8LSUC5W},
year = {2024},
date = {2024-05-06},
urldate = {2024-01-01},
booktitle = {The Twelfth International Conference on Learning Representations},
abstract = {The structure learning problem consists of fitting data generated by a Directed Acyclic Graph (DAG) to correctly reconstruct its arcs. In this context, differentiable approaches constrain or regularize an optimization problem with a continuous relaxation of the acyclicity property. The computational cost of evaluating graph acyclicity is cubic on the number of nodes and significantly affects scalability. In this paper, we introduce COSMO, a constraint-free continuous optimization scheme for acyclic structure learning. At the core of our method lies a novel differentiable approximation of an orientation matrix parameterized by a single priority vector. Differently from previous works, our parameterization fits a smooth orientation matrix and the resulting acyclic adjacency matrix without evaluating acyclicity at any step. Despite this absence, we prove that COSMO always converges to an acyclic solution. In addition to being asymptotically faster, our empirical analysis highlights how COSMO performance on graph reconstruction compares favorably with competing structure learning methods.
},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Pasquali2024,
title = {Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit},
author = {Alex Pasquali and Vincenzo Lomonaco and Davide Bacciu and Federica Paganelli},
year = {2024},
date = {2024-05-05},
urldate = {2024-05-05},
booktitle = {Proceedings of the IEEE International Conference on Machine Learning for Communication and Networking 2024 (IEEE ICMLCN 2024)},
publisher = {IEEE},
keywords = {},
pubstate = {forthcoming},
tppubtype = {conference}
}
@workshop{Ninniri2024,
title = {Classifier-free graph diffusion for molecular property targeting},
author = {Matteo Ninniri and Marco Podda and Davide Bacciu},
url = {https://arxiv.org/abs/2312.17397, Arxiv},
year = {2024},
date = {2024-02-27},
booktitle = {4th workshop on Graphs and more Complex structures for Learning and Reasoning (GCLR) at AAAI 2024},
abstract = {This work focuses on the task of property targeting: that is, generating molecules conditioned on target chemical properties to expedite candidate screening for novel drug and materials development. DiGress is a recent diffusion model for molecular graphs whose distinctive feature is allowing property targeting through classifier-based (CB) guidance. While CB guidance may work to generate molecular-like graphs, we hint at the fact that its assumptions apply poorly to the chemical domain. Based on this insight we propose a classifier-free DiGress (FreeGress), which works by directly injecting the conditioning information into the training process. CF guidance is convenient given its less stringent assumptions and since it does not require to train an auxiliary property regressor, thus halving the number of trainable parameters in the model. We empirically show that our model yields up to 79% improvement in Mean Absolute Error with respect to DiGress on property targeting tasks on QM9 and ZINC-250k benchmarks. As an additional contribution, we propose a simple yet powerful approach to improve chemical validity of generated samples, based on the observation that certain chemical properties such as molecular weight correlate with the number of atoms in molecules. },
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
2023
@article{lepri2023neural,
title = {Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems},
author = {Marco Lepri and Davide Bacciu and Cosimo Della Santina},
year = {2023},
date = {2023-12-21},
urldate = {2023-01-01},
journal = {IEEE Control Systems Letters},
publisher = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@workshop{Gravina2023b,
title = {Effective Non-Dissipative Propagation for Continuous-Time Dynamic Graphs},
author = {Alessio Gravina and Giulio Lovisotto and Claudio Gallicchio and Davide Bacciu and Claas Grohnfeldt},
url = {https://openreview.net/forum?id=zAHFC2LNEe, PDF},
year = {2023},
date = {2023-12-11},
urldate = {2023-12-11},
booktitle = {Temporal Graph Learning Workshop, NeurIPS 2023},
abstract = {Recent research on Deep Graph Networks (DGNs) has broadened the domain of learning on graphs to real-world systems of interconnected entities that evolve over time. This paper addresses prediction problems on graphs defined by a stream of events, possibly irregularly sampled over time, generally referred to as Continuous-Time Dynamic Graphs (C-TDGs). While many predictive problems on graphs may require capturing interactions between nodes at different distances, existing DGNs for C-TDGs are not designed to propagate and preserve long-range information - resulting in suboptimal performance. In this work, we present Continuous-Time Graph Anti-Symmetric Network (CTAN), a DGN for C-TDGs designed within the ordinary differential equations framework that enables efficient propagation of long-range dependencies. We show that our method robustly performs stable and non-dissipative information propagation over dynamically evolving graphs, where the number of ODE discretization steps allows scaling the propagation range. We empirically validate the proposed approach on several real and synthetic graph benchmarks, showing that CTAN leads to improved performance while enabling the propagation of long-range information},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@proceedings{Georgiev2023,
title = {Neural Algorithmic Reasoning for Combinatorial Optimisation},
author = {Dobrik Georgiev and Danilo Numeroso and Davide Bacciu and Pietro Lio },
year = {2023},
date = {2023-11-27},
urldate = {2023-11-27},
booktitle = {Proceedings of the Learning on Graphs Conference (LOG 2023)},
publisher = {PMLR},
abstract = { Solving NP-hard/complete combinatorial problems with neural networks is a challenging research area that aims to surpass classical approximate algorithms. The long-term objective is to outperform hand-designed heuristics for NP-hard/complete problems by learning to generate superior solutions solely from training data. Current neural-based methods for solving CO problems often overlook the inherent "algorithmic" nature of the problems. In contrast, heuristics designed for CO problems, e.g. TSP, frequently leverage well-established algorithms, such as those for finding the minimum spanning tree. In this paper, we propose leveraging recent advancements in neural algorithmic reasoning to improve the learning of CO problems. Specifically, we suggest pre-training our neural model on relevant algorithms before training it on CO instances. Our results demonstrate that, using this learning setup, we achieve superior performance compared to non-algorithmically informed deep learning models.},
keywords = {},
pubstate = {published},
tppubtype = {proceedings}
}
@article{errica2023pydgn,
title = {PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs},
author = {Federico Errica and Davide Bacciu and Alessio Micheli},
year = {2023},
date = {2023-10-31},
urldate = {2023-01-01},
journal = {Journal of Open Source Software},
volume = {8},
number = {90},
pages = {5713},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{Errica2023,
title = {Hidden Markov Models for Temporal Graph Representation Learning},
author = {Federico Errica and Alessio Gravina and Davide Bacciu and Alessio Micheli},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Landolfi2023,
title = {A Tropical View of Graph Neural Networks},
author = {Francesco Landolfi and Davide Bacciu and Danilo Numeroso},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Ceni2023,
title = {Improving Fairness via Intrinsic Plasticity in Echo State Networks},
author = {Andrea Ceni and Davide Bacciu and Valerio De Caro and Claudio Gallicchio and Luca Oneto},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Cossu2023,
title = {A Protocol for Continual Explanation of SHAP},
author = {Andrea Cossu and Francesco Spinnato and Riccardo Guidotti and Davide Bacciu},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Caro2023,
title = {Communication-Efficient Ridge Regression in Federated Echo State Networks},
author = {Valerio De Caro and Antonio Di Mauro and Davide Bacciu and Claudio Gallicchio},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Bacciu2023c,
title = {Graph Representation Learning},
author = {Davide Bacciu and Federico Errica and Alessio Micheli and Nicolò Navarin and Luca Pasa and Marco Podda and Daniele Zambon},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@workshop{Ceni2023c,
title = {Randomly Coupled Oscillators},
author = {Andrea Ceni and Andrea Cossu and Jingyue Liu and Maximilian Stölzle and Cosimo Della Santina and Claudio Gallicchio and Davide Bacciu},
year = {2023},
date = {2023-09-18},
booktitle = {Proceedings of the ECML/PKDD Workshop on Deep Learning meets Neuromorphic Hardware},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@workshop{Gravina2023c,
title = {Non-Dissipative Propagation by Randomized Anti-Symmetric Deep Graph Networks},
author = {Alessio Gravina and Claudio Gallicchio and Davide Bacciu},
year = {2023},
date = {2023-09-18},
urldate = {2023-09-18},
booktitle = {Proceedings of the ECML/PKDD Workshop on Deep Learning meets Neuromorphic Hardware},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@conference{Cosenza2023,
title = {Graph-based Polyphonic Multitrack Music Generation},
author = {Emanuele Cosenza and Andrea Valenti and Davide Bacciu },
year = {2023},
date = {2023-08-19},
urldate = {2023-08-19},
booktitle = {Proceedings of the 32nd INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI 2023)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Hemati2023,
title = {Partial Hypernetworks for Continual Learning},
author = {Hamed Hemati and Vincenzo Lomonaco and Davide Bacciu and Damian Borth},
year = {2023},
date = {2023-08-01},
urldate = {2023-08-01},
booktitle = {Proceedings of the International Conference on Lifelong Learning Agents (CoLLAs 2023)},
publisher = {Proceedings of Machine Learning Research},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Hemati2023b,
title = {Class-Incremental Learning with Repetition },
author = {Hamed Hemati and Andrea Cossu and Antonio Carta and Julio Hurtado and Lorenzo Pellegrini and Davide Bacciu and Vincenzo Lomonaco and Damian Borth},
year = {2023},
date = {2023-08-01},
urldate = {2023-08-01},
booktitle = {Proceedings of the International Conference on Lifelong Learning Agents (CoLLAs 2023)},
publisher = {Proceedings of Machine Learning Research},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@workshop{DeCaro2023b,
title = {Decentralized Plasticity in Reservoir Dynamical Networks for Pervasive Environments},
author = {Valerio De Caro and Davide Bacciu and Claudio Gallicchio},
url = {https://openreview.net/forum?id=5hScPOeDaR, PDF},
year = {2023},
date = {2023-07-29},
urldate = {2023-07-29},
booktitle = {Proceedings of the 2023 ICML Workshop on Localized Learning: Decentralized Model Updates via Non-Global Objectives},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@workshop{Ceni2023b,
title = {Randomly Coupled Oscillators for Time Series Processing},
author = {Andrea Ceni and Andrea Cossu and Jingyue Liu and Maximilian Stölzle and Cosimo Della Santina and Claudio Gallicchio and Davide Bacciu},
url = {https://openreview.net/forum?id=fmn7PMykEb, PDF},
year = {2023},
date = {2023-07-28},
urldate = {2023-07-28},
booktitle = {Proceedings of the 2023 ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems },
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@workshop{Massidda2023b,
title = {Differentiable Causal Discovery with Smooth Acyclic Orientations},
author = {Riccardo Massidda and Francesco Landolfi and Martina Cinquini and Davide Bacciu},
url = {https://openreview.net/forum?id=IVwWgscehR, PDF},
year = {2023},
date = {2023-07-28},
urldate = {2023-07-28},
booktitle = {Proceedings of the 2023 ICML Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators },
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@conference{Simone2023,
title = {ECGAN: generative adversarial network for electrocardiography},
author = {Lorenzo Simone and Davide Bacciu },
year = {2023},
date = {2023-06-12},
urldate = {2023-06-12},
booktitle = {Proceedings of Artificial Intelligence In Medicine 2023 (AIME 2023)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Lomonaco2023,
title = {AI-Toolkit: a Microservices Architecture for Low-Code Decentralized Machine Intelligence},
author = {Vincenzo Lomonaco and Valerio De Caro and Claudio Gallicchio and Antonio Carta and Christos Sardianos and Iraklis Varlamis and Konstantinos Tserpes and Massimo Coppola and Mina Marpena and Sevasti Politi and Erwin Schoitsch and Davide Bacciu},
year = {2023},
date = {2023-06-04},
urldate = {2023-06-04},
booktitle = {Proceedings of 2023 IEEE International Conference on Acoustics, Speech and Signal Processing},
abstract = {Artificial Intelligence and Machine Learning toolkits such as Scikit-learn, PyTorch and Tensorflow provide today a solid starting point for the rapid prototyping of R&D solutions. However, they can be hardly ported to heterogeneous decentralised hardware and real-world production environments. A common practice involves outsourcing deployment solutions to scalable cloud infrastructures such as Amazon SageMaker or Microsoft Azure. In this paper, we proposed an open-source microservices-based architecture for decentralised machine intelligence which aims at bringing R&D and deployment functionalities closer following a low-code approach. Such an approach would guarantee flexible integration of cutting-edge functionalities while preserving complete control over the deployed solutions at negligible costs and maintenance efforts.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{DeCaro2023,
title = {Prediction of Driver's Stress Affection in Simulated Autonomous Driving Scenarios},
author = {Valerio De Caro and Herbert Danzinger and Claudio Gallicchio and Clemens Könczöl and Vincenzo Lomonaco and Mina Marmpena and Sevasti Politi and Omar Veledar and Davide Bacciu},
year = {2023},
date = {2023-06-04},
urldate = {2023-06-04},
booktitle = {Proceedings of 2023 IEEE International Conference on Acoustics, Speech and Signal Processing},
abstract = {We investigate the task of predicting stress affection from physiological data of users experiencing simulations of autonomous driving. We approach this task on two levels of granularity, depending on whether the prediction is performed at the end of the simulation or along the simulation. In the former, denoted as coarse-grained prediction, we employed Decision Trees. In the latter, denoted as fine-grained prediction, we employed Echo State Networks, a Recurrent Neural Network that allows efficient learning from temporal data and hence is suitable for pervasive environments. We conduct experiments on a private dataset of physiological data from people participating in multiple driving scenarios simulating different stressful events. The results show that the proposed model is capable of detecting conditions of event-related cognitive stress, proving the existence of a correlation between stressful events and the physiological data.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gravina2023,
title = {Anti-Symmetric DGN: a stable architecture for Deep Graph Networks},
author = {Alessio Gravina and Davide Bacciu and Claudio Gallicchio},
url = {https://openreview.net/pdf?id=J3Y7cgZOOS},
year = {2023},
date = {2023-05-01},
urldate = {2023-05-01},
booktitle = {Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023) },
abstract = {Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to their efficiency and ability to implement an adaptive message-passing scheme between the nodes. However, DGNs are typically limited in their ability to propagate and preserve long-term dependencies between nodes, i.e., they suffer from the over-squashing phenomena. As a result, we can expect them to under-perform, since different problems require to capture interactions at different (and possibly large) radii in order to be effectively solved. In this work, we present Anti-Symmetric Deep Graph Networks (A-DGNs), a framework for stable and non-dissipative DGN design, conceived through the lens of ordinary differential equations. We give theoretical proof that our method is stable and non-dissipative, leading to two key results: long-range information between nodes is preserved, and no gradient vanishing or explosion occurs in training. We empirically validate the proposed approach on several graph benchmarks, showing that A-DGN yields to improved performance and enables to learn effectively even when dozens of layers are used.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Numeroso2023,
title = {Dual Algorithmic Reasoning},
author = {Danilo Numeroso and Davide Bacciu and Petar Veličković},
url = {https://openreview.net/pdf?id=hhvkdRdWt1F},
year = {2023},
date = {2023-05-01},
urldate = {2023-05-01},
booktitle = {Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023)},
abstract = {Neural Algorithmic Reasoning is an emerging area of machine learning which seeks to infuse algorithmic computation in neural networks, typically by training neural models to approximate steps of classical algorithms. In this context, much of the current work has focused on learning reachability and shortest path graph algorithms, showing that joint learning on similar algorithms is beneficial for generalisation. However, when targeting more complex problems, such "similar" algorithms become more difficult to find. Here, we propose to learn algorithms by exploiting duality of the underlying algorithmic problem. Many algorithms solve optimisation problems. We demonstrate that simultaneously learning the dual definition of these optimisation problems in algorithmic learning allows for better learning and qualitatively better solutions. Specifically, we exploit the max-flow min-cut theorem to simultaneously learn these two algorithms over synthetically generated graphs, demonstrating the effectiveness of the proposed approach. We then validate the real-world utility of our dual algorithmic reasoner by deploying it on a challenging brain vessel classification task, which likely depends on the vessels’ flow properties. We demonstrate a clear performance gain when using our model within such a context, and empirically show that learning the max-flow and min-cut algorithms together is critical for achieving such a result.},
note = {Notable Spotlight paper},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Massidda2023,
title = {Causal Abstraction with Soft Interventions},
author = {Riccardo Massidda and Atticus Geiger and Thomas Icard and Davide Bacciu},
year = {2023},
date = {2023-04-17},
urldate = {2023-04-17},
booktitle = {Proceedings of the 2nd Conference on Causal Learning and Reasoning (CLeaR 2023)},
publisher = {PMLR},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@workshop{nokey,
title = {Non-Dissipative Propagation by Anti-Symmetric Deep Graph Networks},
author = {Alessio Gravina and Davide Bacciu and Claudio Gallicchio},
url = {https://drive.google.com/file/d/1uPHhjwSa3g_hRvHwx6UnbMLgGN_cAqMu/view, PDF},
year = {2023},
date = {2023-02-13},
urldate = {2023-02-13},
booktitle = {Proceedings of the Ninth International Workshop on Deep Learning on Graphs: Method and Applications (DLG-AAAI’23)},
abstract = {Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to the efficiency of their adaptive message-passing scheme between nodes. However, DGNs are typically limited in their ability to propagate and preserve long-term dependencies between nodes, i.e., they suffer from the over-squashing phenomenon. This reduces their effectiveness, since predictive problems may require capturing interactions at different, and possibly large, radii in order to be effectively solved. In this work, we present Anti-Symmetric DGN (A-DGN), a framework for stable and non-dissipative DGN design, conceived through the lens of ordinary differential equations. We give theoretical proof that our method is stable and non-dissipative, leading to two key results: long-range information between nodes is preserved, and no gradient vanishing or explosion occurs in training. We empirically validate the proposed approach on several graph benchmarks, showing that A-DGN yields improved performance and enables effective learning even when dozens of layers are used.},
note = {Winner of the Best Student Paper Award at DLG-AAAI23},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@conference{Bacciu2023,
title = {Generalizing Downsampling from Regular Data to Graphs},
author = {Davide Bacciu and Alessio Conte and Francesco Landolfi},
url = {https://arxiv.org/abs/2208.03523, Arxiv},
year = {2023},
date = {2023-02-07},
urldate = {2023-02-07},
booktitle = {Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence},
abstract = {Downsampling produces coarsened, multi-resolution representations of data and it is used, for example, to produce lossy compression and visualization of large images, reduce computational costs, and boost deep neural representation learning. Unfortunately, due to their lack of a regular structure, there is still no consensus on how downsampling should apply to graphs and linked data. Indeed reductions in graph data are still needed for the goals described above, but reduction mechanisms do not have the same focus on preserving topological structures and properties, while allowing for resolution-tuning, as is the case in regular data downsampling. In this paper, we take a step in this direction, introducing a unifying interpretation of downsampling in regular and graph data. In particular, we define a graph coarsening mechanism which is a graph-structured counterpart of controllable equispaced coarsening mechanisms in regular data. We prove theoretical guarantees for distortion bounds on path lengths, as well as the ability to preserve key topological properties in the coarsened graphs. We leverage these concepts to define a graph pooling mechanism that we empirically assess in graph classification tasks, providing a greedy algorithm that allows efficient parallel implementation on GPUs, and showing that it compares favorably against pooling methods in literature. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{Bacciu2023b,
title = {Deep Graph Networks for Drug Repurposing with Multi-Protein Targets},
author = {Davide Bacciu and Federico Errica and Alessio Gravina and Lorenzo Madeddu and Marco Podda and Giovanni Stilo},
doi = {10.1109/TETC.2023.3238963},
year = {2023},
date = {2023-02-01},
urldate = {2023-02-01},
journal = {IEEE Transactions on Emerging Topics in Computing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Lanciano2023extending,
title = {Extending OpenStack Monasca for Predictive Elasticity Control},
author = {Giacomo Lanciano and Filippo Galli and Tommaso Cucinotta and Davide Bacciu and Andrea Passarella},
doi = {10.26599/BDMA.2023.9020014},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {Big Data Mining and Analytics},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{DECARO2023126638,
title = {Continual adaptation of federated reservoirs in pervasive environments},
author = {Valerio De Caro and Claudio Gallicchio and Davide Bacciu},
url = {https://www.sciencedirect.com/science/article/pii/S0925231223007610},
doi = {10.1016/j.neucom.2023.126638},
issn = {0925-2312},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {Neurocomputing},
pages = {126638},
abstract = {When performing learning tasks in pervasive environments, the main challenge arises from the need of combining federated and continual settings. The former comes from the massive distribution of devices with privacy-regulated data. The latter is required by the low resources of the participating devices, which may retain data for short periods of time. In this paper, we propose a setup for learning with Echo State Networks (ESNs) in pervasive environments. Our proposal focuses on the use of Intrinsic Plasticity (IP), a gradient-based method for adapting the reservoir’s non-linearity. First, we extend the objective function of IP to include the uncertainty arising from the distribution of the data over space and time. Then, we propose Federated Intrinsic Plasticity (FedIP), which is intended for client–server federated topologies with stationary data, and adapts the learning scheme provided by Federated Averaging (FedAvg) to include the learning rule of IP. Finally, we further extend this algorithm for learning to Federated Continual Intrinsic Plasticity (FedCLIP) to equip clients with CL strategies for dealing with continuous data streams. We evaluate our approach on an incremental setup built upon real-world datasets from human monitoring, where we tune the complexity of the scenario in terms of the distribution of the data over space and time. Results show that both our algorithms improve the representation capabilities and the performance of the ESN, while being robust to catastrophic forgetting.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{10239346,
title = {A 2-phase Strategy For Intelligent Cloud Operations},
author = {Giacomo Lanciano and Remo Andreoli and Tommaso Cucinotta and Davide Bacciu and Andrea Passarella},
doi = {10.1109/ACCESS.2023.3312218},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {IEEE Access},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{Caro2022,
title = {Federated Adaptation of Reservoirs via Intrinsic Plasticity},
author = {Valerio {De Caro} and Claudio Gallicchio and Davide Bacciu},
editor = {Michel Verleysen},
url = {https://arxiv.org/abs/2206.11087, Arxiv},
year = {2022},
date = {2022-10-05},
urldate = {2022-10-05},
booktitle = {Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022)},
abstract = {We propose a novel algorithm for performing federated learning with Echo State Networks (ESNs) in a client-server scenario. In particular, our proposal focuses on the adaptation of reservoirs by combining Intrinsic Plasticity with Federated Averaging. The former is a gradient-based method for adapting the reservoir's non-linearity in a local and unsupervised manner, while the latter provides the framework for learning in the federated scenario. We evaluate our approach on real-world datasets from human monitoring, in comparison with the previous approach for federated ESNs existing in the literature. Results show that adapting the reservoir with our algorithm provides a significant improvement in the performance of the global model.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{nokey,
title = {Deep Learning for Graphs},
author = {Davide Bacciu and Federico Errica and Nicolò Navarin and Luca Pasa and Daniele Zambon},
editor = {Michel Verleysen},
year = {2022},
date = {2022-10-05},
urldate = {2022-10-05},
booktitle = {Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Valenti2022c,
title = {Modular Representations for Weak Disentanglement},
author = {Andrea Valenti and Davide Bacciu},
editor = {Michel Verleysen},
url = {https://arxiv.org/pdf/2209.05336.pdf},
year = {2022},
date = {2022-10-05},
urldate = {2022-10-05},
booktitle = {Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022)},
abstract = {The recently introduced weakly disentangled representations relax some constraints of previous definitions of disentanglement in exchange for more flexibility. However, at the moment, weak disentanglement can only be achieved by increasing the amount of supervision as the number of factors of variation of the data increases. In this paper, we introduce modular representations for weak disentanglement, a novel method that keeps the amount of supervised information constant with respect to the number of generative factors. The experiments show that models using modular representations can improve their performance with respect to previous work without the need for additional supervision.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Matteoni2022,
title = {Continual Learning for Human State Monitoring},
author = {Federico Matteoni and Andrea Cossu and Claudio Gallicchio and Vincenzo Lomonaco and Davide Bacciu},
editor = {Michel Verleysen},
url = {https://arxiv.org/pdf/2207.00010, Arxiv},
year = {2022},
date = {2022-10-05},
urldate = {2022-10-05},
booktitle = {Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022)},
abstract = {Continual Learning (CL) on time series data represents a promising but under-studied avenue for real-world applications. We propose two new CL benchmarks for Human State Monitoring. We carefully designed the benchmarks to mirror real-world environments in which new subjects are continuously added. We conducted an empirical evaluation to assess the ability of popular CL strategies to mitigate forgetting in our benchmarks. Our results show that, possibly due to the domain-incremental properties of our benchmarks, forgetting can be easily tackled even with a simple finetuning and that existing strategies struggle in accumulating knowledge over a fixed, held-out, test subject.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Massidda2022,
title = {Knowledge-Driven Interpretation of Convolutional Neural Networks},
author = {Riccardo Massidda and Davide Bacciu},
year = {2022},
date = {2022-09-20},
urldate = {2022-09-20},
booktitle = {Proceedings of the 2022 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2022)},
abstract = {Since the widespread adoption of deep learning solutions in critical environments, the interpretation of artificial neural networks has become a significant issue. To this end, numerous approaches currently try to align human-level concepts with the activation patterns of artificial neurons. Nonetheless, they often understate two related aspects: the distributed nature of neural representations and the semantic relations between concepts. We explicitly tackled this interrelatedness by defining a novel semantic alignment framework to align distributed activation patterns and structured knowledge. In particular, we detailed a solution to assign to both neurons and their linear combinations one or more concepts from the WordNet semantic network. Acknowledging semantic links also enabled the clustering of neurons into semantically rich and meaningful neural circuits. Our empirical analysis of popular convolutional networks for image classification found evidence of the emergence of such neural circuits. Finally, we discovered neurons in neural circuits to be pivotal for the network to perform effectively on semantically related tasks. We also contribute by releasing the code that implements our alignment framework.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{nokey,
title = {Deep Features for CBIR with Scarce Data using Hebbian Learning},
author = {Gabriele Lagani and Davide Bacciu and Claudio Gallicchio and Fabrizio Falchi and Claudio Gennaro and Giuseppe Amato},
url = {https://arxiv.org/abs/2205.08935, Arxiv},
year = {2022},
date = {2022-09-14},
urldate = {2022-09-14},
booktitle = {Proc. of the 19th International Conference on Content-based Multimedia Indexing (CBMI2022)},
abstract = {Features extracted from Deep Neural Networks (DNNs) have proven to be very effective in the context of Content Based Image Retrieval (CBIR). In recent work, biologically inspired Hebbian learning algorithms have shown promise for DNN training. In this contribution, we study the performance of such algorithms in the development of feature extractors for CBIR tasks. Specifically, we consider a semi-supervised learning strategy in two steps: first, an unsupervised pre-training stage is performed using Hebbian learning on the image dataset; second, the network is fine-tuned using supervised Stochastic Gradient Descent (SGD) training. For the unsupervised pre-training stage, we explore the nonlinear Hebbian Principal Component Analysis (HPCA) learning rule. For the supervised fine-tuning stage, we assume sample-efficiency scenarios, in which the amount of labeled samples is just a small fraction of the whole dataset. Our experimental analysis, conducted on the CIFAR10 and CIFAR100 datasets, shows that, when few labeled samples are available, our Hebbian approach provides relevant improvements compared to various alternative methods.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@workshop{nokey,
title = {Studying the impact of magnitude pruning on contrastive learning methods},
author = {Francesco Corti and Rahim Entezari and Sara Hooker and Davide Bacciu and Olga Saukh},
year = {2022},
date = {2022-07-23},
urldate = {2022-07-23},
booktitle = {ICML 2022 workshop on Hardware Aware Efficient Training (HAET 2022)},
abstract = {We study the impact of different pruning techniques on the representation learned by deep neural networks trained with contrastive loss functions. Our work finds that at high sparsity levels, contrastive learning results in a higher number of misclassified examples relative to models trained with traditional cross-entropy loss. To understand this pronounced difference, we use metrics such as the number of PIEs, qscore and pdepth to measure the impact of pruning on the learned representation quality. Our analysis suggests that the schedule of the pruning method implementation matters. We find that the negative impact of sparsity on the quality of the learned representation is highest when pruning is introduced early on in the training phase.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@conference{Sangermano2022,
title = {Sample Condensation in Online Continual Learning},
author = {Matteo Sangermano and Antonio Carta and Andrea Cossu and Vincenzo Lomonaco and Davide Bacciu},
url = {https://arxiv.org/abs/2206.11849, Arxiv},
year = {2022},
date = {2022-07-18},
urldate = {2022-07-18},
booktitle = {Proceedings of the 2022 IEEE World Congress on Computational Intelligence},
publisher = {IEEE},
abstract = {Online continual learning is a challenging learning scenario where the model observes a non-stationary stream of data and learns online. The main challenge is to incrementally learn while avoiding catastrophic forgetting, namely the problem of forgetting previously acquired knowledge while learning from new data. A popular solution in this scenario is to use a small memory to retain old data and rehearse them over time. Unfortunately, due to the limited memory size, the quality of the memory will deteriorate over time. In this paper we propose OLCGM, a novel replay-based continual learning strategy that uses knowledge condensation techniques to continuously compress the memory and achieve a better use of its limited size. The sample condensation step compresses old samples, instead of removing them like other replay strategies. As a result, the experiments show that, whenever the memory budget is limited compared to the complexity of the data, OLCGM improves the final accuracy compared to state-of-the-art replay strategies.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Valenti2022,
title = {Leveraging Relational Information for Learning Weakly Disentangled Representations},
author = {Andrea Valenti and Davide Bacciu},
url = {https://arxiv.org/abs/2205.10056, Arxiv},
year = {2022},
date = {2022-07-18},
urldate = {2022-07-18},
booktitle = {Proceedings of the 2022 IEEE World Congress on Computational Intelligence},
publisher = {IEEE},
abstract = {Disentanglement is a difficult property to enforce in neural representations. This might be due, in part, to a formalization of the disentanglement problem that focuses too heavily on separating relevant factors of variation of the data in single isolated dimensions of the neural representation. We argue that such a definition might be too restrictive and not necessarily beneficial in terms of downstream tasks. In this work, we present an alternative view over learning (weakly) disentangled representations, which leverages concepts from relational learning. We identify the regions of the latent space that correspond to specific instances of generative factors, and we learn the relationships among these regions in order to perform controlled changes to the latent codes. We also introduce a compound generative model that implements such a weak disentanglement approach. Our experiments show that the learned representations can separate the relevant factors of variation in the data, while preserving the information needed for effectively generating high-quality data samples.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{nokey,
title = {The Infinite Contextual Graph Markov Model},
author = {Daniele Castellana and Federico Errica and Davide Bacciu and Alessio Micheli},
year = {2022},
date = {2022-07-18},
urldate = {2022-07-18},
booktitle = {Proceedings of the 39th International Conference on Machine Learning (ICML 2022)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@workshop{Semola2022,
title = {Continual-Learning-as-a-Service (CLaaS): On-Demand Efficient Adaptation of Predictive Models},
author = {Rudy Semola and Vincenzo Lomonaco and Davide Bacciu},
url = {https://arxiv.org/pdf/2206.06957.pdf},
year = {2022},
date = {2022-07-18},
urldate = {2022-07-18},
booktitle = {Proc. of the 1st International Workshop on Pervasive Artificial Intelligence, 2022 IEEE World Congress on Computational Intelligence},
abstract = {Predictive machine learning models nowadays are often updated in a stateless and expensive way. The two main future trends for companies that want to build machine learning-based applications and systems are real-time inference and continual updating. Unfortunately, both trends require a mature infrastructure that is hard and costly to realize on-premise. This paper defines a novel software service and model delivery infrastructure termed Continual Learning-as-a-Service (CLaaS) to address these issues. Specifically, it embraces continual machine learning and continuous integration techniques. It provides support for model updating and validation tools for data scientists without an on-premise solution and in an efficient, stateful and easy-to-use manner. Finally, this CL model service is easy to encapsulate in any machine learning infrastructure or cloud system. This paper presents the design and implementation of a CLaaS instantiation, called LiquidBrain, evaluated in two real-world scenarios. The former is a robotic object recognition setting using the CORe50 dataset while the latter is a named category and attribute prediction using the DeepFashion-C dataset in the fashion domain. Our preliminary results suggest the usability and efficiency of the Continual Learning model services and the effectiveness of the solution in addressing real-world use-cases regardless of where the computation happens in the continuum Edge-Cloud.},
howpublished = {CEUR-WS Proceedings},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@article{DUKIC2022,
title = {Inductive-Transductive Learning for Very Sparse Fashion Graphs},
author = {Haris Dukic and Shahab Mokarizadeh and Georgios Deligiorgis and Pierpaolo Sepe and Davide Bacciu and Marco Trincavelli},
doi = {10.1016/j.neucom.2022.06.050},
issn = {0925-2312},
year = {2022},
date = {2022-06-27},
urldate = {2022-06-27},
journal = {Neurocomputing},
abstract = {The assortments of global retailers are composed of hundreds of thousands of products linked by several types of relationships such as style compatibility, ”bought together”, ”watched together”, etc. Graphs are a natural representation for assortments, where products are nodes and relations are edges. Style compatibility relations are produced manually and do not cover the whole graph uniformly. We propose to use inductive learning to enhance a graph encoding style compatibility of a fashion assortment, leveraging rich node information comprising textual descriptions and visual data. Then, we show how the proposed graph enhancement substantially improves the performance on transductive tasks with a minor impact on graph sparsity. Although demonstrated in a challenging and novel industrial application case, the approach we propose is general enough to be applied to any node-level or edge-level prediction task in very sparse, large-scale networks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{nokey,
title = {Graph Neural Network for Context-Aware Recommendation},
author = {Asma Sattar and Davide Bacciu},
doi = {10.1007/s11063-022-10917-3},
year = {2022},
date = {2022-06-22},
urldate = {2022-06-22},
journal = {Neural Processing Letters},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{carta2021ex,
title = {Ex-Model: Continual Learning from a Stream of Trained Models},
author = {Antonio Carta and Andrea Cossu and Vincenzo Lomonaco and Davide Bacciu},
url = {https://arxiv.org/pdf/2112.06511.pdf, Arxiv},
year = {2022},
date = {2022-06-20},
urldate = {2022-06-20},
booktitle = {Proceedings of the CVPR 2022 Workshop on Continual Learning },
journal = {arXiv preprint arXiv:2112.06511},
pages = {3790-3799},
organization = {IEEE},
abstract = {Learning continually from non-stationary data streams is a challenging research topic of growing popularity in the last few years. Being able to learn, adapt, and generalize continually in an efficient, effective, and scalable way is fundamental for the sustainable development of Artificial Intelligence systems. However, an agent-centric view of continual learning requires learning directly from raw data, which limits the interaction between independent agents, the efficiency, and the privacy of current approaches. Instead, we argue that continual learning systems should exploit the availability of compressed information in the form of trained models. In this paper, we introduce and formalize a new paradigm named "Ex-Model Continual Learning" (ExML), where an agent learns from a sequence of previously trained models instead of raw data. We further contribute with three ex-model continual learning algorithms and an empirical setting comprising three datasets (MNIST, CIFAR-10 and CORe50) and eight scenarios, where the proposed algorithms are extensively tested. Finally, we highlight the peculiarities of the ex-model paradigm and point out interesting future research directions.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Serramazza2022,
title = {Learning image captioning as a structured transduction task},
author = {Davide Italo Serramazza and Davide Bacciu},
doi = {10.1007/978-3-031-08223-8_20},
year = {2022},
date = {2022-06-20},
urldate = {2022-06-20},
booktitle = {Proceedings of the 23rd International Conference on Engineering Applications of Neural Networks (EANN 2022)},
volume = {1600},
pages = {235–246},
publisher = {Springer},
series = {Communications in Computer and Information Science },
abstract = {Image captioning is a task typically approached by deep encoder-decoder architectures, where the encoder component works on a flat representation of the image while the decoder considers a sequential representation of natural language sentences. As such, these encoder-decoder architectures implement a simple and very specific form of structured transduction, that is a generalization of a predictive problem where the input data and output predictions might have substantially different structures and topologies. In this paper, we explore a generalization of such an approach by addressing the problem as a general structured transduction problem. In particular, we provide a framework that allows considering input and output information with a tree-structured representation. This allows taking into account the hierarchical nature underlying both images and sentences. To this end, we introduce an approach to generate tree-structured representations from images along with an autoencoder working with this kind of data. We empirically assess our approach on both synthetic and realistic tasks.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Lucchesi2022,
title = {Avalanche RL: a Continual Reinforcement Learning Library},
author = {Nicolò Lucchesi and Antonio Carta and Vincenzo Lomonaco and Davide Bacciu},
url = {https://arxiv.org/abs/2202.13657, Arxiv},
year = {2022},
date = {2022-05-23},
urldate = {2022-05-23},
booktitle = {Proceedings of the 21st International Conference on Image Analysis and Processing (ICIAP 2021)},
abstract = {Continual Reinforcement Learning (CRL) is a challenging setting where an agent learns to interact with an environment that is constantly changing over time (the stream of experiences). In this paper, we describe Avalanche RL, a library for Continual Reinforcement Learning which makes it easy to train agents on a continuous stream of tasks. Avalanche RL is based on PyTorch and supports any OpenAI Gym environment. Its design is based on Avalanche, one of the most popular continual learning libraries, which allows us to reuse a large number of continual learning strategies and improves the interaction between reinforcement learning and continual learning researchers. Additionally, we propose Continual Habitat-Lab, a novel benchmark and a high-level library which enables the usage of the photorealistic simulator Habitat-Sim for CRL research. Overall, Avalanche RL attempts to unify continual reinforcement learning applications under a common framework, which we hope will foster the growth of the field.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{DBLP:journals/corr/abs-2105-06998,
title = {A causal learning framework for the analysis and interpretation of COVID-19 clinical data},
author = {Elisa Ferrari and Luna Gargani and Greta Barbieri and Lorenzo Ghiadoni and Francesco Faita and Davide Bacciu},
url = {https://arxiv.org/abs/2105.06998, Arxiv},
doi = {10.1371/journal.pone.0268327},
year = {2022},
date = {2022-05-19},
urldate = {2022-05-19},
journal = {Plos One},
volume = {17},
number = {5},
abstract = {We present a workflow for clinical data analysis that relies on Bayesian Structure Learning (BSL), an unsupervised learning approach, robust to noise and biases, that allows incorporating prior medical knowledge into the learning process and provides explainable results in the form of a graph showing the causal connections among the analyzed features. The workflow consists of a multi-step approach that goes from identifying the main causes of patient outcome through BSL, to the realization of a tool suitable for clinical practice, based on a Binary Decision Tree (BDT), to recognize patients at high risk with information available already at hospital admission time. We evaluate our approach on a feature-rich COVID-19 dataset, showing that the proposed framework provides a schematic overview of the multi-factorial processes that jointly contribute to the outcome. We discuss how these computational findings are confirmed by the current understanding of COVID-19 pathogenesis. Further, our approach yields a highly interpretable tool correctly predicting the outcome of 85% of subjects based exclusively on 3 features: age, a previous history of chronic obstructive pulmonary disease and the PaO2/FiO2 ratio at the time of arrival to the hospital. The inclusion of additional information from 4 routine blood tests (Creatinine, Glucose, pO2 and Sodium) increases predictive accuracy to 94.5%.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
All
Constraint-Free Structure Learning with Smooth Acyclic Orientations Conference The Twelfth International Conference on Learning Representations, 2024. Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit Conference Forthcoming Proceedings of the IEEE International Conference on Machine Learning for Communication and Networking 2024 (IEEE ICMLCN 2024), IEEE, Forthcoming. Classifier-free graph diffusion for molecular property targeting Workshop 4th workshop on Graphs and more Complex structures for Learning and Reasoning (GCLR) at AAAI 2024, 2024. Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems Journal Article In: IEEE Control Systems Letters, 2023. Effective Non-Dissipative Propagation for Continuous-Time Dynamic Graphs Workshop Temporal Graph Learning Workshop, NeurIPS 2023, 2023. Neural Algorithmic Reasoning for Combinatorial Optimisation Proceeding PMRL, 2023. PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs Journal Article In: Journal of Open Source Software, vol. 8, no. 90, pp. 5713, 2023. Hidden Markov Models for Temporal Graph Representation Learning Conference Proceedings of the 31th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning , 2023. A Tropical View of Graph Neural Networks Conference Proceedings of the 31th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning , 2023. Improving Fairness via Intrinsic Plasticity in Echo State Networks Conference Proceedings of the 31th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning , 2023. A Protocol for Continual Explanation of SHAP Conference Proceedings of the 31th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning , 2023. 
Graph-based Polyphonic Multitrack Music Generation Conference Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), 2023.
Partial Hypernetworks for Continual Learning Conference Proceedings of the International Conference on Lifelong Learning Agents (CoLLAs 2023), Proceedings of Machine Learning Research, 2023.
Class-Incremental Learning with Repetition Conference Proceedings of the International Conference on Lifelong Learning Agents (CoLLAs 2023), Proceedings of Machine Learning Research, 2023.
Decentralized Plasticity in Reservoir Dynamical Networks for Pervasive Environments Workshop Proceedings of the 2023 ICML Workshop on Localized Learning: Decentralized Model Updates via Non-Global Objectives, 2023.
Randomly Coupled Oscillators for Time Series Processing Workshop Proceedings of the 2023 ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, 2023.
Differentiable Causal Discovery with Smooth Acyclic Orientations Workshop Proceedings of the 2023 ICML Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators, 2023.
ECGAN: generative adversarial network for electrocardiography Conference Proceedings of Artificial Intelligence in Medicine 2023 (AIME 2023), 2023.
AI-Toolkit: a Microservices Architecture for Low-Code Decentralized Machine Intelligence Conference Proceedings of the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, 2023.
Prediction of Driver's Stress Affection in Simulated Autonomous Driving Scenarios Conference Proceedings of the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, 2023.
Anti-Symmetric DGN: a stable architecture for Deep Graph Networks Conference Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023), 2023.
Dual Algorithmic Reasoning Conference Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023), 2023, (Notable Spotlight paper).
Causal Abstraction with Soft Interventions Conference Proceedings of the 2nd Conference on Causal Learning and Reasoning (CLeaR 2023), PMLR, 2023.
Non-Dissipative Propagation by Anti-Symmetric Deep Graph Networks Workshop Proceedings of the Ninth International Workshop on Deep Learning on Graphs: Method and Applications (DLG-AAAI'23), 2023, (Winner of the Best Student Paper Award at DLG-AAAI23).
Generalizing Downsampling from Regular Data to Graphs Conference Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023.
Deep Graph Networks for Drug Repurposing with Multi-Protein Targets Journal Article In: IEEE Transactions on Emerging Topics in Computing, 2023.
Extending OpenStack Monasca for Predictive Elasticity Control Journal Article In: Big Data Mining and Analytics, 2023.
Continual adaptation of federated reservoirs in pervasive environments Journal Article In: Neurocomputing, pp. 126638, 2023, ISSN: 0925-2312.
A 2-phase Strategy for Intelligent Cloud Operations Journal Article In: IEEE Access, pp. 1-1, 2023.
Federated Adaptation of Reservoirs via Intrinsic Plasticity Conference Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), 2022.
Deep Learning for Graphs Conference Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), 2022.
Modular Representations for Weak Disentanglement Conference Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), 2022.
Continual Learning for Human State Monitoring Conference Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), 2022.
Knowledge-Driven Interpretation of Convolutional Neural Networks Conference Proceedings of the 2022 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2022), 2022.
Deep Features for CBIR with Scarce Data using Hebbian Learning Conference Proceedings of the 19th International Conference on Content-based Multimedia Indexing (CBMI 2022), 2022.
Studying the impact of magnitude pruning on contrastive learning methods Workshop ICML 2022 Workshop on Hardware Aware Efficient Training (HAET 2022), 2022.
Sample Condensation in Online Continual Learning Conference Proceedings of the 2022 IEEE World Congress on Computational Intelligence, IEEE, 2022.
Leveraging Relational Information for Learning Weakly Disentangled Representations Conference Proceedings of the 2022 IEEE World Congress on Computational Intelligence, IEEE, 2022.
The Infinite Contextual Graph Markov Model Conference Proceedings of the 39th International Conference on Machine Learning (ICML 2022), 2022.
Continual-Learning-as-a-Service (CLaaS): On-Demand Efficient Adaptation of Predictive Models Workshop Proceedings of the 1st International Workshop on Pervasive Artificial Intelligence, 2022 IEEE World Congress on Computational Intelligence, 2022.
Inductive-Transductive Learning for Very Sparse Fashion Graphs Journal Article In: Neurocomputing, 2022, ISSN: 0925-2312.
Graph Neural Network for Context-Aware Recommendation Journal Article In: Neural Processing Letters, 2022.
Ex-Model: Continual Learning from a Stream of Trained Models Conference Proceedings of the CVPR 2022 Workshop on Continual Learning, IEEE, 2022.
Learning image captioning as a structured transduction task Conference Proceedings of the 23rd International Conference on Engineering Applications of Neural Networks (EANN 2022), vol. 1600, Communications in Computer and Information Science, Springer, 2022.
Avalanche RL: a Continual Reinforcement Learning Library Conference Proceedings of the 21st International Conference on Image Analysis and Processing (ICIAP 2021), 2022.
A causal learning framework for the analysis and interpretation of COVID-19 clinical data Journal Article In: Plos One, vol. 17, no. 5, 2022.