Here you can find a consolidated (a.k.a. slowly updated) list of my publications. A frequently updated (and possibly noisy) list of my works is available on my Google Scholar profile.
Below is a short selection of highlight publications from my recent activity.
Pasquali, Alex; Lomonaco, Vincenzo; Bacciu, Davide; Paganelli, Federica: Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit. Conference, Proceedings of the IEEE International Conference on Machine Learning for Communication and Networking 2024 (IEEE ICMLCN 2024), IEEE, forthcoming.
Ninniri, Matteo; Podda, Marco; Bacciu, Davide: Classifier-free graph diffusion for molecular property targeting. Workshop, 4th Workshop on Graphs and more Complex structures for Learning and Reasoning (GCLR) at AAAI 2024, 2024.
Georgiev, Dobrik; Numeroso, Danilo; Bacciu, Davide; Liò, Pietro: Neural algorithmic reasoning for combinatorial optimisation. Proceedings article, Learning on Graphs Conference, pp. 28–1, PMLR, 2023.
Gravina, Alessio; Lovisotto, Giulio; Gallicchio, Claudio; Bacciu, Davide; Grohnfeldt, Claas: Effective Non-Dissipative Propagation for Continuous-Time Dynamic Graphs. Workshop, Temporal Graph Learning Workshop, NeurIPS 2023, 2023.
Errica, Federico; Bacciu, Davide; Micheli, Alessio: PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs. Journal article, Journal of Open Source Software, vol. 8, no. 90, p. 5713, 2023.
Errica, Federico; Gravina, Alessio; Bacciu, Davide; Micheli, Alessio: Hidden Markov Models for Temporal Graph Representation Learning. Conference, Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2023), 2023.
Bacciu, Davide; Errica, Federico; Micheli, Alessio; Navarin, Nicolò; Pasa, Luca; Podda, Marco; Zambon, Daniele: Graph Representation Learning. Conference, Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2023), 2023.
Gravina, Alessio; Gallicchio, Claudio; Bacciu, Davide: Non-Dissipative Propagation by Randomized Anti-Symmetric Deep Graph Networks. Workshop, Proceedings of the ECML/PKDD Workshop on Deep Learning meets Neuromorphic Hardware, 2023.
Cosenza, Emanuele; Valenti, Andrea; Bacciu, Davide: Graph-based Polyphonic Multitrack Music Generation. Conference, Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), 2023.
Gravina, Alessio; Bacciu, Davide; Gallicchio, Claudio: Anti-Symmetric DGN: a stable architecture for Deep Graph Networks. Conference, Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023), 2023.
Numeroso, Danilo; Bacciu, Davide; Veličković, Petar: Dual Algorithmic Reasoning. Conference, Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023), 2023. Notable Spotlight paper.
Gravina, Alessio; Bacciu, Davide; Gallicchio, Claudio: Non-Dissipative Propagation by Anti-Symmetric Deep Graph Networks. Workshop, Proceedings of the Ninth International Workshop on Deep Learning on Graphs: Method and Applications (DLG-AAAI’23), 2023. Winner of the Best Student Paper Award at DLG-AAAI’23.
Bacciu, Davide; Errica, Federico; Gravina, Alessio; Madeddu, Lorenzo; Podda, Marco; Stilo, Giovanni: Deep Graph Networks for Drug Repurposing with Multi-Protein Targets. Journal article, IEEE Transactions on Emerging Topics in Computing, 2023.
Bacciu, Davide; Errica, Federico; Navarin, Nicolò; Pasa, Luca; Zambon, Daniele: Deep Learning for Graphs. Conference, Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), 2022.
Dukic, Haris; Mokarizadeh, Shahab; Deligiorgis, Georgios; Sepe, Pierpaolo; Bacciu, Davide; Trincavelli, Marco: Inductive-Transductive Learning for Very Sparse Fashion Graphs. Journal article, Neurocomputing, 2022, ISSN: 0925-2312.

@conference{Pasquali2024,
title = {Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit},
author = {Alex Pasquali and Vincenzo Lomonaco and Davide Bacciu and Federica Paganelli},
year = {2024},
date = {2024-05-05},
urldate = {2024-05-05},
booktitle = {Proceedings of the IEEE International Conference on Machine Learning for Communication and Networking 2024 (IEEE ICMLCN 2024)},
publisher = {IEEE},
keywords = {},
pubstate = {forthcoming},
tppubtype = {conference}
}
@workshop{Ninniri2024,
title = {Classifier-free graph diffusion for molecular property targeting},
author = {Matteo Ninniri and Marco Podda and Davide Bacciu},
url = {https://arxiv.org/abs/2312.17397, arXiv},
year = {2024},
date = {2024-02-27},
booktitle = {4th workshop on Graphs and more Complex structures for Learning and Reasoning (GCLR) at AAAI 2024},
abstract = {This work focuses on the task of property targeting: that is, generating molecules conditioned on target chemical properties to expedite candidate screening for novel drug and materials development. DiGress is a recent diffusion model for molecular graphs whose distinctive feature is allowing property targeting through classifier-based (CB) guidance. While CB guidance may work to generate molecular-like graphs, we hint at the fact that its assumptions apply poorly to the chemical domain. Based on this insight we propose a classifier-free DiGress (FreeGress), which works by directly injecting the conditioning information into the training process. Classifier-free (CF) guidance is convenient given its less stringent assumptions, and since it does not require training an auxiliary property regressor it halves the number of trainable parameters in the model. We empirically show that our model yields up to 79% improvement in Mean Absolute Error with respect to DiGress on property targeting tasks on QM9 and ZINC-250k benchmarks. As an additional contribution, we propose a simple yet powerful approach to improve chemical validity of generated samples, based on the observation that certain chemical properties such as molecular weight correlate with the number of atoms in molecules.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
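For readers unfamiliar with classifier-free guidance, the mechanism the abstract refers to can be sketched in a few lines. This is a generic, hedged illustration of the standard technique with made-up names (denoiser, p_uncond, guidance_scale), not the FreeGress implementation:

import torch

def train_step(denoiser, x_t, t, cond, p_uncond=0.1):
    # Classifier-free training: randomly drop the property condition so a
    # single network learns both conditional and unconditional denoising,
    # with no auxiliary property regressor.
    if torch.rand(()).item() < p_uncond:
        cond = None
    return denoiser(x_t, t, cond)

def guided_prediction(denoiser, x_t, t, cond, guidance_scale=2.0):
    # At sampling time, push the conditional prediction away from the
    # unconditional one to strengthen property targeting.
    pred_cond = denoiser(x_t, t, cond)
    pred_uncond = denoiser(x_t, t, None)
    return pred_uncond + guidance_scale * (pred_cond - pred_uncond)

With guidance_scale = 1 this recovers the plain conditional prediction; larger values extrapolate further in the conditioning direction.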
@inproceedings{georgiev2024neural,
title = {Neural algorithmic reasoning for combinatorial optimisation},
author = {Dobrik Georgiev and Danilo Numeroso and Davide Bacciu and Pietro Liò},
year = {2023},
date = {2023-12-15},
urldate = {2023-12-15},
booktitle = {Learning on Graphs Conference},
pages = {28–1},
organization = {PMLR},
abstract = {Solving NP-hard/complete combinatorial problems with neural networks is a challenging research area that aims to surpass classical approximate algorithms. The long-term objective is to outperform hand-designed heuristics for NP-hard/complete problems by learning to generate superior solutions solely from training data. Current neural-based methods for solving CO problems often overlook the inherent "algorithmic" nature of the problems. In contrast, heuristics designed for CO problems, e.g., TSP, frequently leverage well-established algorithms, such as those for finding the minimum spanning tree. In this paper, we propose leveraging recent advancements in neural algorithmic reasoning to improve the learning of CO problems. Specifically, we suggest pre-training our neural model on relevant algorithms before training it on CO instances. Our results demonstrate that, using this learning setup, we achieve superior performance compared to non-algorithmically informed deep learning models.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
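The training recipe in the abstract, pre-train on algorithmic supervision and then fine-tune on CO instances, reduces to two optimisation phases over the same parameters. A toy, hedged sketch with stand-in data and a stand-in model, purely to show the staging (not the paper's architecture or datasets):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # stand-in processor
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fit(inputs, targets, steps):
    for _ in range(steps):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        optimizer.step()

# Stage 1: supervise on outputs of a classical algorithm (e.g. MST traces).
fit(torch.randn(64, 8), torch.randn(64, 1), steps=100)
# Stage 2: fine-tune the same parameters on CO objectives (e.g. TSP costs).
fit(torch.randn(64, 8), torch.randn(64, 1), steps=100)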
@workshop{Gravina2023b,
title = {Effective Non-Dissipative Propagation for Continuous-Time Dynamic Graphs},
author = {Alessio Gravina and Giulio Lovisotto and Claudio Gallicchio and Davide Bacciu and Claas Grohnfeldt},
url = {https://openreview.net/forum?id=zAHFC2LNEe, PDF},
year = {2023},
date = {2023-12-11},
urldate = {2023-12-11},
booktitle = {Temporal Graph Learning Workshop, NeurIPS 2023},
abstract = {Recent research on Deep Graph Networks (DGNs) has broadened the domain of learning on graphs to real-world systems of interconnected entities that evolve over time. This paper addresses prediction problems on graphs defined by a stream of events, possibly irregularly sampled over time, generally referred to as Continuous-Time Dynamic Graphs (C-TDGs). While many predictive problems on graphs may require capturing interactions between nodes at different distances, existing DGNs for C-TDGs are not designed to propagate and preserve long-range information - resulting in suboptimal performance. In this work, we present Continuous-Time Graph Anti-Symmetric Network (CTAN), a DGN for C-TDGs designed within the ordinary differential equations framework that enables efficient propagation of long-range dependencies. We show that our method robustly performs stable and non-dissipative information propagation over dynamically evolving graphs, where the number of ODE discretization steps allows scaling the propagation range. We empirically validate the proposed approach on several real and synthetic graph benchmarks, showing that CTAN leads to improved performance while enabling the propagation of long-range information.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
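The ODE framing mentioned in the abstract follows the anti-symmetric design used throughout this line of work. In standard notation (assumed here, not copied from the paper), each node state evolves as

\frac{\mathrm{d}\,x_u(t)}{\mathrm{d}t} = \sigma\!\Big( (W - W^\top)\, x_u(t) + \Phi\big(\{ x_v(t) : v \in \mathcal{N}(u) \}\big) + b \Big)

Because W - W^\top is anti-symmetric, its eigenvalues are purely imaginary, so the forward dynamics neither explode nor dissipate; discretizing the ODE with more steps extends the range over which information propagates.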
@article{errica2023pydgn,
title = {PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs},
author = {Federico Errica and Davide Bacciu and Alessio Micheli},
year = {2023},
date = {2023-10-31},
urldate = {2023-01-01},
journal = {Journal of Open Source Software},
volume = {8},
number = {90},
pages = {5713},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{Errica2023,
title = {Hidden Markov Models for Temporal Graph Representation Learning},
author = {Federico Errica and Alessio Gravina and Davide Bacciu and Alessio Micheli},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Bacciu2023c,
title = {Graph Representation Learning},
author = {Davide Bacciu and Federico Errica and Alessio Micheli and Nicolò Navarin and Luca Pasa and Marco Podda and Daniele Zambon},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@workshop{Gravina2023c,
title = {Non-Dissipative Propagation by Randomized Anti-Symmetric Deep Graph Networks},
author = {Alessio Gravina and Claudio Gallicchio and Davide Bacciu},
year = {2023},
date = {2023-09-18},
urldate = {2023-09-18},
booktitle = {Proceedings of the ECML/PKDD Workshop on Deep Learning meets Neuromorphic Hardware},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@conference{Cosenza2023,
title = {Graph-based Polyphonic Multitrack Music Generation},
author = {Emanuele Cosenza and Andrea Valenti and Davide Bacciu},
year = {2023},
date = {2023-08-19},
urldate = {2023-08-19},
booktitle = {Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gravina2023,
title = {Anti-Symmetric DGN: a stable architecture for Deep Graph Networks},
author = {Alessio Gravina and Davide Bacciu and Claudio Gallicchio},
url = {https://openreview.net/pdf?id=J3Y7cgZOOS},
year = {2023},
date = {2023-05-01},
urldate = {2023-05-01},
booktitle = {Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023)},
abstract = {Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to their efficiency and ability to implement an adaptive message-passing scheme between the nodes. However, DGNs are typically limited in their ability to propagate and preserve long-term dependencies between nodes, i.e., they suffer from the over-squashing phenomenon. As a result, we can expect them to under-perform, since different problems require capturing interactions at different (and possibly large) radii in order to be effectively solved. In this work, we present Anti-Symmetric Deep Graph Networks (A-DGNs), a framework for stable and non-dissipative DGN design, conceived through the lens of ordinary differential equations. We give theoretical proof that our method is stable and non-dissipative, leading to two key results: long-range information between nodes is preserved, and no gradient vanishing or explosion occurs in training. We empirically validate the proposed approach on several graph benchmarks, showing that A-DGN yields improved performance and learns effectively even when dozens of layers are used.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
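Concretely, one forward-Euler step of the anti-symmetric node ODE gives the layer update. A minimal PyTorch sketch, assuming sum aggregation over neighbours; the class and parameter names (AntiSymmetricLayer, epsilon, gamma) are illustrative, not the authors' code:

import torch
import torch.nn as nn

class AntiSymmetricLayer(nn.Module):
    def __init__(self, dim, epsilon=0.1, gamma=0.1):
        super().__init__()
        self.W = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.W)
        self.bias = nn.Parameter(torch.zeros(dim))
        self.epsilon = epsilon  # Euler step size
        self.gamma = gamma      # small diffusion term stabilising the discretization

    def forward(self, x, adj):
        # (W - W^T) is anti-symmetric: its eigenvalues are purely imaginary,
        # which is what yields stable, non-dissipative propagation.
        w = self.W - self.W.T - self.gamma * torch.eye(x.size(-1), device=x.device)
        agg = adj @ x  # sum aggregation over neighbours (assumption)
        # One forward-Euler step of the node-state ODE.
        return x + self.epsilon * torch.tanh(x @ w.T + agg + self.bias)

Stacking many such layers corresponds to integrating the ODE for longer, which is why dozens of layers remain trainable without vanishing or exploding gradients.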
@conference{Numeroso2023,
title = {Dual Algorithmic Reasoning},
author = {Danilo Numeroso and Davide Bacciu and Petar Veličković},
url = {https://openreview.net/pdf?id=hhvkdRdWt1F},
year = {2023},
date = {2023-05-01},
urldate = {2023-05-01},
booktitle = {Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023)},
abstract = {Neural Algorithmic Reasoning is an emerging area of machine learning which seeks to infuse algorithmic computation in neural networks, typically by training neural models to approximate steps of classical algorithms. In this context, much of the current work has focused on learning reachability and shortest path graph algorithms, showing that joint learning on similar algorithms is beneficial for generalisation. However, when targeting more complex problems, such "similar" algorithms become more difficult to find. Here, we propose to learn algorithms by exploiting duality of the underlying algorithmic problem. Many algorithms solve optimisation problems. We demonstrate that simultaneously learning the dual definition of these optimisation problems in algorithmic learning allows for better learning and qualitatively better solutions. Specifically, we exploit the max-flow min-cut theorem to simultaneously learn these two algorithms over synthetically generated graphs, demonstrating the effectiveness of the proposed approach. We then validate the real-world utility of our dual algorithmic reasoner by deploying it on a challenging brain vessel classification task, which likely depends on the vessels’ flow properties. We demonstrate a clear performance gain when using our model within such a context, and empirically show that learning the max-flow and min-cut algorithms together is critical for achieving such a result.},
note = {Notable Spotlight paper},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
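For reference, the duality exploited here is the classical max-flow min-cut theorem: for a network with source s, sink t, and edge capacities c, the value of a maximum s-t flow equals the capacity of a minimum s-t cut,

\max_{f \text{ feasible}} \; |f| \;=\; \min_{\substack{S \subseteq V \\ s \in S,\; t \notin S}} \; \sum_{\substack{(u,v) \in E \\ u \in S,\; v \notin S}} c(u,v).

Supervising both sides of this equality gives the model two mutually consistent views of the same optimisation problem.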
@workshop{Gravina2023d,
title = {Non-Dissipative Propagation by Anti-Symmetric Deep Graph Networks},
author = {Alessio Gravina and Davide Bacciu and Claudio Gallicchio},
url = {https://drive.google.com/file/d/1uPHhjwSa3g_hRvHwx6UnbMLgGN_cAqMu/view, PDF},
year = {2023},
date = {2023-02-13},
urldate = {2023-02-13},
booktitle = {Proceedings of the Ninth International Workshop on Deep Learning on Graphs: Method and Applications (DLG-AAAI’23)},
abstract = {Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to the efficiency of their adaptive message-passing scheme between nodes. However, DGNs are typically limited in their ability to propagate and preserve long-term dependencies between nodes, i.e., they suffer from the over-squashing phenomenon. This reduces their effectiveness, since predictive problems may require capturing interactions at different, and possibly large, radii in order to be effectively solved. In this work, we present Anti-Symmetric DGN (A-DGN), a framework for stable and non-dissipative DGN design, conceived through the lens of ordinary differential equations. We give theoretical proof that our method is stable and non-dissipative, leading to two key results: long-range information between nodes is preserved, and no gradient vanishing or explosion occurs in training. We empirically validate the proposed approach on several graph benchmarks, showing that A-DGN yields improved performance and learns effectively even when dozens of layers are used.},
note = {Winner of the Best Student Paper Award at DLG-AAAI23},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@article{Bacciu2023b,
title = {Deep Graph Networks for Drug Repurposing with Multi-Protein Targets},
author = {Davide Bacciu and Federico Errica and Alessio Gravina and Lorenzo Madeddu and Marco Podda and Giovanni Stilo},
doi = {10.1109/TETC.2023.3238963},
year = {2023},
date = {2023-02-01},
urldate = {2023-02-01},
journal = {IEEE Transactions on Emerging Topics in Computing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{Bacciu2022,
title = {Deep Learning for Graphs},
author = {Davide Bacciu and Federico Errica and Nicolò Navarin and Luca Pasa and Daniele Zambon},
editor = {Michel Verleysen},
year = {2022},
date = {2022-10-05},
urldate = {2022-10-05},
booktitle = {Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{DUKIC2022,
title = {Inductive-Transductive Learning for Very Sparse Fashion Graphs},
author = {Haris Dukic and Shahab Mokarizadeh and Georgios Deligiorgis and Pierpaolo Sepe and Davide Bacciu and Marco Trincavelli},
doi = {10.1016/j.neucom.2022.06.050},
issn = {0925-2312},
year = {2022},
date = {2022-06-27},
urldate = {2022-06-27},
journal = {Neurocomputing},
abstract = {The assortments of global retailers are composed of hundreds of thousands of products linked by several types of relationships such as style compatibility, “bought together”, “watched together”, etc. Graphs are a natural representation for assortments, where products are nodes and relations are edges. Style compatibility relations are produced manually and do not cover the whole graph uniformly. We propose to use inductive learning to enhance a graph encoding style compatibility of a fashion assortment, leveraging rich node information comprising textual descriptions and visual data. Then, we show how the proposed graph enhancement substantially improves the performance on transductive tasks with a minor impact on graph sparsity. Although demonstrated in a challenging and novel industrial application case, the approach we propose is general enough to be applied to any node-level or edge-level prediction task in very sparse, large-scale networks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
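The enhance-then-predict idea in the abstract can be illustrated with a short sketch: score candidate edges inductively from rich node content, keep only the confident ones to densify the sparse graph, then run the transductive task on the result. All names and the threshold below are illustrative assumptions, not the paper's code:

import torch
import torch.nn as nn

def enhance_graph(node_feats, candidate_pairs, edge_scorer, threshold=0.9):
    # Inductive step: score candidate style-compatibility edges from node
    # content (e.g. concatenated text and image embeddings).
    src, dst = candidate_pairs
    pair_feats = torch.cat([node_feats[src], node_feats[dst]], dim=-1)
    scores = torch.sigmoid(edge_scorer(pair_feats)).squeeze(-1)
    keep = scores > threshold
    # Return only high-confidence edges to add to the sparse graph.
    return torch.stack([src[keep], dst[keep]])

# Toy usage: 100 nodes with 64-d content features, 500 candidate pairs.
feats = torch.randn(100, 64)
pairs = (torch.randint(0, 100, (500,)), torch.randint(0, 100, (500,)))
scorer = nn.Linear(128, 1)
new_edges = enhance_graph(feats, pairs, scorer)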