Here you can find a consolidated (a.k.a. slowly updated) list of my publications. A frequently updated (and possibly noisy) list of works is available on my Google Scholar profile.
Below is a short list of highlighted publications from my recent activity.
Pasquali, Alex; Lomonaco, Vincenzo; Bacciu, Davide; Paganelli, Federica: Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit. Conference (forthcoming). In: Proceedings of the IEEE International Conference on Machine Learning for Communication and Networking 2024 (IEEE ICMLCN 2024), IEEE.
Ninniri, Matteo; Podda, Marco; Bacciu, Davide: Classifier-free graph diffusion for molecular property targeting. Workshop. In: 4th Workshop on Graphs and more Complex structures for Learning and Reasoning (GCLR) at AAAI 2024, 2024.
Georgiev, Dobrik; Numeroso, Danilo; Bacciu, Davide; Liò, Pietro: Neural algorithmic reasoning for combinatorial optimisation. Proceedings Article. In: Learning on Graphs Conference, pp. 28–1, PMLR, 2023.
Gravina, Alessio; Lovisotto, Giulio; Gallicchio, Claudio; Bacciu, Davide; Grohnfeldt, Claas: Effective Non-Dissipative Propagation for Continuous-Time Dynamic Graphs. Workshop. In: Temporal Graph Learning Workshop, NeurIPS 2023, 2023.
Errica, Federico; Bacciu, Davide; Micheli, Alessio: PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs. Journal Article. In: Journal of Open Source Software, vol. 8, no. 90, pp. 5713, 2023.
Landolfi, Francesco; Bacciu, Davide; Numeroso, Danilo: A Tropical View of Graph Neural Networks. Conference. In: Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2023.
Bacciu, Davide; Errica, Federico; Micheli, Alessio; Navarin, Nicolò; Pasa, Luca; Podda, Marco; Zambon, Daniele: Graph Representation Learning. Conference. In: Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2023.
Gravina, Alessio; Gallicchio, Claudio; Bacciu, Davide: Non-Dissipative Propagation by Randomized Anti-Symmetric Deep Graph Networks. Workshop. In: Proceedings of the ECML/PKDD Workshop on Deep Learning meets Neuromorphic Hardware, 2023.
Cosenza, Emanuele; Valenti, Andrea; Bacciu, Davide: Graph-based Polyphonic Multitrack Music Generation. Conference. In: Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), 2023.
Gravina, Alessio; Bacciu, Davide; Gallicchio, Claudio: Anti-Symmetric DGN: a stable architecture for Deep Graph Networks. Conference. In: Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023), 2023.
Numeroso, Danilo; Bacciu, Davide; Veličković, Petar: Dual Algorithmic Reasoning. Conference. In: Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023), 2023. (Notable Spotlight paper)
Gravina, Alessio; Bacciu, Davide; Gallicchio, Claudio: Non-Dissipative Propagation by Anti-Symmetric Deep Graph Networks. Workshop. In: Proceedings of the Ninth International Workshop on Deep Learning on Graphs: Method and Applications (DLG-AAAI’23), 2023. (Winner of the Best Student Paper Award at DLG-AAAI23)
Bacciu, Davide; Conte, Alessio; Landolfi, Francesco: Generalizing Downsampling from Regular Data to Graphs. Conference. In: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023.
Bacciu, Davide; Errica, Federico; Gravina, Alessio; Madeddu, Lorenzo; Podda, Marco; Stilo, Giovanni: Deep Graph Networks for Drug Repurposing with Multi-Protein Targets. Journal Article. In: IEEE Transactions on Emerging Topics in Computing, 2023.
Bacciu, Davide; Errica, Federico; Navarin, Nicolò; Pasa, Luca; Zambon, Daniele: Deep Learning for Graphs. Conference. In: Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), 2022.
Castellana, Daniele; Errica, Federico; Bacciu, Davide; Micheli, Alessio: The Infinite Contextual Graph Markov Model. Conference. In: Proceedings of the 39th International Conference on Machine Learning (ICML 2022), 2022.
Dukic, Haris; Mokarizadeh, Shahab; Deligiorgis, Georgios; Sepe, Pierpaolo; Bacciu, Davide; Trincavelli, Marco: Inductive-Transductive Learning for Very Sparse Fashion Graphs. Journal Article. In: Neurocomputing, 2022, ISSN: 0925-2312.
Sattar, Asma; Bacciu, Davide: Graph Neural Network for Context-Aware Recommendation. Journal Article. In: Neural Processing Letters, 2022.
Numeroso, Danilo; Bacciu, Davide; Veličković, Petar: Learning heuristics for A*. Workshop. In: ICLR 2022 Workshop on Anchoring Machine Learning in Classical Algorithmic Theory (GroundedML 2022), 2022.
Bacciu, Davide; Numeroso, Danilo: Explaining Deep Graph Networks via Input Perturbation. Journal Article. In: IEEE Transactions on Neural Networks and Learning Systems, 2022.
Collodi, Lorenzo; Bacciu, Davide; Bianchi, Matteo; Averta, Giuseppe: Learning with few examples the semantic description of novel human-inspired grasp strategies from RGB data. Journal Article. In: IEEE Robotics and Automation Letters, pp. 2573-2580, 2022.
Gravina, Alessio; Wilson, Jennifer L.; Bacciu, Davide; Grimes, Kevin J.; Priami, Corrado: Controlling astrocyte-mediated synaptic pruning signals for schizophrenia drug repurposing with Deep Graph Networks. Journal Article. In: PLoS Computational Biology, vol. 18, no. 5, 2022.
Carta, Antonio; Cossu, Andrea; Errica, Federico; Bacciu, Davide: Catastrophic Forgetting in Deep Graph Networks: a Graph Classification benchmark. Journal Article. In: Frontiers in Artificial Intelligence, 2022.
Bacciu, Davide; Bianchi, Filippo Maria; Paassen, Benjamin; Alippi, Cesare: Deep learning for graphs. Conference. In: Proceedings of the 29th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2021), 2021.
Dukic, Haris; Deligiorgis, Georgios; Sepe, Pierpaolo; Bacciu, Davide; Trincavelli, Marco: Inductive learning for product assortment graph completion. Conference. In: Proceedings of the 29th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2021), 2021.
Bacciu, Davide; Conte, Alessio; Grossi, Roberto; Landolfi, Francesco; Marino, Andrea: K-Plex Cover Pooling for Graph Neural Networks. Journal Article. In: Data Mining and Knowledge Discovery, 2021. (Also accepted as a paper at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2021))
Atzeni, Daniele; Bacciu, Davide; Errica, Federico; Micheli, Alessio: Modeling Edge Features with Deep Bayesian Graph Networks. Conference. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021), IEEE, 2021.
Numeroso, Danilo; Bacciu, Davide: MEG: Generating Molecular Counterfactual Explanations for Deep Graph Networks. Conference. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021), IEEE, 2021.
Bacciu, Davide; Podda, Marco: GraphGen-Redux: a Fast and Lightweight Recurrent Model for Labeled Graph Generation. Conference. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021), IEEE, 2021.
Errica, Federico; Bacciu, Davide; Micheli, Alessio: Graph Mixture Density Networks. Conference. In: Proceedings of the 38th International Conference on Machine Learning (ICML 2021), PMLR, 2021.
Sattar, Asma; Bacciu, Davide: Context-aware Graph Convolutional Autoencoder. Conference. In: Proceedings of the 16th International Work-Conference on Artificial Neural Networks (IWANN 2021), vol. 12862, LNCS, Springer, 2021.
Carta, Antonio; Cossu, Andrea; Errica, Federico; Bacciu, Davide: Catastrophic Forgetting in Deep Graph Networks: an Introductory Benchmark for Graph Classification. Workshop. In: The Web Conference 2021 Workshop on Graph Learning Benchmarks (GLB21), 2021.
Errica, Federico; Giulini, Marco; Bacciu, Davide; Menichetti, Roberto; Micheli, Alessio; Potestio, Raffaello: A deep graph network-enhanced sampling approach to efficiently explore the space of reduced representations of proteins. Journal Article. In: Frontiers in Molecular Biosciences, vol. 8, pp. 136, 2021.
Bacciu, Davide; Conte, Alessio; Grossi, Roberto; Landolfi, Francesco; Marino, Andrea: K-plex Cover Pooling for Graph Neural Networks. Workshop. In: 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Workshop on Learning Meets Combinatorial Algorithms, 2020.
Bacciu, Davide; Numeroso, Danilo: Explaining Deep Graph Networks with Molecular Counterfactuals. Workshop. In: 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Workshop on Machine Learning for Molecules, 2020. (Accepted as Contributed Talk (Oral))
Bacciu, Davide; Errica, Federico; Micheli, Alessio; Podda, Marco: A Gentle Introduction to Deep Learning for Graphs. Journal Article. In: Neural Networks, vol. 129, pp. 203-221, 2020.
Bacciu, Davide; Errica, Federico; Micheli, Alessio: Probabilistic Learning on Graphs via Contextual Architectures. Journal Article. In: Journal of Machine Learning Research, vol. 21, no. 134, pp. 1-39, 2020.
Podda, Marco; Bacciu, Davide; Micheli, Alessio: A Deep Generative Model for Fragment-Based Molecule Generation. Conference. In: Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020), 2020.
Errica, Federico; Podda, Marco; Bacciu, Davide; Micheli, Alessio: A Fair Comparison of Graph Neural Networks for Graph Classification. Conference. In: Proceedings of the Eighth International Conference on Learning Representations (ICLR 2020), 2020.
Podda, Marco; Micheli, Alessio; Bacciu, Davide; Milazzo, Paolo: Biochemical Pathway Robustness Prediction with Graph Neural Networks. Conference. In: Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20), 2020.
Errica, Federico; Bacciu, Davide; Micheli, Alessio: Theoretically Expressive and Edge-aware Graph Learning. Conference. In: Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20), 2020.
Bacciu, Davide; Micheli, Alessio: Deep Learning for Graphs. Book Chapter. In: Oneto, Luca; Navarin, Nicolò; Sperduti, Alessandro; Anguita, Davide (Ed.): Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), vol. 896, pp. 99-127, Springer International Publishing, 2020, ISBN: 978-3-030-43883-8.
Bacciu, Davide; Micheli, Alessio; Podda, Marco: Edge-based sequential graph generation with recurrent neural networks. Journal Article. In: Neurocomputing, 2019.
Bacciu, Davide; Di Sotto, Luigi: A non-negative factorization approach to node pooling in graph convolutional neural networks. Conference. In: Proceedings of the 18th International Conference of the Italian Association for Artificial Intelligence (AIIA 2019), Lecture Notes in Artificial Intelligence, Springer-Verlag, 2019.
Bacciu, Davide; Micheli, Alessio; Podda, Marco: Graph generation by sequential edge prediction. Conference. In: Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'19), i6doc.com, Louvain-la-Neuve, Belgium, 2019.
Bacciu, Davide; Bruno, Antonio: Text Summarization as Tree Transduction by Top-Down TreeLSTM. Conference. In: Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI'18), IEEE, 2018.
Bacciu, Davide; Errica, Federico; Micheli, Alessio: Contextual Graph Markov Model: A Deep and Generative Approach to Graph Processing. Conference. In: Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018.
@conference{nokey,
title = {Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit},
author = {Alex Pasquali and Vincenzo Lomonaco and Davide Bacciu and Federica Paganelli},
year = {2024},
date = {2024-05-05},
urldate = {2024-05-05},
booktitle = {Proceedings of the IEEE International Conference on Machine Learning for Communication and Networking 2024 (IEEE ICMLCN 2024)},
publisher = {IEEE},
keywords = {},
pubstate = {forthcoming},
tppubtype = {conference}
}
@workshop{Ninniri2024,
title = {Classifier-free graph diffusion for molecular property targeting},
author = {Matteo Ninniri and Marco Podda and Davide Bacciu},
url = {https://arxiv.org/abs/2312.17397, Arxiv},
year = {2024},
date = {2024-02-27},
booktitle = {4th workshop on Graphs and more Complex structures for Learning and Reasoning (GCLR) at AAAI 2024},
abstract = {This work focuses on the task of property targeting: that is, generating molecules conditioned on target chemical properties to expedite candidate screening for novel drug and materials development. DiGress is a recent diffusion model for molecular graphs whose distinctive feature is allowing property targeting through classifier-based (CB) guidance. While CB guidance may work to generate molecular-like graphs, we hint at the fact that its assumptions apply poorly to the chemical domain. Based on this insight we propose a classifier-free DiGress (FreeGress), which works by directly injecting the conditioning information into the training process. Classifier-free (CF) guidance is convenient given its less stringent assumptions, and since it does not require training an auxiliary property regressor, it halves the number of trainable parameters in the model. We empirically show that our model yields up to 79% improvement in Mean Absolute Error with respect to DiGress on property targeting tasks on the QM9 and ZINC-250k benchmarks. As an additional contribution, we propose a simple yet powerful approach to improve the chemical validity of generated samples, based on the observation that certain chemical properties, such as molecular weight, correlate with the number of atoms in molecules.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@inproceedings{georgiev2024neural,
title = {Neural algorithmic reasoning for combinatorial optimisation},
author = {Dobrik Georgiev and Danilo Numeroso and Davide Bacciu and Pietro Liò},
year = {2023},
date = {2023-12-15},
urldate = {2023-12-15},
booktitle = {Learning on Graphs Conference},
pages = {28–1},
organization = {PMLR},
abstract = {Solving NP-hard/complete combinatorial problems with neural networks is a challenging research area that aims to surpass classical approximate algorithms. The long-term objective is to outperform hand-designed heuristics for NP-hard/complete problems by learning to generate superior solutions solely from training data. Current neural-based methods for solving CO problems often overlook the inherent "algorithmic" nature of the problems. In contrast, heuristics designed for CO problems, e.g., TSP, frequently leverage well-established algorithms, such as those for finding the minimum spanning tree. In this paper, we propose leveraging recent advancements in neural algorithmic reasoning to improve the learning of CO problems. Specifically, we suggest pre-training our neural model on relevant algorithms before training it on CO instances. Our results demonstrate that, using this learning setup, we achieve superior performance compared to non-algorithmically informed deep learning models.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@workshop{Gravina2023b,
title = {Effective Non-Dissipative Propagation for Continuous-Time Dynamic Graphs},
author = {Alessio Gravina and Giulio Lovisotto and Claudio Gallicchio and Davide Bacciu and Claas Grohnfeldt},
url = {https://openreview.net/forum?id=zAHFC2LNEe, PDF},
year = {2023},
date = {2023-12-11},
urldate = {2023-12-11},
booktitle = {Temporal Graph Learning Workshop, NeurIPS 2023},
abstract = {Recent research on Deep Graph Networks (DGNs) has broadened the domain of learning on graphs to real-world systems of interconnected entities that evolve over time. This paper addresses prediction problems on graphs defined by a stream of events, possibly irregularly sampled over time, generally referred to as Continuous-Time Dynamic Graphs (C-TDGs). While many predictive problems on graphs may require capturing interactions between nodes at different distances, existing DGNs for C-TDGs are not designed to propagate and preserve long-range information - resulting in suboptimal performance. In this work, we present Continuous-Time Graph Anti-Symmetric Network (CTAN), a DGN for C-TDGs designed within the ordinary differential equations framework that enables efficient propagation of long-range dependencies. We show that our method robustly performs stable and non-dissipative information propagation over dynamically evolving graphs, where the number of ODE discretization steps allows scaling the propagation range. We empirically validate the proposed approach on several real and synthetic graph benchmarks, showing that CTAN leads to improved performance while enabling the propagation of long-range information},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@article{errica2023pydgn,
title = {PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs},
author = {Federico Errica and Davide Bacciu and Alessio Micheli},
year = {2023},
date = {2023-10-31},
urldate = {2023-01-01},
journal = {Journal of Open Source Software},
volume = {8},
number = {90},
pages = {5713},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{Landolfi2023,
title = {A Tropical View of Graph Neural Networks},
author = {Francesco Landolfi and Davide Bacciu and Danilo Numeroso},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Bacciu2023c,
title = {Graph Representation Learning},
author = {Davide Bacciu and Federico Errica and Alessio Micheli and Nicolò Navarin and Luca Pasa and Marco Podda and Daniele Zambon},
editor = {Michel Verleysen},
year = {2023},
date = {2023-10-04},
urldate = {2023-10-04},
booktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@workshop{Gravina2023c,
title = {Non-Dissipative Propagation by Randomized Anti-Symmetric Deep Graph Networks},
author = {Alessio Gravina and Claudio Gallicchio and Davide Bacciu},
year = {2023},
date = {2023-09-18},
urldate = {2023-09-18},
booktitle = {Proceedings of the ECML/PKDD Workshop on Deep Learning meets Neuromorphic Hardware},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@conference{Cosenza2023,
title = {Graph-based Polyphonic Multitrack Music Generation},
author = {Emanuele Cosenza and Andrea Valenti and Davide Bacciu },
year = {2023},
date = {2023-08-19},
urldate = {2023-08-19},
booktitle = {Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Gravina2023,
title = {Anti-Symmetric DGN: a stable architecture for Deep Graph Networks},
author = {Alessio Gravina and Davide Bacciu and Claudio Gallicchio},
url = {https://openreview.net/pdf?id=J3Y7cgZOOS},
year = {2023},
date = {2023-05-01},
urldate = {2023-05-01},
booktitle = {Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023) },
abstract = {Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to their efficiency and ability to implement an adaptive message-passing scheme between the nodes. However, DGNs are typically limited in their ability to propagate and preserve long-term dependencies between nodes, i.e., they suffer from the over-squashing phenomenon. As a result, we can expect them to under-perform, since different problems require capturing interactions at different (and possibly large) radii in order to be effectively solved. In this work, we present Anti-Symmetric Deep Graph Networks (A-DGNs), a framework for stable and non-dissipative DGN design, conceived through the lens of ordinary differential equations. We give theoretical proof that our method is stable and non-dissipative, leading to two key results: long-range information between nodes is preserved, and no gradient vanishing or explosion occurs in training. We empirically validate the proposed approach on several graph benchmarks, showing that A-DGN yields improved performance and enables effective learning even when dozens of layers are used.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Numeroso2023,
title = {Dual Algorithmic Reasoning},
author = {Danilo Numeroso and Davide Bacciu and Petar Veličković},
url = {https://openreview.net/pdf?id=hhvkdRdWt1F},
year = {2023},
date = {2023-05-01},
urldate = {2023-05-01},
booktitle = {Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023)},
abstract = {Neural Algorithmic Reasoning is an emerging area of machine learning which seeks to infuse algorithmic computation in neural networks, typically by training neural models to approximate steps of classical algorithms. In this context, much of the current work has focused on learning reachability and shortest path graph algorithms, showing that joint learning on similar algorithms is beneficial for generalisation. However, when targeting more complex problems, such "similar" algorithms become more difficult to find. Here, we propose to learn algorithms by exploiting duality of the underlying algorithmic problem. Many algorithms solve optimisation problems. We demonstrate that simultaneously learning the dual definition of these optimisation problems in algorithmic learning allows for better learning and qualitatively better solutions. Specifically, we exploit the max-flow min-cut theorem to simultaneously learn these two algorithms over synthetically generated graphs, demonstrating the effectiveness of the proposed approach. We then validate the real-world utility of our dual algorithmic reasoner by deploying it on a challenging brain vessel classification task, which likely depends on the vessels’ flow properties. We demonstrate a clear performance gain when using our model within such a context, and empirically show that learning the max-flow and min-cut algorithms together is critical for achieving such a result.},
note = {Notable Spotlight paper},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@workshop{nokey,
title = {Non-Dissipative Propagation by Anti-Symmetric Deep Graph Networks},
author = {Alessio Gravina and Davide Bacciu and Claudio Gallicchio},
url = {https://drive.google.com/file/d/1uPHhjwSa3g_hRvHwx6UnbMLgGN_cAqMu/view, PDF},
year = {2023},
date = {2023-02-13},
urldate = {2023-02-13},
booktitle = {Proceedings of the Ninth International Workshop on Deep Learning on Graphs: Method and Applications (DLG-AAAI’23)},
abstract = {Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to the efficiency of their adaptive message-passing scheme between nodes. However, DGNs are typically limited in their ability to propagate and preserve long-term dependencies between nodes, i.e., they suffer from the over-squashing phenomenon. This reduces their effectiveness, since predictive problems may require capturing interactions at different, and possibly large, radii in order to be effectively solved. In this work, we present Anti-Symmetric DGN (A-DGN), a framework for stable and non-dissipative DGN design, conceived through the lens of ordinary differential equations. We give theoretical proof that our method is stable and non-dissipative, leading to two key results: long-range information between nodes is preserved, and no gradient vanishing or explosion occurs in training. We empirically validate the proposed approach on several graph benchmarks, showing that A-DGN yields improved performance and enables effective learning even when dozens of layers are used.},
note = {Winner of the Best Student Paper Award at DLG-AAAI23},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@conference{Bacciu2023,
title = {Generalizing Downsampling from Regular Data to Graphs},
author = {Davide Bacciu and Alessio Conte and Francesco Landolfi},
url = {https://arxiv.org/abs/2208.03523, Arxiv},
year = {2023},
date = {2023-02-07},
urldate = {2023-02-07},
booktitle = {Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence},
abstract = {Downsampling produces coarsened, multi-resolution representations of data and it is used, for example, to produce lossy compression and visualization of large images, reduce computational costs, and boost deep neural representation learning. Unfortunately, due to their lack of a regular structure, there is still no consensus on how downsampling should apply to graphs and linked data. Indeed reductions in graph data are still needed for the goals described above, but reduction mechanisms do not have the same focus on preserving topological structures and properties, while allowing for resolution-tuning, as is the case in regular data downsampling. In this paper, we take a step in this direction, introducing a unifying interpretation of downsampling in regular and graph data. In particular, we define a graph coarsening mechanism which is a graph-structured counterpart of controllable equispaced coarsening mechanisms in regular data. We prove theoretical guarantees for distortion bounds on path lengths, as well as the ability to preserve key topological properties in the coarsened graphs. We leverage these concepts to define a graph pooling mechanism that we empirically assess in graph classification tasks, providing a greedy algorithm that allows efficient parallel implementation on GPUs, and showing that it compares favorably against pooling methods in literature. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{Bacciu2023b,
title = {Deep Graph Networks for Drug Repurposing with Multi-Protein Targets},
author = {Davide Bacciu and Federico Errica and Alessio Gravina and Lorenzo Madeddu and Marco Podda and Giovanni Stilo},
doi = {10.1109/TETC.2023.3238963},
year = {2023},
date = {2023-02-01},
urldate = {2023-02-01},
journal = {IEEE Transactions on Emerging Topics in Computing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{nokey,
title = {Deep Learning for Graphs},
author = {Davide Bacciu and Federico Errica and Nicolò Navarin and Luca Pasa and Daniele Zambon},
editor = {Michel Verleysen},
year = {2022},
date = {2022-10-05},
urldate = {2022-10-05},
booktitle = {Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{nokey,
title = {The Infinite Contextual Graph Markov Model},
author = {Daniele Castellana and Federico Errica and Davide Bacciu and Alessio Micheli},
year = {2022},
date = {2022-07-18},
urldate = {2022-07-18},
booktitle = {Proceedings of the 39th International Conference on Machine Learning (ICML 2022)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{DUKIC2022,
title = {Inductive-Transductive Learning for Very Sparse Fashion Graphs},
author = {Haris Dukic and Shahab Mokarizadeh and Georgios Deligiorgis and Pierpaolo Sepe and Davide Bacciu and Marco Trincavelli},
doi = {10.1016/j.neucom.2022.06.050},
issn = {0925-2312},
year = {2022},
date = {2022-06-27},
urldate = {2022-06-27},
journal = {Neurocomputing},
abstract = {The assortments of global retailers are composed of hundreds of thousands of products linked by several types of relationships such as style compatibility, ”bought together”, ”watched together”, etc. Graphs are a natural representation for assortments, where products are nodes and relations are edges. Style compatibility relations are produced manually and do not cover the whole graph uniformly. We propose to use inductive learning to enhance a graph encoding style compatibility of a fashion assortment, leveraging rich node information comprising textual descriptions and visual data. Then, we show how the proposed graph enhancement substantially improves the performance on transductive tasks with a minor impact on graph sparsity. Although demonstrated in a challenging and novel industrial application case, the approach we propose is general enough to be applied to any node-level or edge-level prediction task in very sparse, large-scale networks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{nokey,
title = {Graph Neural Network for Context-Aware Recommendation},
author = {Asma Sattar and Davide Bacciu},
doi = {10.1007/s11063-022-10917-3},
year = {2022},
date = {2022-06-22},
urldate = {2022-06-22},
journal = {Neural Processing Letters},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@workshop{Numeroso2022,
title = {Learning heuristics for A*},
author = { Danilo Numeroso and Davide Bacciu and Petar Veličković},
year = {2022},
date = {2022-04-29},
urldate = {2022-04-29},
booktitle = {ICLR 2022 Workshop on Anchoring Machine Learning in Classical Algorithmic Theory (GroundedML 2022)},
abstract = {Path finding in graphs is one of the most studied classes of problems in computer science. In this context, search algorithms are often extended with heuristics for a more efficient search of target nodes. In this work we combine recent advancements in Neural Algorithmic Reasoning to learn efficient heuristic functions for path finding problems on graphs. At training time, we exploit multi-task learning to jointly learn Dijkstra's algorithm and a consistent heuristic function for the A* search algorithm. At inference time, we plug our learnt heuristics into the A* algorithm. Results show that running A* over the learnt heuristic values can greatly speed up target node searching compared to Dijkstra, while still finding minimal-cost paths.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@article{Bacciu2022,
title = {Explaining Deep Graph Networks via Input Perturbation},
author = {Davide Bacciu and Danilo Numeroso},
doi = {10.1109/TNNLS.2022.3165618},
year = {2022},
date = {2022-04-21},
urldate = {2022-04-21},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
abstract = {Deep Graph Networks are a family of machine learning models for structured data which are finding heavy application in life-sciences (drug repurposing, molecular property predictions) and on social network data (recommendation systems). The privacy and safety-critical nature of such domains motivates the need for developing effective explainability methods for this family of models. So far, progress in this field has been challenged by the combinatorial nature and complexity of graph structures. In this respect, we present a novel local explanation framework specifically tailored to graph data and deep graph networks. Our approach leverages reinforcement learning to generate meaningful local perturbations of the input graph, whose prediction we seek an interpretation for. These perturbed data points are obtained by optimising a multi-objective score taking into account similarities both at a structural level as well as at the level of the deep model outputs. By this means, we are able to populate a set of informative neighbouring samples for the query graph, which is then used to fit an interpretable model for the predictive behaviour of the deep network locally to the query graph prediction. We show the effectiveness of the proposed explainer by a qualitative analysis on two chemistry datasets, TOS and ESOL, and by quantitative results on a benchmark dataset for explanations, CYCLIQ.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Collodi2022,
title = {Learning with few examples the semantic description of novel human-inspired grasp strategies from RGB data},
author = { Lorenzo Collodi and Davide Bacciu and Matteo Bianchi and Giuseppe Averta},
url = {https://www.researchgate.net/profile/Giuseppe-Averta/publication/358006552_Learning_With_Few_Examples_the_Semantic_Description_of_Novel_Human-Inspired_Grasp_Strategies_From_RGB_Data/links/61eae01e8d338833e3857251/Learning-With-Few-Examples-the-Semantic-Description-of-Novel-Human-Inspired-Grasp-Strategies-From-RGB-Data.pdf, Open Version},
doi = {10.1109/LRA.2022.3144520},
year = {2022},
date = {2022-04-04},
urldate = {2022-04-04},
journal = {IEEE Robotics and Automation Letters},
pages = {2573-2580},
publisher = {IEEE},
abstract = {Data-driven approaches and human inspiration are fundamental to endow robotic manipulators with advanced autonomous grasping capabilities. However, to capitalize upon these two pillars, several aspects need to be considered, which include the number of human examples used for training; the need for having in advance all the required information for classification (hardly feasible in unstructured environments); the trade-off between the task performance and the processing cost. In this paper, we propose a RGB-based pipeline that can identify the object to be grasped and guide the actual execution of the grasping primitive selected through a combination of Convolutional and Gated Graph Neural Networks. We consider a set of human-inspired grasp strategies, which are afforded by the geometrical properties of the objects and identified from a human grasping taxonomy, and propose to learn new grasping skills with only a few examples. We test our framework with a manipulator endowed with an under-actuated soft robotic hand. Even though we use only 2D information to minimize the footprint of the network, we achieve 90% of successful identifications of the most appropriate human-inspired grasping strategy over ten different classes, of which three were few-shot learned, outperforming an ideal model trained with all the classes, in sample-scarce conditions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Gravina2022,
title = {Controlling astrocyte-mediated synaptic pruning signals for schizophrenia drug repurposing with Deep Graph Networks},
author = {Alessio Gravina and Jennifer L. Wilson and Davide Bacciu and Kevin J. Grimes and Corrado Priami},
url = {https://www.biorxiv.org/content/10.1101/2021.10.07.463459v1, BioArxiv},
doi = {10.1371/journal.pcbi.1009531},
year = {2022},
date = {2022-04-01},
urldate = {2022-04-01},
journal = {PLoS Computational Biology},
volume = {18},
number = {5},
abstract = {Schizophrenia is a debilitating psychiatric disorder, leading to both physical and social morbidity. Worldwide 1% of the population is struggling with the disease, with 100,000 new cases annually in the United States alone. Despite its importance, the goal of finding effective treatments for schizophrenia remains a challenging task, and previous work conducted expensive large-scale phenotypic screens. This work investigates the benefits of Machine Learning for graphs to optimize drug phenotypic screens and predict compounds that mitigate abnormal brain reduction induced by excessive glial phagocytic activity in schizophrenia subjects. Given a compound and its concentration as input, we propose a method that predicts a score associated with three possible compound effects, i.e., reduce, increase, or not influence phagocytosis. We leverage a high-throughput screening to prove experimentally that our method achieves good generalization capabilities. The screening involves 2218 compounds at five different concentrations. Then, we analyze the usability of our approach in a practical setting, i.e., prioritizing the selection of compounds in the SWEETLEAD library. We provide a list of 64 compounds from the library that have the most potential clinical utility for glial phagocytosis mitigation. Lastly, we propose a novel approach to computationally validate their utility as possible therapies for schizophrenia.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Carta2022,
title = {Catastrophic Forgetting in Deep Graph Networks: a Graph Classification benchmark},
author = {Antonio Carta and Andrea Cossu and Federico Errica and Davide Bacciu},
doi = {10.3389/frai.2022.824655},
year = {2022},
date = {2022-01-11},
urldate = {2022-01-11},
journal = {Frontiers in Artificial Intelligence},
abstract = {In this work, we study the phenomenon of catastrophic forgetting in the graph representation learning scenario. The primary objective of the analysis is to understand whether classical continual learning techniques for flat and sequential data have a tangible impact on performances when applied to graph data. To do so, we experiment with a structure-agnostic model and a deep graph network in a robust and controlled environment on three different datasets. The benchmark is complemented by an investigation on the effect of structure-preserving regularization techniques on catastrophic forgetting. We find that replay is the most effective strategy so far, and that it also benefits the most from the use of regularization. Our findings suggest interesting future research at the intersection of the continual and graph representation learning fields. Finally, we provide researchers with a flexible software framework to reproduce our results and carry out further experiments.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{Bacciu2021c,
title = {Deep learning for graphs},
author = {Davide Bacciu and Filippo Maria Bianchi and Benjamin Paassen and Cesare Alippi},
editor = {Michel Verleysen},
doi = {10.14428/esann/2021.ES2021-5},
year = {2021},
date = {2021-10-06},
urldate = {2021-10-06},
booktitle = {Proceedings of the 29th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2021)},
pages = {89-98},
abstract = { Deep learning for graphs encompasses all those models endowed with multiple layers of abstraction, which operate on data represented as graphs. The most common building blocks of these models are graph encoding layers, which compute a vector embedding for each node in a graph based on a sum of messages received from its neighbors. However, the family also includes architectures with decoders from vectors to graphs and models that process time-varying graphs and hypergraphs. In this paper, we provide an overview of the key concepts in the field, point towards open questions, and frame the contributions of the ESANN 2021 special session into the broader context of deep learning for graphs. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Dukic2021,
title = {Inductive learning for product assortment graph completion},
author = {Haris Dukic and Georgios Deligiorgis and Pierpaolo Sepe and Davide Bacciu and Marco Trincavelli},
editor = {Michel Verleysen},
doi = {10.14428/esann/2021.ES2021-73},
year = {2021},
date = {2021-10-06},
urldate = {2021-10-06},
booktitle = {Proceedings of the 29th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2021)},
pages = {129-134},
abstract = { Global retailers have assortments that contain hundreds of thousands of products that can be linked by several types of relationships like style compatibility, "bought together", "watched together", etc. Graphs are a natural representation for assortments, where products are nodes and relations are edges. Relations like style compatibility are often produced by a manual process and therefore do not cover uniformly the whole graph. We propose to use inductive learning to enhance a graph encoding style compatibility of a fashion assortment, leveraging rich node information comprising textual descriptions and visual data. Then, we show how the proposed graph enhancement improves substantially the performance on transductive tasks with a minor impact on graph sparsity. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{Bacciu2021b,
title = {K-Plex Cover Pooling for Graph Neural Networks},
author = {Davide Bacciu and Alessio Conte and Roberto Grossi and Francesco Landolfi and Andrea Marino},
editor = {Annalisa Appice and Sergio Escalera and José A. Gámez and Heike Trautmann},
url = {https://link.springer.com/article/10.1007/s10618-021-00779-z, Published version},
doi = {10.1007/s10618-021-00779-z},
year = {2021},
date = {2021-09-13},
urldate = {2021-09-13},
journal = {Data Mining and Knowledge Discovery},
abstract = {Graph pooling methods provide mechanisms for structure reduction that are intended to ease the diffusion of context between nodes further in the graph, and that typically leverage community discovery mechanisms or node and edge pruning heuristics. In this paper, we introduce a novel pooling technique which borrows from classical results in graph theory that is non-parametric and generalizes well to graphs of different nature and connectivity patterns. Our pooling method, named KPlexPool, builds on the concepts of graph covers and k-plexes, i.e. pseudo-cliques where each node can miss up to k links. The experimental evaluation on benchmarks on molecular and social graph classification shows that KPlexPool achieves state of the art performances against both parametric and non-parametric pooling methods in the literature, despite generating pooled graphs based solely on topological information.},
note = {Accepted also as paper to the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2021)},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{Atzeni2021,
title = {Modeling Edge Features with Deep Bayesian Graph Networks},
author = {Daniele Atzeni and Davide Bacciu and Federico Errica and Alessio Micheli},
doi = {10.1109/IJCNN52387.2021.9533430},
year = {2021},
date = {2021-07-18},
urldate = {2021-07-18},
booktitle = {Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021)},
publisher = {IEEE},
organization = {IEEE},
abstract = {We propose an extension of the Contextual Graph Markov Model, a deep and probabilistic machine learning model for graphs, to model the distribution of edge features. Our approach is architectural, as we introduce an additional Bayesian network mapping edge features into discrete states to be used by the original model. In doing so, we are also able to build richer graph representations even in the absence of edge features, which is confirmed by the performance improvements on standard graph classification benchmarks. Moreover, we successfully test our proposal in a graph regression scenario where edge features are of fundamental importance, and we show that the learned edge representation provides substantial performance improvements against the original model on three link prediction tasks. By keeping the computational complexity linear in the number of edges, the proposed model is amenable to large-scale graph processing.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Numeroso2021,
title = {MEG: Generating Molecular Counterfactual Explanations for Deep Graph Networks},
author = {Danilo Numeroso and Davide Bacciu},
year = {2021},
date = {2021-07-18},
urldate = {2021-07-18},
booktitle = {Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021)},
organization = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{BacciuPoddaIJCNN2021,
title = {GraphGen-Redux: a Fast and Lightweight Recurrent Model for Labeled Graph Generation},
author = {Davide Bacciu and Marco Podda},
doi = {10.1109/IJCNN52387.2021.9533743},
year = {2021},
date = {2021-07-18},
urldate = {2021-07-18},
booktitle = {Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021)},
organization = {IEEE},
abstract = {The problem of labeled graph generation is gaining attention in the Deep Learning community. The task is challenging due to the sparse and discrete nature of graph spaces. Several approaches have been proposed in the literature, most of which require to transform the graphs into sequences that encode their structure and labels and to learn the distribution of such sequences through an auto-regressive generative model. Among this family of approaches, we focus on the Graphgen model. The preprocessing phase of Graphgen transforms graphs into unique edge sequences called Depth-First Search (DFS) codes, such that two isomorphic graphs are assigned the same DFS code. Each element of a DFS code is associated with a graph edge: specifically, it is a quintuple comprising one node identifier for each of the two endpoints, their node labels, and the edge label. Graphgen learns to generate such sequences auto-regressively and models the probability of each component of the quintuple independently. While effective, the independence assumption made by the model is too loose to capture the complex label dependencies of real-world graphs precisely. By introducing a novel graph preprocessing approach, we are able to process the labeling information of both nodes and edges jointly. The corresponding model, which we term Graphgen-redux, improves upon the generative performances of Graphgen in a wide range of datasets of chemical and social graphs. In addition, it uses approximately 78% fewer parameters than the vanilla variant and requires 50% fewer epochs of training on average.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Errica2021,
title = {Graph Mixture Density Networks},
author = {Federico Errica and Davide Bacciu and Alessio Micheli},
url = {https://proceedings.mlr.press/v139/errica21a.html, PDF},
year = {2021},
date = {2021-07-18},
urldate = {2021-07-18},
booktitle = {Proceedings of the 38th International Conference on Machine Learning (ICML 2021)},
pages = {3025-3035},
publisher = {PMLR},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Sattar2021,
title = {Context-aware Graph Convolutional Autoencoder},
author = {Asma Sattar and Davide Bacciu},
doi = {10.1007/978-3-030-85030-2_23},
year = {2021},
date = {2021-06-16},
urldate = {2021-06-16},
booktitle = {Proceedings of the 16th International Work Conference on Artificial Neural Networks (IWANN 2021)},
volume = {12862},
pages = {279-290},
publisher = {Springer},
series = {LNCS},
abstract = {Recommendation problems can be addressed as link prediction tasks in a bipartite graph between user and item nodes, labelled with rating on edges. Existing matrix completion approaches model the user’s opinion on items by ignoring context information that can instead be associated with the edges of the bipartite graph. Context is an important factor to be considered as it heavily affects opinions and preferences. Following this line of research, this paper proposes a graph convolutional auto-encoder approach which considers users’ opinion on items as well as the static node features and context information on edges. Our graph encoder produces a representation of users and items from the perspective of context, static features, and rating opinion. The empirical analysis on three real-world datasets shows that the proposed approach outperforms recent state-of-the-art recommendation systems.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@workshop{Carta2021,
title = {Catastrophic Forgetting in Deep Graph Networks: an Introductory Benchmark for Graph Classification},
author = {Antonio Carta and Andrea Cossu and Federico Errica and Davide Bacciu},
year = {2021},
date = {2021-04-12},
urldate = {2021-04-12},
booktitle = {The Web Conference 2021 Workshop on Graph Learning Benchmarks (GLB21)},
abstract = {In this work, we study the phenomenon of catastrophic forgetting in the graph representation learning scenario. The primary objective of the analysis is to understand whether classical continual learning techniques for flat and sequential data have a tangible impact on performances when applied to graph data. To do so, we experiment with a structure-agnostic model and a deep graph network in a robust and controlled environment on three different datasets. The benchmark is complemented by an investigation on the effect of structure-preserving regularization techniques on catastrophic forgetting. We find that replay is the most effective strategy so far, and that it also benefits the most from the use of regularization. Our findings suggest interesting future research at the intersection of the continual and graph representation learning fields. Finally, we provide researchers with a flexible software framework to reproduce our results and carry out further experiments.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@article{errica_deep_2021,
title = {A deep graph network-enhanced sampling approach to efficiently explore the space of reduced representations of proteins},
author = {Federico Errica and Marco Giulini and Davide Bacciu and Roberto Menichetti and Alessio Micheli and Raffaello Potestio},
doi = {10.3389/fmolb.2021.637396},
year = {2021},
date = {2021-02-28},
urldate = {2021-02-28},
journal = {Frontiers in Molecular Biosciences},
volume = {8},
pages = {136},
publisher = {Frontiers},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@workshop{kplexWS2020,
title = {K-plex Cover Pooling for Graph Neural Networks},
author = {Davide Bacciu and Alessio Conte and Roberto Grossi and Francesco Landolfi and Andrea Marino},
year = {2020},
date = {2020-12-11},
urldate = {2020-12-11},
booktitle = {34th Conference on Neural Information Processing Systems (NeurIPS 2020), Workshop on Learning Meets Combinatorial Algorithms},
abstract = {We introduce a novel pooling technique which borrows from classical results in graph theory that is non-parametric and generalizes well to graphs of different nature and connectivity pattern. Our pooling method, named KPlexPool, builds on the concepts of graph covers and $k$-plexes, i.e. pseudo-cliques where each node can miss up to $k$ links. The experimental evaluation on molecular and social graph classification shows that KPlexPool achieves state of the art performances, supporting the intuition that well-founded graph-theoretic approaches can be effectively integrated in learning models for graphs. },
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@workshop{megWS2020,
title = {Explaining Deep Graph Networks with Molecular Counterfactuals},
author = {Davide Bacciu and Danilo Numeroso},
url = {https://arxiv.org/pdf/2011.05134.pdf, Arxiv},
year = {2020},
date = {2020-12-11},
urldate = {2020-12-11},
booktitle = {34th Conference on Neural Information Processing Systems (NeurIPS 2020), Workshop on Machine Learning for Molecules - Accepted as Contributed Talk (Oral)},
abstract = {We present a novel approach to tackle explainability of deep graph networks in the context of molecule property prediction tasks, named MEG (Molecular Explanation Generator). We generate informative counterfactual explanations for a specific prediction under the form of (valid) compounds with high structural similarity and different predicted properties. We discuss preliminary results showing how the model can convey non-ML experts with key insights into the learning model focus in the neighborhood of a molecule. },
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@article{gentleGraphs2020,
title = {A Gentle Introduction to Deep Learning for Graphs},
author = {Davide Bacciu and Federico Errica and Alessio Micheli and Marco Podda},
url = {https://arxiv.org/abs/1912.12693, Arxiv
https://doi.org/10.1016/j.neunet.2020.06.006, Original Paper},
doi = {10.1016/j.neunet.2020.06.006},
year = {2020},
date = {2020-09-01},
urldate = {2020-09-01},
journal = {Neural Networks},
volume = {129},
pages = {203-221},
publisher = {Elsevier},
abstract = {The adaptive processing of graph data is a long-standing research topic which has been lately consolidated as a theme of major interest in the deep learning community. The snap increase in the amount and breadth of related research has come at the price of little systematization of knowledge and attention to earlier literature. This work is designed as a tutorial introduction to the field of deep learning for graphs. It favours a consistent and progressive introduction of the main concepts and architectural aspects over an exposition of the most recent literature, for which the reader is referred to available surveys. The paper takes a top-down view to the problem, introducing a generalized formulation of graph representation learning based on a local and iterative approach to structured information processing. It introduces the basic building blocks that can be combined to design novel and effective neural models for graphs. The methodological exposition is complemented by a discussion of interesting research challenges and applications in the field. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{jmlrCGMM20,
title = {Probabilistic Learning on Graphs via Contextual Architectures},
author = {Davide Bacciu and Federico Errica and Alessio Micheli},
editor = {Pushmeet Kohli},
url = {http://jmlr.org/papers/v21/19-470.html, Paper},
year = {2020},
date = {2020-07-27},
urldate = {2020-07-27},
journal = {Journal of Machine Learning Research},
volume = {21},
number = {134},
pages = {1-39},
abstract = {We propose a novel methodology for representation learning on graph-structured data, in which a stack of Bayesian Networks learns different distributions of a vertex's neighborhood. Through an incremental construction policy and layer-wise training, we can build deeper architectures with respect to typical graph convolutional neural networks, with benefits in terms of context spreading between vertices.
First, the model learns from graphs via maximum likelihood estimation without using target labels.
Then, a supervised readout is applied to the learned graph embeddings to deal with graph classification and vertex classification tasks, showing competitive results against neural models for graphs. The computational complexity is linear in the number of edges, facilitating learning on large scale data sets. By studying how depth affects the performances of our model, we discover that a broader context generally improves performances. In turn, this leads to a critical analysis of some benchmarks used in literature.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{aistats2020,
title = {A Deep Generative Model for Fragment-Based Molecule Generation},
author = {Marco Podda and Davide Bacciu and Alessio Micheli},
url = {https://arxiv.org/abs/2002.12826},
year = {2020},
date = {2020-06-03},
urldate = {2020-06-03},
booktitle = {Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020) },
abstract = {Molecule generation is a challenging open problem in cheminformatics. Currently, deep generative approaches addressing the challenge belong to two broad categories, differing in how molecules are represented. One approach encodes molecular graphs as strings of text and learns their corresponding character-based language model. Another, more expressive, approach operates directly on the molecular graph. In this work, we address two limitations of the former: generation of invalid or duplicate molecules. To improve validity rates, we develop a language model for small molecular substructures called fragments, loosely inspired by the well-known paradigm of Fragment-Based Drug Design. In other words, we generate molecules fragment by fragment, instead of atom by atom. To improve uniqueness rates, we present a frequency-based clustering strategy that helps to generate molecules with infrequent fragments. We show experimentally that our model largely outperforms other language model-based competitors, reaching state-of-the-art performances typical of graph-based approaches. Moreover, generated molecules display molecular properties similar to those in the training sample, even in absence of explicit task-specific supervision.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{iclr19,
title = {A Fair Comparison of Graph Neural Networks for Graph Classification},
author = {Federico Errica and Marco Podda and Davide Bacciu and Alessio Micheli},
url = {https://openreview.net/pdf?id=HygDF6NFPB, PDF
https://iclr.cc/virtual_2020/poster_HygDF6NFPB.html, Talk
https://github.com/diningphil/gnn-comparison, Code},
year = {2020},
date = {2020-04-30},
booktitle = {Proceedings of the Eighth International Conference on Learning Representations (ICLR 2020)},
abstract = {Experimental reproducibility and replicability are critical topics in machine learning. Authors have often raised concerns about their lack in scientific publications to improve the quality of the field. Recently, the graph representation learning field has attracted the attention of a wide research community, which resulted in a large stream of works.
As such, several Graph Neural Network models have been developed to effectively tackle graph classification. However, experimental procedures often lack rigorousness and are hardly reproducible. Motivated by this, we provide an overview of common practices that should be avoided to fairly compare with the state of the art. To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks. Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet. We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann20Podda,
title = {Biochemical Pathway Robustness Prediction with Graph Neural Networks},
author = {Marco Podda and Alessio Micheli and Davide Bacciu and Paolo Milazzo},
editor = {Michel Verleysen},
year = {2020},
date = {2020-04-21},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann20Errica,
title = {Theoretically Expressive and Edge-aware Graph Learning},
author = {Federico Errica and Davide Bacciu and Alessio Micheli},
editor = {Michel Verleysen},
url = {https://arxiv.org/abs/2001.09005},
year = {2020},
date = {2020-04-21},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20)},
abstract = {We propose a new Graph Neural Network that combines recent advancements in the field. We give theoretical contributions by proving that the model is strictly more general than the Graph Isomorphism Network and the Gated Graph Neural Network, as it can approximate the same functions and deal with arbitrary edge values. Then, we show how a single node information can flow through the graph unchanged. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@inbook{graphsBDDL2020,
title = {Deep Learning for Graphs},
author = {Davide Bacciu and Alessio Micheli},
editor = {Luca Oneto and Nicolò Navarin and Alessandro Sperduti and Davide Anguita},
url = {https://link.springer.com/chapter/10.1007/978-3-030-43883-8_5},
doi = {10.1007/978-3-030-43883-8_5},
isbn = {978-3-030-43883-8},
year = {2020},
date = {2020-04-04},
booktitle = {Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019)},
volume = {896},
pages = {99-127},
publisher = {Springer International Publishing},
series = {Studies in Computational Intelligence Series},
abstract = {We introduce an overview of methods for learning in structured domains covering foundational works developed within the last twenty years to deal with a whole range of complex data representations, including hierarchical structures, graphs and networks, and giving special attention to recent deep learning models for graphs. While we provide a general introduction to the field, we explicitly focus on the neural network paradigm showing how, across the years, these models have been extended to the adaptive processing of incrementally more complex classes of structured data. The ultimate aim is to show how to cope with the fundamental issue of learning adaptive representations for samples with varying size and topology.},
keywords = {},
pubstate = {published},
tppubtype = {inbook}
}
@article{neucompEsann19,
title = {Edge-based sequential graph generation with recurrent neural networks},
author = {Davide Bacciu and Alessio Micheli and Marco Podda},
url = {https://arxiv.org/abs/2002.00102v1},
year = {2019},
date = {2019-12-31},
journal = {Neurocomputing},
abstract = {Graph generation with Machine Learning is an open problem with applications in various research fields. In this work, we propose to cast the generative process of a graph into a sequential one, relying on a node ordering procedure. We use this sequential process to design a novel generative model composed of two recurrent neural networks that learn to predict the edges of graphs: the first network generates one endpoint of each edge, while the second network generates the other endpoint conditioned on the state of the first. We test our approach extensively on five different datasets, comparing with two well-known baselines coming from graph literature, and two recurrent approaches, one of which holds state of the art performances. Evaluation is conducted considering quantitative and qualitative characteristics of the generated samples. Results show that our approach is able to yield novel and unique graphs originating from very different distributions, while retaining structural properties very similar to those in the training sample. Under the proposed evaluation framework, our approach is able to reach performances comparable to the current state of the art on the graph generation task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{aiia2019,
title = {A non-negative factorization approach to node pooling in graph convolutional neural networks},
author = {Davide Bacciu and Luigi {Di Sotto}},
url = {https://arxiv.org/pdf/1909.03287.pdf},
year = {2019},
date = {2019-11-22},
booktitle = {Proceedings of the 18th International Conference of the Italian Association for Artificial Intelligence (AIIA 2019)},
publisher = {Springer-Verlag},
series = {Lecture Notes in Artificial Intelligence},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann19GraphGen,
title = {Graph generation by sequential edge prediction},
author = {Davide Bacciu and Alessio Micheli and Marco Podda},
editor = {Michel Verleysen},
url = {https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2019-107.pdf},
year = {2019},
date = {2019-04-24},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'19)},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{ssci2018,
title = {Text Summarization as Tree Transduction by Top-Down TreeLSTM},
author = {Davide Bacciu and Antonio Bruno},
url = {https://arxiv.org/abs/1809.09096},
doi = {10.1109/SSCI.2018.8628873},
year = {2018},
date = {2018-11-18},
urldate = {2018-11-18},
booktitle = {Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI'18)},
pages = {1411-1418},
publisher = {IEEE},
abstract = {Extractive compression is a challenging natural language processing problem. This work contributes by formulating neural extractive compression as a parse tree transduction problem, rather than a sequence transduction task. Motivated by this, we introduce a deep neural model for learning structure-to-substructure tree transductions by extending the standard Long Short-Term Memory to account for parent-child relationships in the structural recursion. The proposed model achieves state-of-the-art performance on sentence compression benchmarks, both in terms of accuracy and compression rate.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{icml2018,
title = {Contextual Graph Markov Model: A Deep and Generative Approach to Graph Processing},
author = {Davide Bacciu and Federico Errica and Alessio Micheli},
url = {https://arxiv.org/abs/1805.10636},
year = {2018},
date = {2018-07-11},
urldate = {2018-07-11},
booktitle = {Proceedings of the 35th International Conference on Machine Learning (ICML 2018)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}