Carta, Antonio; Cossu, Andrea; Lomonaco, Vincenzo; Bacciu, Davide; Weijer, Joost Projected Latent Distillation for Data-Agnostic Consolidation in distributed continual learning Journal Article In: Neurocomputing, vol. 598, pp. 127935, 2024, ISSN: 0925-2312.
Cossu, Andrea; Spinnato, Francesco; Guidotti, Riccardo; Bacciu, Davide Drifting explanations in continual learning Journal Article In: Neurocomputing, vol. 597, pp. 127960, 2024, ISSN: 0925-2312.
Gravina, Alessio; Bacciu, Davide Deep Learning for Dynamic Graphs: Models and Benchmarks Journal Article In: IEEE Transactions on Neural Networks and Learning Systems, pp. 1-14, 2024.
Zhang, Kun; Shpitser, Ilya; Magliacane, Sara; Bacciu, Davide; Wu, Fei; Zhang, Changshui; Spirtes, Peter IEEE Transactions on Neural Networks and Learning Systems Special Issue on Causal Discovery and Causality-Inspired Machine Learning Journal Article In: IEEE Transactions on Neural Networks and Learning Systems, vol. 35, no. 4, pp. 4899-4901, 2024.
Cossu, Andrea; Carta, Antonio; Passaro, Lucia; Lomonaco, Vincenzo; Tuytelaars, Tinne; Bacciu, Davide Continual pre-training mitigates forgetting in language and vision Journal Article In: Neural Networks, vol. 179, pp. 106492, 2024, ISSN: 0893-6080.
Lepri, Marco; Bacciu, Davide; Santina, Cosimo Della Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems Journal Article In: IEEE Control Systems Letters, 2023.
Errica, Federico; Bacciu, Davide; Micheli, Alessio PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs Journal Article In: Journal of Open Source Software, vol. 8, no. 90, pp. 5713, 2023.
Bacciu, Davide; Errica, Federico; Gravina, Alessio; Madeddu, Lorenzo; Podda, Marco; Stilo, Giovanni Deep Graph Networks for Drug Repurposing with Multi-Protein Targets Journal Article In: IEEE Transactions on Emerging Topics in Computing, 2023.
Lanciano, Giacomo; Galli, Filippo; Cucinotta, Tommaso; Bacciu, Davide; Passarella, Andrea Extending OpenStack Monasca for Predictive Elasticity Control Journal Article In: Big Data Mining and Analytics, 2023.
Caro, Valerio De; Gallicchio, Claudio; Bacciu, Davide Continual adaptation of federated reservoirs in pervasive environments Journal Article In: Neurocomputing, pp. 126638, 2023, ISSN: 0925-2312.
Lanciano, Giacomo; Andreoli, Remo; Cucinotta, Tommaso; Bacciu, Davide; Passarella, Andrea A 2-phase Strategy For Intelligent Cloud Operations Journal Article In: IEEE Access, pp. 1-1, 2023.
Dukic, Haris; Mokarizadeh, Shahab; Deligiorgis, Georgios; Sepe, Pierpaolo; Bacciu, Davide; Trincavelli, Marco Inductive-Transductive Learning for Very Sparse Fashion Graphs Journal Article In: Neurocomputing, 2022, ISSN: 0925-2312.
Sattar, Asma; Bacciu, Davide Graph Neural Network for Context-Aware Recommendation Journal Article In: Neural Processing Letters, 2022.
Ferrari, Elisa; Gargani, Luna; Barbieri, Greta; Ghiadoni, Lorenzo; Faita, Francesco; Bacciu, Davide A causal learning framework for the analysis and interpretation of COVID-19 clinical data Journal Article In: Plos One, vol. 17, no. 5, 2022.
Bacciu, Davide; Morelli, Davide; Pandelea, Vlad Modeling Mood Polarity and Declaration Occurrence by Neural Temporal Point Processes Journal Article In: IEEE Transactions on Neural Networks and Learning Systems, pp. 1-8, 2022.
Bacciu, Davide; Numeroso, Danilo Explaining Deep Graph Networks via Input Perturbation Journal Article In: IEEE Transactions on Neural Networks and Learning Systems, 2022.
Collodi, Lorenzo; Bacciu, Davide; Bianchi, Matteo; Averta, Giuseppe Learning with few examples the semantic description of novel human-inspired grasp strategies from RGB data Journal Article In: IEEE Robotics and Automation Letters, pp. 2573-2580, 2022.
Gravina, Alessio; Wilson, Jennifer L.; Bacciu, Davide; Grimes, Kevin J.; Priami, Corrado Controlling astrocyte-mediated synaptic pruning signals for schizophrenia drug repurposing with Deep Graph Networks Journal Article In: Plos Computational Biology, vol. 18, no. 5, 2022.
Castellana, Daniele; Bacciu, Davide A Tensor Framework for Learning in Structured Domains Journal Article In: Neurocomputing, vol. 470, pp. 405-426, 2022.
Carta, Antonio; Cossu, Andrea; Errica, Federico; Bacciu, Davide Catastrophic Forgetting in Deep Graph Networks: a Graph Classification benchmark Journal Article In: Frontiers in Artificial Intelligence, 2022.
Cossu, Andrea; Graffieti, Gabriele; Pellegrini, Lorenzo; Maltoni, Davide; Bacciu, Davide; Carta, Antonio; Lomonaco, Vincenzo Is Class-Incremental Enough for Continual Learning? Journal Article In: Frontiers in Artificial Intelligence, vol. 5, 2022, ISSN: 2624-8212.
Atzeni, Daniele; Bacciu, Davide; Mazzei, Daniele; Prencipe, Giuseppe A Systematic Review of Wi-Fi and Machine Learning Integration with Topic Modeling Techniques Journal Article In: Sensors, vol. 22, no. 13, 2022, ISSN: 1424-8220.
Cossu, Andrea; Carta, Antonio; Lomonaco, Vincenzo; Bacciu, Davide Continual Learning for Recurrent Neural Networks: an Empirical Evaluation Journal Article In: Neural Networks, vol. 143, pp. 607-627, 2021.
Carta, Antonio; Sperduti, Alessandro; Bacciu, Davide Encoding-based Memory for Recurrent Neural Networks Journal Article In: Neurocomputing, vol. 456, pp. 407-420, 2021.
Averta, Giuseppe; Barontini, Federica; Valdambrini, Irene; Cheli, Paolo; Bacciu, Davide; Bianchi, Matteo Learning to Prevent Grasp Failure with Soft Hands: From Online Prediction to Dual-Arm Grasp Recovery Journal Article In: Advanced Intelligent Systems, 2021.
Bacciu, Davide; Conte, Alessio; Grossi, Roberto; Landolfi, Francesco; Marino, Andrea K-Plex Cover Pooling for Graph Neural Networks Journal Article In: Data Mining and Knowledge Discovery, 2021, (Accepted also as paper to the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2021)).
Resta, Michele; Monreale, Anna; Bacciu, Davide Occlusion-based Explanations in Deep Recurrent Models for Biomedical Signals Journal Article In: Entropy, vol. 23, no. 8, pp. 1064, 2021, (Special issue on Representation Learning).
Errica, Federico; Giulini, Marco; Bacciu, Davide; Menichetti, Roberto; Micheli, Alessio; Potestio, Raffaello A deep graph network-enhanced sampling approach to efficiently explore the space of reduced representations of proteins Journal Article In: Frontiers in Molecular Biosciences, vol. 8, pp. 136, 2021.
Bontempi, Gianluca; Chavarriaga, Ricardo; Canck, Hans De; Girardi, Emanuela; Hoos, Holger; Kilbane-Dawe, Iarla; Ball, Tonio; Nowé, Ann; Sousa, Jose; Bacciu, Davide; Aldinucci, Marco; Domenico, Manlio De; Saffiotti, Alessandro; Maratea, Marco The CLAIRE COVID-19 initiative: approach, experiences and recommendations Journal Article In: Ethics and Information Technology, 2021.
Valenti, Andrea; Barsotti, Michele; Bacciu, Davide; Ascari, Luca A Deep Classifier for Upper-Limbs Motor Anticipation Tasks in an Online BCI Setting Journal Article In: Bioengineering, 2021.
Bacciu, Davide; Bertoncini, Gioele; Morelli, Davide Topographic mapping for quality inspection and intelligent filtering of smart-bracelet data Journal Article In: Neural Computing and Applications, 2021.
Crecchi, Francesco; Melis, Marco; Sotgiu, Angelo; Bacciu, Davide; Biggio, Battista FADER: Fast Adversarial Example Rejection Journal Article In: Neurocomputing, 2021, ISSN: 0925-2312.
Bacciu, Davide; Errica, Federico; Micheli, Alessio; Podda, Marco A Gentle Introduction to Deep Learning for Graphs Journal Article In: Neural Networks, vol. 129, pp. 203-221, 2020.
Bacciu, Davide; Errica, Federico; Micheli, Alessio Probabilistic Learning on Graphs via Contextual Architectures Journal Article In: Journal of Machine Learning Research, vol. 21, no. 134, pp. 1-39, 2020.
Ferrari, Elisa; Retico, Alessandra; Bacciu, Davide Measuring the effects of confounders in medical supervised classification problems: the Confounding Index (CI) Journal Article In: Artificial Intelligence in Medicine, vol. 103, 2020.
Bacciu, Davide; Micheli, Alessio; Podda, Marco Edge-based sequential graph generation with recurrent neural networks Journal Article In: Neurocomputing, 2019.
Davide, Bacciu; Maurizio, Di Rocco; Mauro, Dragone; Claudio, Gallicchio; Alessio, Micheli; Alessandro, Saffiotti An Ambient Intelligence Approach for Learning in Smart Robotic Environments Journal Article In: Computational Intelligence, 2019, (Early View (Online Version of Record before inclusion in an issue)).
Davide, Bacciu; Daniele, Castellana Bayesian Mixtures of Hidden Tree Markov Models for Structured Data Clustering Journal Article In: Neurocomputing, vol. 342, pp. 49-59, 2019, ISSN: 0925-2312.
Bacciu, Davide; Crecchi, Francesco Augmenting Recurrent Neural Networks Resilience by Dropout Journal Article In: IEEE Transactions on Neural Networks and Learning Systems, 2019.
Cosimo, Della Santina; Visar, Arapi; Giuseppe, Averta; Francesca, Damiani; Gaia, Fiore; Alessandro, Settimi; Giuseppe, Catalano Manuel; Davide, Bacciu; Antonio, Bicchi; Matteo, Bianchi Learning from humans how to grasp: a data-driven architecture for autonomous grasping with anthropomorphic soft hands Journal Article In: IEEE Robotics and Automation Letters, pp. 1-8, 2019, ISSN: 2377-3766, (Also accepted for presentation at ICRA 2019).
Arapi, Visar; Santina, Cosimo Della; Bacciu, Davide; Bianchi, Matteo; Bicchi, Antonio DeepDynamicHand: A deep neural architecture for labeling hand manipulation strategies in video sources exploiting temporal information Journal Article In: Frontiers in Neurorobotics, vol. 12, pp. 86, 2018.
Marco, Podda; Davide, Bacciu; Alessio, Micheli; Roberto, Bellu; Giulia, Placidi; Luigi, Gagliardi A machine learning approach to estimating preterm infants survival: development of the Preterm Infants Survival Assessment (PISA) predictor Journal Article In: Nature Scientific Reports, vol. 8, 2018.
Davide, Bacciu; Michele, Colombo; Davide, Morelli; David, Plans Randomized neural networks for preference learning with physiological data Journal Article In: Neurocomputing, vol. 298, pp. 9-20, 2018.
Davide, Bacciu; Alessio, Micheli; Alessandro, Sperduti Generative Kernels for Tree-Structured Data Journal Article In: Neural Networks and Learning Systems, IEEE Transactions on, 2018, ISSN: 2162-2388.
Davide, Bacciu; Stefano, Chessa; Claudio, Gallicchio; Alessio, Micheli; Luca, Pedrelli; Erina, Ferro; Luigi, Fortunati; Davide, La Rosa; Filippo, Palumbo; Federico, Vozzi; Oberdan, Parodi A Learning System for Automatic Berg Balance Scale Score Estimation Journal Article In: Engineering Applications of Artificial Intelligence, vol. 66, pp. 60-74, 2017.
Filippo, Palumbo; Davide, La Rosa; Erina, Ferro; Davide, Bacciu; Claudio, Gallicchio; Alessio, Micheli; Stefano, Chessa; Federico, Vozzi; Oberdan, Parodi Reliability and human factors in Ambient Assisted Living environments: The DOREMI case study Journal Article In: Journal of Reliable Intelligent Environments, vol. 3, no. 3, pp. 139–157, 2017, ISSN: 2199-4668.
Davide, Bacciu; Antonio, Carta; Stefania, Gnesi; Laura, Semini An Experience in using Machine Learning for Short-term Predictions in Smart Transportation Systems Journal Article In: Journal of Logical and Algebraic Methods in Programming, vol. 87, pp. 52-66, 2017, ISSN: 2352-2208.
Davide, Bacciu Unsupervised feature selection for sensor time-series in pervasive computing applications Journal Article In: Neural Computing and Applications, vol. 27, no. 5, pp. 1077-1091, 2016, ISSN: 1433-3058.
Giuseppe, Amato; Davide, Bacciu; Mathias, Broxvall; Stefano, Chessa; Sonya, Coleman; Maurizio, Di Rocco; Mauro, Dragone; Claudio, Gallicchio; Claudio, Gennaro; Hector, Lozano; Martin, McGinnity T; Alessio, Micheli; AK, Ray; Arantxa, Renteria; Alessandro, Saffiotti; David, Swords; Claudio, Vairo; Philip, Vance Robotic Ubiquitous Cognitive Ecology for Smart Homes Journal Article In: Journal of Intelligent & Robotic Systems, vol. 80, no. 1, pp. 57-81, 2015, ISSN: 0921-0296.
Mauro, Dragone; Giuseppe, Amato; Davide, Bacciu; Stefano, Chessa; Sonya, Coleman; Maurizio, Di Rocco; Claudio, Gallicchio; Claudio, Gennaro; Hector, Lozano; Liam, Maguire; Martin, McGinnity; Alessio, Micheli; M.P., O'Hare Gregory; Arantxa, Renteria; Alessandro, Saffiotti; Claudio, Vairo; Philip, Vance A Cognitive Robotic Ecology Approach to Self-configuring and Evolving AAL Systems Journal Article In: Engineering Applications of Artificial Intelligence, vol. 45, no. C, pp. 269–280, 2015, ISSN: 0952-1976.
2024
@article{CARTA2024127935,
title = {Projected Latent Distillation for Data-Agnostic Consolidation in distributed continual learning},
author = {Antonio Carta and Andrea Cossu and Vincenzo Lomonaco and Davide Bacciu and Joost Weijer},
url = {https://www.sciencedirect.com/science/article/pii/S0925231224007069},
doi = {10.1016/j.neucom.2024.127935},
issn = {0925-2312},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {Neurocomputing},
volume = {598},
pages = {127935},
abstract = {In continual learning applications on-the-edge multiple self-centered devices (SCD) learn different local tasks independently, with each SCD only optimizing its own task. Can we achieve (almost) zero-cost collaboration between different devices? We formalize this problem as a Distributed Continual Learning (DCL) scenario, where SCDs greedily adapt to their own local tasks and a separate continual learning (CL) model perform a sparse and asynchronous consolidation step that combines the SCD models sequentially into a single multi-task model without using the original data. Unfortunately, current CL methods are not directly applicable to this scenario. We propose Data-Agnostic Consolidation (DAC), a novel double knowledge distillation method which performs distillation in the latent space via a novel Projected Latent Distillation loss. Experimental results show that DAC enables forward transfer between SCDs and reaches state-of-the-art accuracy on Split CIFAR100, CORe50 and Split TinyImageNet, both in single device and distributed CL scenarios. Somewhat surprisingly, a single out-of-distribution image is sufficient as the only source of data for DAC.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
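The abstract above describes distillation performed in latent space through a learned projector, using arbitrary data as the transfer set. The snippet below is a minimal sketch of that general idea only; the encoder sizes, projector, optimiser and random transfer batch are illustrative assumptions, not the DAC implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch: match a consolidated (student) encoder to a frozen task-specific
# (teacher) encoder by aligning latent activations through a learned linear
# projector. All shapes and the random transfer data are assumptions.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128))   # frozen device model
student = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256))   # consolidated model
projector = nn.Linear(256, 128)  # maps student latents into the teacher's latent space

for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(list(student.parameters()) + list(projector.parameters()), lr=1e-3)
transfer_batch = torch.rand(16, 3, 32, 32)  # data-agnostic: any images can serve as input

for step in range(100):
    with torch.no_grad():
        z_teacher = teacher(transfer_batch)
    z_student = projector(student(transfer_batch))
    loss = F.mse_loss(z_student, z_teacher)   # distillation in latent space
    opt.zero_grad()
    loss.backward()
    opt.step()
```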
@article{COSSU2024127960,
title = {Drifting explanations in continual learning},
author = {Andrea Cossu and Francesco Spinnato and Riccardo Guidotti and Davide Bacciu},
url = {https://www.sciencedirect.com/science/article/pii/S0925231224007318},
doi = {10.1016/j.neucom.2024.127960},
issn = {0925-2312},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {Neurocomputing},
volume = {597},
pages = {127960},
abstract = {Continual Learning (CL) trains models on streams of data, with the aim of learning new information without forgetting previous knowledge. However, many of these models lack interpretability, making it difficult to understand or explain how they make decisions. This lack of interpretability becomes even more challenging given the non-stationary nature of the data streams in CL. Furthermore, CL strategies aimed at mitigating forgetting directly impact the learned representations. We study the behavior of different explanation methods in CL and propose CLEX (ContinuaL EXplanations), an evaluation protocol to robustly assess the change of explanations in Class-Incremental scenarios, where forgetting is pronounced. We observed that models with similar predictive accuracy do not generate similar explanations. Replay-based strategies, well-known to be some of the most effective ones in class-incremental scenarios, are able to generate explanations that are aligned to the ones of a model trained offline. On the contrary, naive fine-tuning often results in degenerate explanations that drift from the ones of an offline model. Finally, we discovered that even replay strategies do not always operate at best when applied to fully-trained recurrent models. Instead, randomized recurrent models (leveraging on an untrained recurrent component) clearly reduce the drift of the explanations. This discrepancy between fully-trained and randomized recurrent models, previously known only in the context of their predictive continual performance, is more general, including also continual explanations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{10490120,
title = {Deep Learning for Dynamic Graphs: Models and Benchmarks},
author = {Alessio Gravina and Davide Bacciu},
doi = {10.1109/TNNLS.2024.3379735},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
pages = {1-14},
abstract = {Recent progress in research on deep graph networks (DGNs) has led to a maturation of the domain of learning on graphs. Despite the growth of this research field, there are still important challenges that are yet unsolved. Specifically, there is an urge of making DGNs suitable for predictive tasks on real-world systems of interconnected entities, which evolve over time. With the aim of fostering research in the domain of dynamic graphs, first, we survey recent advantages in learning both temporal and spatial information, providing a comprehensive overview of the current state-of-the-art in the domain of representation learning for dynamic graphs. Second, we conduct a fair performance comparison among the most popular proposed approaches on node-and edge-level tasks, leveraging rigorous model selection and assessment for all the methods, thus establishing a sound baseline for evaluating new architectures and approaches.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{10492646,
title = {IEEE Transactions on Neural Networks and Learning Systems Special Issue on Causal Discovery and Causality-Inspired Machine Learning},
author = {Kun Zhang and Ilya Shpitser and Sara Magliacane and Davide Bacciu and Fei Wu and Changshui Zhang and Peter Spirtes},
doi = {10.1109/TNNLS.2024.3365968},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
volume = {35},
number = {4},
pages = {4899-4901},
abstract = {Causality is a fundamental notion in science and engineering. It has attracted much interest across research communities in statistics, machine learning (ML), healthcare, and artificial intelligence (AI), and is becoming increasingly recognized as a vital research area. One of the fundamental problems in causality is how to find the causal structure or the underlying causal model. Accordingly, one focus of this Special Issue is on causal discovery , i.e., how can we discover causal structure over a set of variables from observational data with automated procedures? Besides learning causality, another focus is on using causality to help understand and advance ML, that is, causality-inspired ML.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{COSSU2024106492,
title = {Continual pre-training mitigates forgetting in language and vision},
author = {Andrea Cossu and Antonio Carta and Lucia Passaro and Vincenzo Lomonaco and Tinne Tuytelaars and Davide Bacciu},
url = {https://www.sciencedirect.com/science/article/pii/S0893608024004167},
doi = {10.1016/j.neunet.2024.106492},
issn = {0893-6080},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {Neural Networks},
volume = {179},
pages = {106492},
abstract = {Pre-trained models are commonly used in Continual Learning to initialize the model before training on the stream of non-stationary data. However, pre-training is rarely applied during Continual Learning. We investigate the characteristics of the Continual Pre-Training scenario, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks. We introduce an evaluation protocol for Continual Pre-Training which monitors forgetting against a Forgetting Control dataset not present in the continual stream. We disentangle the impact on forgetting of 3 main factors: the input modality (NLP, Vision), the architecture type (Transformer, ResNet) and the pre-training protocol (supervised, self-supervised). Moreover, we propose a Sample-Efficient Pre-training method (SEP) that speeds up the pre-training phase. We show that the pre-training protocol is the most important factor accounting for forgetting. Surprisingly, we discovered that self-supervised continual pre-training in both NLP and Vision is sufficient to mitigate forgetting without the use of any Continual Learning strategy. Other factors, like model depth, input modality and architecture type are not as crucial.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2023
@article{lepri2023neural,
title = {Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems},
author = {Marco Lepri and Davide Bacciu and Cosimo Della Santina},
year = {2023},
date = {2023-12-21},
urldate = {2023-01-01},
journal = {IEEE Control Systems Letters},
publisher = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{errica2023pydgn,
title = {PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs},
author = {Federico Errica and Davide Bacciu and Alessio Micheli},
year = {2023},
date = {2023-10-31},
urldate = {2023-01-01},
journal = {Journal of Open Source Software},
volume = {8},
number = {90},
pages = {5713},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Bacciu2023b,
title = {Deep Graph Networks for Drug Repurposing with Multi-Protein Targets},
author = {Davide Bacciu and Federico Errica and Alessio Gravina and Lorenzo Madeddu and Marco Podda and Giovanni Stilo},
doi = {10.1109/TETC.2023.3238963},
year = {2023},
date = {2023-02-01},
urldate = {2023-02-01},
journal = {IEEE Transactions on Emerging Topics in Computing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Lanciano2023extending,
title = {Extending OpenStack Monasca for Predictive Elasticity Control},
author = {Giacomo Lanciano and Filippo Galli and Tommaso Cucinotta and Davide Bacciu and Andrea Passarella},
doi = {10.26599/BDMA.2023.9020014},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {Big Data Mining and Analytics},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{DECARO2023126638,
title = {Continual adaptation of federated reservoirs in pervasive environments},
author = {Valerio De Caro and Claudio Gallicchio and Davide Bacciu},
url = {https://www.sciencedirect.com/science/article/pii/S0925231223007610},
doi = {10.1016/j.neucom.2023.126638},
issn = {0925-2312},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {Neurocomputing},
pages = {126638},
abstract = {When performing learning tasks in pervasive environments, the main challenge arises from the need of combining federated and continual settings. The former comes from the massive distribution of devices with privacy-regulated data. The latter is required by the low resources of the participating devices, which may retain data for short periods of time. In this paper, we propose a setup for learning with Echo State Networks (ESNs) in pervasive environments. Our proposal focuses on the use of Intrinsic Plasticity (IP), a gradient-based method for adapting the reservoir’s non-linearity. First, we extend the objective function of IP to include the uncertainty arising from the distribution of the data over space and time. Then, we propose Federated Intrinsic Plasticity (FedIP), which is intended for client–server federated topologies with stationary data, and adapts the learning scheme provided by Federated Averaging (FedAvg) to include the learning rule of IP. Finally, we further extend this algorithm for learning to Federated Continual Intrinsic Plasticity (FedCLIP) to equip clients with CL strategies for dealing with continuous data streams. We evaluate our approach on an incremental setup built upon real-world datasets from human monitoring, where we tune the complexity of the scenario in terms of the distribution of the data over space and time. Results show that both our algorithms improve the representation capabilities and the performance of the ESN, while being robust to catastrophic forgetting.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
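To make the federated part of the abstract above concrete, the sketch below shows a FedAvg-style server round over per-client intrinsic-plasticity parameters (per-neuron gain and bias of the reservoir non-linearity). The local update shown is a placeholder and the weighting scheme is an assumption; it is not the FedIP/FedCLIP algorithm.

```python
import numpy as np

N_UNITS, N_CLIENTS = 100, 5
rng = np.random.default_rng(0)

def local_ip_update(gain, bias):
    """Placeholder for client-side IP adaptation on the local data stream."""
    return (gain + 0.01 * rng.standard_normal(N_UNITS),
            bias + 0.01 * rng.standard_normal(N_UNITS))

gain = np.ones(N_UNITS)   # server-side reservoir gains
bias = np.zeros(N_UNITS)  # server-side reservoir biases

for round_ in range(10):
    client_params, client_sizes = [], []
    for c in range(N_CLIENTS):
        client_params.append(local_ip_update(gain, bias))
        client_sizes.append(rng.integers(50, 200))        # local samples this round
    w = np.array(client_sizes, dtype=float)
    w /= w.sum()
    gain = sum(wi * g for wi, (g, _) in zip(w, client_params))  # FedAvg aggregation
    bias = sum(wi * b for wi, (_, b) in zip(w, client_params))
```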
@article{10239346,
title = {A 2-phase Strategy For Intelligent Cloud Operations},
author = {Giacomo Lanciano and Remo Andreoli and Tommaso Cucinotta and Davide Bacciu and Andrea Passarella},
doi = {10.1109/ACCESS.2023.3312218},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {IEEE Access},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2022
@article{DUKIC2022,
title = {Inductive-Transductive Learning for Very Sparse Fashion Graphs},
author = {Haris Dukic and Shahab Mokarizadeh and Georgios Deligiorgis and Pierpaolo Sepe and Davide Bacciu and Marco Trincavelli},
doi = {10.1016/j.neucom.2022.06.050},
issn = {0925-2312},
year = {2022},
date = {2022-06-27},
urldate = {2022-06-27},
journal = {Neurocomputing},
abstract = {The assortments of global retailers are composed of hundreds of thousands of products linked by several types of relationships such as style compatibility, ”bought together”, ”watched together”, etc. Graphs are a natural representation for assortments, where products are nodes and relations are edges. Style compatibility relations are produced manually and do not cover the whole graph uniformly. We propose to use inductive learning to enhance a graph encoding style compatibility of a fashion assortment, leveraging rich node information comprising textual descriptions and visual data. Then, we show how the proposed graph enhancement substantially improves the performance on transductive tasks with a minor impact on graph sparsity. Although demonstrated in a challenging and novel industrial application case, the approach we propose is general enough to be applied to any node-level or edge-level prediction task in very sparse, large-scale networks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{nokey,
title = {Graph Neural Network for Context-Aware Recommendation},
author = {Asma Sattar and Davide Bacciu},
doi = {10.1007/s11063-022-10917-3},
year = {2022},
date = {2022-06-22},
urldate = {2022-06-22},
journal = {Neural Processing Letters},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{DBLP:journals/corr/abs-2105-06998,
title = {A causal learning framework for the analysis and interpretation of COVID-19 clinical data},
author = {Elisa Ferrari and Luna Gargani and Greta Barbieri and Lorenzo Ghiadoni and Francesco Faita and Davide Bacciu},
url = {https://arxiv.org/abs/2105.06998, Arxiv},
doi = {10.1371/journal.pone.0268327},
year = {2022},
date = {2022-05-19},
urldate = {2022-05-19},
journal = {Plos One},
volume = {17},
number = {5},
abstract = {We present a workflow for clinical data analysis that relies on Bayesian Structure Learning (BSL), an unsupervised learning approach, robust to noise and biases, that allows to incorporate prior medical knowledge into the learning process and that provides explainable results in the form of a graph showing the causal connections among the analyzed features. The workflow consists in a multi-step approach that goes from identifying the main causes of patient's outcome through BSL, to the realization of a tool suitable for clinical practice, based on a Binary Decision Tree (BDT), to recognize patients at high-risk with information available already at hospital admission time. We evaluate our approach on a feature-rich COVID-19 dataset, showing that the proposed framework provides a schematic overview of the multi-factorial processes that jointly contribute to the outcome. We discuss how these computational findings are confirmed by current understanding of the COVID-19 pathogenesis. Further, our approach yields to a highly interpretable tool correctly predicting the outcome of 85% of subjects based exclusively on 3 features: age, a previous history of chronic obstructive pulmonary disease and the PaO2/FiO2 ratio at the time of arrival to the hospital. The inclusion of additional information from 4 routine blood tests (Creatinine, Glucose, pO2 and Sodium) increases predictive accuracy to 94.5%. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{pandelea2022,
title = {Modeling Mood Polarity and Declaration Occurrence by Neural Temporal Point Processes},
author = {Davide Bacciu and Davide Morelli and Vlad Pandelea},
doi = {10.1109/TNNLS.2022.3172871},
year = {2022},
date = {2022-05-13},
urldate = {2022-05-13},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
pages = {1-8},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Bacciu2022,
title = {Explaining Deep Graph Networks via Input Perturbation},
author = {Davide Bacciu and Danilo Numeroso},
doi = {10.1109/TNNLS.2022.3165618},
year = {2022},
date = {2022-04-21},
urldate = {2022-04-21},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
abstract = {Deep Graph Networks are a family of machine learning models for structured data which are finding heavy application in life-sciences (drug repurposing, molecular property predictions) and on social network data (recommendation systems). The privacy and safety-critical nature of such domains motivates the need for developing effective explainability methods for this family of models. So far, progress in this field has been challenged by the combinatorial nature and complexity of graph structures. In this respect, we present a novel local explanation framework specifically tailored to graph data and deep graph networks. Our approach leverages reinforcement learning to generate meaningful local perturbations of the input graph, whose prediction we seek an interpretation for. These perturbed data points are obtained by optimising a multi-objective score taking into account similarities both at a structural level as well as at the level of the deep model outputs. By this means, we are able to populate a set of informative neighbouring samples for the query graph, which is then used to fit an interpretable model for the predictive behaviour of the deep network locally to the query graph prediction. We show the effectiveness of the proposed explainer by a qualitative analysis on two chemistry datasets, TOS and ESOL and by quantitative results on a benchmark dataset for explanations, CYCLIQ.
},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Collodi2022,
title = {Learning with few examples the semantic description of novel human-inspired grasp strategies from RGB data},
author = {Lorenzo Collodi and Davide Bacciu and Matteo Bianchi and Giuseppe Averta},
url = {https://www.researchgate.net/profile/Giuseppe-Averta/publication/358006552_Learning_With_Few_Examples_the_Semantic_Description_of_Novel_Human-Inspired_Grasp_Strategies_From_RGB_Data/links/61eae01e8d338833e3857251/Learning-With-Few-Examples-the-Semantic-Description-of-Novel-Human-Inspired-Grasp-Strategies-From-RGB-Data.pdf, Open Version},
doi = {10.1109/LRA.2022.3144520},
year = {2022},
date = {2022-04-04},
urldate = {2022-04-04},
journal = {IEEE Robotics and Automation Letters},
pages = {2573-2580},
publisher = {IEEE},
abstract = {Data-driven approaches and human inspiration are fundamental to endow robotic manipulators with advanced autonomous grasping capabilities. However, to capitalize upon these two pillars, several aspects need to be considered, which include the number of human examples used for training; the need for having in advance all the required information for classification (hardly feasible in unstructured environments); the trade-off between the task performance and the processing cost. In this paper, we propose a RGB-based pipeline that can identify the object to be grasped and guide the actual execution of the grasping primitive selected through a combination of Convolutional and Gated Graph Neural Networks. We consider a set of human-inspired grasp strategies, which are afforded by the geometrical properties of the objects and identified from a human grasping taxonomy, and propose to learn new grasping skills with only a few examples. We test our framework with a manipulator endowed with an under-actuated soft robotic hand. Even though we use only 2D information to minimize the footprint of the network, we achieve 90% of successful identifications of the most appropriate human-inspired grasping strategy over ten different classes, of which three were few-shot learned, outperforming an ideal model trained with all the classes, in sample-scarce conditions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Gravina2022,
title = {Controlling astrocyte-mediated synaptic pruning signals for schizophrenia drug repurposing with Deep Graph Networks},
author = {Alessio Gravina and Jennifer L. Wilson and Davide Bacciu and Kevin J. Grimes and Corrado Priami},
url = {https://www.biorxiv.org/content/10.1101/2021.10.07.463459v1, BioArxiv},
doi = {10.1371/journal.pcbi.1009531},
year = {2022},
date = {2022-04-01},
urldate = {2022-04-01},
journal = {Plos Computational Biology},
volume = {18},
number = {5},
abstract = {Schizophrenia is a debilitating psychiatric disorder, leading to both physical and social morbidity. Worldwide 1% of the population is struggling with the disease, with 100,000 new cases annually only in the United States. Despite its importance, the goal of finding effective treatments for schizophrenia remains a challenging task, and previous work conducted expensive large-scale phenotypic screens. This work investigates the benefits of Machine Learning for graphs to optimize drug phenotypic screens and predict compounds that mitigate abnormal brain reduction induced by excessive glial phagocytic activity in schizophrenia subjects. Given a compound and its concentration as input, we propose a method that predicts a score associated with three possible compound effects, ie reduce, increase, or not influence phagocytosis. We leverage a high-throughput screening to prove experimentally that our method achieves good generalization capabilities. The screening involves 2218 compounds at five different concentrations. Then, we analyze the usability of our approach in a practical setting, ie prioritizing the selection of compounds in the SWEETLEAD library. We provide a list of 64 compounds from the library that have the most potential clinical utility for glial phagocytosis mitigation. Lastly, we propose a novel approach to computationally validate their utility as possible therapies for schizophrenia.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Castellana2021,
title = {A Tensor Framework for Learning in Structured Domains},
author = {Daniele Castellana and Davide Bacciu},
editor = {Kerstin Bunte and Niccolo Navarin and Luca Oneto},
doi = {10.1016/j.neucom.2021.05.110},
year = {2022},
date = {2022-01-22},
urldate = {2022-01-22},
journal = {Neurocomputing},
volume = {470},
pages = {405-426},
abstract = {Learning machines for structured data (e.g., trees) are intrinsically based on their capacity to learn representations by aggregating information from the multi-way relationships emerging from the structure topology. While complex aggregation functions are desirable in this context to increase the expressiveness of the learned representations, the modelling of higher-order interactions among structure constituents is unfeasible, in practice, due to the exponential number of parameters required. Therefore, the common approach is to define models which rely only on first-order interactions among structure constituents.
In this work, we leverage tensors theory to define a framework for learning in structured domains. Such a framework is built on the observation that more expressive models require a tensor parameterisation. This observation is the stepping stone for the application of tensor decompositions in the context of recursive models. From this point of view, the advantage of using tensor decompositions is twofold since it allows limiting the number of model parameters while injecting inductive biases that do not ignore higher-order interactions.
We apply the proposed framework on probabilistic and neural models for structured data, defining different models which leverage tensor decompositions. The experimental validation clearly shows the advantage of these models compared to first-order and full-tensorial models.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
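The abstract above argues that tensor decompositions let a model capture higher-order interactions among structure constituents without a full tensor of parameters. The sketch below illustrates that general trade-off with a rank-R, CP-style bilinear aggregation of two child states; the hidden size, rank and activation are illustrative assumptions, not the paper's models.

```python
import torch
import torch.nn as nn

# A full bilinear (third-order tensor) map over two child states needs H*H*H
# parameters; a rank-R CP-style factorisation uses three H x R matrices while
# still modelling multiplicative interactions between the children.
H, R = 64, 16  # hidden size and decomposition rank (assumed values)

class CPAggregation(nn.Module):
    def __init__(self, hidden, rank):
        super().__init__()
        self.U1 = nn.Linear(hidden, rank, bias=False)
        self.U2 = nn.Linear(hidden, rank, bias=False)
        self.Uo = nn.Linear(rank, hidden, bias=False)

    def forward(self, h1, h2):
        # elementwise product in rank space ~ contracted CP decomposition
        return torch.tanh(self.Uo(self.U1(h1) * self.U2(h2)))

agg = CPAggregation(H, R)
h1, h2 = torch.randn(8, H), torch.randn(8, H)   # states of two children
parent_state = agg(h1, h2)                      # (8, H) aggregated parent state
```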
@article{Carta2022,
title = {Catastrophic Forgetting in Deep Graph Networks: a Graph Classification benchmark},
author = {Antonio Carta and Andrea Cossu and Federico Errica and Davide Bacciu},
doi = {10.3389/frai.2022.824655},
year = {2022},
date = {2022-01-11},
urldate = {2022-01-11},
journal = {Frontiers in Artificial Intelligence},
abstract = { In this work, we study the phenomenon of catastrophic forgetting in the graph representation learning scenario. The primary objective of the analysis is to understand whether classical continual learning techniques for flat and sequential data have a tangible impact on performances when applied to graph data. To do so, we experiment with a structure-agnostic model and a deep graph network in a robust and controlled environment on three different datasets. The benchmark is complemented by an investigation on the effect of structure-preserving regularization techniques on catastrophic forgetting. We find that replay is the most effective strategy in so far, which also benefits the most from the use of regularization. Our findings suggest interesting future research at the intersection of the continual and graph representation learning fields. Finally, we provide researchers with a flexible software framework to reproduce our results and carry out further experiments. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{10.3389/frai.2022.829842,
title = {Is Class-Incremental Enough for Continual Learning?},
author = {Andrea Cossu and Gabriele Graffieti and Lorenzo Pellegrini and Davide Maltoni and Davide Bacciu and Antonio Carta and Vincenzo Lomonaco},
url = {https://www.frontiersin.org/article/10.3389/frai.2022.829842},
doi = {10.3389/frai.2022.829842},
issn = {2624-8212},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Frontiers in Artificial Intelligence},
volume = {5},
abstract = {The ability of a model to learn continually can be empirically assessed in different continual learning scenarios. Each scenario defines the constraints and the opportunities of the learning environment. Here, we challenge the current trend in the continual learning literature to experiment mainly on class-incremental scenarios, where classes present in one experience are never revisited. We posit that an excessive focus on this setting may be limiting for future research on continual learning, since class-incremental scenarios artificially exacerbate catastrophic forgetting, at the expense of other important objectives like forward transfer and computational efficiency. In many real-world environments, in fact, repetition of previously encountered concepts occurs naturally and contributes to softening the disruption of previous knowledge. We advocate for a more in-depth study of alternative continual learning scenarios, in which repetition is integrated by design in the stream of incoming information. Starting from already existing proposals, we describe the advantages such class-incremental with repetition scenarios could offer for a more comprehensive assessment of continual learning models.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{atzeni2022,
title = {A Systematic Review of Wi-Fi and Machine Learning Integration with Topic Modeling Techniques},
author = {Daniele Atzeni and Davide Bacciu and Daniele Mazzei and Giuseppe Prencipe},
url = {https://www.mdpi.com/1424-8220/22/13/4925},
doi = {10.3390/s22134925},
issn = {1424-8220},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Sensors},
volume = {22},
number = {13},
abstract = {Wireless networks have drastically influenced our lifestyle, changing our workplaces and society. Among the variety of wireless technology, Wi-Fi surely plays a leading role, especially in local area networks. The spread of mobiles and tablets, and more recently, the advent of Internet of Things, have resulted in a multitude of Wi-Fi-enabled devices continuously sending data to the Internet and between each other. At the same time, Machine Learning has proven to be one of the most effective and versatile tools for the analysis of fast streaming data. This systematic review aims at studying the interaction between these technologies and how it has developed throughout their lifetimes. We used Scopus, Web of Science, and IEEE Xplore databases to retrieve paper abstracts and leveraged a topic modeling technique, namely, BERTopic, to analyze the resulting document corpus. After these steps, we inspected the obtained clusters and computed statistics to characterize and interpret the topics they refer to. Our results include both the applications of Wi-Fi sensing and the variety of Machine Learning algorithms used to tackle them. We also report how the Wi-Fi advances have affected sensing applications and the choice of the most suitable Machine Learning models.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
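The review above builds its analysis on clustering paper abstracts with BERTopic. A minimal sketch of that kind of pipeline is shown below, assuming the bertopic and scikit-learn packages are installed; the 20 Newsgroups corpus merely stands in for the retrieved Wi-Fi/ML abstracts, which are not available here.

```python
from sklearn.datasets import fetch_20newsgroups
from bertopic import BERTopic

# Placeholder corpus standing in for the retrieved paper abstracts.
docs = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes")).data[:1000]

topic_model = BERTopic(verbose=True)            # embeds, reduces, clusters, labels topics
topics, probs = topic_model.fit_transform(docs)

# One row per discovered topic, with size and representative terms.
print(topic_model.get_topic_info().head())
```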
2021
@article{Cossu2021b,
title = {Continual Learning for Recurrent Neural Networks: an Empirical Evaluation},
author = {Andrea Cossu and Antonio Carta and Vincenzo Lomonaco and Davide Bacciu},
url = {https://arxiv.org/abs/2103.07492, Arxiv},
year = {2021},
date = {2021-12-03},
urldate = {2021-12-03},
journal = {Neural Networks},
volume = {143},
pages = {607-627},
abstract = { Learning continuously during all model lifetime is fundamental to deploy machine learning solutions robust to drifts in the data distribution. Advances in Continual Learning (CL) with recurrent neural networks could pave the way to a large number of applications where incoming data is non stationary, like natural language processing and robotics. However, the existing body of work on the topic is still fragmented, with approaches which are application-specific and whose assessment is based on heterogeneous learning protocols and datasets. In this paper, we organize the literature on CL for sequential data processing by providing a categorization of the contributions and a review of the benchmarks. We propose two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications. We also provide a broad empirical evaluation of CL and Recurrent Neural Networks in class-incremental scenario, by testing their ability to mitigate forgetting with a number of different strategies which are not specific to sequential data processing. Our results highlight the key role played by the sequence length and the importance of a clear specification of the CL scenario. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Carta2021b,
title = {Encoding-based Memory for Recurrent Neural Networks},
author = {Antonio Carta and Alessandro Sperduti and Davide Bacciu},
url = {https://arxiv.org/abs/2001.11771, Arxiv},
doi = {10.1016/j.neucom.2021.04.051},
year = {2021},
date = {2021-10-07},
urldate = {2021-10-07},
journal = {Neurocomputing},
volume = {456},
pages = {407-420},
publisher = {Elsevier},
abstract = {Learning to solve sequential tasks with recurrent models requires the ability to memorize long sequences and to extract task-relevant features from them. In this paper, we study the memorization subtask from the point of view of the design and training of recurrent neural networks. We propose a new model, the Linear Memory Network, which features an encoding-based memorization component built with a linear autoencoder for sequences. We extend the memorization component with a modular memory that encodes the hidden state sequence at different sampling frequencies. Additionally, we provide a specialized training algorithm that initializes the memory to efficiently encode the hidden activations of the network. The experimental results on synthetic and real-world datasets show that specializing the training algorithm to train the memorization component always improves the final performance whenever the memorization of long sequences is necessary to solve the problem. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Averta2021,
title = {Learning to Prevent Grasp Failure with Soft Hands: From Online Prediction to Dual-Arm Grasp Recovery},
author = {Giuseppe Averta and Federica Barontini and Irene Valdambrini and Paolo Cheli and Davide Bacciu and Matteo Bianchi},
doi = {10.1002/aisy.202100146},
year = {2021},
date = {2021-10-07},
urldate = {2021-10-07},
journal = {Advanced Intelligent Systems},
abstract = {Soft hands allow to simplify the grasp planning to achieve a successful grasp, thanks to their intrinsic adaptability. At the same time, their usage poses new challenges, related to the adoption of classical sensing techniques originally developed for rigid end defectors, which provide fundamental information, such as to detect object slippage. Under this regard, model-based approaches for the processing of the gathered information are hard to use, due to the difficulties in modeling hand–object interaction when softness is involved. To overcome these limitations, in this article, we proposed to combine distributed tactile sensing and machine learning (recurrent neural network) to detect sliding conditions for a soft robotic hand mounted on a robotic manipulator, targeting the prediction of the grasp failure event and the direction of sliding. The outcomes of these predictions allow for an online triggering of a compensatory action performed with a second robotic arm–hand system, to prevent the failure. Despite the fact that the network is trained only with spherical and cylindrical objects, we demonstrate high generalization capabilities of our framework, achieving a correct prediction of the failure direction in 75% of cases, and a 85% of successful regrasps, for a selection of 12 objects of common use.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Bacciu2021b,
title = {K-Plex Cover Pooling for Graph Neural Networks},
author = {Davide Bacciu and Alessio Conte and Roberto Grossi and Francesco Landolfi and Andrea Marino},
editor = {Annalisa Appice and Sergio Escalera and José A. Gámez and Heike Trautmann},
url = {https://link.springer.com/article/10.1007/s10618-021-00779-z, Published version},
doi = {10.1007/s10618-021-00779-z},
year = {2021},
date = {2021-09-13},
urldate = {2021-09-13},
journal = {Data Mining and Knowledge Discovery},
abstract = {Graph pooling methods provide mechanisms for structure reduction that are intended to ease the diffusion of context between nodes further in the graph, and that typically leverage community discovery mechanisms or node and edge pruning heuristics. In this paper, we introduce a novel pooling technique which borrows from classical results in graph theory that is non-parametric and generalizes well to graphs of different nature and connectivity patterns. Our pooling method, named KPlexPool, builds on the concepts of graph covers and k-plexes, i.e. pseudo-cliques where each node can miss up to k links. The experimental evaluation on benchmarks on molecular and social graph classification shows that KPlexPool achieves state of the art performances against both parametric and non-parametric pooling methods in the literature, despite generating pooled graphs based solely on topological information.},
note = {Accepted also as paper to the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2021)},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Resta2021,
title = {Occlusion-based Explanations in Deep Recurrent Models for Biomedical Signals},
author = {Michele Resta and Anna Monreale and Davide Bacciu},
editor = {Fabio Aiolli and Mirko Polato},
doi = {10.3390/e23081064},
year = {2021},
date = {2021-09-01},
urldate = {2021-09-01},
journal = {Entropy},
volume = {23},
number = {8},
pages = {1064},
abstract = { The biomedical field is characterized by an ever-increasing production of sequential data, which often come under the form of biosignals capturing the time-evolution of physiological processes, such as blood pressure and brain activity. This has motivated a large body of research dealing with the development of machine learning techniques for the predictive analysis of such biosignals. Unfortunately, in high-stakes decision making, such as clinical diagnosis, the opacity of machine learning models becomes a crucial aspect to be addressed in order to increase the trust and adoption of AI technology. In this paper we propose a model agnostic explanation method, based on occlusion, enabling the learning of the input influence on the model predictions. We specifically target problems involving the predictive analysis of time-series data and the models which are typically used to deal with data of such nature, i.e. recurrent neural networks. Our approach is able to provide two different kinds of explanations: one suitable for technical experts, who need to verify the quality and correctness of machine learning models, and one suited to physicians, who need to understand the rationale underlying the prediction to take aware decisions. A wide experimentation on different physiological data demonstrate the effectiveness of our approach, both in classification and regression tasks. },
note = {Special issue on Representation Learning},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
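The core mechanism described above, measuring how the prediction changes when parts of the input signal are hidden, can be sketched in a few lines. The example below is a generic occlusion routine under stated assumptions (a scalar-output model, fixed-size windows, zero fill value); it is not the paper's method or settings.

```python
import numpy as np

def occlusion_importance(model, signal, window=10, fill_value=0.0):
    """Return one importance score per window of a 1-D `signal`."""
    base = model(signal)
    scores = []
    for start in range(0, len(signal), window):
        occluded = signal.copy()
        occluded[start:start + window] = fill_value       # hide this segment
        scores.append(abs(model(occluded) - base))        # impact on the prediction
    return np.array(scores)

# Toy usage: a "model" that only reacts to the second half of the signal,
# so only windows in that half receive non-zero importance.
toy_model = lambda x: float(x[len(x) // 2:].mean())
sig = np.concatenate([np.zeros(50), np.ones(50)])
print(occlusion_importance(toy_model, sig))
```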
@article{errica_deep_2021,
title = {A deep graph network-enhanced sampling approach to efficiently explore the space of reduced representations of proteins},
author = {Federico Errica and Marco Giulini and Davide Bacciu and Roberto Menichetti and Alessio Micheli and Raffaello Potestio},
doi = {10.3389/fmolb.2021.637396},
year = {2021},
date = {2021-02-28},
urldate = {2021-02-28},
journal = {Frontiers in Molecular Biosciences},
volume = {8},
pages = {136},
publisher = {Frontiers},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Bontempi2021,
title = {The CLAIRE COVID-19 initiative: approach, experiences and recommendations},
author = {Gianluca Bontempi and Ricardo Chavarriaga and Hans De Canck and Emanuela Girardi and Holger Hoos and Iarla Kilbane-Dawe and Tonio Ball and Ann Nowé and Jose Sousa and Davide Bacciu and Marco Aldinucci and Manlio De Domenico and Alessandro Saffiotti and Marco Maratea},
doi = {10.1007/s10676-020-09567-7},
year = {2021},
date = {2021-02-09},
journal = {Ethics and Information Technology},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Valenti2021,
title = {A Deep Classifier for Upper-Limbs Motor Anticipation Tasks in an Online BCI Setting},
author = {Andrea Valenti and Michele Barsotti and Davide Bacciu and Luca Ascari},
url = {https://www.mdpi.com/2306-5354/8/2/21, Open Access },
doi = {10.3390/bioengineering8020021},
year = {2021},
date = {2021-02-05},
urldate = {2021-02-05},
journal = {Bioengineering},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{BacciuNCA2020,
title = {Topographic mapping for quality inspection and intelligent filtering of smart-bracelet data},
author = {Davide Bacciu and Gioele Bertoncini and Davide Morelli},
doi = {10.1007/s00521-020-05600-4},
year = {2021},
date = {2021-01-04},
urldate = {2021-01-04},
journal = {Neural Computing and Applications},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{CRECCHI2021,
title = {FADER: Fast Adversarial Example Rejection},
author = {Francesco Crecchi and Marco Melis and Angelo Sotgiu and Davide Bacciu and Battista Biggio},
url = {https://arxiv.org/abs/2010.09119, Arxiv},
doi = {10.1016/j.neucom.2021.10.082},
issn = {0925-2312},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
journal = {Neurocomputing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2020
@article{gentleGraphs2020,
title = {A Gentle Introduction to Deep Learning for Graphs},
author = {Davide Bacciu and Federico Errica and Alessio Micheli and Marco Podda},
url = {https://arxiv.org/abs/1912.12693, Arxiv
https://doi.org/10.1016/j.neunet.2020.06.006, Original Paper},
doi = {10.1016/j.neunet.2020.06.006},
year = {2020},
date = {2020-09-01},
urldate = {2020-09-01},
journal = {Neural Networks},
volume = {129},
pages = {203-221},
publisher = {Elsevier},
abstract = {The adaptive processing of graph data is a long-standing research topic which has been lately consolidated as a theme of major interest in the deep learning community. The snap increase in the amount and breadth of related research has come at the price of little systematization of knowledge and attention to earlier literature. This work is designed as a tutorial introduction to the field of deep learning for graphs. It favours a consistent and progressive introduction of the main concepts and architectural aspects over an exposition of the most recent literature, for which the reader is referred to available surveys. The paper takes a top-down view to the problem, introducing a generalized formulation of graph representation learning based on a local and iterative approach to structured information processing. It introduces the basic building blocks that can be combined to design novel and effective neural models for graphs. The methodological exposition is complemented by a discussion of interesting research challenges and applications in the field. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
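The tutorial above is built around a local and iterative view of graph representation learning: each node state is updated from its own previous state and an aggregation of its neighbours' states. The sketch below is one minimal instance of that scheme, with an assumed sum aggregation and dense adjacency; it is illustrative only and not the paper's formulation.

```python
import torch
import torch.nn as nn

class SimpleGraphLayer(nn.Module):
    """One round of neighbourhood aggregation: h_v = relu(W1 x_v + W2 sum_u x_u)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (num_nodes, in_dim), adj: (num_nodes, num_nodes) 0/1 adjacency
        neigh_sum = adj @ x                        # aggregate neighbour states
        return torch.relu(self.w_self(x) + self.w_neigh(neigh_sum))

# Toy usage on a 4-node path graph; stacking layers spreads context further.
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float32)
x = torch.randn(4, 8)
layer = SimpleGraphLayer(8, 16)
h = layer(x, adj)                                  # (4, 16) node embeddings
```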
@article{jmlrCGMM20,
title = {Probabilistic Learning on Graphs via Contextual Architectures},
author = {Davide Bacciu and Federico Errica and Alessio Micheli},
editor = {Pushmeet Kohli},
url = {http://jmlr.org/papers/v21/19-470.html, Paper},
year = {2020},
date = {2020-07-27},
urldate = {2020-07-27},
journal = {Journal of Machine Learning Research},
volume = {21},
number = {134},
pages = {1−39},
abstract = {We propose a novel methodology for representation learning on graph-structured data, in which a stack of Bayesian Networks learns different distributions of a vertex's neighborhood. Through an incremental construction policy and layer-wise training, we can build deeper architectures with respect to typical graph convolutional neural networks, with benefits in terms of context spreading between vertices.
First, the model learns from graphs via maximum likelihood estimation without using target labels.
Then, a supervised readout is applied to the learned graph embeddings to deal with graph classification and vertex classification tasks, showing competitive results against neural models for graphs. The computational complexity is linear in the number of edges, facilitating learning on large scale data sets. By studying how depth affects the performances of our model, we discover that a broader context generally improves performances. In turn, this leads to a critical analysis of some benchmarks used in literature.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{aime20Confound,
title = {Measuring the effects of confounders in medical supervised classification problems: the Confounding Index (CI)},
author = {Elisa Ferrari and Alessandra Retico and Davide Bacciu},
url = {https://arxiv.org/abs/1905.08871},
doi = {10.1016/j.artmed.2020.101804},
year = {2020},
date = {2020-03-01},
journal = {Artificial Intelligence in Medicine},
volume = {103},
abstract = {Over the years, there has been growing interest in using Machine Learning techniques for biomedical data processing. When tackling these tasks, one needs to bear in mind that biomedical data depends on a variety of characteristics, such as demographic aspects (age, gender, etc.) or the acquisition technology, which might be unrelated to the target of the analysis. In supervised tasks, failing to match the ground truth targets with respect to such characteristics, called confounders, may lead to very misleading estimates of the predictive performance. Many strategies have been proposed to handle confounders, ranging from data selection to normalization techniques, up to the use of training algorithms for learning with imbalanced data. However, all these solutions require the confounders to be known a priori. To this end, we introduce a novel index that is able to measure the confounding effect of a data attribute in a bias-agnostic way. This index can be used to quantitatively compare the confounding effects of different variables and to inform correction methods such as normalization procedures or ad-hoc-prepared learning algorithms. The effectiveness of this index is validated on both simulated data and real-world neuroimaging data.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2019
@article{neucompEsann19,
title = {Edge-based sequential graph generation with recurrent neural networks},
author = {Davide Bacciu and Alessio Micheli and Marco Podda},
url = {https://arxiv.org/abs/2002.00102v1},
year = {2019},
date = {2019-12-31},
journal = {Neurocomputing},
abstract = { Graph generation with Machine Learning is an open problem with applications in various research fields. In this work, we propose to cast the generative process of a graph into a sequential one, relying on a node ordering procedure. We use this sequential process to design a novel generative model composed of two recurrent neural networks that learn to predict the edges of graphs: the first network generates one endpoint of each edge, while the second network generates the other endpoint conditioned on the state of the first. We test our approach extensively on five different datasets, comparing with two well-known baselines coming from graph literature, and two recurrent approaches, one of which holds state of the art performances. Evaluation is conducted considering quantitative and qualitative characteristics of the generated samples. Results show that our approach is able to yield novel, and unique graphs originating from very different distributions, while retaining structural properties very similar to those in the training sample. Under the proposed evaluation framework, our approach is able to reach performances comparable to the current state of the art on the graph generation task. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{rubicon2019CI,
title = {An Ambient Intelligence Approach for Learning in Smart Robotic Environments},
author = {Bacciu Davide and Di Rocco Maurizio and Dragone Mauro and Gallicchio Claudio and Micheli Alessio and Saffiotti Alessandro},
doi = {10.1111/coin.12233},
year = {2019},
date = {2019-07-31},
journal = {Computational Intelligence},
abstract = {Smart robotic environments combine traditional (ambient) sensing devices and mobile robots. This combination extends the type of applications that can be considered, reduces their complexity, and enhances the individual values of the devices involved by enabling new services that cannot be performed by a single device. In order to reduce the amount of preparation and pre-programming required for their deployment in real world applications, it is important to make these systems self-learning, self-configuring, and self-adapting. The solution presented in this paper is based upon a type of compositional adaptation where (possibly multiple) plans of actions are created through planning and involve the activation of pre-existing capabilities. All the devices in the smart environment participate in a pervasive learning infrastructure, which is exploited to recognize which plans of actions are most suited to the current situation. The system is evaluated in experiments run in a real domestic environment, showing its ability to pro-actively and smoothly adapt to subtle changes in the environment and in the habits and preferences
of their user(s).},
note = {Early View (Online Version of Record before inclusion in an issue)},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{neucomBayesHTMM,
title = {Bayesian Mixtures of Hidden Tree Markov Models for Structured Data Clustering},
author = {Bacciu Davide and Castellana Daniele},
url = {https://doi.org/10.1016/j.neucom.2018.11.091},
doi = {10.1016/j.neucom.2018.11.091},
issn = {0925-2312},
year = {2019},
date = {2019-05-21},
journal = {Neurocomputing},
volume = {342},
pages = {49-59},
abstract = {The paper deals with the problem of unsupervised learning with structured data, proposing a mixture model approach to cluster tree samples. First, we discuss how to use the Switching-Parent Hidden Tree Markov Model, a compositional model for learning tree distributions, to define a finite mixture model where the number of components is fixed by a hyperparameter. Then, we show how to relax such an assumption by introducing a Bayesian non-parametric mixture model where the number of necessary hidden tree components is learned from data. Experimental validation on synthetic and real datasets shows the benefit of mixture models over simple hidden tree models in clustering applications. Further, we provide a characterization of the behaviour of the two mixture models for different choices of their hyperparameters.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{tnnnls_dropin2019,
title = {Augmenting Recurrent Neural Networks Resilience by Dropout},
author = {Davide Bacciu and Francesco Crecchi },
doi = {10.1109/TNNLS.2019.2899744},
year = {2019},
date = {2019-03-31},
urldate = {2019-03-31},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
abstract = {The paper discusses the simple idea that dropout regularization can be used to efficiently induce resiliency to missing inputs at prediction time in a generic neural network. We show how the approach can be effective on tasks where imputation strategies often fail, namely involving recurrent neural networks and scenarios where whole sequences of input observations are missing. The experimental analysis provides an assessment of the accuracy-resiliency tradeoff in multiple recurrent models, including reservoir computing methods, and comprising real-world ambient intelligence and biomedical time series.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{ral2019,
title = {Learning from humans how to grasp: a data-driven architecture for autonomous grasping with anthropomorphic soft hands},
author = {Della Santina Cosimo and Arapi Visar and Averta Giuseppe and Damiani Francesca and Fiore Gaia and Settimi Alessandro and Catalano Manuel Giuseppe and Bacciu Davide and Bicchi Antonio and Bianchi Matteo},
url = {https://ieeexplore.ieee.org/document/8629968},
doi = {10.1109/LRA.2019.2896485},
issn = {2377-3766},
year = {2019},
date = {2019-02-01},
journal = {IEEE Robotics and Automation Letters},
pages = {1-8},
note = {Also accepted for presentation at ICRA 2019},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2018
@article{frontNeurob18,
title = {DeepDynamicHand: A deep neural architecture for labeling hand manipulation strategies in video sources exploiting temporal information },
author = {Visar Arapi and Cosimo Della Santina and Davide Bacciu and Matteo Bianchi and Antonio Bicchi},
url = {https://www.frontiersin.org/articles/10.3389/fnbot.2018.00086/full},
doi = {10.3389/fnbot.2018.00086},
year = {2018},
date = {2018-12-17},
urldate = {2018-12-17},
journal = {Frontiers in Neurorobotics},
volume = {12},
pages = {86},
abstract = {Humans are capable of complex manipulation interactions with the environment, relying on the intrinsic adaptability and compliance of their hands. Recently, soft robotic manipulation has attempted to reproduce such an extraordinary behavior, through the design of deformable yet robust end-effectors. To this end, the investigation of human behavior has become crucial to correctly inform technological developments of robotic hands that can successfully exploit environmental constraints as humans actually do. Among the different tools robotics can leverage to achieve this objective, deep learning has emerged as a promising approach for the study and then the implementation of neuro-scientific observations on the artificial side. However, current approaches tend to neglect the dynamic nature of hand pose recognition problems, limiting the effectiveness of these techniques in identifying sequences of manipulation primitives underpinning action generation, e.g. during purposeful interaction with the environment. In this work, we propose a vision-based supervised Hand Pose Recognition method which, for the first time, takes into account temporal information to identify meaningful sequences of actions in grasping and manipulation tasks. More specifically, we apply Deep Neural Networks to automatically learn features from hand posture images that consist of frames extracted from grasping and manipulation task videos with objects and external environmental constraints. For training purposes, videos are divided into intervals, each associated with a specific action by a human supervisor. The proposed algorithm combines a Convolutional Neural Network to detect the hand within each video frame and a Recurrent Neural Network to predict the hand action in the current frame, while taking into consideration the history of actions performed in the previous frames. Experimental validation has been performed on two datasets of dynamic hand-centric strategies, where subjects regularly interact with objects and the environment. The proposed architecture achieved very good classification accuracy on both datasets, reaching performance up to 94% and outperforming state-of-the-art techniques. The outcomes of this study can be successfully applied to robotics, e.g. for planning and control of soft anthropomorphic manipulators.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{naturescirep2018,
title = {A machine learning approach to estimating preterm infants survival: development of the Preterm Infants Survival Assessment (PISA) predictor},
author = {Podda Marco and Bacciu Davide and Micheli Alessio and Bellu Roberto and Placidi Giulia and Gagliardi Luigi },
url = {https://doi.org/10.1038/s41598-018-31920-6},
doi = {10.1038/s41598-018-31920-6},
year = {2018},
date = {2018-09-13},
urldate = {2018-09-13},
journal = {Nature Scientific Reports},
volume = {8},
abstract = {Estimation of mortality risk of very preterm neonates is carried out in clinical and research settings. We aimed at elaborating a prediction tool using machine learning methods. We developed models on a cohort of 23747 neonates <30 weeks gestational age, or <1501 g birth weight, enrolled in the Italian Neonatal Network in 2008–2014 (development set), using 12 easily collected perinatal variables. We used a cohort from 2015–2016 (N = 5810) as a test set. Among several machine learning methods we chose artificial Neural Networks (NN). The resulting predictor was compared with logistic regression models. In the test cohort, NN had a slightly better discrimination than logistic regression (P < 0.002). The differences were greater in subgroups of neonates (at various gestational age or birth weight intervals, singletons). Using a cutoff of death probability of 0.5, logistic regression misclassified 67/5810 neonates (1.2 percent) more than NN. In conclusion our study – the largest published so far – shows that even in this very simplified scenario, using only limited information available up to 5 minutes after birth, a NN approach had a small but significant advantage over current approaches. The software implementing the predictor is made freely available to the community.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{neurocomp2017,
title = {Randomized neural networks for preference learning with physiological data},
author = {Bacciu Davide and Colombo Michele and Morelli Davide and Plans David},
editor = {Fabio Aiolli and Luca Oneto and Michael Biehl },
url = {https://authors.elsevier.com/a/1Wxbz_L2Otpsb3},
doi = {10.1016/j.neucom.2017.11.070},
year = {2018},
date = {2018-07-12},
journal = {Neurocomputing},
volume = {298},
pages = {9-20},
abstract = {The paper discusses the use of randomized neural networks to learn a complete ordering between samples of heart-rate variability data by relying solely on partial and subject-dependent information concerning pairwise relations between samples. We compare two approaches, i.e. Extreme Learning Machines and Echo State Networks, assessing their effectiveness in exploiting hand-engineered heart-rate variability features versus using raw beat-to-beat sequential data. Additionally, we introduce a weight sharing architecture and a preference learning error function whose performance is compared with a standard architecture realizing pairwise ranking as a binary-classification task. The models are evaluated on real-world data from a mobile application realizing a guided breathing exercise, using a dataset of over 54K exercising sessions. Results show how a randomized neural model processing information in its raw sequential form can outperform its vectorial counterpart, increasing accuracy in predicting the correct sample ordering by about 20%. Further, the experiments highlight the importance of using weight sharing architectures to learn smooth and generalizable complete orders induced by the preference relation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{tnnlsTreeKer17,
title = {Generative Kernels for Tree-Structured Data},
author = {Bacciu Davide and Micheli Alessio and Sperduti Alessandro},
doi = {10.1109/TNNLS.2017.2785292},
issn = {2162-2388 },
year = {2018},
date = {2018-01-15},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
abstract = {The paper presents a family of methods for the design of adaptive kernels for tree-structured data that exploits the summarization properties of hidden states of hidden Markov models for trees. We introduce a compact and discriminative feature space based on the concept of hidden states multisets and we discuss different approaches to estimate such hidden state encoding. We show how it can be used to build an efficient and general tree kernel based on Jaccard similarity. Further, we derive an unsupervised convolutional generative kernel using a topology induced on the Markov states by a tree topographic mapping. The paper provides an extensive empirical assessment on a variety of structured data learning tasks, comparing the predictive accuracy and computational efficiency of state-of-the-art generative, adaptive and syntactical tree kernels. The results show that the proposed generative approach has a good tradeoff between computational complexity and predictive performance, in particular when considering the soft matching introduced by the topographic mapping.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2017
@article{eaai2017,
title = {A Learning System for Automatic Berg Balance Scale Score Estimation},
author = {Bacciu Davide and Chessa Stefano and Gallicchio Claudio and Micheli Alessio and Pedrelli Luca and Ferro Erina and Fortunati Luigi and La Rosa Davide and Palumbo Filippo and Vozzi Federico and Parodi Oberdan},
url = {http://www.sciencedirect.com/science/article/pii/S0952197617302026},
doi = {10.1016/j.engappai.2017.08.018},
year = {2017},
date = {2017-08-24},
urldate = {2017-08-24},
journal = {Engineering Applications of Artificial Intelligence},
volume = {66},
pages = {60-74},
abstract = {The objective of this work is the development of a learning system for the automatic assessment of balance abilities in elderly people. The system is based on estimating the Berg Balance Scale (BBS) score from the stream of sensor data gathered by a Wii Balance Board. The scientific challenge tackled by our investigation is to assess the feasibility of exploiting the richness of the temporal signals gathered by the balance board for inferring the complete BBS score based on data from a single BBS exercise.
The relation between the data collected by the balance board and the BBS score is inferred by neural networks for temporal data, modeled in particular as Echo State Networks within the Reservoir Computing (RC) paradigm, as a result of a comprehensive comparison among different learning models. The proposed system proves able to estimate the complete BBS score directly from temporal data on exercise #10 of the BBS test, with a duration of ≈10 s. Experimental results on real-world data show an absolute error below 4 BBS score points (i.e. below 7% of the whole BBS range), resulting in a favorable trade-off between predictive performance and the user’s required time with respect to previous works in the literature. Results achieved by RC models also compare well with different related learning models.
Overall, the proposed system stands out as an effective tool for an accurate automated assessment of balance abilities in the elderly, being unobtrusive, easy to use and suitable for autonomous usage.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{jrie2017,
title = {Reliability and human factors in Ambient Assisted Living environments: The DOREMI case study},
author = {Palumbo Filippo and La Rosa Davide and Ferro Erina and Bacciu Davide and Gallicchio Claudio and Micheli Alessio and Chessa Stefano and Vozzi Federico and Parodi Oberdan},
doi = {10.1007/s40860-017-0042-1},
issn = {2199-4668},
year = {2017},
date = {2017-06-17},
journal = {Journal of Reliable Intelligent Environments},
volume = {3},
number = {3},
pages = {139–157},
publisher = {Springer},
abstract = {Malnutrition, sedentariness, and cognitive decline in elderly people represent the target areas addressed by the DOREMI project. It aimed at developing a systemic solution for the elderly, able to prolong their functional and cognitive capacity by empowering, stimulating, and unobtrusively monitoring the daily activities according to well-defined “Active Ageing” life-style protocols. Besides the key features of DOREMI in terms of technological and medical protocol solutions, this work focuses on the analysis of the impact of such a solution on the daily life of users and how the users’ behaviour modifies the expected results of the system in a long-term perspective. To this end, we analyse the reliability of the whole system in terms of human factors and their effects on the reliability requirements identified before starting the experimentation in the pilot sites. After giving an overview of the technological solutions we adopted in the project, this paper concentrates on the activities conducted during the two pilot site studies (32 test sites across the UK and Italy), the users’ experience of the entire system, and how human factors influenced its overall reliability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{jlamp2016,
title = {An Experience in using Machine Learning for Short-term Predictions in Smart Transportation Systems},
author = {Bacciu Davide and Carta Antonio and Gnesi Stefania and Semini Laura},
editor = {Alberto Lluch Lafuente and Maurice ter Beek},
doi = {10.1016/j.jlamp.2016.11.002},
issn = {2352-2208},
year = {2017},
date = {2017-01-01},
journal = {Journal of Logical and Algebraic Methods in Programming},
volume = {87},
pages = {52-66},
publisher = {Elsevier},
abstract = {Bike-sharing systems (BSS) are a means of smart transportation with the benefit of a positive impact on urban mobility. To improve the satisfaction of a user of a BSS, it is useful to inform her/him of the status of the stations at run time, and indeed most of the current systems provide this information as the number of bicycles parked in each docking station through services available via the web. However, when the departure station is empty, the user could also be happy to know how the situation will evolve and, in particular, if a bike is going to arrive (and vice versa when the arrival station is full).
To fulfill this expectation, we envisage services able to make a prediction and infer whether a bike currently in use is likely to be returned at the station where she/he is waiting. The goal of this paper is hence to analyze the feasibility of these services. To this end, we put forward the idea of using Machine Learning methodologies, proposing and comparing different solutions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2016
@article{icfNca15,
title = {Unsupervised feature selection for sensor time-series in pervasive computing applications},
author = {Bacciu Davide},
url = {https://pages.di.unipi.it/bacciu/wp-content/uploads/sites/12/2016/04/nca2015.pdf},
doi = {10.1007/s00521-015-1924-x},
issn = {1433-3058},
year = {2016},
date = {2016-07-01},
urldate = {2016-07-01},
journal = {Neural Computing and Applications},
volume = {27},
number = {5},
pages = {1077-1091},
publisher = {Springer London},
abstract = {The paper introduces an efficient feature selection approach for multivariate time-series of heterogeneous sensor data within a pervasive computing scenario. An iterative filtering procedure is devised to reduce information redundancy measured in terms of time-series cross-correlation. The algorithm is capable of identifying nonredundant sensor sources in an unsupervised fashion even in the presence of a large proportion of noisy features. In particular, the proposed feature selection process does not require expert intervention to determine the number of selected features, which is a key advancement with respect to time-series filters in the literature. The characteristics of the proposed algorithm allow enriching learning systems, in pervasive computing applications, with a fully automatized feature selection mechanism which can be triggered and performed at run time during system operation. A comparative experimental analysis on real-world data from three pervasive computing applications is provided, showing that the algorithm addresses major limitations of unsupervised filters in the literature when dealing with sensor time-series. Specifically, an assessment is presented both in terms of reduction of time-series redundancy and in terms of preservation of informative features with respect to associated supervised learning tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2015
@article{bacciuJirs15,
title = {Robotic Ubiquitous Cognitive Ecology for Smart Homes},
author = {Amato Giuseppe and Bacciu Davide and Broxvall Mathias and Chessa Stefano and Coleman Sonya and Di Rocco Maurizio and Dragone Mauro and Gallicchio Claudio and Gennaro Claudio and Lozano Hector and McGinnity T Martin and Micheli Alessio and Ray AK and Renteria Arantxa and Saffiotti Alessandro and Swords David and Vairo Claudio and Vance Philip},
url = {http://dx.doi.org/10.1007/s10846-015-0178-2},
doi = {10.1007/s10846-015-0178-2},
issn = {0921-0296},
year = {2015},
date = {2015-01-01},
journal = {Journal of Intelligent & Robotic Systems},
volume = {80},
number = {1},
pages = {57-81},
publisher = {Springer Netherlands},
abstract = {Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real world applications. The project RUBICON develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON in each of these fronts before describing how the resulting techniques have been integrated and applied to a proof of concept smart home scenario. The resulting system is able to provide useful services and pro-actively assist the users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Dragone:2015:CRE:2827370.2827596,
title = {A Cognitive Robotic Ecology Approach to Self-configuring and Evolving AAL Systems},
author = {Dragone Mauro and Amato Giuseppe and Bacciu Davide and Chessa Stefano and Coleman Sonya and Di Rocco Maurizio and Gallicchio Claudio and Gennaro Claudio and Lozano Hector and Maguire Liam and McGinnity Martin and Micheli Alessio and O'Hare Gregory M.P. and Renteria Arantxa and Saffiotti Alessandro and Vairo Claudio and Vance Philip},
url = {http://dx.doi.org/10.1016/j.engappai.2015.07.004},
doi = {10.1016/j.engappai.2015.07.004},
issn = {0952-1976},
year = {2015},
date = {2015-01-01},
urldate = {2015-01-01},
journal = {Engineering Applications of Artificial Intelligence},
volume = {45},
number = {C},
pages = {269--280},
publisher = {Pergamon Press, Inc.},
address = {Tarrytown, NY, USA},
keywords = {},
pubstate = {published},
tppubtype = {article}
}