Bacciu, Davide; Errica, Federico; Micheli, Alessio: Probabilistic Learning on Graphs via Contextual Architectures. Journal Article. In: Journal of Machine Learning Research, vol. 21, no. 134, pp. 1-39, 2020.
Castellana, Daniele; Bacciu, Davide: Generalising Recursive Neural Models by Tensor Decomposition. Conference. Proceedings of the 2020 IEEE World Congress on Computational Intelligence, 2020.
Cossu, Andrea; Carta, Antonio; Bacciu, Davide: Continual Learning with Gated Incremental Memories for Sequential Data Processing. Conference. Proceedings of the 2020 IEEE World Congress on Computational Intelligence, 2020.
Valenti, Andrea; Carta, Antonio; Bacciu, Davide: Learning a Latent Space of Style-Aware Music Representations by Adversarial Autoencoders. Conference. Proceedings of the 24th European Conference on Artificial Intelligence (ECAI 2020), 2020.
Carta, Antonio; Sperduti, Alessandro; Bacciu, Davide: Incremental training of a recurrent neural network exploiting a multi-scale dynamic memory. Conference. Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2020 (ECML-PKDD 2020), Springer International Publishing, 2020.
Podda, Marco; Bacciu, Davide; Micheli, Alessio: A Deep Generative Model for Fragment-Based Molecule Generation. Conference. Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020), 2020.
Errica, Federico; Podda, Marco; Bacciu, Davide; Micheli, Alessio: A Fair Comparison of Graph Neural Networks for Graph Classification. Conference. Proceedings of the Eighth International Conference on Learning Representations (ICLR 2020), 2020.
Podda, Marco; Micheli, Alessio; Bacciu, Davide; Milazzo, Paolo: Biochemical Pathway Robustness Prediction with Graph Neural Networks. Conference. Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20), 2020.
Errica, Federico; Bacciu, Davide; Micheli, Alessio: Theoretically Expressive and Edge-aware Graph Learning. Conference. Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20), 2020.
Crecchi, Francesco; de Bodt, Cyril; Bacciu, Davide; Verleysen, Michel; Lee, John: Perplexity-free Parametric t-SNE. Conference. Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20), 2020.
Bacciu, Davide; Mandic, Danilo: Tensor Decompositions in Deep Learning. Conference. Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20), 2020.
Castellana, Daniele; Bacciu, Davide: Tensor Decompositions in Recursive Neural Networks for Tree-Structured Data. Conference. Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20), 2020.
Bacciu, Davide; Micheli, Alessio: Deep Learning for Graphs. Book Chapter. In: Oneto, Luca; Navarin, Nicolo; Sperduti, Alessandro; Anguita, Davide (Ed.): Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), vol. 896, pp. 99-127, Springer International Publishing, 2020, ISBN: 978-3-030-43883-8.
Ferrari, Elisa; Retico, Alessandra; Bacciu, Davide: Measuring the effects of confounders in medical supervised classification problems: the Confounding Index (CI). Journal Article. In: Artificial Intelligence in Medicine, vol. 103, 2020.
Bacciu, Davide; Micheli, Alessio; Podda, Marco: Edge-based sequential graph generation with recurrent neural networks. Journal Article. In: Neurocomputing, 2019.
Bacciu, Davide; Carta, Antonio: Sequential Sentence Embeddings for Semantic Similarity. Conference. Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence (SSCI'19), IEEE, 2019.
Bacciu, Davide: Reti neurali e linguaggio. Le insidie nascoste di un'algebra delle parole. Online. In: Tavosanis, Mirko (Ed.): Lingua Italiana - Treccani, 2019, visited: 03.12.2019.
Bacciu, Davide; Di Sotto, Luigi: A non-negative factorization approach to node pooling in graph convolutional neural networks. Conference. Proceedings of the 18th International Conference of the Italian Association for Artificial Intelligence (AIIA 2019), Lecture Notes in Artificial Intelligence, Springer-Verlag, 2019.
Cafagna, Michele; De Mattei, Lorenzo; Bacciu, Davide; Nissim, Malvina: Suitable doesn't mean attractive. Human-based evaluation of automatically generated headlines. Conference. Proceedings of the 6th Italian Conference on Computational Linguistics (CLiC-it 2019), vol. 2481, AI*IA series, CEUR, 2019.
Bacciu, Davide; Carta, Antonio; Sperduti, Alessandro: Linear Memory Networks. Conference. Proceedings of the 28th International Conference on Artificial Neural Networks (ICANN 2019), vol. 11727, Lecture Notes in Computer Science, Springer-Verlag, 2019.
Della Santina, Cosimo; Averta, Giuseppe; Arapi, Visar; Settimi, Alessandro; Catalano, Manuel Giuseppe; Bacciu, Davide; Bicchi, Antonio; Bianchi, Matteo: Autonomous Grasping with SoftHands: Combining Human Inspiration, Deep Learning and Embodied Machine Intelligence. Presentation, 11.09.2019.
Bacciu, Davide; Di Rocco, Maurizio; Dragone, Mauro; Gallicchio, Claudio; Micheli, Alessio; Saffiotti, Alessandro: An Ambient Intelligence Approach for Learning in Smart Robotic Environments. Journal Article. In: Computational Intelligence, 2019 (Early View: online version of record before inclusion in an issue).
Castellana, Daniele; Bacciu, Davide: Bayesian Tensor Factorisation for Bottom-up Hidden Tree Markov Models. Conference. Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN 2019), IEEE, 2019.
Bacciu, Davide; Castellana, Daniele: Bayesian Mixtures of Hidden Tree Markov Models for Structured Data Clustering. Journal Article. In: Neurocomputing, vol. 342, pp. 49-59, 2019, ISSN: 0925-2312.
Crecchi, Francesco; Bacciu, Davide; Biggio, Battista: Detecting Black-box Adversarial Examples through Nonlinear Dimensionality Reduction. Conference. Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'19), i6doc.com, Louvain-la-Neuve, Belgium, 2019.
Bacciu, Davide; Micheli, Alessio; Podda, Marco: Graph generation by sequential edge prediction. Conference. Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'19), i6doc.com, Louvain-la-Neuve, Belgium, 2019.
Bacciu, Davide; Biggio, Battista; Crecchi, Francesco; Lisboa, Paulo J. G.; Martin, José D.; Oneto, Luca; Vellido, Alfredo: Societal Issues in Machine Learning: When Learning from Data is Not Enough. Conference. Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'19), i6doc.com, Louvain-la-Neuve, Belgium, 2019.
Bacciu, Davide; Crecchi, Francesco: Augmenting Recurrent Neural Networks Resilience by Dropout. Journal Article. In: IEEE Transactions on Neural Networks and Learning Systems, 2019.
Della Santina, Cosimo; Arapi, Visar; Averta, Giuseppe; Damiani, Francesca; Fiore, Gaia; Settimi, Alessandro; Catalano, Manuel Giuseppe; Bacciu, Davide; Bicchi, Antonio; Bianchi, Matteo: Learning from humans how to grasp: a data-driven architecture for autonomous grasping with anthropomorphic soft hands. Journal Article. In: IEEE Robotics and Automation Letters, pp. 1-8, 2019, ISSN: 2377-3766 (also accepted for presentation at ICRA 2019).
Bacciu, Davide; Bruno, Antonio: Deep Tree Transductions - A Short Survey. Conference. Proceedings of the 2019 INNS Big Data and Deep Learning conference (INNSBDDL 2019), Recent Advances in Big Data and Deep Learning, Springer International Publishing, 2019.
Arapi, Visar; Della Santina, Cosimo; Bacciu, Davide; Bianchi, Matteo; Bicchi, Antonio: DeepDynamicHand: A deep neural architecture for labeling hand manipulation strategies in video sources exploiting temporal information. Journal Article. In: Frontiers in Neurorobotics, vol. 12, p. 86, 2018.
Bacciu, Davide; Bruno, Antonio: Text Summarization as Tree Transduction by Top-Down TreeLSTM. Conference. Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI'18), IEEE, 2018.
Podda, Marco; Bacciu, Davide; Micheli, Alessio; Bellu, Roberto; Placidi, Giulia; Gagliardi, Luigi: A machine learning approach to estimating preterm infants survival: development of the Preterm Infants Survival Assessment (PISA) predictor. Journal Article. In: Nature Scientific Reports, vol. 8, 2018.
Bacciu, Davide; Castellana, Daniele: Learning Tree Distributions by Hidden Markov Models. Workshop. Proceedings of the FLOC 2018 Workshop on Learning and Automata (LearnAut'18), 2018.
Bacciu, Davide; Colombo, Michele; Morelli, Davide; Plans, David: Randomized neural networks for preference learning with physiological data. Journal Article. In: Neurocomputing, vol. 298, pp. 9-20, 2018.
Bacciu, Davide; Errica, Federico; Micheli, Alessio: Contextual Graph Markov Model: A Deep and Generative Approach to Graph Processing. Conference. Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018.
Bacciu, Davide; Bongiorno, Andrea: Concentric ESN: Assessing the Effect of Modularity in Cycle Reservoirs. Conference. Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN 2018), IEEE, 2018.
Bacciu, Davide; Castellana, Daniele: Mixture of Hidden Markov Models as Tree Encoder. Conference. Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'18), i6doc.com, Louvain-la-Neuve, Belgium, 2018, ISBN: 978-287587047-6.
Bacciu, Davide; Lisboa, Paulo J. G.; Martin, José D.; Stoean, Ruxandra; Vellido, Alfredo: Bioinformatics and medicine in the era of deep learning. Conference. Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'18), i6doc.com, Louvain-la-Neuve, Belgium, 2018, ISBN: 978-287587047-6.
Bacciu, Davide; Micheli, Alessio; Sperduti, Alessandro: Generative Kernels for Tree-Structured Data. Journal Article. In: IEEE Transactions on Neural Networks and Learning Systems, 2018, ISSN: 2162-2388.
Bacciu, Davide: Hidden Tree Markov Networks: Deep and Wide Learning for Structured Data. Conference. Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI'17), IEEE, 2017.
Bacciu, Davide; Chessa, Stefano; Gallicchio, Claudio; Micheli, Alessio: On the Need of Machine Learning as a Service for the Internet of Things. Conference. Proceedings of the International Conference on Internet of Things and Machine Learning (IML 2017), International Conference Proceedings Series (ICPS), ACM, 2017, ISBN: 978-1-4503-5243-7.
Bacciu, Davide; Chessa, Stefano; Gallicchio, Claudio; Micheli, Alessio; Pedrelli, Luca; Ferro, Erina; Fortunati, Luigi; La Rosa, Davide; Palumbo, Filippo; Vozzi, Federico; Parodi, Oberdan: A Learning System for Automatic Berg Balance Scale Score Estimation. Journal Article. In: Engineering Applications of Artificial Intelligence, vol. 66, pp. 60-74, 2017.
Vermesan, Ovidiu; Broring, Arne; Tragos, Elias; Serrano, Martin; Bacciu, Davide; Chessa, Stefano; Gallicchio, Claudio; Micheli, Alessio; Dragone, Mauro; Saffiotti, Alessandro; Simoens, Pieter; Cavallo, Filippo; Bahr, Roy: Book Chapter. In: Vermesan, Ovidiu; Bacquet, Joel (Ed.): Cognitive Hyperconnected Digital Transformation: Internet of Things Intelligence Evolution, Chapter 4, pp. 97-155, River Publishers, 2017, ISBN: 9788793609105.
Palumbo, Filippo; La Rosa, Davide; Ferro, Erina; Bacciu, Davide; Gallicchio, Claudio; Micheli, Alessio; Chessa, Stefano; Vozzi, Federico; Parodi, Oberdan: Reliability and human factors in Ambient Assisted Living environments: The DOREMI case study. Journal Article. In: Journal of Reliable Intelligent Environments, vol. 3, no. 3, pp. 139-157, 2017, ISSN: 2199-4668.
Bacciu, Davide; Crecchi, Francesco; Morelli, Davide: DropIn: Making Neural Networks Robust to Missing Inputs by Dropout. Conference. Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN 2017), IEEE, 2017, ISBN: 978-1-5090-6182-2.
Bacciu, Davide; Colombo, Michele; Morelli, Davide; Plans, David: ELM Preference Learning for Physiological Data. Conference. Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'17), i6doc.com, Louvain-la-Neuve, Belgium, 2017, ISBN: 978-2-875870384.
Bacciu, Davide; Carta, Antonio; Gnesi, Stefania; Semini, Laura: An Experience in using Machine Learning for Short-term Predictions in Smart Transportation Systems. Journal Article. In: Journal of Logical and Algebraic Methods in Programming, vol. 87, pp. 52-66, 2017, ISSN: 2352-2208.
Bacciu, Davide; Chessa, Stefano; Ferro, Erina; Fortunati, Luigi; Gallicchio, Claudio; La Rosa, Davide; Llorente, Miguel; Micheli, Alessio; Palumbo, Filippo; Parodi, Oberdan; Valenti, Andrea; Vozzi, Federico: Detecting socialization events in ageing people: the experience of the DOREMI project. Conference. Proceedings of the IEEE 12th International Conference on Intelligent Environments (IE 2016), IEEE, London, UK, 2016, ISSN: 2472-7571.
Bacciu, Davide: Unsupervised feature selection for sensor time-series in pervasive computing applications. Journal Article. In: Neural Computing and Applications, vol. 27, no. 5, pp. 1077-1091, 2016, ISSN: 1433-3058.

2020
@article{jmlrCGMM20,
title = {Probabilistic Learning on Graphs via Contextual Architectures},
author = {Davide Bacciu and Federico Errica and Alessio Micheli},
editor = {Pushmeet Kohli},
url = {http://jmlr.org/papers/v21/19-470.html, Paper},
year = {2020},
date = {2020-07-27},
urldate = {2020-07-27},
journal = {Journal of Machine Learning Research},
volume = {21},
number = {134},
pages = {1--39},
abstract = {We propose a novel methodology for representation learning on graph-structured data, in which a stack of Bayesian Networks learns different distributions of a vertex's neighborhood. Through an incremental construction policy and layer-wise training, we can build deeper architectures with respect to typical graph convolutional neural networks, with benefits in terms of context spreading between vertices.
First, the model learns from graphs via maximum likelihood estimation without using target labels.
Then, a supervised readout is applied to the learned graph embeddings to deal with graph classification and vertex classification tasks, showing competitive results against neural models for graphs. The computational complexity is linear in the number of edges, facilitating learning on large scale data sets. By studying how depth affects the performances of our model, we discover that a broader context generally improves performances. In turn, this leads to a critical analysis of some benchmarks used in literature.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
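The abstract above notes that each layer spreads context between vertices with a cost linear in the number of edges. The toy function below illustrates only that complexity argument, using a plain neighbour-averaging layer in place of the paper's trained Bayesian networks (the function name and averaging rule are illustrative assumptions, not the model's actual inference):

```python
import numpy as np

def spread_context(edges, states, num_layers=2):
    """Toy layer-wise context spreading on a graph: at each layer every
    vertex's distribution becomes the mean of its neighbours' distributions
    from the previous layer, so each layer is a single O(|E|) pass.
    Illustrative only -- the paper's layers are trained Bayesian networks,
    not plain averages."""
    n = states.shape[0]
    for _ in range(num_layers):
        acc = np.zeros_like(states)
        deg = np.zeros(n)
        for u, v in edges:  # one pass over the edge list per layer
            acc[u] += states[v]; deg[u] += 1
            acc[v] += states[u]; deg[v] += 1
        nxt = states.copy()  # isolated vertices keep their state
        mask = deg > 0
        nxt[mask] = acc[mask] / deg[mask][:, None]
        states = nxt
    return states
```

Stacking more layers widens each vertex's context to neighbours further away, which is the depth effect the abstract discusses.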
@conference{Wcci20Tensor,
title = {Generalising Recursive Neural Models by Tensor Decomposition},
author = {Daniele Castellana and Davide Bacciu},
url = {https://arxiv.org/abs/2006.10021, Arxiv},
year = {2020},
date = {2020-07-19},
urldate = {2020-07-19},
booktitle = {Proceedings of the 2020 IEEE World Congress on Computational Intelligence},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Wcci20CL,
title = {Continual Learning with Gated Incremental Memories for Sequential Data Processing},
author = {Andrea Cossu and Antonio Carta and Davide Bacciu},
url = {https://arxiv.org/pdf/2004.04077.pdf, Arxiv},
doi = {10.1109/IJCNN48605.2020.9207550},
year = {2020},
date = {2020-07-19},
urldate = {2020-07-19},
booktitle = {Proceedings of the 2020 IEEE World Congress on Computational Intelligence},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{ecai2020,
title = {Learning a Latent Space of Style-Aware Music Representations by Adversarial Autoencoders},
author = {Andrea Valenti and Antonio Carta and Davide Bacciu},
url = {https://arxiv.org/abs/2001.05494},
year = {2020},
date = {2020-06-08},
booktitle = {Proceedings of the 24th European Conference on Artificial Intelligence (ECAI 2020)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{ecml2020LMN,
title = {Incremental training of a recurrent neural network exploiting a multi-scale dynamic memory},
author = {Antonio Carta and Alessandro Sperduti and Davide Bacciu},
year = {2020},
date = {2020-06-05},
urldate = {2020-06-05},
booktitle = {Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2020 (ECML-PKDD 2020)},
publisher = {Springer International Publishing},
abstract = {The effectiveness of recurrent neural networks can be largely influenced by their ability to store into their dynamical memory information extracted from input sequences at different frequencies and timescales. Such a feature can be introduced into a neural architecture by an appropriate modularization of the dynamic memory. In this paper we propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning. First, we show how to extend the architecture of a simple RNN by separating its hidden state into different modules, each subsampling the network hidden activations at different frequencies. Then, we discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies. Each new module works at a slower frequency than the previous ones and it is initialized to encode the subsampled sequence of hidden activations. Experimental results on synthetic and real-world datasets on speech recognition and handwritten characters show that the modular architecture and the incremental training algorithm improve the ability of recurrent neural networks to capture long-term dependencies.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
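The abstract above describes memory modules that subsample the hidden activations at different frequencies. A minimal sketch of that subsampling idea, assuming an untrained tanh RNN (the function name and weight shapes are illustrative; the paper's incremental training and module initialisation are not reproduced):

```python
import numpy as np

def multiscale_states(inputs, w_in, w_rec, frequencies=(1, 2, 4)):
    """Run a plain tanh RNN and let each 'module' record the hidden state
    only every f steps: slower modules keep a coarser, longer-range view
    of the sequence, as in the paper's multi-scale dynamic memory."""
    h = np.zeros(w_rec.shape[0])
    modules = {f: [] for f in frequencies}
    for t, x in enumerate(inputs, start=1):
        h = np.tanh(w_in @ x + w_rec @ h)
        for f in frequencies:
            if t % f == 0:  # module f updates once every f steps
                modules[f].append(h.copy())
    return modules
```

In the paper each new, slower module is added incrementally and trained to encode exactly such a subsampled sequence of hidden activations.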
@conference{aistats2020,
title = {A Deep Generative Model for Fragment-Based Molecule Generation},
author = {Marco Podda and Davide Bacciu and Alessio Micheli},
url = {https://arxiv.org/abs/2002.12826},
year = {2020},
date = {2020-06-03},
urldate = {2020-06-03},
booktitle = {Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020) },
abstract = {Molecule generation is a challenging open problem in cheminformatics. Currently, deep generative approaches addressing the challenge belong to two broad categories, differing in how molecules are represented. One approach encodes molecular graphs as strings of text, and learn their corresponding character-based language model. Another, more expressive, approach operates directly on the molecular graph. In this work, we address two limitations of the former: generation of invalid or duplicate molecules. To improve validity rates, we develop a language model for small molecular substructures called fragments, loosely inspired by the well-known paradigm of Fragment-Based Drug Design. In other words, we generate molecules fragment by fragment, instead of atom by atom. To improve uniqueness rates, we present a frequency-based clustering strategy that helps to generate molecules with infrequent fragments. We show experimentally that our model largely outperforms other language model-based competitors, reaching state-of-the-art performances typical of graph-based approaches. Moreover, generated molecules display molecular properties similar to those in the training sample, even in absence of explicit task-specific supervision.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
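The abstract above explains the key idea of generating molecules fragment by fragment with a language model over fragments rather than atoms. As a hedged sketch, a tiny count-based bigram model over fragment tokens can stand in for the paper's recurrent language model (the tokens, function names, and bigram choice are assumptions for illustration; the paper's frequency-based clustering of rare fragments is not reproduced):

```python
import random
from collections import defaultdict

def fit_bigram(fragment_seqs):
    """Count fragment-to-fragment transitions, treating each molecule as a
    sequence of fragment tokens bracketed by start/end markers."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in fragment_seqs:
        for prev, nxt in zip(["<s>"] + seq, seq + ["</s>"]):
            counts[prev][nxt] += 1
    return counts

def sample_fragments(counts, rng, max_len=10):
    """Generate a molecule fragment by fragment from the bigram counts."""
    out, tok = [], "<s>"
    while len(out) < max_len:
        nxt = counts[tok]
        tok = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        if tok == "</s>":
            break
        out.append(tok)
    return out
```

Sampling whole fragments at a time is what improves validity rates in the paper: each emitted unit is a chemically meaningful substructure rather than a single atom.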
@conference{iclr19,
title = {A Fair Comparison of Graph Neural Networks for Graph Classification},
author = {Federico Errica and Marco Podda and Davide Bacciu and Alessio Micheli},
url = {https://openreview.net/pdf?id=HygDF6NFPB, PDF
https://iclr.cc/virtual_2020/poster_HygDF6NFPB.html, Talk
https://github.com/diningphil/gnn-comparison, Code},
year = {2020},
date = {2020-04-30},
booktitle = {Proceedings of the Eighth International Conference on Learning Representations (ICLR 2020)},
abstract = {Experimental reproducibility and replicability are critical topics in machine learning. Authors have often raised concerns about their lack in scientific publications to improve the quality of the field. Recently, the graph representation learning field has attracted the attention of a wide research community, which resulted in a large stream of works.
As such, several Graph Neural Network models have been developed to effectively tackle graph classification. However, experimental procedures often lack rigorousness and are hardly reproducible. Motivated by this, we provide an overview of common practices that should be avoided to fairly compare with the state of the art. To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks. Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet. We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
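The abstract above argues for rigorous, uniform evaluation. A minimal sketch of the kind of risk-assessment protocol such comparisons rely on, namely nested cross-validation where hyper-parameters are chosen on inner folds only (the `train_eval` signature and fold counts are assumptions for illustration, not the paper's exact framework):

```python
import numpy as np

def nested_cv(X, y, train_eval, grid, outer_k=5, inner_k=3, seed=0):
    """Nested cross-validation: the outer test folds never influence model
    selection. `train_eval(Xtr, ytr, Xte, yte, p)` is a placeholder that
    trains any model with hyper-parameter p and returns test accuracy."""
    idx = np.random.default_rng(seed).permutation(len(y))
    outer = np.array_split(idx, outer_k)
    scores = []
    for i in range(outer_k):
        test = outer[i]
        train = np.concatenate([outer[j] for j in range(outer_k) if j != i])
        inner = np.array_split(train, inner_k)
        best_p, best_acc = None, -1.0
        for p in grid:  # model selection on inner folds only
            accs = []
            for j in range(inner_k):
                tr = np.concatenate([inner[m] for m in range(inner_k) if m != j])
                accs.append(train_eval(X[tr], y[tr], X[inner[j]], y[inner[j]], p))
            if np.mean(accs) > best_acc:
                best_p, best_acc = p, float(np.mean(accs))
        scores.append(train_eval(X[train], y[train], X[test], y[test], best_p))
    return float(np.mean(scores)), float(np.std(scores))
```

Reporting the mean and standard deviation over outer folds, instead of the best single run, is the kind of practice the paper advocates.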
@conference{esann20Podda,
title = {Biochemical Pathway Robustness Prediction with Graph Neural Networks},
author = {Marco Podda and Alessio Micheli and Davide Bacciu and Paolo Milazzo},
editor = {Michel Verleysen},
year = {2020},
date = {2020-04-21},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann20Errica,
title = {Theoretically Expressive and Edge-aware Graph Learning},
author = {Federico Errica and Davide Bacciu and Alessio Micheli},
editor = {Michel Verleysen},
url = {https://arxiv.org/abs/2001.09005},
year = {2020},
date = {2020-04-21},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20)},
abstract = {We propose a new Graph Neural Network that combines recent advancements in the field. We give theoretical contributions by proving that the model is strictly more general than the Graph Isomorphism Network and the Gated Graph Neural Network, as it can approximate the same functions and deal with arbitrary edge values. Then, we show how a single node information can flow through the graph unchanged. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann20Crecchi,
title = {Perplexity-free Parametric t-SNE},
author = {Francesco Crecchi and Cyril de Bodt and Davide Bacciu and Michel Verleysen and John Lee},
editor = {Michel Verleysen},
year = {2020},
date = {2020-04-21},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann20Tutorial,
title = {Tensor Decompositions in Deep Learning},
author = {Davide Bacciu and Danilo Mandic},
editor = {Michel Verleysen},
url = {https://arxiv.org/abs/2002.11835},
year = {2020},
date = {2020-04-21},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann20Castellana,
title = {Tensor Decompositions in Recursive Neural Networks for Tree-Structured Data},
author = {Daniele Castellana and Davide Bacciu},
editor = {Michel Verleysen},
url = {https://arxiv.org/pdf/2006.10619.pdf, Arxiv},
year = {2020},
date = {2020-04-21},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@inbook{graphsBDDL2020,
title = {Deep Learning for Graphs},
author = {Davide Bacciu and Alessio Micheli},
editor = {Luca Oneto and Nicolo Navarin and Alessandro Sperduti and Davide Anguita},
url = {https://link.springer.com/chapter/10.1007/978-3-030-43883-8_5},
doi = {10.1007/978-3-030-43883-8_5},
isbn = {978-3-030-43883-8},
year = {2020},
date = {2020-04-04},
booktitle = {Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019)},
volume = {896},
pages = {99-127},
publisher = {Springer International Publishing},
series = {Studies in Computational Intelligence Series},
abstract = {We introduce an overview of methods for learning in structured domains covering foundational works developed within the last twenty years to deal with a whole range of complex data representations, including hierarchical structures, graphs and networks, and giving special attention to recent deep learning models for graphs. While we provide a general introduction to the field, we explicitly focus on the neural network paradigm showing how, across the years, these models have been extended to the adaptive processing of incrementally more complex classes of structured data. The ultimate aim is to show how to cope with the fundamental issue of learning adaptive representations for samples with varying size and topology.},
keywords = {},
pubstate = {published},
tppubtype = {inbook}
}
@article{aime20Confound,
title = {Measuring the effects of confounders in medical supervised classification problems: the Confounding Index (CI)},
author = {Elisa Ferrari and Alessandra Retico and Davide Bacciu},
url = {https://arxiv.org/abs/1905.08871},
doi = {10.1016/j.artmed.2020.101804},
year = {2020},
date = {2020-03-01},
journal = {Artificial Intelligence in Medicine},
volume = {103},
abstract = {Over the years, there has been growing interest in using Machine Learning techniques for biomedical data processing. When tackling these tasks, one needs to bear in mind that biomedical data depends on a variety of characteristics, such as demographic aspects (age, gender, etc) or the acquisition technology, which might be unrelated with the target of the analysis. In supervised tasks, failing to match the ground truth targets with respect to such characteristics, called confounders, may lead to very misleading estimates of the predictive performance. Many strategies have been proposed to handle confounders, ranging from data selection, to normalization techniques, up to the use of training algorithm for learning with imbalanced data. However, all these solutions require the confounders to be known a priori. To this aim, we introduce a novel index that is able to measure the confounding effect of a data attribute in a bias-agnostic way. This index can be used to quantitatively compare the confounding effects of different variables and to inform correction methods such as normalization procedures or ad-hoc-prepared learning algorithms. The effectiveness of this index is validated on both simulated data and real-world neuroimaging data. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2019
@article{neucompEsann19,
title = {Edge-based sequential graph generation with recurrent neural networks},
author = {Davide Bacciu and Alessio Micheli and Marco Podda},
url = {https://arxiv.org/abs/2002.00102v1},
year = {2019},
date = {2019-12-31},
journal = {Neurocomputing},
abstract = { Graph generation with Machine Learning is an open problem with applications in various research fields. In this work, we propose to cast the generative process of a graph into a sequential one, relying on a node ordering procedure. We use this sequential process to design a novel generative model composed of two recurrent neural networks that learn to predict the edges of graphs: the first network generates one endpoint of each edge, while the second network generates the other endpoint conditioned on the state of the first. We test our approach extensively on five different datasets, comparing with two well-known baselines coming from graph literature, and two recurrent approaches, one of which holds state of the art performances. Evaluation is conducted considering quantitative and qualitative characteristics of the generated samples. Results show that our approach is able to yield novel, and unique graphs originating from very different distributions, while retaining structural properties very similar to those in the training sample. Under the proposed evaluation framework, our approach is able to reach performances comparable to the current state of the art on the graph generation task. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
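The abstract above casts graph generation into a sequential process over edges under a node ordering. The helper below sketches only the serialisation step, assuming a BFS ordering as one concrete choice of the node-ordering procedure and a connected input graph (the paper's two recurrent networks, which predict the two endpoints of each edge in turn, are not reproduced):

```python
from collections import deque

def edge_sequence(adj, root=0):
    """Relabel nodes of a connected graph by a BFS visit from `root`, then
    emit each edge as an (earlier, later) endpoint pair, in sorted order.
    This is the kind of ordered edge sequence a sequential generative
    model would be trained to predict, one endpoint at a time."""
    order, seen, q = {}, {root}, deque([root])
    while q:
        u = q.popleft()
        order[u] = len(order)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    # collect each undirected edge once, relabelled by BFS position
    edges = {tuple(sorted((order[u], order[v]))) for u in adj for v in adj[u]}
    return sorted(edges)
```

In the paper's decomposition, the first network would predict the first endpoint of each pair in this sequence and the second network the other endpoint, conditioned on the first.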
@conference{ssci19,
title = {Sequential Sentence Embeddings for Semantic Similarity},
author = {Davide Bacciu and Antonio Carta},
doi = {10.1109/SSCI44817.2019.9002824},
year = {2019},
date = {2019-12-06},
urldate = {2019-12-06},
booktitle = {Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence (SSCI'19)},
publisher = {IEEE},
abstract = { Sentence embeddings are distributed representations of sentences intended to be general features to be effectively used as input for deep learning models across different natural language processing tasks.
State-of-the-art sentence embeddings for semantic similarity are computed with a weighted average of pretrained word embeddings, hence completely ignoring the contribution of word ordering within a sentence in defining its semantics. We propose a novel approach to compute sentence embeddings for semantic similarity that exploits a linear autoencoder for sequences. The method can be trained in closed form and it is easy to fit on unlabeled sentences. Our method provides a grounded approach to identify and subtract common discourse from a sentence and its embedding, to remove associated uninformative features. Unlike similar methods in the literature (e.g. the popular Smooth Inverse Frequency approach), our method is able to account for word order. We show that our estimate of the common discourse vector improves the results on two different semantic similarity benchmarks when compared to related approaches from the literature.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@online{treccani19,
title = {Reti neurali e linguaggio. Le insidie nascoste di un'algebra delle parole},
author = {Davide Bacciu},
editor = {Mirko Tavosanis},
url = {http://www.treccani.it/magazine/lingua_italiana/speciali/IA/02_Bacciu.html},
year = {2019},
date = {2019-12-03},
urldate = {2019-12-03},
organization = {Lingua Italiana - Treccani},
keywords = {},
pubstate = {published},
tppubtype = {online}
}
@conference{aiia2019,
title = {A non-negative factorization approach to node pooling in graph convolutional neural networks},
author = {Davide Bacciu and Luigi {Di Sotto}},
url = {https://arxiv.org/pdf/1909.03287.pdf},
year = {2019},
date = {2019-11-22},
booktitle = {Proceedings of the 18th International Conference of the Italian Association for Artificial Intelligence (AIIA 2019)},
publisher = {Springer-Verlag},
series = {Lecture Notes in Artificial Intelligence},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{clic2019,
title = {Suitable doesn’t mean attractive. Human-based evaluation of automatically generated headlines},
author = {Michele Cafagna and Lorenzo {De Mattei} and Davide Bacciu and Malvina Nissim},
editor = {Raffaella Bernardi and Roberto Navigli and Giovanni Semeraro},
url = {http://ceur-ws.org/Vol-2481/paper13.pdf},
year = {2019},
date = {2019-11-15},
urldate = {2019-11-15},
booktitle = {Proceedings of the 6th Italian Conference on Computational Linguistics (CLiC-it 2019)},
volume = {2481},
publisher = {CEUR},
series = {AI*IA series},
abstract = {We train three different models to generate newspaper headlines from a portion of the corresponding article. The articles are obtained from two mainstream Italian newspapers. In order to assess the models’ performance, we set up a human-based evaluation where 30 different native speakers expressed their judgment over a variety of aspects. The outcome shows that (i) pointer networks perform better than standard sequence to sequence models, creating mostly correct and appropriate titles; (ii) the suitability of a headline to its article for pointer networks is on par or better than the gold headline; (iii) gold headlines are still by far more inviting than generated headlines to read the whole article, highlighting the contrast between human creativity and content appropriateness.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{lmnArx18,
title = {Linear Memory Networks},
author = {Davide Bacciu and Antonio Carta and Alessandro Sperduti},
url = {https://arxiv.org/pdf/1811.03356.pdf},
doi = {10.1007/978-3-030-30487-4_40},
year = {2019},
date = {2019-09-17},
urldate = {2019-09-17},
booktitle = {Proceedings of the 28th International Conference on Artificial Neural Networks (ICANN 2019)},
volume = {11727},
pages = {513-525},
publisher = {Springer-Verlag},
series = {Lecture Notes in Computer Science},
abstract = {Recurrent neural networks can learn complex transduction problems that require maintaining and actively exploiting a memory of their inputs. Such models traditionally consider memory and input-output functionalities indissolubly entangled. We introduce a novel recurrent architecture based on the conceptual separation between the functional input-output transformation and the memory mechanism, showing how they can be implemented through different neural components. By building on such conceptualization, we introduce the Linear Memory Network, a recurrent model comprising a feedforward neural network, realizing the non-linear functional transformation, and a linear autoencoder for sequences, implementing the memory component. The resulting architecture can be efficiently trained by building on closed-form solutions to linear optimization problems. Further, by exploiting equivalence results between feedforward and recurrent neural networks we devise a pretraining schema for the proposed architecture. Experiments on polyphonic music datasets show competitive results against gated recurrent networks and other state of the art models. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@misc{automatica2019,
title = {Autonomous Grasping with SoftHands: Combining Human Inspiration, Deep Learning and Embodied Machine Intelligence},
author = {Della Santina Cosimo and Averta Giuseppe and Arapi Visar and Settimi Alessandro and Catalano Manuel Giuseppe and Bacciu Davide and Bicchi Antonio and Bianchi Matteo},
year = {2019},
date = {2019-09-11},
booktitle = {Oral contribution to AUTOMATICA.IT 2019},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}
@article{rubicon2019CI,
title = {An Ambient Intelligence Approach for Learning in Smart Robotic Environments},
author = {Bacciu Davide and Di Rocco Maurizio and Dragone Mauro and Gallicchio Claudio and Micheli Alessio and Saffiotti Alessandro},
doi = {10.1111/coin.12233},
year = {2019},
date = {2019-07-31},
journal = {Computational Intelligence},
abstract = {Smart robotic environments combine traditional (ambient) sensing devices and mobile robots. This combination extends the type of applications that can be considered, reduces their complexity, and enhances the individual values of the devices involved by enabling new services that cannot be performed by a single device. In order to reduce the amount of preparation and pre-programming required for their deployment in real world applications, it is important to make these systems self-learning, self-configuring, and self-adapting. The solution presented in this paper is based upon a type of compositional adaptation where (possibly multiple) plans of actions are created through planning and involve the activation of pre-existing capabilities. All the devices in the smart environment participate in a pervasive learning infrastructure, which is exploited to recognize which plans of actions are most suited to the current situation. The system is evaluated in experiments run in a real domestic environment, showing its ability to pro-actively and smoothly adapt to subtle changes in the environment and in the habits and preferences
of their user(s).},
note = {Early View (Online Version of Record before inclusion in an issue)},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{ijcnn2019,
title = {Bayesian Tensor Factorisation for Bottom-up Hidden Tree Markov Models},
author = {Daniele Castellana and Davide Bacciu},
url = {https://arxiv.org/pdf/1905.13528.pdf},
year = {2019},
date = {2019-07-15},
urldate = {2019-07-15},
booktitle = {Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN 2019)},
publisher = {IEEE},
abstract = {Bottom-Up Hidden Tree Markov Model is a highly expressive model for tree-structured data. Unfortunately, it cannot be used in practice due to the intractable size of its state-transition matrix. We propose a new approximation which relies on the Tucker factorisation of tensors. The probabilistic interpretation of such approximation allows us to define a new probabilistic model for tree-structured data. Hence, we define the new approximated model and we derive its learning algorithm. Then, we empirically assess the effective power of the new model evaluating it on two different tasks. In both cases, our model outperforms the other approximated model known in the literature.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{neucomBayesHTMM,
title = {Bayesian Mixtures of Hidden Tree Markov Models for Structured Data Clustering},
author = {Bacciu Davide and Castellana Daniele},
url = {https://doi.org/10.1016/j.neucom.2018.11.091},
doi = {10.1016/j.neucom.2018.11.091},
issn = {0925-2312},
year = {2019},
date = {2019-05-21},
journal = {Neurocomputing},
volume = {342},
pages = {49-59},
abstract = {The paper deals with the problem of unsupervised learning with structured data, proposing a mixture model approach to cluster tree samples. First, we discuss how to use the Switching-Parent Hidden Tree Markov Model, a compositional model for learning tree distributions, to define a finite mixture model where the number of components is fixed by a hyperparameter. Then, we show how to relax such an assumption by introducing a Bayesian non-parametric mixture model where the number of necessary hidden tree components is learned from data. Experimental validation on synthetic and real datasets show the benefit of mixture models over simple hidden tree models in clustering applications. Further, we provide a characterization of the behaviour of the two mixture models for different choices of their hyperparameters.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{esann19Attacks,
title = {Detecting Black-box Adversarial Examples through Nonlinear Dimensionality Reduction},
author = {Francesco Crecchi and Davide Bacciu and Battista Biggio },
editor = {Michel Verleysen},
url = {https://arxiv.org/pdf/1904.13094.pdf},
year = {2019},
date = {2019-04-24},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'19)},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann19GraphGen,
title = {Graph generation by sequential edge prediction},
author = {Davide Bacciu and Alessio Micheli and Marco Podda},
editor = {Michel Verleysen},
url = {https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2019-107.pdf},
year = {2019},
date = {2019-04-24},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'19)},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann19Tutorial,
title = {Societal Issues in Machine Learning: When Learning from Data is Not Enough},
author = {Davide Bacciu and Battista Biggio and Francesco Crecchi and Paulo J. G. Lisboa and José D. Martin and Luca Oneto and Alfredo Vellido},
editor = {Michel Verleysen},
url = {https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2019-6.pdf},
year = {2019},
date = {2019-04-24},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'19)},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{tnnnls_dropin2019,
title = {Augmenting Recurrent Neural Networks Resilience by Dropout},
author = {Davide Bacciu and Francesco Crecchi },
doi = {10.1109/TNNLS.2019.2899744},
year = {2019},
date = {2019-03-31},
urldate = {2019-03-31},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
abstract = {The paper discusses the simple idea that dropout regularization can be used to efficiently induce resiliency to missing inputs at prediction time in a generic neural network. We show how the approach can be effective on tasks where imputation strategies often fail, namely involving recurrent neural networks and scenarios where whole sequences of input observations are missing. The experimental analysis provides an assessment of the accuracy-resiliency tradeoff in multiple recurrent models, including reservoir computing methods, and comprising real-world ambient intelligence and biomedical time series.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{ral2019,
title = {Learning from humans how to grasp: a data-driven architecture for autonomous grasping with anthropomorphic soft hands},
author = {Della Santina Cosimo and Arapi Visar and Averta Giuseppe and Damiani Francesca and Fiore Gaia and Settimi Alessandro and Catalano Manuel Giuseppe and Bacciu Davide and Bicchi Antonio and Bianchi Matteo},
url = {https://ieeexplore.ieee.org/document/8629968},
doi = {10.1109/LRA.2019.2896485},
issn = {2377-3766},
year = {2019},
date = {2019-02-01},
journal = {IEEE Robotics and Automation Letters},
pages = {1-8},
note = {Also accepted for presentation at ICRA 2019},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{inns2019,
title = {Deep Tree Transductions - A Short Survey},
author = {Bacciu Davide and Bruno Antonio},
editor = {Luca Oneto and Nicol{\`o} Navarin and Alessandro Sperduti and Davide Anguita},
url = {https://arxiv.org/abs/1902.01737},
doi = {10.1007/978-3-030-16841-4_25},
year = {2019},
date = {2019-01-04},
urldate = {2019-01-04},
booktitle = {Proceedings of the 2019 INNS Big Data and Deep Learning (INNSBDDL 2019) },
pages = {236--245},
publisher = {Springer International Publishing},
series = {Recent Advances in Big Data and Deep Learning},
abstract = {The paper surveys recent extensions of the Long-Short Term Memory networks to handle tree structures from the perspective of learning non-trivial forms of isomorph structured transductions. It provides a discussion of modern TreeLSTM models, showing the effect of the bias induced by the direction of tree processing. An empirical analysis is performed on real-world benchmarks, highlighting how there is no single model adequate to effectively approach all transduction problems.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2018
@article{frontNeurob18,
title = {DeepDynamicHand: A deep neural architecture for labeling hand manipulation strategies in video sources exploiting temporal information },
author = {Visar Arapi and Cosimo Della Santina and Davide Bacciu and Matteo Bianchi and Antonio Bicchi},
url = {https://www.frontiersin.org/articles/10.3389/fnbot.2018.00086/full},
doi = {10.3389/fnbot.2018.00086},
year = {2018},
date = {2018-12-17},
urldate = {2018-12-17},
journal = {Frontiers in Neurorobotics},
volume = {12},
pages = {86},
abstract = {Humans are capable of complex manipulation interactions with the environment, relying on the intrinsic adaptability and compliance of their hands. Recently, soft robotic manipulation has attempted to reproduce such an extraordinary behavior, through the design of deformable yet robust end-effectors. To this goal, the investigation of human behavior has become crucial to correctly inform technological developments of robotic hands that can successfully exploit environmental constraint as humans actually do. Among the different tools robotics can leverage on to achieve this objective, deep learning has emerged as a promising approach for the study and then the implementation of neuro-scientific observations on the artificial side. However, current approaches tend to neglect the dynamic nature of hand pose recognition problems, limiting the effectiveness of these techniques in identifying sequences of manipulation primitives underpinning action generation, e.g. during purposeful interaction with the environment. In this work, we propose a vision-based supervised Hand Pose Recognition method which, for the first time, takes into account temporal information to identify meaningful sequences of actions in grasping and manipulation tasks . More specifically, we apply Deep Neural Networks to automatically learn features from hand posture images that consist of frames extracted from grasping and manipulation task videos with objects and external environmental constraints. For training purposes, videos are divided into intervals, each associated to a specific action by a human supervisor. The proposed algorithm combines a Convolutional Neural Network to detect the hand within each video frame and a Recurrent Neural Network to predict the hand action in the current frame, while taking into consideration the history of actions performed in the previous frames. 
Experimental validation has been performed on two datasets of dynamic hand-centric strategies, where subjects regularly interact with objects and environment. Proposed architecture achieved a very good classification accuracy on both datasets, reaching performance up to 94%, and outperforming state of the art techniques. The outcomes of this study can be successfully applied to robotics, e.g for planning and control of soft anthropomorphic manipulators. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{ssci2018,
title = {Text Summarization as Tree Transduction by Top-Down TreeLSTM},
author = {Bacciu Davide and Bruno Antonio},
url = {https://arxiv.org/abs/1809.09096},
doi = {10.1109/SSCI.2018.8628873},
year = {2018},
date = {2018-11-18},
urldate = {2018-11-18},
booktitle = {Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI'18)},
pages = {1411-1418},
publisher = {IEEE},
abstract = {Extractive compression is a challenging natural language processing problem. This work contributes by formulating neural extractive compression as a parse tree transduction problem, rather than a sequence transduction task. Motivated by this, we introduce a deep neural model for learning structure-to-substructure tree transductions by extending the standard Long Short-Term Memory, considering the parent-child relationships in the structural recursion. The proposed model can achieve state of the art performance on sentence compression benchmarks, both in terms of accuracy and compression rate. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{naturescirep2018,
title = {A machine learning approach to estimating preterm infants survival: development of the Preterm Infants Survival Assessment (PISA) predictor},
author = {Podda Marco and Bacciu Davide and Micheli Alessio and Bellu Roberto and Placidi Giulia and Gagliardi Luigi },
url = {https://doi.org/10.1038/s41598-018-31920-6},
doi = {10.1038/s41598-018-31920-6},
year = {2018},
date = {2018-09-13},
urldate = {2018-09-13},
journal = {Nature Scientific Reports},
volume = {8},
abstract = {Estimation of mortality risk of very preterm neonates is carried out in clinical and research settings. We aimed at elaborating a prediction tool using machine learning methods. We developed models on a cohort of 23747 neonates <30 weeks gestational age, or <1501 g birth weight, enrolled in the Italian Neonatal Network in 2008–2014 (development set), using 12 easily collected perinatal variables. We used a cohort from 2015–2016 (N = 5810) as a test set. Among several machine learning methods we chose artificial Neural Networks (NN). The resulting predictor was compared with logistic regression models. In the test cohort, NN had a slightly better discrimination than logistic regression (P < 0.002). The differences were greater in subgroups of neonates (at various gestational age or birth weight intervals, singletons). Using a cutoff of death probability of 0.5, logistic regression misclassified 67/5810 neonates (1.2 percent) more than NN. In conclusion our study – the largest published so far – shows that even in this very simplified scenario, using only limited information available up to 5 minutes after birth, a NN approach had a small but significant advantage over current approaches. The software implementing the predictor is made freely available to the community.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@workshop{learnaut18,
title = {Learning Tree Distributions by Hidden Markov Models},
author = {Bacciu Davide and Castellana Daniele},
editor = {Rémi Eyraud and Jeffrey Heinz and Guillaume Rabusseau and Matteo Sammartino },
url = {https://arxiv.org/abs/1805.12372},
year = {2018},
date = {2018-07-13},
booktitle = {Proceedings of the FLOC 2018 Workshop on Learning and Automata (LearnAut'18)},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@article{neurocomp2017,
title = {Randomized neural networks for preference learning with physiological data},
author = {Bacciu Davide and Colombo Michele and Morelli Davide and Plans David},
editor = {Fabio Aiolli and Luca Oneto and Michael Biehl },
url = {https://authors.elsevier.com/a/1Wxbz_L2Otpsb3},
doi = {10.1016/j.neucom.2017.11.070},
year = {2018},
date = {2018-07-12},
journal = {Neurocomputing},
volume = {298},
pages = {9-20},
abstract = {The paper discusses the use of randomized neural networks to learn a complete ordering between samples of heart-rate variability data by relying solely on partial and subject-dependent information concerning pairwise relations between samples. We confront two approaches, i.e. Extreme Learning Machines and Echo State Networks, assessing the effectiveness in exploiting hand-engineered heart-rate variability features versus using raw beat-to-beat sequential data. Additionally, we introduce a weight sharing architecture and a preference learning error function whose performance is compared with a standard architecture realizing pairwise ranking as a binary-classification task. The models are evaluated on real-world data from a mobile application realizing a guided breathing exercise, using a dataset of over 54K exercising sessions. Results show how a randomized neural model processing information in its raw sequential form can outperform its vectorial counterpart, increasing accuracy in predicting the correct sample ordering by about 20%. Further, the experiments highlight the importance of using weight sharing architectures to learn smooth and generalizable complete orders induced by the preference relation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{icml2018,
title = {Contextual Graph Markov Model: A Deep and Generative Approach to Graph Processing},
author = {Bacciu Davide and Errica Federico and Micheli Alessio},
url = {https://arxiv.org/abs/1805.10636},
year = {2018},
date = {2018-07-11},
urldate = {2018-07-11},
booktitle = {Proceedings of the 35th International Conference on Machine Learning (ICML 2018)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{ijcnn2018,
title = {Concentric ESN: Assessing the Effect of Modularity in Cycle Reservoirs},
author = {Bacciu Davide and Bongiorno Andrea},
url = {https://arxiv.org/abs/1805.09244},
year = {2018},
date = {2018-07-09},
urldate = {2018-07-09},
booktitle = {Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN 2018) },
pages = {1-9},
publisher = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann2018Tree,
title = {Mixture of Hidden Markov Models as Tree Encoder},
author = {Bacciu Davide and Castellana Daniele},
editor = {Michel Verleysen},
isbn = {978-287587047-6},
year = {2018},
date = {2018-04-26},
urldate = {2018-04-26},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'18)},
pages = {543-548},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
abstract = {The paper introduces a new probabilistic tree encoder based on a mixture of Bottom-up Hidden Tree Markov Models. The ability to recognise similar structures in data is experimentally assessed both in clusterization and classification tasks. The results of these preliminary experiments suggest that the model can be successfully used to compress the tree structural and label patterns in a vectorial representation.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann2018Tut,
title = {Bioinformatics and medicine in the era of deep learning},
author = {Bacciu Davide and Lisboa Paulo JG and Martin Jose D and Stoean Ruxandra and Vellido Alfredo},
editor = {Michel Verleysen},
url = {http://arxiv.org/abs/1802.09791},
isbn = {978-287587047-6},
year = {2018},
date = {2018-04-26},
urldate = {2018-04-26},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'18)},
pages = {345-354},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
abstract = {Many of the current scientific advances in the life sciences have their origin in the intensive use of data for knowledge discovery. In no area this is so clear as in bioinformatics, led by technological breakthroughs in data acquisition technologies. It has been argued that bioinformatics could quickly become the field of research generating the largest data repositories, beating other data-intensive areas such as high-energy physics or astroinformatics. Over the last decade, deep learning has become a disruptive advance in machine learning, giving new live to the long-standing connectionist paradigm in artificial intelligence. Deep learning methods are ideally suited to large-scale data and, therefore, they should be ideally suited to knowledge discovery in bioinformatics and biomedicine at large. In this brief paper, we review key aspects of the application of deep learning in bioinformatics and medicine, drawing from the themes covered by the contributions to an ESANN 2018 special session devoted to this topic.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{tnnlsTreeKer17,
title = {Generative Kernels for Tree-Structured Data},
author = {Bacciu Davide and Micheli Alessio and Sperduti Alessandro},
doi = {10.1109/TNNLS.2017.2785292},
issn = {2162-2388 },
year = {2018},
date = {2018-01-15},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
abstract = {The paper presents a family of methods for the design of adaptive kernels for tree-structured data that exploits the summarization properties of hidden states of hidden Markov models for trees. We introduce a compact and discriminative feature space based on the concept of hidden states multisets and we discuss different approaches to estimate such hidden state encoding. We show how it can be used to build an efficient and general tree kernel based on Jaccard similarity. Further, we derive an unsupervised convolutional generative kernel using a topology induced on the Markov states by a tree topographic mapping. The paper provides an extensive empirical assessment on a variety of structured data learning tasks, comparing the predictive accuracy and computational efficiency of state-of-the-art generative, adaptive and syntactical tree kernels. The results show that the proposed generative approach has a good tradeoff between computational complexity and predictive performance, in particular when considering the soft matching introduced by the topographic mapping.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2017
@conference{dl2017,
title = {Hidden Tree Markov Networks: Deep and Wide Learning for Structured Data},
author = {Bacciu Davide},
url = {https://arxiv.org/abs/1711.07784},
year = {2017},
date = {2017-11-27},
urldate = {2017-11-27},
booktitle = {Proc. of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI'17)},
publisher = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{iml2017,
title = {On the Need of Machine Learning as a Service for the Internet of Things},
author = {Bacciu Davide and Chessa Stefano and Gallicchio Claudio and Micheli Alessio},
isbn = {978-1-4503-5243-7},
year = {2017},
date = {2017-10-18},
booktitle = {Proceedings of the International Conference on Internet of Things and Machine Learning (IML 2017)},
publisher = {ACM},
series = {International Conference Proceedings Series (ICPS)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{eaai2017,
title = {A Learning System for Automatic Berg Balance Scale Score Estimation},
author = {Bacciu Davide and Chessa Stefano and Gallicchio Claudio and Micheli Alessio and Pedrelli Luca and Ferro Erina and Fortunati Luigi and La Rosa Davide and Palumbo Filippo and Vozzi Federico and Parodi Oberdan},
url = {http://www.sciencedirect.com/science/article/pii/S0952197617302026},
doi = {10.1016/j.engappai.2017.08.018},
year = {2017},
date = {2017-08-24},
urldate = {2017-08-24},
journal = {Engineering Applications of Artificial Intelligence},
volume = {66},
pages = {60-74},
abstract = {The objective of this work is the development of a learning system for the automatic assessment of balance abilities in elderly people. The system is based on estimating the Berg Balance Scale (BBS) score from the stream of sensor data gathered by a Wii Balance Board. The scientific challenge tackled by our investigation is to assess the feasibility of exploiting the richness of the temporal signals gathered by the balance board for inferring the complete BBS score based on data from a single BBS exercise.
The relation between the data collected by the balance board and the BBS score is inferred by neural networks for temporal data, modeled in particular as Echo State Networks within the Reservoir Computing (RC) paradigm, as a result of a comprehensive comparison among different learning models. The proposed system results to be able to estimate the complete BBS score directly from temporal data on exercise #10 of the BBS test, with ≈10 s of duration. Experimental results on real-world data show an absolute error below 4 BBS score points (i.e. below the 7% of the whole BBS range), resulting in a favorable trade-off between predictive performance and user’s required time with respect to previous works in literature. Results achieved by RC models compare well also with respect to different related learning models.
Overall, the proposed system puts forward as an effective tool for an accurate automated assessment of balance abilities in the elderly and it is characterized by being unobtrusive, easy to use and suitable for autonomous usage.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@inbook{iotBook17,
title = {Internet of Robotic Things - Converging Sensing/Actuating, Hyperconnectivity, Artificial Intelligence and IoT Platforms},
author = {Vermesan Ovidiu and Broring Arne and Tragos Elias and Serrano Martin and Bacciu Davide and Chessa Stefano and Gallicchio Claudio and Micheli Alessio and Dragone Mauro and Saffiotti Alessandro and Simoens Pieter and Cavallo Filippo and Bahr Roy},
editor = {Ovidiu Vermesan and Joel Bacquet},
url = {http://www.riverpublishers.com/downloadchapter.php?file=RP_9788793609105C4.pdf},
doi = {10.13052/rp-9788793609105},
isbn = {9788793609105},
year = {2017},
date = {2017-06-28},
booktitle = {Cognitive Hyperconnected Digital Transformation: Internet of Things Intelligence Evolution},
pages = {97-155},
publisher = {River Publishers},
chapter = {4},
keywords = {},
pubstate = {published},
tppubtype = {inbook}
}
@article{jrie2017,
title = {Reliability and human factors in Ambient Assisted Living environments: The DOREMI case study},
author = {Palumbo Filippo and La Rosa Davide and Ferro Erina and Bacciu Davide and Gallicchio Claudio and Micheli Alessio and Chessa Stefano and Vozzi Federico and Parodi Oberdan},
doi = {10.1007/s40860-017-0042-1},
issn = {2199-4668},
year = {2017},
date = {2017-06-17},
journal = {Journal of Reliable Intelligent Environments},
volume = {3},
number = {3},
pages = {139–157},
publisher = {Springer},
abstract = {Malnutrition, sedentariness, and cognitive decline in elderly people represent the target areas addressed by the DOREMI project. It aimed at developing a systemic solution for the elderly, able to prolong their functional and cognitive capacity by empowering, stimulating, and unobtrusively monitoring daily activities according to well-defined “Active Ageing” lifestyle protocols. Beyond the key features of DOREMI in terms of technological and medical protocol solutions, this work focuses on the impact of such a solution on the daily life of users and on how user behaviour modifies the expected results of the system in a long-term perspective. To this end, we analyse the reliability of the whole system in terms of human factors and their effects on the reliability requirements identified before starting the experimentation in the pilot sites. After giving an overview of the technological solutions adopted in the project, the paper concentrates on the activities conducted during the two pilot site studies (32 test sites across the UK and Italy), the users’ experience of the entire system, and how human factors influenced its overall reliability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{ijcnn2017,
title = {DropIn: Making Neural Networks Robust to Missing Inputs by Dropout},
author = {Bacciu Davide and Crecchi Francesco and Morelli Davide},
url = {https://arxiv.org/abs/1705.02643},
doi = {10.1109/IJCNN.2017.7966106},
isbn = {978-1-5090-6182-2},
year = {2017},
date = {2017-05-19},
urldate = {2017-05-19},
booktitle = {Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN 2017)},
pages = {2080-2087},
publisher = {IEEE},
abstract = {The paper presents a novel, principled approach to train recurrent neural networks from the Reservoir Computing family that are robust to missing part of the input features at prediction time. Building on the ensembling properties of Dropout regularization, we propose a methodology, named DropIn, which efficiently trains a neural model as a committee machine of subnetworks, each capable of predicting with a subset of the original input features. We discuss the application of the DropIn methodology in the context of Reservoir Computing models, targeting applications characterized by input sources that are unreliable or prone to disconnection, such as pervasive wireless sensor networks and ambient intelligence. We provide an experimental assessment using real-world data from such application domains, showing how the DropIn methodology allows maintaining predictive performance comparable to that of a model without missing features, even when 20%–50% of the inputs are not available.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
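The core idea in the DropIn abstract above, dropout applied to the input features so the model learns to predict from feature subsets, can be sketched on a plain linear model. This is a simplified illustration under assumed settings (synthetic data, a linear readout instead of a reservoir network, an assumed keep probability), not the paper's actual method or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data standing in for multi-sensor inputs.
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

# DropIn-style training: at each gradient step, zero out a random subset of
# input features (with rescaling, as in standard dropout), so the readout
# implicitly trains a committee of models over feature subsets.
p_keep = 0.7
w = np.zeros(d)
lr = 0.01
for _ in range(2000):
    mask = rng.random(d) < p_keep
    Xm = X * mask / p_keep
    grad = Xm.T @ (Xm @ w - y) / n
    w -= lr * grad

# At prediction time, some sensors may be disconnected: their columns are zero,
# yet the model still predicts from the surviving features.
missing = np.zeros(d, dtype=bool)
missing[:3] = True
X_test = X.copy()
X_test[:, missing] = 0.0
mse = np.mean((X_test @ w - y) ** 2)
print(round(float(mse), 3))
```

For linear models this input dropout acts as a data-dependent shrinkage, so accuracy degrades gracefully as features disappear rather than collapsing.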
@conference{esann2017,
title = {ELM Preference Learning for Physiological Data},
author = {Bacciu Davide and Colombo Michele and Morelli Davide and Plans David},
editor = {Michel Verleysen},
isbn = {978-287587038-4},
year = {2017},
date = {2017-04-28},
urldate = {2017-04-28},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'17)},
pages = {99-104},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{jlamp2016,
title = {An Experience in using Machine Learning for Short-term Predictions in Smart Transportation Systems},
author = {Bacciu Davide and Carta Antonio and Gnesi Stefania and Semini Laura},
editor = {Alberto Lluch Lafuente and Maurice ter Beek},
doi = {10.1016/j.jlamp.2016.11.002},
issn = {2352-2208},
year = {2017},
date = {2017-01-01},
journal = {Journal of Logical and Algebraic Methods in Programming},
volume = {87},
pages = {52-66},
publisher = {Elsevier},
abstract = {Bike-sharing systems (BSS) are a means of smart transportation with the benefit of a positive impact on urban mobility. To improve the satisfaction of BSS users, it is useful to inform them at run time about the status of the stations, and indeed most current systems provide, via web services, the number of bicycles parked in each docking station. However, when the departure station is empty, the user may also want to know how the situation will evolve and, in particular, whether a bike is going to arrive (and vice versa when the arrival station is full).
To fulfill this expectation, we envisage services able to predict whether a bike currently in use could, with high probability, be returned at the station where the user is waiting. The goal of this paper is hence to analyze the feasibility of such services. To this end, we put forward the idea of using Machine Learning methodologies, proposing and comparing different solutions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{ie2016,
title = {Detecting socialization events in ageing people: the experience of the DOREMI project},
author = {Bacciu Davide and Chessa Stefano and Ferro Erina and Fortunati Luigi and Gallicchio Claudio and La Rosa Davide and Llorente Miguel and Micheli Alessio and Palumbo Filippo and Parodi Oberdan and Valenti Andrea and Vozzi Federico},
doi = {10.1109/IE.2016.28},
issn = {2472-7571},
year = {2016},
date = {2016-10-27},
urldate = {2016-10-27},
booktitle = {Proceedings of the IEEE 12th International Conference on Intelligent Environments (IE 2016)},
pages = {132-135},
publisher = {IEEE},
address = {London, UK},
abstract = {The detection of socialization events is useful to build indicators of social isolation, an important measure in e-health applications. On the other hand, such detection is rather difficult to achieve with non-invasive solutions. This paper reports on the work in progress on the technological solution for the detection of socialization events adopted in the DOREMI project.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@article{icfNca15,
title = {Unsupervised feature selection for sensor time-series in pervasive computing applications},
author = {Bacciu Davide},
url = {http://pages.di.unipi.it/bacciu/wp-content/uploads/sites/12/2016/04/nca2015.pdf},
doi = {10.1007/s00521-015-1924-x},
issn = {1433-3058},
year = {2016},
date = {2016-07-01},
urldate = {2016-07-01},
journal = {Neural Computing and Applications},
volume = {27},
number = {5},
pages = {1077-1091},
publisher = {Springer London},
abstract = {The paper introduces an efficient feature selection approach for multivariate time-series of heterogeneous sensor data within a pervasive computing scenario. An iterative filtering procedure is devised to reduce information redundancy measured in terms of time-series cross-correlation. The algorithm is capable of identifying non-redundant sensor sources in an unsupervised fashion, even in the presence of a large proportion of noisy features. In particular, the proposed feature selection process does not require expert intervention to determine the number of selected features, which is a key advancement with respect to time-series filters in the literature. This characteristic allows enriching learning systems, in pervasive computing applications, with a fully automated feature selection mechanism that can be triggered and performed at run time during system operation. A comparative experimental analysis on real-world data from three pervasive computing applications is provided, showing that the algorithm addresses major limitations of unsupervised filters in the literature when dealing with sensor time-series. Specifically, the assessment covers both the reduction of time-series redundancy and the preservation of informative features with respect to associated supervised learning tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
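A greatly simplified sketch of the unsupervised redundancy filter described in the abstract above: channels whose time-series are too strongly cross-correlated with another channel are iteratively dropped. The data, the plain Pearson correlation (instead of lagged cross-correlation), and the fixed threshold are all illustrative assumptions, not the paper's actual algorithm, which notably avoids a user-set number of features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sensor time-series: 6 channels, three of which are near-duplicates
# (noisy copies) of the other three.
T = 500
base = rng.normal(size=(3, T))
X = np.vstack([base, base + 0.05 * rng.normal(size=(3, T))])  # shape (6, T)

def redundancy_filter(X, threshold=0.95):
    """Greedy unsupervised filter: while any pair of channels exceeds the
    absolute-correlation threshold, drop one channel of the most correlated
    pair. Returns the indices of the retained channels."""
    keep = list(range(X.shape[0]))
    while len(keep) > 1:
        C = np.abs(np.corrcoef(X[keep]))
        np.fill_diagonal(C, 0.0)
        i, j = np.unravel_index(np.argmax(C), C.shape)
        if C[i, j] < threshold:
            break
        keep.pop(max(i, j))
    return keep

selected = redundancy_filter(X)
print(len(selected))  # 3: each near-duplicate pair collapses to one channel
```

Because the filter only looks at pairwise redundancy, it needs no labels, which is what makes it usable at run time on a deployed sensor network.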